Symbolic Knowledge Distillation: from General Language Models to Commonsense Models (Video)

Symbolic knowledge models are usually trained on human-authored corpora of structured symbolic-knowledge triples, which are cumbersome and expensive to create. This paper takes a different approach and generates such a corpus by prompting GPT-3. The results show that careful prompting, combined with a small, targeted critic model trained on human ratings, can outperform both the human-generated data and the teacher model (GPT-3) itself. The paper thus offers a general recipe for automatically building corpora for various NLP tasks by sampling from large language models.
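The generate-then-filter recipe can be sketched in a few lines. This is only an illustrative stand-in, not the paper's actual code: `toy_teacher` and `toy_critic` are hypothetical placeholders for GPT-3 and the trained critic model.

```python
# Minimal sketch of symbolic knowledge distillation: sample candidate
# inferences from a teacher LM, then filter them with a critic trained
# on human ratings. All names here are illustrative stand-ins.

def generate_corpus(teacher, events, template, samples_per_event=3):
    """Ask the teacher LM for candidate inferences for each event."""
    corpus = []
    for event in events:
        prompt = template.format(event=event)
        for inference in teacher(prompt, samples_per_event):
            corpus.append((event, inference))
    return corpus

def filter_corpus(corpus, critic, threshold=0.5):
    """Keep only pairs the critic scores as plausible."""
    return [pair for pair in corpus if critic(pair) >= threshold]

# --- toy stand-ins to show the data flow ---
def toy_teacher(prompt, n):
    # A real system would sample n completions from GPT-3 here.
    return [f"inference {i} for: {prompt}" for i in range(n)]

def toy_critic(pair):
    # A real critic is a small classifier trained on human ratings.
    event, inference = pair
    return 0.9 if inference.startswith("inference 0") else 0.2

events = ["PersonX goes to the store"]
raw = generate_corpus(toy_teacher, events, "{event}. What happens next?")
clean = filter_corpus(raw, toy_critic)
print(len(raw), len(clean))  # the critic discards low-scoring samples
```

The filtered `clean` corpus is what the (smaller) student commonsense model is then trained on.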

OUTLINE:
0:00 - Intro & Overview
2:30 - Sponsor: Weights & Biases
4:15 - Commonsense Knowledge Graphs
7:50 - ATOMIC dataset
10:00 - Generating the corpus from a model
13:00 - Prompting GPT-3
15:30 - Generating Events
18:40 - Generating Inferences
23:00 - Evaluating the created dataset
26:45 - Introducing the critic
31:25 - Using the critic to filter the data
36:30 - Training a student on the generated data
41:00 - Key Findings
44:45 - Comments & Conclusion

Paper: [2110.07178] Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
Code & Corpus: https://github.com/peterwestai2/symbo

Sponsor: Weights & Biases

Abstract:
The common practice for training commonsense models has gone from human, to corpus, to machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we investigate an alternative, from machine, to corpus, to machine: general language models author these commonsense knowledge graphs to train commonsense models. Our study leads to a new framework, Symbolic Knowledge Distillation. As with prior art in Knowledge Distillation (Hinton et al., 2015), our approach uses larger models to teach smaller models. A key difference is that we distill knowledge symbolically, as text, in addition to the neural model. We also distill only one aspect, the commonsense of a general language model teacher, allowing the student to be a different type, a commonsense model. Altogether, we show that careful prompt engineering and a separately trained critic model allow us to selectively distill high-quality causal commonsense from GPT-3, a general language model. Empirical results demonstrate that, for the first time, a human-authored commonsense knowledge graph is surpassed by our automatically distilled variant in all three criteria: quantity, quality, and diversity. In addition, it results in a neural commonsense model that surpasses the teacher model's commonsense capabilities despite its 100x smaller size. We apply this to the ATOMIC resource, and share our new symbolic knowledge graph and commonsense models.

Authors: Peter West, Chandra Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, Yejin Choi