Paper Explained - Does GPT-3 lie? Misinformation and fear-mongering around the TruthfulQA dataset (Full Video Analysis)

A new benchmark paper has created quite an uproar in the community. TruthfulQA is a dataset of 817 questions probing for imitative falsehoods, and its headline finding is that language models become less truthful the larger they get. This surprising, counterintuitive result validates many people's criticisms of large language models, but is it really the correct conclusion?

0:00 - Intro
0:30 - Twitter Paper Announcement
4:10 - Large Language Models are to blame!
5:50 - How was the dataset constructed?
9:25 - The questions are adversarial
12:30 - Are you surprised?!

Paper: [2109.07958] TruthfulQA: Measuring How Models Mimic Human Falsehoods