DREAM: Uncovering Mental Models behind Language Models

12/16/2021
by Yuling Gu, et al.

To what extent do language models (LMs) build "mental models" of a scene when answering situated questions (e.g., questions about a specific ethical dilemma)? While cognitive science has shown that mental models play a fundamental role in human problem-solving, it is unclear whether the high question-answering performance of existing LMs is backed by similar model building, and if not, whether that can explain their well-known catastrophic failures. We observed that Macaw, an existing T5-based LM, when probed provides somewhat useful but inadequate mental models for situational questions (estimated accuracy = 43%). We propose DREAM, a model that takes a situational question as input and produces a mental model elaborating the situation, without any additional task-specific training data for mental models. It inherits its social commonsense through distant supervision from existing NLP resources. Our analysis shows that DREAM can produce significantly better mental models (estimated accuracy = 67%, usefulness = 37%). Finally, the mental models generated by DREAM can be used as additional context for situational QA tasks. This additional context improves the answer accuracy of a Macaw zero-shot model by at least +1%.
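Concretely, the "mental model as additional context" idea amounts to generating a scene elaboration first and then prepending it to the situational question before it is sent to the QA model. The sketch below illustrates only that prompt-composition step; the slot-style input format and the example elaboration are illustrative assumptions, not the exact DREAM/Macaw interface:

```python
# Sketch of the second stage of the pipeline described above: a generated
# "mental model" (scene elaboration) is prepended as extra context to the
# situational question before querying a QA model.
# The slot format and the elaboration text are illustrative assumptions.

def build_augmented_input(question: str, mental_model: str) -> str:
    """Prepend the generated scene elaboration to the situational question."""
    return f"$context$ = {mental_model} ; $question$ = {question}"

# Hypothetical elaboration a DREAM-style model might produce for the scene.
elaboration = (
    "motivation: the passerby wants to help. "
    "emotion: the bystander feels anxious. "
    "likely consequence: the person may be seriously hurt."
)
question = "Is it okay to walk past someone who collapsed on the street?"

augmented = build_augmented_input(question, elaboration)
print(augmented)
```

In a full pipeline, both the elaboration and the final answer would come from seq2seq generation calls (e.g., T5-style models), with the augmented string replacing the bare question as the QA model's input.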


