Unpacking Large Language Models with Conceptual Consistency

09/29/2022
by   Pritish Sahu, et al.

If a Large Language Model (LLM) answers "yes" to the question "Are mountains tall?" then does it know what a mountain is? Can you rely on it responding correctly or incorrectly to other questions about mountains? The success of LLMs indicates they are increasingly able to answer queries like these accurately, but that ability does not necessarily imply a general understanding of concepts relevant to the anchor query. We propose conceptual consistency to measure an LLM's understanding of relevant concepts. This novel metric characterizes a model by how consistently it responds to queries about conceptually relevant background knowledge. To compute it, we extract background knowledge by traversing paths between concepts in a knowledge base and then try to predict the model's response to the anchor query from that background knowledge. We investigate the performance of current LLMs in a commonsense reasoning setting using the CSQA dataset and the ConceptNet knowledge base. While conceptual consistency, like other metrics, does increase with the scale of the LLM used, we find that popular models do not necessarily have high conceptual consistency. Our analysis also shows significant variation in conceptual consistency across different kinds of relations, concepts, and prompts. This serves as a step toward building models that humans can apply a theory of mind to, and thus interact with intuitively.
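The idea can be sketched in code. The snippet below is a simplified, illustrative stand-in for the abstract's procedure, not the paper's implementation: it verbalizes ConceptNet-style triples into background yes/no queries for an anchor concept and scores how often a model's answers agree with the knowledge base. All names (`toy_model`, the triples, the templates) are hypothetical, and the full metric additionally predicts the anchor response from these background responses.

```python
from typing import Callable

# Illustrative ConceptNet-style triples for the anchor concept "mountain";
# the final field is the knowledge-base label (True = the relation holds).
BACKGROUND = [
    ("mountain", "IsA", "landform", True),
    ("mountain", "HasProperty", "tall", True),
    ("mountain", "IsA", "body of water", False),
]

def verbalize(head: str, rel: str, tail: str) -> str:
    """Turn a knowledge-base triple into a yes/no question (one simple template per relation)."""
    templates = {
        "IsA": f"Is a {head} a {tail}?",
        "HasProperty": f"Is a {head} {tail}?",
    }
    return templates[rel]

def background_agreement(model: Callable[[str], bool]) -> float:
    """Fraction of background queries the model answers in line with the knowledge base."""
    correct = [model(verbalize(h, r, t)) == label for h, r, t, label in BACKGROUND]
    return sum(correct) / len(correct)

def toy_model(question: str) -> bool:
    # Stand-in for an LLM: answers "yes" unless the question mentions water.
    return "water" not in question

score = background_agreement(toy_model)
print(score)  # 1.0 for this stub: all three answers match the KB labels
```

A real pipeline would replace `toy_model` with calls to an actual LLM and would fit a predictor from the background responses to the anchor response, which is where the "conceptual consistency" score proper comes from.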

