From chocolate bunny to chocolate crocodile: Do Language Models Understand Noun Compounds?

05/17/2023
by Jordan Coil, et al.

Noun compound interpretation is the task of expressing a noun compound (e.g., chocolate bunny) as a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g., bunny-shaped chocolate). We propose modifications to the data and evaluation setup of the standard task (Hendrickx et al., 2013) and show that GPT-3 solves it almost perfectly. We then investigate the task of noun compound conceptualization, i.e., paraphrasing a novel or rare noun compound; for example, a chocolate crocodile is a crocodile-shaped chocolate. This task requires creativity, commonsense, and the ability to generalize knowledge about similar concepts. While GPT-3's performance is not perfect, it is better than that of humans, likely thanks to its access to vast amounts of knowledge, and because conceptual processing is effortful for people (Connell and Lynott, 2012). Finally, we estimate the extent to which GPT-3 is reasoning about the world versus parroting its training data. We find that GPT-3's outputs often overlap substantially with a large web corpus, but that this parroting strategy is less beneficial for novel noun compounds.
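As a rough illustration of the paraphrasing task, the sketch below queries an instruction-tuned GPT model for a free-text paraphrase of a noun compound. The model name, prompt wording, and few-shot example are assumptions for illustration only, not the paper's exact prompts or evaluation setup.

```python
# Minimal sketch: asking a GPT-style model to paraphrase a noun compound
# so that the relation between the constituent nouns is explicit.
# Prompt wording and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def paraphrase_compound(compound: str, model: str = "gpt-3.5-turbo") -> str:
    """Return a paraphrase that makes the noun-noun relation explicit."""
    prompt = (
        "Paraphrase the noun compound so that the relationship between "
        "the two nouns is explicit.\n"
        "Example: chocolate bunny -> bunny-shaped chocolate\n"
        f"Now paraphrase: {compound} ->"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=20,
        temperature=0.0,  # deterministic output for easier comparison
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # A novel/rare compound, as in the conceptualization setting.
    print(paraphrase_compound("chocolate crocodile"))
```

For a rare compound such as chocolate crocodile, a reasonable completion would be "crocodile-shaped chocolate"; the actual output depends on the model and prompt used.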

