How Do In-Context Examples Affect Compositional Generalization?

05/08/2023
by   Shengnan An, et al.

Compositional generalization, understanding unseen combinations of seen primitives, is an essential reasoning capability of human intelligence. The AI community has mainly studied this capability by fine-tuning neural networks on large numbers of training samples, and it remains unclear whether, and how, in-context learning, the prevailing few-shot paradigm built on large language models, exhibits compositional generalization. In this paper, we present CoFe, a test suite for investigating in-context compositional generalization. We find that compositional generalization performance is easily affected by the selection of in-context examples, which raises the question of what the key factors are for making good in-context examples for compositional generalization. We study three potential factors: similarity, diversity, and complexity. Our systematic experiments indicate that in-context examples should be structurally similar to the test case, diverse from each other, and individually simple. We also observe two strong limitations: in-context compositional generalization on fictional words is much weaker than on commonly used ones, and the in-context examples must still cover the required linguistic structures even though the backbone model has been pre-trained on large corpora. We hope our analysis facilitates the understanding and use of the in-context learning paradigm.
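The three factors above suggest a simple greedy selection strategy: prefer candidates that are structurally similar to the test case, different from examples already chosen, and individually short. The sketch below is purely illustrative and is not the paper's method; the token-overlap proxy for structural similarity and all function names are assumptions.

```python
# Hedged sketch: rank candidate in-context examples by the three factors the
# paper studies -- similarity to the test case, diversity from already-chosen
# examples, and individual simplicity. Token overlap is a crude stand-in for
# real structural similarity.

def structural_similarity(example: str, test_case: str) -> float:
    """Jaccard overlap of word sets, used as a rough similarity proxy."""
    a, b = set(example.split()), set(test_case.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def select_examples(candidates, test_case, k=2):
    """Greedily pick k in-context examples that score well on all three factors."""
    chosen, pool = [], list(candidates)
    while pool and len(chosen) < k:
        def score(c):
            sim = structural_similarity(c, test_case)          # similar to test case
            div = min((1 - structural_similarity(c, s)         # diverse from chosen
                       for s in chosen), default=1.0)
            simplicity = 1.0 / (1 + len(c.split()))            # individually simple
            return sim + div + simplicity
        best = max(pool, key=score)
        chosen.append(best)
        pool.remove(best)
    return chosen
```

For instance, given candidates like "jump twice", "walk twice", and "look around left" and the test case "run twice", the greedy loop favors the two short, structurally matching examples over the unrelated one.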

Related research

- Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models (08/01/2023). We consider the problem of eliciting compositional generalization capabi...

- Prompting Language-Informed Distribution for Compositional Zero-Shot Learning (05/23/2023). The compositional zero-shot learning (CZSL) task aims to recognize unsee...

- Defending Compositionality in Emergent Languages (06/09/2022). Compositionality has traditionally been understood as a major factor in ...

- Im-Promptu: In-Context Composition from Image Prompts (05/26/2023). Large language models are few-shot learners that can solve diverse tasks...

- Emergent Systematic Generalization in a Situated Agent (10/01/2019). The question of whether deep neural networks are good at generalising be...

- Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks (07/19/2018). Systematic compositionality is the ability to recombine meaningful units...

- Learning Transductions to Test Systematic Compositionality (08/17/2022). Recombining known primitive concepts into larger novel combinations is a...
