Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models

08/01/2023
by Jiaao Chen et al.

We consider the problem of eliciting compositional generalization capabilities in large language models (LLMs) with a novel type of prompting strategy. Compositional generalization empowers LLMs to solve problems that are harder than the ones they have seen (i.e., easy-to-hard generalization), which is a critical reasoning capability of human-like intelligence. However, even the current state-of-the-art LLMs still struggle with this form of reasoning. To bridge this gap, we propose skills-in-context (SKiC) prompting, which instructs LLMs in how to compose basic skills to solve more complex problems. We find that it is crucial to demonstrate both the skills and the compositional examples within the same prompting context. With as few as two exemplars, SKiC prompting initiates strong synergies between skills and their composition capabilities. Notably, it empowers LLMs to solve unseen problems that require innovative skill compositions, achieving near-perfect generalization on a broad range of challenging compositionality tasks. Intriguingly, SKiC prompting unlocks the latent potential of LLMs, enabling them to leverage pre-existing internal skills acquired during earlier pretraining and alignment stages, even when these skills are not explicitly presented in the prompting context. As a result, LLMs can solve unseen complex problems by activating and composing these internal competencies.
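To make the idea concrete, the sketch below assembles a SKiC-style prompt for a toy last-letter-concatenation task: the basic skills and a couple of compositional exemplars are placed in the same context, followed by a new, harder question. The task, skill names, exemplar wording, and the build_skic_prompt helper are illustrative assumptions for this sketch, not the authors' released prompts.

```python
# Minimal sketch of assembling a skills-in-context (SKiC) style prompt.
# All skill names and exemplars here are hypothetical placeholders.

BASIC_SKILLS = """\
Skill 1 (split): break a word into its individual letters.
  Example: "cat" -> c, a, t
Skill 2 (last): take the last item of a list of letters.
  Example: c, a, t -> t
Skill 3 (concat): join several letters into one string.
  Example: t, g -> "tg"
"""

# Exemplars that demonstrate composing the skills above, written out
# step by step in the same context as the skill descriptions.
COMPOSITION_EXEMPLARS = """\
Question: Take the last letters of "dog cat" and concatenate them.
Reasoning: split "dog" -> d, o, g; last -> g.
           split "cat" -> c, a, t; last -> t.
           concat g, t -> "gt".
Answer: gt

Question: Take the last letters of "sun moon star" and concatenate them.
Reasoning: split "sun" -> s, u, n; last -> n.
           split "moon" -> m, o, o, n; last -> n.
           split "star" -> s, t, a, r; last -> r.
           concat n, n, r -> "nnr".
Answer: nnr
"""


def build_skic_prompt(question: str) -> str:
    """Put the skills and the compositional exemplars in one context,
    then append the new (potentially harder) question."""
    return (
        "You are given basic skills and examples of composing them.\n\n"
        + BASIC_SKILLS
        + "\n"
        + COMPOSITION_EXEMPLARS
        + "\nQuestion: "
        + question
        + "\nReasoning:"
    )


if __name__ == "__main__":
    # A harder instance than the exemplars (more words), probing
    # easy-to-hard generalization; the prompt would then be sent to an LLM.
    print(build_skic_prompt(
        'Take the last letters of "red green blue pink gold" and concatenate them.'
    ))
```

The key design point reflected in the sketch is that the skill descriptions and the composition exemplars share a single prompting context, which is what the paper identifies as crucial for eliciting the skill-composition behavior.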

Related research

ALERT: Adapting Language Models to Reasoning Tasks (12/16/2022)
How Do In-Context Examples Affect Compositional Generalization? (05/08/2023)
Teaching Algorithmic Reasoning via In-context Learning (11/15/2022)
Im-Promptu: In-Context Composition from Image Prompts (05/26/2023)
Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (05/21/2022)
A Theory for Emergence of Complex Skills in Language Models (07/29/2023)
Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers (10/20/2022)
