Scaling In-Context Demonstrations with Structured Attention

07/05/2023
by Tianle Cai, et al.

The recent surge of large language models (LLMs) highlights their ability to perform in-context learning, i.e., to "learn" a task from a few demonstrations provided in the context without any parameter updates. However, their in-context learning capability is limited by the model architecture: 1) the use of demonstrations is constrained by the maximum sequence length imposed by positional embeddings; 2) the quadratic complexity of attention makes it inefficient to use more demonstrations; 3) LLMs are known to be sensitive to the order of the demonstrations. In this work, we tackle these challenges with a better architectural design for in-context learning. We propose SAICL (Structured Attention for In-Context Learning), which replaces full attention with a structured attention mechanism designed for in-context learning. It removes unnecessary dependencies between individual demonstrations and makes the model invariant to the permutation of demonstrations. We evaluate SAICL in a meta-training framework and show that it achieves comparable or better performance than full attention while obtaining up to a 3.4x inference speed-up. SAICL also consistently outperforms a strong Fusion-in-Decoder (FiD) baseline that processes each demonstration independently. Finally, thanks to its linear complexity, we demonstrate that SAICL easily scales to hundreds of demonstrations, with continued performance gains as more demonstrations are added.
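The abstract's architectural idea can be pictured as a block-structured attention mask. The sketch below is a minimal, hypothetical illustration, not the authors' implementation: the function name and mask layout are assumptions, and the paper's meta-training setup, position handling, and kernels are not reproduced. Each demonstration attends only to its own tokens, so demonstrations are mutually independent and their order does not matter, while the test input attends to every demonstration; the attention cost therefore grows linearly with the number of demonstrations rather than quadratically.

```python
import numpy as np

def structured_attention_mask(demo_lengths, query_length):
    """Build a block-structured attention mask (True = may attend).

    demo_lengths: token count of each demonstration.
    query_length: token count of the test input appended after the demonstrations.
    """
    total = sum(demo_lengths) + query_length
    mask = np.zeros((total, total), dtype=bool)

    offset = 0
    for length in demo_lengths:
        # Diagonal block: tokens of one demonstration see only that demonstration,
        # so demonstrations do not interact and their permutation is irrelevant.
        mask[offset:offset + length, offset:offset + length] = True
        offset += length

    # Query rows: the test input attends to all demonstrations and to itself.
    mask[offset:, :] = True
    return mask

if __name__ == "__main__":
    # Three demonstrations of 3, 2, and 4 tokens, plus a 2-token query.
    print(structured_attention_mask([3, 2, 4], query_length=2).astype(int))
```

Under this kind of mask, adding another demonstration only adds one more diagonal block and a few query rows, which is why scaling to hundreds of demonstrations remains tractable.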


Related research

02/25/2022  Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

12/13/2022  Structured Prompting: Scaling In-Context Learning to 1,000 Examples

05/22/2023  Iterative Forward Tuning Boosts In-context Learning in Language Models

05/23/2023  Dr.ICL: Demonstration-Retrieved In-context Learning

05/18/2023  Efficient Prompting via Dynamic In-Context Learning

05/24/2023  Coverage-based Example Selection for In-Context Learning

05/24/2023  Prompt Optimization of Large Language Model for Interactive Tasks without Gradient and Demonstrations
