Complementary Explanations for Effective In-Context Learning

11/25/2022
by Xi Ye et al.

Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts, yet there has been limited understanding of what makes explanations effective for in-context learning. This work aims to better understand the mechanisms by which explanations are used for in-context learning. We first study the impact of two distinct factors on prompting performance when using explanations: the computation trace (the way the solution is decomposed) and the natural language used to express the prompt. By perturbing explanations on three controlled tasks, we show that both factors contribute to the effectiveness of explanations, indicating that LLMs do faithfully follow the explanations to some extent. We further study how to form maximally effective sets of explanations for solving a given test query. We find that LLMs can benefit from the complementarity of the explanation set: they are able to fuse the different reasoning paths specified by individual exemplars in the prompt. Additionally, having relevant exemplars also contributes to more effective prompts. We therefore propose a maximal-marginal-relevance-based exemplar selection approach for constructing exemplar sets that are both relevant and complementary, which successfully improves in-context learning performance across three real-world tasks on multiple LLMs.
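The maximal-marginal-relevance (MMR) idea behind the proposed exemplar selection can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact method: the function name `mmr_select`, the λ trade-off weight, and the assumption of L2-normalized embedding vectors (so dot products are cosine similarities) are all assumptions made here for illustration.

```python
import numpy as np

def mmr_select(query_vec, exemplar_vecs, k=4, lam=0.5):
    """Greedily pick k exemplars by maximal marginal relevance.

    Each step balances relevance to the test query (first term)
    against redundancy with already-selected exemplars (second term).
    Vectors are assumed L2-normalized, so a dot product is cosine
    similarity.
    """
    selected = []
    candidates = list(range(len(exemplar_vecs)))
    rel = exemplar_vecs @ query_vec  # relevance of each exemplar to the query
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            # redundancy: highest similarity to anything already chosen
            red = max((exemplar_vecs[i] @ exemplar_vecs[j] for j in selected),
                      default=0.0)
            score = lam * rel[i] - (1 - lam) * red
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low λ the selection favors diversity: given one exemplar identical to the query, a duplicate of it, and an orthogonal one, the duplicate is skipped in favor of the complementary exemplar.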

Related research

- Explanations from Large Language Models Make Small Reasoners Better (10/13/2022)
- Explanation Selection Using Unlabeled Data for In-Context Learning (02/09/2023)
- ExaRanker: Explanation-Augmented Neural Ranker (01/25/2023)
- OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models (05/19/2023)
- Learning explanations that are hard to vary (09/01/2020)
- Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction (05/24/2023)
- CCGen: Explainable Complementary Concept Generation in E-Commerce (05/19/2023)
