
Complementary Explanations for Effective In-Context Learning

by Xi Ye et al.
The University of Texas at Austin

Large language models (LLMs) have exhibited a remarkable capacity to learn from explanations in prompts, yet little is understood about what makes explanations effective for in-context learning. This work aims to better understand the mechanisms by which explanations are used in context. We first study the impact of two distinct factors on prompting performance: the computation trace (the way a solution is decomposed) and the natural language used to express the prompt. By perturbing explanations on three controlled tasks, we show that both factors contribute to the effectiveness of explanations, indicating that LLMs faithfully follow explanations to some extent. We then study how to form maximally effective sets of explanations for a given test query. We find that LLMs can benefit from the complementarity of an explanation set: they are able to fuse the different reasoning strategies specified by individual exemplars in the prompt. Including relevant exemplars also yields more effective prompts. We therefore propose a maximal-marginal-relevance-based exemplar selection approach that constructs exemplar sets that are both relevant and complementary, improving in-context learning performance across three real-world tasks on multiple LLMs.
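The maximal-marginal-relevance (MMR) style of exemplar selection mentioned in the abstract can be sketched as follows. This is an illustrative sketch of generic MMR over vector similarities, not the paper's exact formulation: the embedding representations, the cosine similarity choice, and the trade-off parameter `lam` are all assumptions.

```python
import numpy as np

def mmr_select(query_vec, exemplar_vecs, k, lam=0.5):
    """Greedily pick k exemplars, trading off relevance to the test
    query against redundancy with exemplars already selected
    (maximal marginal relevance). Returns indices into exemplar_vecs."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = list(range(len(exemplar_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def mmr_score(i):
            # Relevance: similarity between the candidate and the query.
            relevance = cos(query_vec, exemplar_vecs[i])
            # Redundancy: worst-case overlap with the current selection.
            redundancy = max((cos(exemplar_vecs[i], exemplar_vecs[j])
                              for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lam` near 1 the selection degenerates to nearest-neighbor retrieval; lowering it pushes the set toward diversity, which is the "complementary" half of the trade-off the paper targets.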

