ExaRanker: Explanation-Augmented Neural Ranker

by Fernando Ferraretto et al.

Recent work has shown that inducing a large language model (LLM) to generate explanations before outputting an answer is an effective strategy for improving performance on a wide range of reasoning tasks. In this work, we show that neural rankers also benefit from explanations. We use LLMs such as GPT-3.5 to augment retrieval datasets with explanations, and we train a sequence-to-sequence ranking model to output both a relevance label and an explanation for a given query-document pair. Our model, dubbed ExaRanker, finetuned on a few thousand examples with synthetic explanations, performs on par with models finetuned on 3x more examples without explanations. Furthermore, the ExaRanker model incurs no additional computational cost during ranking and allows explanations to be requested on demand.
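The setup described above can be sketched as follows. This is a minimal, hypothetical illustration of how explanation-augmented training pairs for a sequence-to-sequence ranker might be constructed; the prompt template, label tokens, and helper names are assumptions for illustration, not the paper's exact format.

```python
# Hypothetical sketch of ExaRanker-style training data construction.
# The target sequence begins with the relevance label, followed by the
# LLM-generated (synthetic) explanation. At inference time, only the
# first (label) token needs to be decoded to score a document, which is
# why ranking incurs no extra cost; the explanation is generated only
# when requested.

def make_training_pair(query: str, document: str,
                       relevant: bool, explanation: str):
    """Build a (source, target) pair for a seq2seq ranking model."""
    source = (f"Is the document relevant to the query? "
              f"Query: {query} Document: {document}")
    label = "true" if relevant else "false"
    target = f"{label}. Explanation: {explanation}"
    return source, target

src, tgt = make_training_pair(
    query="what causes tides",
    document="Tides are caused by the gravitational pull of the moon and sun.",
    relevant=True,
    explanation="The document directly states the cause of tides.",
)
```

Because the label token comes first in the target, the relevance score can be read off the model's first decoding step, while the explanation tokens that follow are only produced on demand.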

