
Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI

by Yangqiaoyu Zhou, et al.
The University of Chicago

Although neural models have shown strong performance on datasets such as SNLI, they lack the ability to generalize out-of-distribution (OOD). In this work, we formulate a few-shot learning setup and examine the effect of natural language explanations on OOD generalization. We leverage the templates in the HANS dataset and construct templated natural language explanations for each template. Although the generated explanations achieve competitive BLEU scores against ground-truth explanations, they fail to improve prediction performance. We further show that the generated explanations often hallucinate information and miss key elements that indicate the label.
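The abstract reports BLEU scores of generated explanations against ground-truth explanations. As a rough illustration only (not the authors' evaluation code, which likely uses a standard toolkit with smoothing), a minimal sentence-level BLEU with uniform n-gram weights and a brevity penalty can be sketched in pure Python:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. No smoothing, so a
    single missing higher-order n-gram drives the score to zero."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = ngrams(cand, n)
        ref_counts = ngrams(ref, n)
        overlap = sum((cand_counts & ref_counts).values())  # clipped matches
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_avg)
```

For example, an explanation identical to the reference scores 1.0, while one sharing no 4-grams scores 0.0; production evaluations typically add smoothing to avoid that hard zero.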

