
Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI

10/12/2021
by Yangqiaoyu Zhou, et al.
The University of Chicago

Although neural models have shown strong performance on datasets such as SNLI, they lack the ability to generalize out-of-distribution (OOD). In this work, we formulate a few-shot learning setup and examine the effects of natural language explanations on OOD generalization. We leverage the templates in the HANS dataset and construct templated natural language explanations for each template. Although generated explanations show competitive BLEU scores against ground-truth explanations, they fail to improve prediction performance. We further show that generated explanations often hallucinate information and miss key elements that indicate the label.
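As a concrete illustration of the BLEU comparison mentioned in the abstract, the sketch below scores a generated explanation against a ground-truth explanation with NLTK's sentence-level BLEU. This is not the authors' evaluation code; the explanation strings and the smoothing choice are placeholder assumptions.

```python
# Hedged sketch: scoring generated explanations against ground-truth
# explanations with sentence-level BLEU (not the authors' code; the
# explanation strings below are invented placeholders).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

ground_truth = [
    "The subject of the premise is not the subject of the hypothesis.",
]
generated = [
    "The object of the premise is not the subject of the hypothesis.",
]

smooth = SmoothingFunction().method1  # avoids zero scores on short strings
for ref, hyp in zip(ground_truth, generated):
    score = sentence_bleu(
        [ref.split()],              # tokenized reference explanation(s)
        hyp.split(),                # tokenized generated explanation
        smoothing_function=smooth,
    )
    print(f"BLEU = {score:.3f}")    # high BLEU despite a changed key element
```

As the abstract notes, a generated explanation can reach a competitive BLEU score while still altering or omitting the key element that indicates the label, which is the failure mode this kind of comparison can obscure.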


Related research

04/16/2021  Natural Language Inference with a Human Touch: Using Human Explanations to Guide Model Attention
Natural Language Inference (NLI) models are known to learn from biases a...

10/08/2020  Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
Data collection for natural language (NL) understanding tasks has increa...

05/06/2022  The Unreliability of Explanations in Few-Shot In-Context Learning
How can prompting a large language model like GPT-3 with explanations im...

11/01/2019  What Gets Echoed? Understanding the "Pointers" in Explanations of Persuasive Arguments
Explanations are central to everyday life, and are a topic of growing in...

05/24/2023  Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction
Previous research has demonstrated that natural language explanations pr...

05/29/2023  Faithfulness Tests for Natural Language Explanations
Explanations of neural models aim to reveal a model's decision-making pr...

11/02/2022  XAI-Increment: A Novel Approach Leveraging LIME Explanations for Improved Incremental Learning
Explainability of neural network prediction is essential to understand f...