Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction

05/24/2023
by Wanyun Cui, et al.

Previous research has demonstrated that natural language explanations provide valuable inductive biases that guide models, thereby improving generalization and data efficiency. In this paper, we systematically examine the effectiveness of these explanations. Remarkably, we find that corrupted explanations with diminished inductive biases can achieve competitive or even superior performance compared to the original explanations. Our findings offer new insights into the characteristics of natural language explanations: (1) The impact of explanations varies across training styles and datasets, with the previously reported improvements observed primarily in frozen language models. (2) While prior work attributed the effect of explanations solely to their inductive biases, we show that the effect persists even when the explanations are completely corrupted, suggesting that the main effect stems from the additional context space they provide. (3) Using the proposed automatically perturbed context, we attain results comparable to annotated explanations while being 20-30 times more computationally efficient.
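The central manipulation described above is corrupting an explanation so that its inductive bias is destroyed while its length, and hence the extra context space it gives the model, is preserved. Below is a minimal sketch of one such perturbation, random token shuffling; the function name and the choice of shuffling are illustrative assumptions, since the paper's exact perturbation scheme is not specified in this abstract.

```python
import random

def corrupt_explanation(explanation: str, seed: int = 0) -> str:
    """Randomly shuffle the tokens of an explanation.

    Shuffling destroys the semantic inductive bias the explanation
    carries while keeping its length, i.e., the amount of extra
    context space handed to the model, unchanged. Token shuffling
    is one illustrative corruption; the paper's actual scheme is
    not given in the abstract above.
    """
    tokens = explanation.split()
    random.Random(seed).shuffle(tokens)
    return " ".join(tokens)

# Hypothetical relation-extraction explanation:
explanation = ("The phrase 'was born in' linking a person to a city "
               "signals the place-of-birth relation.")
print(corrupt_explanation(explanation))
```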


research
04/16/2021

Natural Language Inference with a Human Touch: Using Human Explanations to Guide Model Attention

Natural Language Inference (NLI) models are known to learn from biases a...
research
05/05/2020

ExpBERT: Representation Engineering with Natural Language Explanations

Suppose we want to specify the inductive bias that married couples typic...
research
10/12/2021

Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI

Although neural models have shown strong performance in datasets such as...
research
11/25/2022

Complementary Explanations for Effective In-Context Learning

Large language models (LLMs) have exhibited remarkable capabilities in l...
research
11/04/2019

Learning to Annotate: Modularizing Data Augmentation for Text Classifiers with Natural Language Explanations

Deep neural networks usually require massive labeled data, which restric...
research
07/25/2021

Hybrid Autoregressive Solver for Scalable Abductive Natural Language Inference

Regenerating natural language explanations for science questions is a ch...
