Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations

by Yordan Yordanov, et al.

Recently, there has been an increasing interest in models that generate natural language explanations (NLEs) for their decisions. However, training a model to provide NLEs requires the acquisition of task-specific NLEs, which is time- and resource-consuming. A potential solution is the out-of-domain transfer of NLEs from a domain with a large number of NLEs to a domain with scarce NLEs but potentially a large number of labels, via few-shot transfer learning. In this work, we introduce three vanilla approaches for few-shot transfer learning of NLEs for the case of few NLEs but abundant labels, along with an adaptation of an existing vanilla fine-tuning approach. We transfer explainability from the natural language inference domain, where a large dataset of human-written NLEs exists (e-SNLI), to the domains of (1) hard cases of pronoun resolution, where we introduce a small dataset of NLEs on top of the WinoGrande dataset (small-e-WinoGrande), and (2) commonsense validation (ComVE). Our results demonstrate that the transfer of NLEs outperforms the single-task methods, and establish the best strategies out of the four identified training regimes. We also investigate the scalability of the best methods, both in terms of training data and model size.
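The abstract describes casting both the source task (e-SNLI) and the target tasks as label-plus-explanation generation so that explainability can be transferred via fine-tuning. As a minimal sketch of how such examples are commonly serialized into text-to-text pairs for a seq2seq model, the templates and function names below are illustrative assumptions, not the paper's exact format:

```python
# Hypothetical text-to-text formatting for joint label prediction and NLE
# generation. The prompt templates are illustrative assumptions, not the
# exact format used in the paper.

def format_esnli_example(premise, hypothesis, label, explanation):
    """Format an e-SNLI example as an (input, target) text pair."""
    source = f"explain nli premise: {premise} hypothesis: {hypothesis}"
    target = f"{label} because {explanation}"
    return source, target

def format_winogrande_example(sentence, option1, option2, answer,
                              explanation=None):
    """Format a WinoGrande example; an explanation is available only for
    the few examples with NLEs (e.g. small-e-WinoGrande), while the
    abundant label-only examples produce a bare answer as the target."""
    source = f"explain coref sentence: {sentence} options: {option1} or {option2}"
    target = answer if explanation is None else f"{answer} because {explanation}"
    return source, target
```

Under this sketch, a model would first be fine-tuned on the many formatted e-SNLI pairs, then on the target domain's few NLE pairs together with its abundant label-only pairs.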


Code Repositories


These are the materials for the paper "Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations"