Self-training with Few-shot Rationalization: Teacher Explanations Aid Student in Few-shot NLU

09/17/2021
by Meghana Moorthy Bhat, et al.

While pre-trained language models have obtained state-of-the-art performance for several natural language understanding tasks, they are quite opaque in terms of their decision-making process. While some recent works focus on rationalizing neural predictions by highlighting salient concepts in the text as justifications or rationales, they rely on thousands of labeled training examples for both task labels and annotated rationales for every instance. Such extensive large-scale annotations are infeasible to obtain for many tasks. To this end, we develop a multi-task teacher-student framework based on self-training language models with limited task-specific labels and rationales, and judicious sample selection to learn from informative pseudo-labeled examples. We study several characteristics of what constitutes a good rationale and demonstrate that the neural model performance can be significantly improved by making it aware of its rationalized predictions, particularly in low-resource settings. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our approach.
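To make the self-training setup concrete, the sketch below illustrates the generic teacher-student loop the abstract refers to: a teacher trained on the few labeled examples pseudo-labels the unlabeled pool, only confidently labeled examples are kept, and a student is retrained on the augmented set. This is a minimal sketch under assumptions: the function names (`self_train`, `fit`), the confidence-margin selection rule, and the round/threshold parameters are illustrative choices, not the paper's exact multi-task, rationale-aware procedure.

```python
import random
from typing import Callable, List, Tuple

# Minimal sketch of a teacher-student self-training loop with confidence-based
# sample selection. `fit` is any training routine that returns a predictor
# mapping a text to a class-probability list; the margin heuristic stands in
# for the paper's "judicious sample selection" and is an assumption here.
def self_train(
    labeled: List[Tuple[str, int]],     # few-shot labeled (text, label) pairs
    unlabeled: List[str],               # large pool of unlabeled texts
    fit: Callable[[List[Tuple[str, int]]], Callable[[str], List[float]]],
    rounds: int = 3,
    per_round: int = 100,
    margin_threshold: float = 0.3,
):
    train_set = list(labeled)
    for _ in range(rounds):
        teacher = fit(train_set)                     # teacher for this round
        scored = []
        for text in unlabeled:
            probs = teacher(text)                    # class distribution
            top2 = sorted(probs, reverse=True)[:2]
            margin = top2[0] - (top2[1] if len(top2) > 1 else 0.0)
            scored.append((margin, text, probs.index(max(probs))))
        # keep only confidently pseudo-labeled examples (sample selection)
        confident = [(t, y) for m, t, y in scored if m >= margin_threshold]
        random.shuffle(confident)
        train_set += confident[:per_round]
        # the model refit on the augmented set acts as next round's teacher
    return fit(train_set)                            # final student model
```

In the paper's actual framework the student additionally receives the teacher's rationales through a multi-task objective; the sketch above shows only the pseudo-labeling and selection skeleton.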

Related research

06/05/2023 · Few Shot Rationale Generation using Self-Training with Dual Teachers
Self-rationalizing models that also generate a free-text explanation for...

05/09/2022 · Attribution-based Task-specific Pruning for Multi-task Language Models
Multi-task language models show outstanding performance for various natu...

04/04/2023 · Sociocultural knowledge is needed for selection of shots in hate speech detection tasks
We introduce HATELEXICON, a lexicon of slurs and targets of hate speech ...

11/16/2021 · Few-Shot Self-Rationalization with Natural Language Prompts
Self-rationalization models that predict task labels and generate free-t...

10/07/2020 · Adaptive Self-training for Few-shot Neural Sequence Labeling
Neural sequence labeling is an important technique employed for many Nat...

11/01/2020 · MixKD: Towards Efficient Distillation of Large-scale Language Models
Large-scale language models have recently demonstrated impressive empiri...

09/17/2023 · Mitigating Shortcuts in Language Models with Soft Label Encoding
Recent research has shown that large language models rely on spurious co...
