Learning to Infer from Unlabeled Data: A Semi-supervised Learning Approach for Robust Natural Language Inference

11/05/2022
by Mobashir Sadat, et al.

Natural Language Inference (NLI), or Recognizing Textual Entailment (RTE), aims to predict the relation between a pair of sentences (premise and hypothesis) as entailment, contradiction, or semantic independence. Although deep learning models have shown promising performance for NLI in recent years, they rely on large-scale, expensive human-annotated datasets. Semi-supervised learning (SSL) is a popular technique for reducing reliance on human annotation by leveraging unlabeled data for training. SSL has been substantially successful on single-sentence classification tasks, where the challenge in using unlabeled data is to assign "good enough" pseudo-labels. For NLI, however, the nature of unlabeled data is more complex: one of the sentences in the pair (usually the hypothesis) is missing along with the class label and would require human annotation, which makes SSL for NLI more challenging. In this paper, we propose a novel way to incorporate unlabeled data in SSL for NLI: we use a conditional language model, BART, to generate hypotheses for the unlabeled sentences (used as premises). Our experiments show that our SSL framework successfully exploits unlabeled data and substantially improves performance on four NLI datasets in low-resource settings. We release our code at: https://github.com/msadat3/SSL_for_NLI.
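The SSL loop the abstract builds on is standard self-training with pseudo-labels: train on labeled data, label the unlabeled pool, keep only confident predictions, and retrain. The sketch below shows that loop on a toy 1-D nearest-centroid classifier; all names and the toy model are illustrative assumptions, not the paper's implementation. In the actual framework, each unlabeled premise would first be completed by generating a hypothesis with BART before pseudo-labeling.

```python
# Minimal self-training (pseudo-labeling) sketch of the generic SSL loop.
# The nearest-centroid "model" is a stand-in for a real NLI classifier.

def train(examples):
    """Fit a 1-D nearest-centroid classifier: one centroid per label."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Return (label, confidence); confidence decays with centroid distance."""
    label = min(model, key=lambda y: abs(x - model[y]))
    confidence = 1.0 / (1.0 + abs(x - model[label]))
    return label, confidence

def self_train(labeled, unlabeled, threshold=0.5, rounds=3):
    """Iteratively absorb confidently pseudo-labeled examples into training."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = train(labeled)
        confident, rest = [], []
        for x in pool:
            y, conf = predict(model, x)
            (confident if conf >= threshold else rest).append((x, y))
        if not confident:  # nothing new crossed the threshold; stop early
            break
        labeled += confident               # add pseudo-labeled examples
        pool = [x for x, _ in rest]        # keep low-confidence ones pooled
    return train(labeled)

labeled = [(-2.0, "neg"), (2.0, "pos")]
unlabeled = [-3.0, -1.5, 1.2, 3.5]
model = self_train(labeled, unlabeled)
print(predict(model, -2.5)[0])  # → neg
```

The confidence threshold is the usual knob in such loops: too low and noisy pseudo-labels pollute training, too high and the unlabeled pool is never used.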


Related research:

- 05/11/2022: DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision
- 02/07/2022: Diversify and Disambiguate: Learning From Underspecified Data
- 10/13/2018: Mixture of Expert/Imitator Networks: Scalable Semi-supervised Learning Framework
- 10/30/2022: Generate, Discriminate and Contrast: A Semi-Supervised Sentence Representation Learning Framework
- 07/31/2023: Predicting masked tokens in stochastic locations improves masked image modeling
- 04/26/2020: Dual Learning for Semi-Supervised Natural Language Understanding
- 03/02/2020: Learning from Positive and Unlabeled Data by Identifying the Annotation Process
