GAPX: Generalized Autoregressive Paraphrase-Identification X

10/05/2022
by   Yifei Zhou, et al.

Paraphrase identification is a fundamental task in Natural Language Processing. While much progress has been made in the field, the performance of many state-of-the-art models often suffers from distribution shift at inference time. We verify that a major source of this performance drop is bias introduced by negative examples. To overcome these biases, we propose in this paper to train two separate models: one that uses only the positive pairs and another that uses only the negative pairs. This gives us the option of deciding how much to rely on the negative model, and we introduce a perplexity-based out-of-distribution metric that, as we show, can effectively and automatically determine how much weight the negative model should receive during inference. We support our findings with strong empirical results.
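The gating idea in the abstract can be sketched in a few lines. The function below is an illustrative assumption, not the paper's exact formulation: it takes scores from a hypothetical positive-pair model and negative-pair model plus a perplexity value measuring how out-of-distribution the input looks, and blends the two scores with a sigmoid gate (the threshold and temperature parameters are invented for the example).

```python
import math

def combine_scores(pos_score, neg_score, perplexity,
                   ppl_threshold=50.0, temperature=10.0):
    """Blend positive-pair and negative-pair model scores.

    Hypothetical sketch: the sigmoid gating scheme and all parameter
    names are assumptions, not the paper's actual method.
    """
    # Map perplexity to a weight in (0, 1): high perplexity suggests the
    # input is out-of-distribution, so the biased negative model is
    # down-weighted and the positive model dominates.
    w_neg = 1.0 / (1.0 + math.exp((perplexity - ppl_threshold) / temperature))
    return w_neg * neg_score + (1.0 - w_neg) * pos_score
```

For a low-perplexity (in-distribution) input the gate keeps the negative model's opinion; for a very high-perplexity input the combined score collapses to the positive model's score alone.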


