SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases

02/28/2023
by Yanchen Liu, et al.

Recent studies reveal that various biases exist across NLP tasks, and over-reliance on these biases leads to poor generalization and low adversarial robustness. To mitigate dataset biases, previous works have proposed many debiasing techniques that tackle specific biases; these perform well on the corresponding adversarial sets but fail to mitigate other biases. In this paper, we propose a new debiasing method, Sparse Mixture-of-Adapters (SMoA), which can mitigate multiple dataset biases effectively and efficiently. Experiments on Natural Language Inference and Paraphrase Identification tasks demonstrate that SMoA outperforms full fine-tuning, adapter-tuning baselines, and prior strong debiasing methods. Further analysis indicates the interpretability of SMoA: each sub-adapter captures a specific pattern from the training data and specializes in handling the corresponding bias.
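The abstract does not give implementation details, but the core idea of a sparse mixture of adapters can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact design: the layer sizes, the top-1 routing rule, and the bottleneck-adapter form are all assumptions. Each token's hidden state is scored by a router, which dispatches it to exactly one of several small bottleneck adapters; only that sub-adapter's parameters are used for the token.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's configuration).
HIDDEN, BOTTLENECK, N_ADAPTERS = 16, 4, 3

# Each sub-adapter is a standard bottleneck adapter:
# down-projection, ReLU, up-projection, with a residual connection.
adapters = [
    (rng.normal(scale=0.1, size=(HIDDEN, BOTTLENECK)),   # W_down
     rng.normal(scale=0.1, size=(BOTTLENECK, HIDDEN)))   # W_up
    for _ in range(N_ADAPTERS)
]

# Router: a linear layer scoring each sub-adapter for every token.
W_router = rng.normal(scale=0.1, size=(HIDDEN, N_ADAPTERS))

def smoa_layer(h):
    """Sparse mixture of adapters: route each token to its top-1 sub-adapter."""
    scores = h @ W_router                 # (n_tokens, N_ADAPTERS)
    chosen = scores.argmax(axis=-1)       # sparse top-1 routing per token
    out = np.empty_like(h)
    for i, idx in enumerate(chosen):
        w_down, w_up = adapters[idx]
        # Residual bottleneck adapter applied by the selected expert only.
        out[i] = h[i] + np.maximum(h[i] @ w_down, 0.0) @ w_up
    return out, chosen

tokens = rng.normal(size=(5, HIDDEN))
out, routing = smoa_layer(tokens)
print(out.shape, routing)
```

Because routing is sparse (one sub-adapter per token), each sub-adapter only ever sees the subset of inputs the router sends it, which is what lets it specialize on a particular pattern or bias during training.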

Related research
10/08/2020

An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference

The prior work on natural language inference (NLI) debiasing mainly targ...
03/05/2020

HypoNLI: Exploring the Artificial Patterns of Hypothesis-only Bias in Natural Language Inference

Many recent studies have shown that for models trained on datasets for n...
10/07/2020

Improving QA Generalization by Concurrent Modeling of Multiple Biases

Existing NLP datasets contain various biases that models can easily expl...
08/29/2021

Behind the Scenes: An Exploration of Trigger Biases Problem in Few-Shot Event Classification

Few-Shot Event Classification (FSEC) aims at developing a model for even...
02/10/2020

Adversarial Filters of Dataset Biases

Large neural models have demonstrated human-level performance on languag...
09/13/2019

Simple but effective techniques to reduce biases

There have been several studies recently showing that strong natural lan...
10/14/2022

A Survey of Parameters Associated with the Quality of Benchmarks in NLP

Several benchmarks have been built with heavy investment in resources to...
