Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations

06/07/2023
by Lifan Yuan, et al.

This paper reexamines research on out-of-distribution (OOD) robustness in NLP. We find that the distribution shift settings in previous studies commonly lack adequate challenge, hindering accurate evaluation of OOD robustness. To address this, we propose a benchmark construction protocol that ensures both clear differentiation from the training distribution and challenging distribution shifts. We then introduce BOSS, a Benchmark suite for Out-of-distribution robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we conduct a series of experiments on pre-trained language models to analyze and evaluate OOD robustness. First, for vanilla fine-tuning, we examine the relationship between in-distribution (ID) and OOD performance. We identify three typical relationship types that reveal the models' inner learning mechanisms and could potentially facilitate forecasting OOD robustness as ID performance advances. Then, we evaluate 5 classic OOD-robustness methods on BOSS and find that, despite some effectiveness in specific cases, they do not offer significant improvement over vanilla fine-tuning. Further, we evaluate 5 LLMs under various adaptation paradigms and find that when sufficient ID data is available, fine-tuned domain-specific models significantly outperform LLMs on ID examples. For OOD instances, however, LLMs with in-context learning yield better results. Overall, both fine-tuned small models and LLMs still face challenges in effectively addressing downstream tasks. The code is public at <https://github.com/lifan-yuan/OOD_NLP>.
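To make the ID-versus-OOD comparison concrete, below is a minimal sketch of the kind of evaluation the abstract describes: fine-tune a small model on an ID dataset, then measure accuracy on a distribution-shifted test set. This is not the paper's BOSS pipeline; the model and dataset choices (distilbert-base-uncased, amazon_polarity as ID, SST-2 as OOD) are illustrative assumptions only.

```python
# Sketch: estimate the ID-OOD performance gap for a fine-tuned small model.
# Assumptions: sentiment classification, amazon_polarity as the ID dataset,
# GLUE SST-2 as the OOD dataset. These are NOT the BOSS benchmark datasets.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small, domain-specific model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["content"], truncation=True,
                     padding="max_length", max_length=128)

# In-distribution data (illustrative choice): Amazon review polarity.
id_data = load_dataset("amazon_polarity")
id_train = id_data["train"].shuffle(seed=0).select(range(20_000)).map(tokenize, batched=True)
id_test = id_data["test"].select(range(2_000)).map(tokenize, batched=True)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ood_sketch", num_train_epochs=1,
                           per_device_train_batch_size=32, report_to="none"),
    train_dataset=id_train,
    compute_metrics=compute_metrics,
)
trainer.train()

# ID accuracy: test split drawn from the same distribution as training.
print("ID accuracy:", trainer.evaluate(id_test)["eval_accuracy"])

# OOD accuracy: a distribution-shifted sentiment set (illustrative choice).
ood_test = load_dataset("glue", "sst2", split="validation")
ood_test = ood_test.rename_column("sentence", "content").map(tokenize, batched=True)
print("OOD accuracy:", trainer.evaluate(ood_test)["eval_accuracy"])
```

The gap between the two printed accuracies is the kind of quantity a benchmark like BOSS is designed to make meaningful, which is why the paper stresses that the ID and OOD datasets must exhibit genuinely challenging shifts.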


Related research

05/22/2023 · Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection
Out-of-distribution (OOD) detection is a critical task for reliable pred...

09/15/2023 · Bridging Topic, Domain, and Language Shifts: An Evaluation of Comprehensive Out-of-Distribution Scenarios
Language models (LMs) excel in in-distribution (ID) scenarios where trai...

03/06/2023 · Masked Images Are Counterfactual Samples for Robust Fine-tuning
Deep learning models are challenged by the distribution shift between th...

10/12/2022 · Are Sample-Efficient NLP Models More Robust?
Recent work has observed that pre-trained models have higher out-of-dist...

06/03/2023 · Benchmarking Robustness of Adaptation Methods on Pre-trained Vision-Language Models
Various adaptation methods, such as LoRA, prompts, and adapters, have be...

10/11/2022 · A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models
Despite the remarkable success of pre-trained language models (PLMs), th...

09/20/2023 · Are Large Language Models Really Robust to Word-Level Perturbations?
The swift advancement in the scale and capabilities of Large Language Mo...
