Assessing Out-of-Domain Language Model Performance from Few Examples

10/13/2022
by Prasann Singhal, et al.

While pretrained language models have exhibited impressive generalization capabilities, they still behave unpredictably under certain domain shifts. In particular, a model may learn a reasoning process on in-domain training data that does not hold for out-of-domain test data. We address the task of predicting out-of-domain (OOD) performance in a few-shot fashion: given a few target-domain examples and a set of models with similar training performance, can we understand how these models will perform on OOD test data? We benchmark the performance on this task when looking at model accuracy on the few-shot examples, then investigate how to incorporate analysis of the models' behavior using feature attributions to better tackle this problem. Specifically, we explore a set of "factors" designed to reveal model agreement with certain pathological heuristics that may indicate worse generalization capabilities. On textual entailment, paraphrase recognition, and a synthetic classification task, we show that attribution-based factors can help rank relative model OOD performance. However, accuracy on a few-shot test set is a surprisingly strong baseline, particularly when the system designer does not have in-depth prior knowledge about the domain shift.
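To make the setup concrete, below is a minimal Python sketch of how one might rank a set of candidate models using the two signals discussed in the abstract: accuracy on a handful of labeled target-domain examples, and agreement with a pathological heuristic as a stand-in for an attribution-based "factor." The predict method, the example format, and the heuristic callable are assumptions for illustration only, not the paper's actual interface or factor definitions.

    # Minimal sketch: ranking candidate models for OOD suitability from a few
    # labeled target-domain examples. Assumes each model exposes a hypothetical
    # .predict(list_of_texts) -> list_of_labels method; the heuristic-agreement
    # factor shown here is illustrative, not the paper's exact formulation.

    def few_shot_accuracy(model, examples):
        """Accuracy of `model` on a small labeled target-domain sample."""
        preds = model.predict([x["text"] for x in examples])
        golds = [x["label"] for x in examples]
        return sum(p == g for p, g in zip(preds, golds)) / len(examples)

    def heuristic_agreement(model, examples, heuristic):
        """Fraction of few-shot examples where the model's prediction matches
        a pathological heuristic (e.g. predicting entailment whenever the
        hypothesis lexically overlaps the premise)."""
        preds = model.predict([x["text"] for x in examples])
        return sum(p == heuristic(x) for p, x in zip(preds, examples)) / len(examples)

    def rank_models(models, examples, heuristic=None):
        """Rank models by estimated OOD suitability: higher few-shot accuracy
        first; lower heuristic agreement breaks ties."""
        def score(model):
            acc = few_shot_accuracy(model, examples)
            agree = heuristic_agreement(model, examples, heuristic) if heuristic else 0.0
            return (acc, -agree)
        return sorted(models, key=score, reverse=True)

In this sketch, few-shot accuracy alone reproduces the strong baseline noted in the abstract, while the heuristic-agreement term is only used as a tie-breaker; how the attribution-based factors are actually defined and combined is specified in the paper itself.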

