Estimating Large Language Model Capabilities without Labeled Test Data

05/24/2023
by Harvey Yiyun Fu, et al.

Large Language Models (LLMs) have exhibited an impressive ability to perform in-context learning (ICL) from only a few examples, but the success of ICL varies widely from task to task. Thus, it is important to quickly determine whether ICL is applicable to a new task, yet directly measuring ICL accuracy is costly precisely when test data is expensive to annotate – the very situations where ICL is most appealing. In this paper, we propose the task of ICL accuracy estimation, in which we predict the accuracy of an LLM when doing in-context learning on a new task given only unlabeled data for that task. To perform ICL accuracy estimation, we propose a method that trains a meta-model using LLM confidence scores as features. We compare our method to several strong accuracy estimation baselines on a new benchmark that covers 4 LLMs and 3 task collections. On average across all 12 settings, the meta-model improves over every baseline and matches the estimation performance of directly evaluating on 40 labeled test examples per task. We encourage future work to improve on our methods and evaluate on our ICL accuracy estimation benchmark to deepen our understanding of when ICL works.
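The approach described in the abstract admits a compact sketch: featurize the LLM's confidence scores on a task's unlabeled examples, then regress measured ICL accuracy on those features over a collection of meta-training tasks where labels are available. The snippet below is a minimal illustration of this idea, not the paper's actual implementation; the specific features (mean/std of top-label probability and predictive entropy), the RandomForestRegressor meta-model, and all function names are assumptions made for illustration.

```python
# Minimal sketch of ICL accuracy estimation with a confidence-based
# meta-model. Feature choices, regressor, and names are illustrative
# assumptions, not the paper's exact method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def confidence_features(label_probs: np.ndarray) -> np.ndarray:
    """Summarize per-example LLM confidence scores into task-level features.

    label_probs: (n_examples, n_labels) array of the LLM's normalized
    probabilities over candidate labels for each unlabeled example.
    """
    max_p = label_probs.max(axis=1)  # top-label confidence per example
    entropy = -(label_probs * np.log(label_probs + 1e-12)).sum(axis=1)
    # Aggregate per-example scores into a fixed-size task-level vector.
    return np.array([max_p.mean(), max_p.std(),
                     entropy.mean(), entropy.std()])

def train_meta_model(task_probs: list[np.ndarray], task_accs: list[float]):
    """Meta-train on tasks where labeled data IS available: pair each
    task's confidence features with its measured ICL accuracy."""
    X = np.stack([confidence_features(p) for p in task_probs])
    y = np.array(task_accs)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model

def estimate_accuracy(model, new_task_probs: np.ndarray) -> float:
    """At deployment: predict accuracy on a new task from unlabeled
    data alone, using only the LLM's confidence scores."""
    return float(model.predict(confidence_features(new_task_probs)[None, :])[0])
```

Note the asymmetry this setup exploits: labeled data is needed only for the meta-training tasks; the new target task contributes nothing but the LLM's label probabilities on unlabeled examples.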

Related research

[05/22/2023] Meta-in-context learning in large language models
Large language models have shown tremendous performance in a variety of ...

[10/13/2022] Assessing Out-of-Domain Language Model Performance from Few Examples
While pretrained language models have exhibited impressive generalizatio...

[11/05/2015] Multinomial Loss on Held-out Data for the Sparse Non-negative Matrix Language Model
We describe Sparse Non-negative Matrix (SNM) language model estimation u...

[10/15/2021] Meta-learning via Language Model In-context Tuning
The goal of meta-learning is to learn to adapt to a new task with only a...

[11/28/2022] CoNAL: Anticipating Outliers with Large Language Models
In many task settings, text classification models are likely to encounte...

[05/23/2023] RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning
Many recent developments in large language models focus on prompting the...

[09/07/2020] Measuring Massive Multitask Language Understanding
We propose a new test to measure a text model's multitask accuracy. The ...
