Fly-Swat or Cannon? Cost-Effective Language Model Choice via Meta-Modeling

08/11/2023
by Marija Sakota et al.

Generative language models (LMs) have become omnipresent across data science. For a wide variety of tasks, inputs can be phrased as natural language prompts for an LM, from whose output the solution can then be extracted. LM performance has consistently increased with model size, but so has the monetary cost of querying the ever-larger models. Importantly, however, not all inputs are equally hard: some require larger LMs to obtain a satisfactory solution, whereas for others smaller LMs suffice. Based on this fact, we design a framework for Cost-Effective Language Model Choice (CELMOC). Given a set of inputs and a set of candidate LMs, CELMOC judiciously assigns each input to an LM predicted to do well on that input according to a so-called meta-model, aiming to achieve high overall performance at low cost. The cost-performance trade-off can be flexibly tuned by the user: options include, among others, maximizing total expected performance (or the number of processed inputs) while staying within a given cost budget, or minimizing total cost while processing all inputs. We evaluate CELMOC on 14 datasets covering five natural language tasks, using four candidate LMs of vastly different size and cost. With CELMOC, we match the performance of the largest available LM while achieving a cost reduction of 63%. Practitioners can thus save large amounts of money without sacrificing performance.
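To make the idea concrete, below is a minimal Python sketch of the "route each input to the cheapest LM predicted to be adequate" strategy. All names, prices, and the toy meta-model are illustrative assumptions, not the paper's actual API or formulation; CELMOC itself frames the assignment as a user-tunable optimization over cost and expected performance, as described in the abstract.

CANDIDATE_LMS = [
    # (name, cost per query): hypothetical models and prices
    ("small-lm", 0.0004),
    ("medium-lm", 0.002),
    ("large-lm", 0.03),
    ("largest-lm", 0.06),
]

def predict_success(lm_name, prompt):
    """Stand-in for the meta-model: predicted probability that lm_name
    answers the prompt satisfactorily. A real meta-model would be a
    trained predictor over features of the input (and the LM)."""
    # Toy heuristic: longer prompts count as "harder"; bigger LMs cope better.
    difficulty = min(len(prompt) / 500.0, 1.0)
    capacity = {"small-lm": 0.40, "medium-lm": 0.60,
                "large-lm": 0.85, "largest-lm": 0.95}[lm_name]
    return max(0.0, capacity - 0.5 * difficulty)

def choose_lm(prompt, threshold=0.7):
    """Pick the cheapest LM whose predicted success clears the threshold;
    fall back to the best-scoring LM if no cheaper one is predicted to suffice."""
    by_cost = sorted(CANDIDATE_LMS, key=lambda lm: lm[1])  # cheapest first
    for name, cost in by_cost:
        if predict_success(name, prompt) >= threshold:
            return name, cost
    return max(by_cost, key=lambda lm: predict_success(lm[0], prompt))

if __name__ == "__main__":
    prompts = [
        "What is 2 + 2?",
        "Summarize the main arguments of this long report, citing sections. " * 5,
    ]
    for p in prompts:
        name, cost = choose_lm(p)
        print(f"{name} (cost {cost} per query) <- {p[:40]!r}")

In practice, the hard part is the meta-model itself, which must be trained to predict per-input LM performance; the assignment step on top of it can then range from a simple threshold rule like the one sketched here to the budget-constrained objectives listed in the abstract.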


Related research

GLaM: Efficient Scaling of Language Models with Mixture-of-Experts (12/13/2021)
Scaling language models with more data, compute and parameters has drive...

Spontaneous Emerging Preference in Two-tower Language Model (10/13/2022)
The ever-growing size of the foundation language model has brought signi...

An Application of Pseudo-Log-Likelihoods to Natural Language Scoring (01/23/2022)
Language models built using semi-supervised machine learning on large co...

Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model (05/22/2023)
Pretrained language models have achieved remarkable success in various n...

Assessing Resource-Performance Trade-off of Natural Language Models using Data Envelopment Analysis (11/02/2022)
Natural language models are often summarized through a high-dimensional ...

Word Play for Playing Othello (Reverses) (07/18/2022)
Language models like OpenAI's Generative Pre-Trained Transformers (GPT-2...

InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers (01/08/2023)
We carried out a reproducibility study of InPars recipe for unsupervised...
