Assessing Resource-Performance Trade-off of Natural Language Models using Data Envelopment Analysis

11/02/2022
by Zachary Zhou, et al.

Natural language models are often summarized through a high-dimensional set of descriptive metrics, including training corpus size, training time, number of trainable parameters, inference times, and evaluation statistics that assess performance across tasks. The high-dimensional nature of these metrics makes it difficult to compare models objectively; in particular, it is challenging to assess the trade-off models make between performance and resources (compute time, memory, etc.). We apply Data Envelopment Analysis (DEA) to the problem of assessing this resource-performance trade-off. DEA is a nonparametric method that measures the productive efficiency of abstract units that consume one or more inputs and yield at least one output. We recast natural language models as units suitable for DEA and show that DEA provides an effective framework for quantifying model performance and efficiency. A central feature of DEA is that it identifies a subset of models that lie on an efficient frontier of performance. DEA is also scalable, having been applied to problems with thousands of units. We report empirical results of DEA applied to 14 different language models with a variety of architectures, showing that DEA can be used to identify a subset of models that effectively balance resource demands against performance.
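To make the DEA formulation concrete, the sketch below solves a standard input-oriented CCR DEA linear program with SciPy for a handful of hypothetical language models. The metric names, values, and the ccr_efficiency helper are illustrative assumptions, not the paper's actual data or code; a unit whose optimal theta equals 1.0 lies on the efficient frontier.

```python
# A minimal sketch of input-oriented CCR DEA as a linear program (SciPy).
# All model names and numbers below are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

# Each row is one language model (a DEA "unit").
# Inputs: resources consumed (e.g., parameters in billions, training hours).
X = np.array([
    [1.5, 120.0],   # model A
    [0.3,  40.0],   # model B
    [11.0, 900.0],  # model C
])
# Outputs: performance attained (e.g., benchmark accuracy).
Y = np.array([
    [82.0],
    [76.0],
    [88.0],
])

def ccr_efficiency(X, Y, o):
    """Efficiency of unit o: minimize theta such that a nonnegative
    combination of all units uses at most theta * inputs of unit o
    while producing at least the outputs of unit o."""
    n_units, n_in = X.shape
    n_out = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n_units)
    c[0] = 1.0  # minimize theta
    # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    b_in = np.zeros(n_in)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((n_out, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(
        c,
        A_ub=np.vstack([A_in, A_out]),
        b_ub=np.concatenate([b_in, b_out]),
        bounds=[(None, None)] + [(0, None)] * n_units,
        method="highs",
    )
    return res.x[0]  # theta = 1.0 means the unit is on the efficient frontier

for o in range(X.shape[0]):
    print(f"model {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```

Because each unit's score requires only one small linear program, the same approach scales to the thousands of units mentioned in the abstract by solving one LP per model.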

