Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs

05/03/2023
by Deepak Narayanan, et al.

Large language models (LLMs) power many state-of-the-art systems in natural language processing. However, these models are extremely computationally expensive, even at inference time, raising the natural question: when is the extra cost of deploying a larger model worth the anticipated boost in capabilities? A fundamental understanding of this tradeoff would benefit from an inference efficiency metric that is both (i) easily comparable across models from different providers, and (ii) representative of the true cost of running queries in an isolated performance environment. Unfortunately, access to LLMs today is largely restricted to black-box text generation APIs, and raw runtimes measured through this interface do not satisfy these desiderata: model providers can apply various software and hardware optimizations orthogonal to the model, and models served on shared infrastructure are susceptible to performance contention. To circumvent these problems, we propose a new metric for comparing inference efficiency across models. This metric puts models on equal footing as though they were served (i) on uniform hardware and software, and (ii) without performance contention. We call this metric the idealized runtime, and we propose a methodology to efficiently estimate it for autoregressive Transformer models. We also propose cost-aware variants that incorporate the number of accelerators needed to serve the model. Using these metrics, we compare ten state-of-the-art LLMs to provide the first analysis of inference efficiency-capability tradeoffs; we make several observations from this analysis, including that the superior inference runtime performance of certain APIs is often a byproduct of optimizations within the API rather than of the underlying model. Our methodology also facilitates the efficient comparison of different software and hardware stacks.
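As a rough illustration of what an idealized, contention-free runtime estimate and a cost-aware variant could look like in practice, the sketch below fits a simple model of runtime as a function of prompt and output token counts from profiled runs on a fixed reference hardware/software stack, then scales the prediction by the number of accelerators needed to serve the model. The linear functional form, the function names, and the numbers are illustrative assumptions made for this sketch; they are not the paper's released code or exact estimator.

# Hypothetical sketch, not the paper's methodology: estimate an
# "idealized runtime"-style metric from measurements taken in isolation
# on a fixed reference stack, plus a cost-aware variant.

from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class ProfiledRun:
    prompt_tokens: int
    output_tokens: int
    runtime_s: float  # measured without performance contention


def fit_runtime_model(runs: List[ProfiledRun]) -> np.ndarray:
    """Fit runtime ~ a + b * prompt_tokens + c * output_tokens by least squares.

    The linear form is an assumption for this illustration; a real estimator
    may use a different functional form.
    """
    X = np.array([[1.0, r.prompt_tokens, r.output_tokens] for r in runs])
    y = np.array([r.runtime_s for r in runs])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # [a, b, c]


def idealized_runtime(coeffs: np.ndarray, prompt_tokens: int, output_tokens: int) -> float:
    """Predicted runtime for a query as if served on the reference stack, in isolation."""
    a, b, c = coeffs
    return a + b * prompt_tokens + c * output_tokens


def cost_aware_runtime(idealized_s: float, num_accelerators: int) -> float:
    """Scale the idealized runtime by the number of accelerators needed to serve the model."""
    return idealized_s * num_accelerators


if __name__ == "__main__":
    # Toy profiled measurements (illustrative numbers only).
    runs = [
        ProfiledRun(128, 32, 0.9),
        ProfiledRun(128, 256, 5.3),
        ProfiledRun(1024, 32, 1.4),
        ProfiledRun(1024, 256, 5.9),
    ]
    coeffs = fit_runtime_model(runs)
    ideal = idealized_runtime(coeffs, prompt_tokens=300, output_tokens=100)
    print(f"idealized runtime: {ideal:.2f} s")
    print(f"cost-aware (8 accelerators): {cost_aware_runtime(ideal, 8):.2f} s")

In this setup, the per-output-token coefficient typically dominates for autoregressive decoding, which is why runtime grows roughly linearly with the number of generated tokens.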

Related research

10/26/2020

FastFormers: Highly Efficient Transformer Models for Natural Language Understanding

Transformer-based models are the state-of-the-art for Natural Language U...
02/15/2023

Big Little Transformer Decoder

The recent emergence of Large Language Models based on the Transformer a...
02/12/2021

Optimizing Inference Performance of Transformers on CPUs

The Transformer architecture revolutionized the field of natural languag...
09/22/2022

DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation

Transformer is a deep learning language model widely used for natural la...
10/24/2020

CaM-Gen: Causally-aware Metric-guided Text Generation

Content is created for a well-defined purpose, often described by a metr...
10/16/2021

FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation

Fast and reliable evaluation metrics are key to R&D progress. While tr...

01/02/2020

SmartWatts: Self-Calibrating Software-Defined Power Meter for Containers

Fine-grained power monitoring of software activities becomes unavoidable...