The Right Tool for the Job: Matching Model and Instance Complexities

04/16/2020
by Roy Schwartz, et al.

As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs. To better respect a given inference budget, we propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit" from neural network calculations for simple instances, and a late (and accurate) exit for hard instances. To achieve this, we add classifiers to different layers of BERT and use their calibrated confidence scores to make early exit decisions. We test our proposed modification on five datasets spanning two tasks: three text classification datasets and two natural language inference benchmarks. Our method presents a favorable speed/accuracy tradeoff in almost all cases, producing models which are up to five times faster than the state of the art while preserving their accuracy. Our method also requires almost no additional training resources (in either time or parameters) compared to the baseline BERT model. Finally, our method alleviates the need for costly retraining of multiple models at different levels of efficiency; we allow users to control the inference speed/accuracy tradeoff using a single trained model, by setting a single variable at inference time. We publicly release our code.
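To make the exit mechanism concrete, here is a minimal sketch of confidence-based early exit in PyTorch. A toy transformer encoder stands in for BERT; the class name `EarlyExitEncoder`, the `infer` method, and the `threshold` parameter are illustrative assumptions rather than the authors' released code, and per-layer temperature scaling is shown as one common way to calibrate the confidence scores the abstract mentions.

```python
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    """Toy encoder with a classifier head after every layer (assumed
    stand-in for BERT; dimensions and names are illustrative)."""

    def __init__(self, num_layers=4, dim=64, num_heads=4, num_classes=2):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                       batch_first=True)
            for _ in range(num_layers)
        ])
        # One lightweight classifier per layer, as the abstract describes.
        self.heads = nn.ModuleList([
            nn.Linear(dim, num_classes) for _ in range(num_layers)
        ])
        # Per-layer temperatures for calibrating confidences (assumed
        # scalar temperature scaling, tuned on held-out data in practice).
        self.temperatures = nn.Parameter(torch.ones(num_layers))

    @torch.no_grad()
    def infer(self, x, threshold=0.9):
        """Run layers until the calibrated confidence clears `threshold`.

        `threshold` is the single inference-time knob that trades speed
        for accuracy: lower values exit earlier and run faster. Expects
        a single instance (batch size 1), since exits are per-instance.
        """
        h = x
        for i, (layer, head) in enumerate(zip(self.layers, self.heads)):
            h = layer(h)
            # Classify from the first token's representation (CLS-style),
            # dividing logits by the layer's calibration temperature.
            logits = head(h[:, 0]) / self.temperatures[i]
            conf, pred = torch.softmax(logits, dim=-1).max(dim=-1)
            # Exit early if confident enough; the last layer always exits.
            if conf.item() >= threshold or i == len(self.layers) - 1:
                return pred, i

model = EarlyExitEncoder().eval()
x = torch.randn(1, 16, 64)          # one instance: (batch, seq_len, dim)
pred, exit_layer = model.infer(x, threshold=0.9)
print(f"class {pred.item()}, exited after layer {exit_layer}")
```

With an untrained model the exit layer is arbitrary, but the control flow matches the abstract: a single threshold set at inference time determines how eagerly each instance exits, so one trained model covers the whole speed/accuracy curve without retraining.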


Related research

The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification (05/08/2023)
The use of modern Natural Language Processing (NLP) techniques has shown...

Finding the SWEET Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings (06/04/2023)
Adaptive inference is a simple method for reducing inference costs. The ...

DPBERT: Efficient Inference for BERT based on Dynamic Planning (07/26/2023)
Large-scale pre-trained language models such as BERT have contributed si...

FastBERT: a Self-distilling BERT with Adaptive Inference Time (04/05/2020)
Pre-trained language models like BERT have proven to be highly performan...

Binary Early-Exit Network for Adaptive Inference on Low-Resource Devices (06/17/2022)
Deep neural networks have significantly improved performance on a range ...

Simple and Effective Text Matching with Richer Alignment Features (08/01/2019)
In this paper, we present a fast and strong neural approach for general ...

TangoBERT: Reducing Inference Cost by using Cascaded Architecture (04/13/2022)
The remarkable success of large transformer-based models such as BERT, R...
