PoWER-BERT: Accelerating BERT inference for Classification Tasks

01/24/2020
by Saurabh Goyal, et al.

BERT has emerged as a popular model for natural language understanding. Given its compute-intensive nature, even for inference, many recent studies have considered optimizing two important performance characteristics: model size and inference time. We consider classification tasks and propose a novel method, called PoWER-BERT, for improving the inference time of the BERT model without significant loss in accuracy. The method works by eliminating word-vectors (intermediate vector outputs) from the encoder pipeline. We design a strategy for measuring the significance of the word-vectors based on the self-attention mechanism of the encoders, which helps us identify the word-vectors to be eliminated. Experimental evaluation on the standard GLUE benchmark shows that PoWER-BERT achieves up to 4.5x reduction in inference time over BERT with < 1% loss in accuracy. Compared to prior inference time reduction methods, PoWER-BERT offers a better trade-off between accuracy and inference time. Lastly, we demonstrate that our scheme can also be used in conjunction with ALBERT (a highly compressed version of BERT) and can attain up to 6.8x reduction in inference time with < 1% loss in accuracy.
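To make the word-vector elimination concrete, the sketch below shows one way significance scores could be computed from a layer's self-attention probabilities and used to retain only the most important word-vectors. It is a minimal illustration assuming a PyTorch-style attention tensor; the function names (significance_scores, eliminate_word_vectors) and the fixed retention parameter keep are illustrative choices, not identifiers from the paper's released code.

    import torch

    def significance_scores(attn_probs: torch.Tensor) -> torch.Tensor:
        # attn_probs: (num_heads, seq_len, seq_len); attn_probs[h, i, j] is the
        # attention that query position i pays to key position j in head h.
        # Score word-vector j by the total attention it receives, summed over
        # heads and query positions.
        return attn_probs.sum(dim=(0, 1))

    def eliminate_word_vectors(hidden: torch.Tensor,
                               attn_probs: torch.Tensor,
                               keep: int) -> torch.Tensor:
        # hidden: (seq_len, hidden_dim) word-vectors output by an encoder layer.
        # Retain only the `keep` most significant positions.
        scores = significance_scores(attn_probs)
        topk = torch.topk(scores, k=min(keep, hidden.size(0))).indices
        topk, _ = torch.sort(topk)  # keep surviving word-vectors in sequence order
        return hidden[topk]

    # Example: 4 heads, 12 tokens, hidden size 768; keep 6 word-vectors.
    attn = torch.softmax(torch.randn(4, 12, 12), dim=-1)
    hidden = torch.randn(12, 768)
    reduced = eliminate_word_vectors(hidden, attn, keep=6)
    print(reduced.shape)  # torch.Size([6, 768])

In this sketch the number of retained word-vectors per layer is a fixed parameter, whereas PoWER-BERT learns the retention configuration during fine-tuning; the sketch only illustrates the attention-based significance scoring described above.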


