Quantization Aware Training, ERNIE and Kurtosis Regularizer: a short empirical study

06/24/2021
by Andrea Zanetti, et al.

Pre-trained language models such as ERNIE or BERT are currently used in many applications. These models come with a set of pre-trained weights, typically obtained in an unsupervised or self-supervised fashion on a huge amount of data, and are then fine-tuned on a specific downstream task. Applications use these models for inference, often under additional constraints such as a low power budget or low latency between input and output. The main avenue for meeting these constraints at inference time is low-precision computation (e.g. INT8 rather than FP32), but this comes at the cost of degraded functional performance (e.g. accuracy). Several approaches have been developed to go beyond the limitations of PTQ (Post-Training Quantization); in particular, QAT (Quantization Aware Training, see [4]) is a procedure that injects the effects of quantization into the training process itself, so that training is "disturbed" by quantization and the resulting weights become more robust to it. Besides QAT, Intel-Habana Labs have recently proposed an additional and more direct way to make the trained model robust to subsequent quantization: a regularizer, which changes the loss function that drives the training procedure. However, their proposal does not work out of the box for pre-trained models such as ERNIE. In this short paper we show why this is the case for ERNIE, propose a very basic way to deal with it, and share some initial results (an increase in final INT8 accuracy) that may be of interest to practitioners who want to use ERNIE in their applications in a low-precision regime.
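To make the regularizer idea concrete, below is a minimal sketch of a kurtosis-based weight penalty in the spirit of the Intel-Habana Labs proposal (a KURE-style term added to the task loss), written in PyTorch purely for illustration. The function names, the choice of penalizing only `nn.Linear` weights, the target kurtosis of 1.8 (that of a uniform distribution), and the weighting factor `lambda_k` are assumptions of this sketch, not the exact formulation or code used in the paper.

```python
import torch
import torch.nn as nn


def tensor_kurtosis(w: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Sample kurtosis E[((w - mu) / sigma)^4] of a flattened weight tensor."""
    w = w.flatten()
    mu = w.mean()
    sigma = w.std(unbiased=False)
    return (((w - mu) / (sigma + eps)) ** 4).mean()


def kurtosis_regularizer(model: nn.Module, target: float = 1.8) -> torch.Tensor:
    """Average squared deviation of each Linear layer's weight kurtosis from `target`.

    target = 1.8 is the kurtosis of a uniform distribution; pulling the weight
    distributions towards it tends to make them friendlier to uniform quantization.
    """
    terms = []
    for module in model.modules():
        if isinstance(module, nn.Linear):
            k = tensor_kurtosis(module.weight)
            terms.append((k - target) ** 2)
    if not terms:  # no Linear layers found
        return torch.tensor(0.0)
    return torch.stack(terms).mean()


# Hypothetical use inside a fine-tuning step:
#   loss = task_loss + lambda_k * kurtosis_regularizer(model)
#   loss.backward()
```

In such a setup the regularized loss is minimized during fine-tuning, and only afterwards is the model quantized to INT8 (via PTQ or QAT); the regularizer's role is to shape the weight distributions so that this final quantization step costs less accuracy.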

