Intriguing Properties of Quantization at Scale

05/30/2023
by Arash Ahmadian, et al.

Emergent properties have been widely adopted as a term to describe behavior not present in smaller models but observed in larger models. Recent work suggests that the trade-off incurred by quantization is also an emergent property, with sharp drops in performance in models over 6B parameters. In this work, we ask "are quantization cliffs in performance solely a factor of scale?" Against a backdrop of increased research focus on why certain emergent properties surface at scale, this work provides a useful counter-example. We posit that it is possible to optimize for a quantization-friendly training recipe that suppresses large activation magnitude outliers. Here, we find that outlier dimensions are not an inherent product of scale, but rather sensitive to the optimization conditions present during pre-training. This both opens up directions for more efficient quantization and poses the question of whether other emergent properties are inherent or can be altered and conditioned by optimization and architecture design choices. We successfully quantize models ranging in size from 410M to 52B with minimal degradation in performance.
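To make the failure mode concrete, the sketch below (not the paper's method) illustrates why large activation outliers hurt post-training quantization: under simple symmetric per-tensor absmax int8 quantization, a single outlier dimension inflates the quantization scale, so every other dimension loses precision. The quantization scheme, array size, and outlier value here are illustrative assumptions only.

```python
import numpy as np

def absmax_quantize_int8(x):
    """Symmetric per-tensor absmax quantization to int8 (a common simple scheme)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
activations = rng.normal(0.0, 1.0, size=4096).astype(np.float32)

# Well-behaved activations: the absmax scale is small, so rounding error is small.
q, s = absmax_quantize_int8(activations)
print("no outlier   MSE:", np.mean((activations - dequantize(q, s)) ** 2))

# Inject one large-magnitude outlier dimension (hypothetical value of 60.0).
# The scale is now dominated by the outlier, degrading all other dimensions.
activations_outlier = activations.copy()
activations_outlier[0] = 60.0
q, s = absmax_quantize_int8(activations_outlier)
print("with outlier MSE:", np.mean((activations_outlier - dequantize(q, s)) ** 2))
```

Running this shows the reconstruction error growing by orders of magnitude once the outlier is present, which is the kind of degradation that a training recipe suppressing such outliers would avoid.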


