LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale

08/15/2022
by Tim Dettmers, et al.

Large language models have been widely adopted but require significant GPU memory for inference. We develop a procedure for Int8 matrix multiplication for feed-forward and attention projection layers in transformers, which cuts the memory needed for inference in half while retaining full precision performance. With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted to Int8, and used immediately without performance degradation. This is made possible by understanding and working around properties of highly systematic emergent features in transformer language models that dominate attention and transformer predictive performance. To cope with these features, we develop a two-part quantization procedure, LLM.int8(). We first use vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication, to quantize most of the features. However, for the emergent outliers, we also include a new mixed-precision decomposition scheme, which isolates the outlier feature dimensions into a 16-bit matrix multiplication while still more than 99.9% of values are multiplied in 8-bit. Using LLM.int8(), we show empirically that it is possible to perform inference in LLMs with up to 175B parameters without any performance degradation. This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs.
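The two-part procedure in the abstract can be illustrated with a short sketch. The NumPy code below is a minimal, hypothetical rendering of the idea rather than the paper's implementation: the function names and shapes are assumptions, and the magnitude threshold of 6.0 used to flag outlier feature dimensions is one illustrative cutoff. It quantizes the non-outlier part with vector-wise absmax scaling (one constant per inner product) and routes the outlier dimensions through a 16-bit multiplication.

```python
import numpy as np

def vectorwise_quantize(X, axis):
    """Symmetric absmax int8 quantization with one scaling constant per vector."""
    scale = np.abs(X).max(axis=axis, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard against all-zero vectors
    Xq = np.clip(np.round(X / scale), -127, 127).astype(np.int8)
    return Xq, scale

def llm_int8_matmul(X, W, outlier_threshold=6.0):
    """Hypothetical two-part matmul sketch: outlier feature dimensions of X go
    through a 16-bit product, the rest through int8 vector-wise matmul.
    X: (tokens, hidden) activations, W: (hidden, out) weights."""
    # 1. Mixed-precision decomposition: flag hidden dimensions whose
    #    activation magnitude exceeds the threshold (illustrative criterion).
    col_max = np.abs(X).max(axis=0)
    outlier_cols = np.where(col_max >= outlier_threshold)[0]
    regular_cols = np.setdiff1d(np.arange(X.shape[1]), outlier_cols)

    # 2. 16-bit matrix multiplication for the outlier dimensions only.
    out_fp16 = (X[:, outlier_cols].astype(np.float16)
                @ W[outlier_cols, :].astype(np.float16)).astype(np.float32)

    # 3. Vector-wise int8 quantization for everything else: one constant per
    #    row of X and per column of W, i.e. per inner product in the matmul.
    Xq, sx = vectorwise_quantize(X[:, regular_cols], axis=1)  # (tokens, 1) scales
    Wq, sw = vectorwise_quantize(W[regular_cols, :], axis=0)  # (1, out) scales
    acc = Xq.astype(np.int32) @ Wq.astype(np.int32)           # int32 accumulation
    out_int8 = acc * (sx * sw)                                 # dequantize

    return out_int8 + out_fp16

# Illustrative usage: inject one large feature dimension and compare to fp32.
X = np.random.randn(4, 512).astype(np.float32)
W = np.random.randn(512, 256).astype(np.float32)
X[:, 7] *= 20.0  # simulate an emergent outlier feature dimension
y = llm_int8_matmul(X, W)
print(np.abs(y - X @ W).max())  # error of the sketch vs. the fp32 reference
```

The design choice mirrored here is that each inner product gets its own normalization constants, so a single extreme activation value only distorts the scale of its own row and column rather than the whole tensor, while the few truly outlier dimensions bypass quantization entirely.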
