eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models

09/02/2023
by Minsik Cho, et al.

Since Large Language Models (LLMs) have demonstrated high-quality performance on many complex language tasks, there is great interest in bringing them to mobile devices for faster responses and better privacy protection. However, the size of LLMs (i.e., billions of parameters) requires highly effective compression to fit into storage-limited devices. Among many compression techniques, weight clustering, a form of non-linear quantization, is one of the leading candidates for LLM compression and is supported by modern smartphones. Yet its training overhead is prohibitively significant for LLM fine-tuning. In particular, Differentiable KMeans Clustering (DKM) has shown a state-of-the-art trade-off between compression ratio and accuracy regression, but its large memory complexity makes it nearly impossible to apply to train-time LLM compression. In this paper, we propose a memory-efficient DKM implementation, eDKM, powered by novel techniques that reduce the memory footprint of DKM by orders of magnitude. For a given tensor to be saved on the CPU for DKM's backward pass, we compress the tensor by applying uniquification and sharding, after checking whether an identical tensor has already been copied to the CPU. Our experimental results demonstrate that eDKM can fine-tune and compress a pretrained LLaMA 7B model from 12.6 GB to 2.5 GB (3 bits per weight) with the Alpaca dataset, reducing the train-time memory footprint of a decoder layer by 130× while delivering good accuracy on broader LLM benchmarks (e.g., 77.7% for PIQA and 66.1% for WinoGrande).
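To make the offloading idea in the abstract concrete, the following is a minimal sketch assuming a PyTorch training setup and the torch.autograd.graph.saved_tensors_hooks API; the cache, the fingerprint function, and the hook names are illustrative assumptions, not the authors' eDKM implementation. It shows the duplicate check before a tensor is copied to the CPU and the "uniquification" of the saved tensor into unique values plus inverse indices (sharding across learners is omitted).

```python
import torch

# Illustrative sketch only: hypothetical cache and fingerprint, not the
# authors' eDKM code. A real implementation would also evict entries after
# the backward pass and shard the saved data across learners.
_cpu_cache = {}

def _fingerprint(t: torch.Tensor):
    # Hypothetical duplicate check: shape, dtype, and a hash of the raw bytes.
    return (tuple(t.shape), t.dtype, hash(t.detach().cpu().numpy().tobytes()))

def pack_hook(t: torch.Tensor):
    # Offload the tensor saved for backward to CPU; skip the copy if an
    # identical tensor was already offloaded, otherwise store it
    # "uniquified" as (unique values, inverse indices).
    key = _fingerprint(t)
    if key not in _cpu_cache:
        values, inverse = torch.unique(t.detach().cpu(), return_inverse=True)
        _cpu_cache[key] = (values, inverse, t.shape, t.device)
    return key

def unpack_hook(key):
    # Reconstruct the original tensor on its original device for backward.
    values, inverse, shape, device = _cpu_cache[key]
    return values[inverse].reshape(shape).to(device)

# Usage: route every tensor saved for the backward pass through the hooks.
model = torch.nn.Linear(16, 16)
x = torch.randn(4, 16, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_hook, unpack_hook):
    loss = model(x).sum()
loss.backward()
```

Reconstructing the tensor in unpack_hook trades a gather and some CPU-to-GPU traffic for a much smaller CPU-side footprint, which is the trade-off the abstract describes.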

Related research

08/28/2021
DKM: Differentiable K-Means Clustering Layer for Neural Network Compression
Deep neural network (DNN) model compression for efficient on-device infe...

05/27/2023
Fine-Tuning Language Models with Just Forward Passes
Fine-tuning language models (LMs) has yielded success on diverse downstr...

07/02/2023
TensorGPT: Efficient Compression of the Embedding Layer in LLMs based on the Tensor-Train Decomposition
High-dimensional token embeddings underpin Large Language Models (LLMs),...

07/13/2023
In-context Autoencoder for Context Compression in a Large Language Model
We propose the In-context Autoencoder (ICAE) for context compression in ...

07/15/2023
CPET: Effective Parameter-Efficient Tuning for Compressed Large Language Models
Parameter-efficient tuning (PET) has been widely explored in recent year...

05/10/2019
Compressing Weight-updates for Image Artifacts Removal Neural Networks
In this paper, we present a novel approach for fine-tuning a decoder-sid...

06/01/2023
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
Large language models (LLMs) have shown excellent performance on various...
