Dataset Quantization

08/21/2023
by Daquan Zhou, et al.

State-of-the-art deep neural networks are trained with large amounts of data (millions or even billions of samples). The resulting computation and memory costs make it difficult to train them on limited hardware resources, especially for the recently popular large language models (LLMs) and computer vision (CV) models. Dataset distillation methods have therefore been developed, aiming to reduce the number of training samples by synthesizing small-scale datasets via gradient matching. However, because the gradient calculation is coupled to a specific network architecture, the synthesized dataset is biased toward it and performs poorly when used to train unseen architectures. To address these limitations, we present dataset quantization (DQ), a new framework that compresses large-scale datasets into small subsets which can be used to train any neural network architecture. Extensive experiments demonstrate that DQ generates condensed small datasets for training unseen network architectures with state-of-the-art compression ratios for lossless model training. To the best of our knowledge, DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1K at a state-of-the-art compression ratio. Notably, with 60% data from ImageNet and 20% data from Alpaca's instruction tuning data, the models can be trained with negligible or no performance drop for both vision tasks (including classification, semantic segmentation, and object detection) as well as language tasks (including instruction tuning tasks such as BBH and DROP).
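The architecture coupling described above comes from how gradient-matching distillation works: synthetic images are optimized so that the gradients they induce on one fixed network match the gradients of real data. The minimal PyTorch-style sketch below illustrates that idea only; it is not code from the paper, and the function name, the cosine matching criterion, and the names `syn_x`, `syn_y`, and `syn_opt` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_matching_step(model, real_x, real_y, syn_x, syn_y, syn_opt):
    """One update of the synthetic images via gradient matching (sketch).

    The synthetic batch (syn_x, syn_y) is optimized so that the gradients it
    induces on `model` match those induced by a real batch. Both gradients are
    taken w.r.t. this particular model's parameters, which is why the
    distilled data inherits a bias toward that architecture.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Target gradients from real data (detached: treated as constants).
    real_loss = F.cross_entropy(model(real_x), real_y)
    grad_real = [g.detach() for g in torch.autograd.grad(real_loss, params)]

    # Gradients from synthetic data, kept differentiable so the matching
    # loss can be backpropagated into the synthetic images themselves.
    syn_loss = F.cross_entropy(model(syn_x), syn_y)
    grad_syn = torch.autograd.grad(syn_loss, params, create_graph=True)

    # Layer-wise cosine-distance matching loss.
    match_loss = sum(
        1.0 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
        for gr, gs in zip(grad_real, grad_syn)
    )

    syn_opt.zero_grad()
    match_loss.backward()
    syn_opt.step()
    return match_loss.item()

# Usage sketch: the synthetic images themselves are the learnable parameters.
# syn_x = torch.randn(100, 3, 32, 32, requires_grad=True)
# syn_y = torch.randint(0, 10, (100,))
# syn_opt = torch.optim.SGD([syn_x], lr=0.1)
# loss = gradient_matching_step(model, real_x, real_y, syn_x, syn_y, syn_opt)
```

Because every matching step differentiates through one concrete network, the synthesized data is tied to that architecture. DQ instead compresses the dataset by selecting small subsets of the original samples, which is why, per the abstract, the result can be used to train arbitrary architectures.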
