DREAM: Efficient Dataset Distillation by Representative Matching

02/28/2023
by   Yanqing Liu, et al.

Dataset distillation aims to generate small datasets that preserve as much information as possible from large-scale datasets, reducing storage and training costs. Recent state-of-the-art methods mainly constrain the sample generation process by matching synthetic images against the original ones in terms of gradients, embedding distributions, or training trajectories. Although the matching objectives vary, the method for selecting original images has so far been limited to naive random sampling. We argue that random sampling inevitably includes samples near the decision boundaries, which may provide large or noisy matching targets. Moreover, random sampling cannot guarantee an even and diverse sample distribution. Together, these factors cause large optimization oscillations and degrade matching efficiency. Accordingly, we propose a novel matching strategy named Dataset distillation by REpresentAtive Matching (DREAM), in which only representative original images are selected for matching. DREAM can be easily plugged into popular dataset distillation frameworks and reduces the number of matching iterations by 10 times without a performance drop. Given sufficient training time, DREAM further provides significant improvements and achieves state-of-the-art performance.
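While the paper's exact selection procedure is detailed in the full text, the core idea of representative matching can be sketched as follows: instead of drawing a random real batch at each matching step, cluster the real images of a class in an embedding space and match against the samples closest to the cluster centers, which sit away from decision boundaries and cover the class evenly. The snippet below is a minimal, hypothetical PyTorch sketch under these assumptions; `select_representative_batch` and its signature are illustrative names, not the authors' API.

```python
# Hypothetical sketch of representative sample selection via per-class
# K-means clustering (illustrative, not the authors' implementation).
import torch


def select_representative_batch(features, images, num_clusters, iters=10):
    """Pick one representative image per cluster center.

    features: (N, D) embeddings of the real images of one class
    images:   (N, C, H, W) the corresponding images
    Returns a (num_clusters, C, H, W) batch of representative images.
    """
    n = features.shape[0]
    # Initialize centers from random samples, then run a few Lloyd steps.
    centers = features[torch.randperm(n)[:num_clusters]].clone()
    for _ in range(iters):
        assign = torch.cdist(features, centers).argmin(dim=1)  # (N,)
        for k in range(num_clusters):
            mask = assign == k
            if mask.any():
                centers[k] = features[mask].mean(dim=0)
    # Representatives: the real sample nearest to each center, i.e. samples
    # far from decision boundaries that spread evenly over the class.
    nearest = torch.cdist(centers, features).argmin(dim=1)  # (K,)
    return images[nearest]
```

In any matching-based framework (e.g., gradient matching), such a representative batch would simply replace the randomly sampled real batch at each matching step; the rest of the distillation pipeline stays unchanged.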

Related research

11/19/2022 · Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Dataset distillation methods aim to compress a large dataset into a smal...

05/29/2023 · Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching
The expenses involved in training state-of-the-art deep hashing retrieva...

11/20/2022 · Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation
Model-based deep learning has achieved astounding successes due in part ...

03/08/2023 · DiM: Distilling Dataset into Generative Model
Dataset distillation reduces the network training cost by synthesizing s...

07/30/2022 · Delving into Effective Gradient Matching for Dataset Condensation
As deep learning models and datasets rapidly scale up, network training ...

05/28/2023 · Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection
Data-efficient learning has drawn significant attention, especially give...

03/16/2022 · Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation
Using huge training datasets can be costly and inconvenient. This articl...