Dataset Condensation with Gradient Matching

06/10/2020
by Bo Zhao, et al.

Efficient training of deep neural networks is an increasingly important problem in the era of sophisticated architectures and large-scale datasets. This paper proposes a training set synthesis technique, called Dataset Condensation, that learns to produce a small set of informative samples for training deep neural networks from scratch at a small fraction of the computational cost of training on the original data, while achieving comparable results. We rigorously evaluate its performance on several computer vision benchmarks and show that it significantly outperforms state-of-the-art methods. Finally, we show promising applications of our method in continual learning and domain adaptation.

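The method condenses a large training set into a small synthetic one by matching the gradients that real and synthetic batches induce on a randomly initialized network. Below is a minimal PyTorch sketch of that gradient-matching idea, assuming MNIST-sized inputs (1×28×28) and a toy MLP; the helper names (match_loss, condense), the network, and all hyperparameters are illustrative placeholders, not the authors' reference implementation.

```python
# Minimal sketch of gradient matching for dataset condensation (illustrative only;
# the network, hyperparameters, and input shape are placeholder assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

def match_loss(g_syn, g_real):
    # Sum of per-layer cosine distances between synthetic and real gradients.
    loss = 0.0
    for gs, gr in zip(g_syn, g_real):
        loss = loss + 1 - F.cosine_similarity(gs.flatten(), gr.flatten(), dim=0)
    return loss

def condense(real_loader, num_classes=10, ipc=10, outer_steps=1000, lr_img=0.1):
    # Learnable synthetic images with fixed balanced labels (ipc = images per class).
    x_syn = torch.randn(num_classes * ipc, 1, 28, 28, requires_grad=True)
    y_syn = torch.arange(num_classes).repeat_interleave(ipc)
    opt_img = torch.optim.SGD([x_syn], lr=lr_img, momentum=0.5)

    for _ in range(outer_steps):
        # Fresh random initialization each outer step, so the synthetic set
        # is useful for training networks from scratch.
        net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                            nn.Linear(128, num_classes))
        params = [p for p in net.parameters() if p.requires_grad]

        # Gradient of the loss on a real batch (the matching target).
        x_real, y_real = next(iter(real_loader))
        g_real = torch.autograd.grad(F.cross_entropy(net(x_real), y_real), params)
        g_real = [g.detach() for g in g_real]

        # Gradient of the loss on the synthetic set, kept differentiable
        # with respect to the synthetic images via create_graph=True.
        g_syn = torch.autograd.grad(F.cross_entropy(net(x_syn), y_syn), params,
                                    create_graph=True)

        # Update the synthetic images to reduce the gradient mismatch.
        opt_img.zero_grad()
        match_loss(g_syn, g_real).backward()
        opt_img.step()

    return x_syn.detach(), y_syn
```

The published method additionally matches gradients class by class and trains the sampled network between matching steps; the loop above only captures the core idea that the synthetic images are optimized so a training step on them moves the network the way a step on the real data would.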

Related research

10/08/2021 · Dataset Condensation with Distribution Matching
Computational cost to train state-of-the-art deep models in many learnin...

02/02/2023 · Avalanche: A PyTorch Library for Deep Continual Learning
Continual learning is the problem of learning from a nonstationary strea...

06/02/2023 · Overcoming the Stability Gap in Continual Learning
In many real-world applications, deep neural networks are retrained from...

04/15/2016 · Improving the Robustness of Deep Neural Networks via Stability Training
In this paper we address the issue of output instability of deep neural ...

02/16/2021 · Dataset Condensation with Differentiable Siamese Augmentation
In many machine learning problems, large-scale datasets have become the ...

03/02/2020 · Energy-efficient and Robust Cumulative Training with Net2Net Transformation
Deep learning has achieved state-of-the-art accuracies on several comput...

10/07/2021 · A Data-Centric Approach for Training Deep Neural Networks with Less Data
While the availability of large datasets is perceived to be a key requir...