CAFE: Learning to Condense Dataset by Aligning Features

03/03/2022
by   Kai Wang, et al.

Dataset condensation aims to reduce network training effort by condensing a cumbersome training set into a compact synthetic one. State-of-the-art approaches largely rely on learning the synthetic data by matching the gradients between the real and synthetic data batches. Despite the intuitive motivation and promising results, such gradient-based methods, by nature, easily overfit to a biased set of samples that produce dominant gradients, and thus lack global supervision of the data distribution. In this paper, we propose a novel scheme to Condense dataset by Aligning FEatures (CAFE), which explicitly attempts to preserve the real-feature distribution as well as the discriminant power of the resulting synthetic set, lending itself to strong generalization across various architectures. At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales, while accounting for the classification of real samples. Our scheme is further backed up by a novel dynamic bi-level optimization, which adaptively adjusts parameter updates to prevent over-/under-fitting. We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art: on the SVHN dataset, for example, the performance gain is up to 11%. Extensive experiments and analyses verify the effectiveness and necessity of the proposed designs.
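The core idea of aligning feature distributions can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it is a simplified, hypothetical version of layer-wise feature alignment, where the loss is the squared distance between the per-layer mean feature vectors of a real batch and a synthetic batch (the paper additionally aligns features across scales and incorporates a classification term):

```python
import numpy as np

def feature_alignment_loss(real_feats, syn_feats):
    """Simplified feature-alignment objective: sum over layers of the
    squared L2 distance between the mean feature vector of the real
    batch and that of the synthetic batch.

    real_feats, syn_feats: lists of arrays, one per layer, each of
    shape (batch_size, feature_dim); batch sizes may differ.
    """
    loss = 0.0
    for fr, fs in zip(real_feats, syn_feats):
        # Average over the batch dimension to get one mean feature
        # vector per layer, then penalize their discrepancy.
        loss += float(np.sum((fr.mean(axis=0) - fs.mean(axis=0)) ** 2))
    return loss

# Toy example: features from two hypothetical network layers.
rng = np.random.default_rng(0)
real = [rng.normal(size=(64, 128)), rng.normal(size=(64, 256))]
syn = [rng.normal(size=(10, 128)), rng.normal(size=(10, 256))]
print(feature_alignment_loss(real, syn))
```

In a full condensation loop, the synthetic images would be optimized by gradient descent on such a loss, computed from the intermediate activations of a network trained on the real data.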

Related research

- Few-Shot Object Detection via Synthetic Features with Optimal Transport (08/29/2023): Few-shot object detection aims to simultaneously localize and classify t...
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation (11/20/2022): Model-based deep learning has achieved astounding successes due in part ...
- Condensing Graphs via One-Step Gradient Matching (06/15/2022): As training deep learning models on large dataset takes a lot of time an...
- From Fake to Real (FFR): A two-stage training pipeline for mitigating spurious correlations with synthetic data (08/08/2023): Visual recognition models are prone to learning spurious correlations in...
- Constructing Bayesian Pseudo-Coresets using Contrastive Divergence (03/20/2023): Bayesian Pseudo-Coreset (BPC) and Dataset Condensation are two parallel ...
- Dataset Condensation via Efficient Synthetic-Data Parameterization (05/30/2022): The great success of machine learning with massive amounts of data comes...
- Delving into Effective Gradient Matching for Dataset Condensation (07/30/2022): As deep learning models and datasets rapidly scale up, network training ...
