Dataset Distillation with Convexified Implicit Gradients

02/13/2023
by Noel Loo, et al.

We propose a new dataset distillation algorithm using reparameterization and convexification of implicit gradients (RCIG) that substantially improves the state-of-the-art. To this end, we first formulate dataset distillation as a bi-level optimization problem. Then, we show how implicit gradients can be effectively used to compute meta-gradient updates. We further equip the algorithm with a convexified approximation that corresponds to learning on top of a frozen finite-width neural tangent kernel. Finally, we reduce the bias in implicit gradients by parameterizing the neural network so that the final-layer parameters can be computed analytically given the body parameters. RCIG establishes the new state-of-the-art on a diverse series of dataset distillation tasks. Notably, with one image per class, on resized ImageNet, RCIG sees on average a 108% improvement over the previous state-of-the-art distillation algorithm. Similarly, we observed a 66% gain over the previous state-of-the-art on Tiny-ImageNet and a 37% gain on CIFAR-100.
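
To make the bi-level structure concrete, the implicit-gradient machinery the abstract refers to rests on the standard implicit function theorem identity (a generic sketch, not RCIG's exact derivation): with distilled data $X_s$ as the outer variable and converged inner weights $\theta^*(X_s) = \arg\min_\theta \mathcal{L}_{\mathrm{inner}}(\theta; X_s)$, one has

$$\frac{\partial \theta^*}{\partial X_s} = -\left(\nabla_\theta^2 \mathcal{L}_{\mathrm{inner}}\right)^{-1} \nabla_{X_s} \nabla_\theta \mathcal{L}_{\mathrm{inner}},$$

so the meta-gradient of an outer loss $\mathcal{L}_{\mathrm{outer}}(\theta^*; X_t)$ on real data $X_t$ is $-v^\top H^{-1} \nabla_{X_s} \nabla_\theta \mathcal{L}_{\mathrm{inner}}$, with $v = \nabla_\theta \mathcal{L}_{\mathrm{outer}}$ and $H$ the inner-loss Hessian. The JAX sketch below implements this meta-gradient for a toy convex inner problem (ridge regression, standing in for the frozen finite-width NTK approximation described above); the function names and the conjugate-gradient Hessian solve are illustrative assumptions, not the authors' released code.

```python
import jax
import jax.numpy as jnp

def inner_loss(theta, x_syn, y_syn):
    # Ridge-regularized squared loss on the synthetic set; a convex
    # stand-in for a network trained to convergence on distilled data.
    return jnp.mean((x_syn @ theta - y_syn) ** 2) + 1e-3 * jnp.sum(theta ** 2)

def outer_loss(theta, x_real, y_real):
    # Loss of the inner-trained model on real data (the meta-objective).
    return jnp.mean((x_real @ theta - y_real) ** 2)

def implicit_meta_grad(theta_star, x_syn, y_syn, x_real, y_real):
    # v = dL_outer/dtheta, evaluated at the inner optimum theta_star.
    v = jax.grad(outer_loss)(theta_star, x_real, y_real)

    # Hessian-vector product of the inner loss; solve H u = v with
    # conjugate gradients instead of materializing the Hessian.
    def hvp(u):
        return jax.jvp(lambda t: jax.grad(inner_loss)(t, x_syn, y_syn),
                       (theta_star,), (u,))[1]
    u, _ = jax.scipy.sparse.linalg.cg(hvp, v, maxiter=50)

    # Implicit function theorem:
    # dL_outer/dx_syn = -u^T * d(grad_theta L_inner)/dx_syn.
    _, mixed_vjp = jax.vjp(
        lambda x: jax.grad(inner_loss)(theta_star, x, y_syn), x_syn)
    return -mixed_vjp(u)[0]

# Toy usage: 5 features, 10 real points, 2 synthetic points.
key_real, key_syn = jax.random.split(jax.random.PRNGKey(0))
x_real = jax.random.normal(key_real, (10, 5))
y_real = x_real @ jnp.ones(5)
x_syn = jax.random.normal(key_syn, (2, 5))
y_syn = jnp.array([0.0, 1.0])

# Closed-form inner optimum of the ridge problem above.
n = x_syn.shape[0]
theta_star = jnp.linalg.solve(x_syn.T @ x_syn / n + 1e-3 * jnp.eye(5),
                              x_syn.T @ y_syn / n)
g = implicit_meta_grad(theta_star, x_syn, y_syn, x_real, y_real)  # shape (2, 5)
```

The distilled inputs would then be updated by ordinary gradient descent on `g`; per the abstract, RCIG's convexified approximation and analytic final-layer reparameterization refine how the inner optimum and its implicit gradient are computed, rather than this outer loop.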

