Dataset Meta-Learning from Kernel Ridge-Regression

10/30/2020
by Timothy Nguyen, et al.

One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of ϵ-approximation of datasets, obtaining datasets which are much smaller than or are significant corruptions of the original training data while maintaining similar model performance. We introduce a meta-learning algorithm called Kernel Inducing Points (KIP) for obtaining such remarkable datasets, inspired by the recent developments in the correspondence between infinitely-wide neural networks and kernel ridge-regression (KRR). For KRR tasks, we demonstrate that KIP can compress datasets by one or two orders of magnitude, significantly improving previous dataset distillation and subset selection methods while obtaining state of the art results for MNIST and CIFAR-10 classification. Furthermore, our KIP-learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime, which leads to state of the art results for neural network dataset distillation with potential applications to privacy-preservation.
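To make the KRR-based meta-objective concrete, recall that kernel ridge-regression predicts f(x) = K(x, X_s)(K(X_s, X_s) + λI)^{-1} y_s in closed form. KIP exploits this by treating the support set (X_s, y_s) itself as the learnable meta-parameters and differentiating the target loss through the KRR solve. Below is a minimal sketch of one such update in JAX, using an RBF kernel as a stand-in for the infinite-width network kernel used in the paper; the function names and hyperparameters are illustrative assumptions, not the authors' implementation.

import jax
import jax.numpy as jnp

def rbf_kernel(X1, X2, gamma=0.1):
    # Pairwise squared distances, then the RBF Gram matrix.
    sq = (jnp.sum(X1**2, axis=1)[:, None]
          + jnp.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return jnp.exp(-gamma * sq)

def kip_loss(support, X_target, y_target, reg=1e-6):
    # Fit kernel ridge-regression on the learned support set and
    # measure its squared error on the fixed target training data.
    X_s, y_s = support
    K_ss = rbf_kernel(X_s, X_s)
    K_ts = rbf_kernel(X_target, X_s)
    alpha = jnp.linalg.solve(K_ss + reg * jnp.eye(X_s.shape[0]), y_s)
    return jnp.mean((K_ts @ alpha - y_target) ** 2)

@jax.jit
def kip_step(support, X_target, y_target, lr=0.01):
    # One gradient step on the support set itself: the data are the
    # meta-parameters, differentiated through the closed-form KRR solve.
    grads = jax.grad(kip_loss)(support, X_target, y_target)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, support, grads)

In practice one would initialize (X_s, y_s) from a random subset (or noise) and iterate kip_step over sampled batches of target data; the paper computes the kernel with the neural tangent kernel of an infinite-width network rather than the RBF kernel sketched here.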


Related research

10/21/2022  Efficient Dataset Distillation Using Random Feature Approximation
    Dataset distillation compresses large datasets into smaller synthetic co...

07/27/2021  Dataset Distillation with Infinitely Wide Convolutional Networks
    The effectiveness of machine learning algorithms arises from being able ...

06/01/2022  Dataset Distillation using Neural Feature Regression
    Dataset distillation aims to learn a small synthetic dataset that preser...

03/09/2023  Kernel Regression with Infinite-Width Neural Networks on Millions of Examples
    Neural kernels have drastically increased performance on diverse and non...

07/16/2023  Dataset Distillation Meets Provable Subset Selection
    Deep learning has grown tremendously over recent years, yielding state-o...

02/13/2023  Dataset Distillation with Convexified Implicit Gradients
    We propose a new dataset distillation algorithm using reparameterization...

05/23/2023  On the Size and Approximation Error of Distilled Sets
    Dataset Distillation is the task of synthesizing small datasets from lar...
