Dataset Distillation with Infinitely Wide Convolutional Networks

07/27/2021
by   Timothy Nguyen, et al.

The effectiveness of machine learning algorithms arises from being able to extract useful features from large amounts of data. As model and dataset sizes increase, dataset distillation methods that compress large datasets into significantly smaller yet highly performant ones will become valuable in terms of training efficiency and useful feature extraction. To that end, we apply a novel distributed kernel-based meta-learning framework to achieve state-of-the-art results for dataset distillation using infinitely wide convolutional neural networks. For instance, using only 10 datapoints (0.02% of the original dataset), we obtain over 64% test accuracy on the CIFAR-10 image classification task, a dramatic improvement over the previous best test accuracy of 40%. Our state-of-the-art results extend across many other settings for MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and SVHN. Furthermore, we perform some preliminary analyses of our distilled datasets to shed light on how they differ from naturally occurring data.
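The kernel-based meta-learning idea in the abstract can be illustrated concretely: a small learnable support set is optimized so that kernel ridge regression with an infinite-width ConvNet NTK, fit on that support set, predicts well on real training data. Below is a minimal, hedged sketch using the open-source neural_tangents library; the specific architecture, regularization constant, and the choice to also learn labels are illustrative assumptions, not the paper's configuration, and the paper's distributed computation of kernel entries across many accelerators is omitted here.

```python
import jax
import jax.numpy as jnp
import neural_tangents as nt
from neural_tangents import stax

# Illustrative small ConvNet; its infinite-width NTK serves as the kernel.
# (Assumption: the paper uses deeper/wider architectures than this.)
_, _, kernel_fn = stax.serial(
    stax.Conv(64, (3, 3), padding='SAME'), stax.Relu(),
    stax.Conv(64, (3, 3), padding='SAME'), stax.Relu(),
    stax.Flatten(), stax.Dense(10),
)

def kip_loss(x_support, y_support, x_target, y_target, reg=1e-6):
    """Loss of kernel ridge regression fit on the distilled (support) set,
    evaluated on a batch of real (target) data."""
    k_ss = kernel_fn(x_support, x_support, 'ntk')   # support-support NTK
    k_ts = kernel_fn(x_target, x_support, 'ntk')    # target-support NTK
    n = k_ss.shape[0]
    # Ridge-regularized solve; predictions on the target batch.
    alpha = jnp.linalg.solve(k_ss + reg * jnp.eye(n), y_support)
    preds = k_ts @ alpha
    return 0.5 * jnp.mean((preds - y_target) ** 2)

# Gradients w.r.t. the distilled images and labels drive the meta-updates.
grad_fn = jax.jit(jax.grad(kip_loss, argnums=(0, 1)))
```

In this sketch the distilled dataset (x_support, y_support) would be initialized (e.g., from random training examples) and repeatedly updated with grad_fn against fresh batches of real data; the hyperparameters shown are placeholders.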

