Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation

03/16/2022
by Dmitry Medvedev, et al.

Using huge training datasets can be costly and inconvenient. This article explores data distillation techniques that reduce the amount of data required to successfully train deep networks. Inspired by recent ideas, we propose new data distillation techniques based on generative teaching networks, gradient matching, and the Implicit Function Theorem. Experiments on the MNIST image classification problem show that the new methods are computationally more efficient than previous ones and improve the performance of models trained on distilled data.

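Gradient matching, one of the ingredients named in the abstract, optimizes a small synthetic set so that the gradients a network computes on it resemble the gradients it computes on real data. The snippet below is a minimal PyTorch sketch of that general idea, not the authors' implementation; the model architecture, hyperparameters, and names such as gradient_match_loss are illustrative assumptions.

```python
# Minimal sketch of gradient matching for data distillation.
# Assumptions: PyTorch, MNIST-sized inputs, a small MLP; names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_match_loss(model, real_x, real_y, syn_x, syn_y):
    """Distance between gradients of the classification loss on a real
    batch and on the learnable synthetic batch."""
    params = [p for p in model.parameters() if p.requires_grad]
    real_grads = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), params)
    syn_grads = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), params, create_graph=True)
    # Sum of per-layer squared distances (cosine distance is another common choice).
    return sum((g_r - g_s).pow(2).sum() for g_r, g_s in zip(real_grads, syn_grads))

# Learnable synthetic set: 10 images (one per MNIST class) with fixed labels.
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)
syn_y = torch.arange(10)
opt_syn = torch.optim.Adam([syn_x], lr=0.1)

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# One outer step: update the synthetic images so training gradients match.
real_x, real_y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
opt_syn.zero_grad()
loss = gradient_match_loss(model, real_x, real_y, syn_x, syn_y)
loss.backward()
opt_syn.step()
```

In practice the synthetic set is refined over many such outer steps, typically while the network is trained for a few inner steps or periodically re-initialized, so the distilled data remains useful across different network states.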

Related research

Knowledge distillation using unlabeled mismatched images (03/21/2017)
Current approaches for Knowledge Distillation (KD) either directly use t...

New Properties of the Data Distillation Method When Working With Tabular Data (10/19/2020)
Data distillation is the problem of reducing the volume of training data ...

A closer look at the training dynamics of knowledge distillation (03/20/2023)
In this paper we revisit the efficacy of knowledge distillation as a fun...

Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models (06/02/2023)
Knowledge distillation in neural networks refers to compressing a large ...

On Training Implicit Models (11/09/2021)
This paper focuses on training implicit models of infinite layers. Speci...

Dataset Distillation with Convexified Implicit Gradients (02/13/2023)
We propose a new dataset distillation algorithm using reparameterization...

DREAM: Efficient Dataset Distillation by Representative Matching (02/28/2023)
Dataset distillation aims to generate small datasets with little informa...
