Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation

03/16/2022
by Dmitry Medvedev, et al.

Using huge training datasets can be costly and inconvenient. This article explores data distillation techniques that reduce the amount of data required to successfully train deep networks. Inspired by recent ideas, we suggest new data distillation techniques based on generative teaching networks, gradient matching, and the Implicit Function Theorem. Experiments on the MNIST image classification problem show that the new methods are computationally more efficient than previous ones and improve the performance of models trained on distilled data.
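To make the gradient-matching idea concrete, below is a minimal PyTorch-style sketch of one distillation step; it is not the authors' implementation, and the function name, the cosine-distance matching loss, and the MNIST-sized shapes in the usage comments are illustrative assumptions. Learnable synthetic images are updated so that the gradients they induce in a model mimic the gradients produced by a real batch.

# A minimal sketch of gradient-matching data distillation (assumes PyTorch).
import torch
import torch.nn.functional as F

def gradient_match_step(model, real_x, real_y, syn_x, syn_y, syn_opt):
    """One distillation step: update the synthetic batch so the gradients it
    induces match those of a real batch (cosine-distance matching)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients of the classification loss on the real batch.
    real_loss = F.cross_entropy(model(real_x), real_y)
    g_real = torch.autograd.grad(real_loss, params)

    # Gradients on the synthetic batch, kept in the graph so the matching
    # loss can be differentiated with respect to syn_x.
    syn_loss = F.cross_entropy(model(syn_x), syn_y)
    g_syn = torch.autograd.grad(syn_loss, params, create_graph=True)

    # Sum of per-parameter cosine distances between the two gradient sets.
    match = sum(1 - F.cosine_similarity(a.flatten(), b.flatten().detach(), dim=0)
                for a, b in zip(g_syn, g_real))

    syn_opt.zero_grad()
    match.backward()   # updates only the synthetic images, not the model
    syn_opt.step()
    return match.item()

# Illustrative setup for MNIST-sized synthetic data (hypothetical names):
# syn_x = torch.randn(100, 1, 28, 28, requires_grad=True)   # 10 images per class
# syn_y = torch.arange(10).repeat_interleave(10)            # fixed labels
# syn_opt = torch.optim.SGD([syn_x], lr=0.1)

In this formulation only the synthetic images are optimized; the model itself is periodically re-initialized or trained on the synthetic data in an outer loop, which is where the implicit-differentiation variants discussed in the article come into play.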
