Data Cleansing for Deep Neural Networks with Storage-efficient Approximation of Influence Functions

03/22/2021
by Kenji Suzuki, et al.

Identifying the influence of training data for data cleansing can improve the accuracy of deep learning. An approach based on stochastic gradient descent (SGD), called SGD-influence, was proposed to calculate influence scores, but its computational cost is high: the model parameters must be temporarily stored throughout the training phase so that the influence scores can be computed in the inference phase. Building closely on that method, we propose a way to reduce the cache files used to store parameters during training for influence-score calculation: we use only the final parameters of the last epoch when evaluating the influence functions. In our classification experiments on the MNIST dataset, the cache size with our approach is 1.236 MB, whereas the previous method requires 1.932 GB for the last epoch, a reduction to roughly 1/1,563. We also observed an accuracy improvement from data cleansing, i.e., removing negatively influential training data, with our approach as well as with the previous method. Moreover, our simple and general method for calculating influence scores is available without programming in our AutoML tool, Neural Network Console. The source code is also available.
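To illustrate the general idea of scoring training examples from only the final parameters (rather than the authors' exact SGD-influence procedure), here is a minimal, hypothetical sketch in PyTorch: each training example is scored by the inner product of its loss gradient with the validation-loss gradient, both evaluated at the final model parameters. The model, loss function, data loaders, and learning rate `lr` are placeholders, and the sign convention (negative scores indicating harmful examples) is an assumption of this sketch.

```python
# Sketch only: first-order influence scoring from final-epoch parameters.
# Not the authors' implementation; names and sign convention are assumptions.
import torch


def flat_grad(loss, params):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


def influence_scores(model, loss_fn, train_examples, val_loader, lr):
    """Score each training example by -lr * <grad_val, grad_train_i>,
    using only the final parameters (no per-step parameter cache)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the average validation loss at the final parameters.
    val_loss = sum(loss_fn(model(x), y) for x, y in val_loader) / len(val_loader)
    g_val = flat_grad(val_loss, params)

    scores = []
    for x, y in train_examples:  # one (input, label) pair at a time
        loss_i = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        g_i = flat_grad(loss_i, params)
        # More negative score -> removing this example is expected to help.
        scores.append(-lr * torch.dot(g_val, g_i).item())
    return scores
```

Because only the final parameters are needed, the cache reduces to a single model checkpoint instead of parameters saved at every training step, which is the source of the storage savings reported above.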


