Reducing Flipping Errors in Deep Neural Networks

03/16/2022
by Xiang Deng, et al.

Deep neural networks (DNNs) have been widely applied across domains of artificial intelligence, including computer vision and natural language processing. A DNN is typically trained for many epochs, and a validation dataset is then used to select the model from one epoch (which we simply call "the last epoch") as the final model for making predictions on unseen samples, although this model usually cannot achieve perfect accuracy on such samples. An interesting question is: how many of the test (unseen) samples that a DNN misclassifies in the last epoch were correctly classified by the DNN at some earlier epoch? In this paper, we study this question empirically and find, on several benchmark datasets, that the vast majority of the samples misclassified in the last epoch were classified correctly at some point before it; in other words, the predictions for these samples were flipped from "correct" to "wrong". Motivated by this observation, we propose to restrict the behavior changes of a DNN on correctly-classified samples, so that the correct local decision boundaries are maintained and the flipping error on unseen samples is largely reduced. Extensive experiments on different benchmark datasets with different modern network architectures demonstrate that the proposed flipping error reduction (FER) approach can substantially improve the generalization, robustness, and transferability of DNNs without introducing any additional network parameters or inference cost, and with only a negligible training overhead.
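The abstract does not spell out how the behavior restriction is implemented, so the following is only a minimal sketch of one plausible instantiation in PyTorch, not the paper's actual method: alongside the usual cross-entropy loss, a KL-divergence term (with an assumed weight alpha) pulls the current predictions toward those of a frozen snapshot of the model from an earlier epoch, but only on the samples that the snapshot classified correctly. The function name fer_loss and all hyperparameters are illustrative.

```python
# Sketch of a flipping-error-reduction-style regularizer (assumed form,
# not taken from the paper): keep current predictions close to an earlier
# model snapshot's predictions on samples the snapshot got right, so the
# correct local boundaries learned earlier are not flipped away.
import torch
import torch.nn.functional as F

def fer_loss(logits, snapshot_logits, targets, alpha=1.0):
    """Cross-entropy plus a consistency penalty on previously-correct samples.

    logits:          current model outputs, shape (batch, num_classes)
    snapshot_logits: outputs of a frozen earlier-epoch snapshot, same shape
    targets:         ground-truth class indices, shape (batch,)
    alpha:           weight of the consistency term (assumed hyperparameter)
    """
    ce = F.cross_entropy(logits, targets)

    # Mask: samples the earlier snapshot already classified correctly.
    correct = snapshot_logits.argmax(dim=1).eq(targets)
    if correct.any():
        # Penalize prediction changes on those samples only.
        kl = F.kl_div(
            F.log_softmax(logits[correct], dim=1),
            F.softmax(snapshot_logits[correct], dim=1),
            reduction="batchmean",
        )
    else:
        kl = logits.new_zeros(())
    return ce + alpha * kl
```

In a training loop, one would keep a frozen copy of the model from a previous epoch and compute snapshot_logits under torch.no_grad() before calling this loss; such a scheme adds no parameters or inference cost, only the extra snapshot forward pass during training.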
