Learning with Retrospection

12/24/2020
by Xiang Deng, et al.

Deep neural networks (DNNs) have been successfully deployed across many domains of artificial intelligence, including computer vision and natural language processing. We observe that the standard procedure for training DNNs discards all of the information learned in past epochs except the current weights. An interesting question is: is this discarded information really useless? We argue that it can benefit the subsequent training. In this paper, we propose learning with retrospection (LWR), which uses the information learned in past epochs to guide later training. LWR is a simple yet effective training framework that improves the accuracy, calibration, and robustness of DNNs without introducing any additional network parameters or inference cost, and with only negligible training overhead. Extensive experiments on several benchmark datasets demonstrate the superiority of LWR for training DNNs.
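
To make the idea concrete, below is a minimal PyTorch-style sketch of one way such retrospection can be wired into a training loss: soft predictions recorded in a past epoch serve as an extra teaching signal in the current epoch. The function name, the mixing weight alpha, and the temperature T are illustrative assumptions, not the exact formulation from the paper.

```python
# Illustrative sketch only: reuse soft predictions saved in a past epoch as an
# additional target when computing the loss in a later epoch. The names
# `lwr_loss`, `alpha`, and `T` are assumptions for illustration.
import torch
import torch.nn.functional as F

def lwr_loss(logits, targets, past_probs=None, alpha=0.5, T=4.0):
    """Cross-entropy plus a KL term toward soft labels saved from a past epoch."""
    ce = F.cross_entropy(logits, targets)
    if past_probs is None:
        # No retrospective information yet (e.g. the first epoch): plain CE.
        return ce
    # Soften current predictions and pull them toward the past-epoch distribution.
    log_p = F.log_softmax(logits / T, dim=1)
    kl = F.kl_div(log_p, past_probs, reduction="batchmean") * (T * T)
    return (1.0 - alpha) * ce + alpha * kl

# During training one could store softened predictions per sample, e.g.
#   past_probs_bank[idx] = F.softmax(logits.detach() / T, dim=1)
# and feed past_probs_bank[idx] back into lwr_loss in subsequent epochs.
```
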


Related research

- 03/16/2022: Reducing Flipping Errors in Deep Neural Networks
- 09/17/2020: Deep Collective Learning: Learning Optimal Inputs and Weights Jointly in Deep Neural Networks
- 11/01/2020: An Embarrassingly Simple Approach to Training Ternary Weight Networks
- 09/23/2020: Deep Neural Networks with Short Circuits for Improved Gradient Learning
- 12/18/2018: Safety and Trustworthiness of Deep Neural Networks: A Survey
- 12/02/2020: DecisiveNets: Training Deep Associative Memories to Solve Complex Machine Learning Problems
- 02/27/2018: How (Not) To Train Your Neural Network Using the Information Bottleneck Principle
