A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning

09/03/2020
by Martin Mundt, et al.

Current deep learning research is dominated by benchmark evaluation: a method is regarded as favorable if it empirically performs well on the dedicated test set. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving sets of benchmark data are investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten due to iterative parameter updates. However, methods are compared in isolation from real-world application and are typically judged by accumulated test set performance. The closed world assumption remains predominant: it is assumed that during deployment a model is guaranteed to encounter data stemming from the same distribution as the training data. This poses a massive challenge, as neural networks are well known to provide overconfident false predictions on unknown instances and to break down in the face of corrupted data. In this work we argue that notable lessons from open set recognition, the identification of statistically deviating data outside of the observed dataset, and the adjacent field of active learning, where data is incrementally queried such that the expected performance gain is maximized, are frequently overlooked in the deep learning era. Based on these forgotten lessons, we propose a consolidated view to bridge continual learning, active learning and open set recognition in deep neural networks. Our results show that this not only benefits each individual paradigm, but also highlights their natural synergies in a common framework. We empirically demonstrate improvements in alleviating catastrophic forgetting, querying data in active learning, and selecting task orders, while exhibiting robust open world application where previously proposed methods fail.
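To make the two auxiliary paradigms the abstract names concrete, the following is a minimal illustrative sketch, not the paper's actual method: one common baseline uses the entropy of a model's softmax output both to reject statistically deviating inputs as "unknown" (open set recognition) and to rank unlabeled examples for labeling (active learning). The function names and the entropy threshold are hypothetical choices for this example.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of each softmax prediction; high values signal uncertainty."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def reject_open_set(probs, threshold):
    """Flag inputs whose predictive entropy exceeds a threshold as
    'unknown' rather than forcing a closed-world class label."""
    return predictive_entropy(probs) > threshold

def active_query(probs, budget):
    """Select the `budget` most uncertain unlabeled examples to label next."""
    return np.argsort(-predictive_entropy(probs))[:budget]

# Toy softmax outputs: two confident rows, one near-uniform (uncertain) row.
probs = np.array([
    [0.98, 0.01, 0.01],
    [0.90, 0.05, 0.05],
    [0.34, 0.33, 0.33],
])
print(reject_open_set(probs, threshold=1.0))  # only the near-uniform row exceeds the threshold
print(active_query(probs, budget=1))          # the near-uniform row is queried first
```

Note that such softmax-entropy heuristics are exactly the kind of closed-world scoring the paper critiques as overconfident on truly unknown data; the consolidated framework proposed in the work replaces them with more robust open-world mechanisms.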


