Latent Replay for Real-Time Continual Learning

12/02/2019
by   Lorenzo Pellegrini, et al.

Training deep networks on lightweight computational devices is still very challenging. Continual learning techniques, where complex models are incrementally trained on small batches of new data, can make the learning problem tractable even for CPU-only edge devices. However, a number of practical problems need to be solved, first and foremost catastrophic forgetting. In this paper we introduce an original technique named “Latent Replay” where, instead of storing a portion of past data in the input space, we store activation volumes at some intermediate layer. This can significantly reduce the computation and storage required by native rehearsal. To keep the representation stable and the stored activations valid, we propose to slow down learning at all the layers below the latent replay one, leaving the layers above free to learn at full pace. In our experiments we show that Latent Replay, combined with existing continual learning techniques, achieves state-of-the-art accuracy on a difficult benchmark such as CORe50 NICv2 with nearly 400 small and highly non-i.i.d. batches. Finally, we demonstrate the feasibility of nearly real-time continual learning on the edge by porting the proposed technique to a smartphone device.
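The core idea above can be sketched in a few lines: the network is split at the latent replay layer, activations produced below that layer are stored in a bounded buffer instead of raw inputs, and each training step updates only the upper layers on a mix of current and replayed latents. The sketch below is a minimal toy illustration under assumed names and sizes (`W_lower`, `W_upper`, `BUFFER_SIZE`, a squared-error loss), not the paper's actual architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-stage network: "lower" layers (below the latent replay layer)
# and "upper" layers (above it). Shapes are illustrative assumptions.
W_lower = rng.normal(size=(8, 4))   # frozen/slowed-down feature extractor
W_upper = rng.normal(size=(4, 2))   # trained at full pace

def lower_forward(x):
    # Activations at the latent replay layer (ReLU of a linear map)
    return np.maximum(x @ W_lower, 0.0)

# Latent replay buffer: stores activation volumes, not raw input images
buffer_latents, buffer_labels = [], []
BUFFER_SIZE = 32  # max number of stored latent samples (assumption)

def train_step(x_new, y_new, lr=0.01):
    global W_upper
    z_new = lower_forward(x_new)
    # Mix the current batch's latents with replayed ones from past batches
    if buffer_latents:
        z = np.vstack([z_new] + buffer_latents)
        y = np.concatenate([y_new] + buffer_labels)
    else:
        z, y = z_new, y_new
    # One SGD step on the upper layers only (toy squared-error loss);
    # the lower layers are not updated, so stored latents stay valid
    pred = z @ W_upper
    target = np.eye(2)[y]
    grad = z.T @ (pred - target) / len(z)
    W_upper -= lr * grad
    # Store the new latents for future replay (FIFO eviction)
    buffer_latents.append(z_new)
    buffer_labels.append(y_new)
    while sum(len(b) for b in buffer_latents) > BUFFER_SIZE:
        buffer_latents.pop(0)
        buffer_labels.pop(0)

# Usage: a stream of small, possibly non-i.i.d. batches
for _ in range(10):
    x = rng.normal(size=(4, 8))
    y = rng.integers(0, 2, size=4)
    train_step(x, y)
print(sum(len(b) for b in buffer_latents))  # stored latent samples
```

Because only low-dimensional activations are stored and only the upper layers are updated, both the memory footprint and the per-step compute shrink relative to replaying raw inputs through the whole network, which is what makes the approach plausible on CPU-only edge devices.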

Related research:

- Sample Condensation in Online Continual Learning (06/23/2022)
- Continual Learning of New Sound Classes using Generative Replay (06/03/2019)
- Continual Learning at the Edge: Real-Time Training on Smartphone Devices (05/24/2021)
- Continual Learning on the Edge with TensorFlow Lite (05/05/2021)
- Progressive Latent Replay for efficient Generative Rehearsal (07/04/2022)
- A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays (10/20/2021)
- KRNet: Towards Efficient Knowledge Replay (05/23/2022)
