Continual Learning of Predictive Models in Video Sequences via Variational Autoencoders

06/02/2020
by Damian Campo, et al.

This paper proposes a method for the continual learning of predictive models that infer future frames in video sequences. For a first given experience, an initial Variational Autoencoder, together with a set of fully connected neural networks, is used to learn, respectively, the appearance of video frames and their dynamics in the latent space. By employing an adapted Markov Jump Particle Filter, the proposed method recognizes new situations and integrates them as additional predictive models while avoiding catastrophic forgetting of previously learned tasks. To evaluate the proposed method, this article uses video sequences from a vehicle performing different tasks in a controlled environment.
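The core continual-learning idea, spawning a new predictive model when no existing one explains the observed latent-space transition, can be sketched as follows. This is an illustrative simplification, not the authors' implementation: each "predictive model" is reduced to a linear map on VAE latent codes, and the Markov Jump Particle Filter is replaced by a simple prediction-error test; the class name, threshold, and initialization are all assumptions made for the example.

```python
import numpy as np

class ContinualLatentPredictor:
    """Toy sketch: a bank of linear dynamics models over a VAE latent space.

    A transition (z -> z_next) is assigned to the best-fitting existing
    model; if none fits within `threshold`, a new model is appended rather
    than overwriting old ones, which avoids catastrophic forgetting.
    """

    def __init__(self, latent_dim, threshold=1.0):
        self.latent_dim = latent_dim
        self.threshold = threshold   # max tolerated prediction error (assumed)
        self.models = []             # list of (A, b) linear dynamics models

    def _error(self, model, z, z_next):
        A, b = model
        return np.linalg.norm(A @ z + b - z_next)

    def observe(self, z, z_next):
        """Assign the transition to a known model, or spawn a new one."""
        if self.models:
            errs = [self._error(m, z, z_next) for m in self.models]
            best = int(np.argmin(errs))
            if errs[best] < self.threshold:
                return best          # recognized situation
        # Unseen dynamics: add a new model instead of retraining old ones.
        A = np.eye(self.latent_dim)
        b = z_next - z               # crude constant-offset initialization
        self.models.append((A, b))
        return len(self.models) - 1

    def predict(self, z, model_idx):
        """Predict the next latent code under a chosen dynamics model."""
        A, b = self.models[model_idx]
        return A @ z + b
```

For example, two transitions with incompatible dynamics end up in two separate models, so learning the second does not degrade predictions for the first:

```python
pred = ContinualLatentPredictor(latent_dim=2, threshold=0.5)
pred.observe(np.zeros(2), np.array([1.0, 0.0]))   # first experience -> model 0
pred.observe(np.zeros(2), np.array([0.0, 1.0]))   # new dynamics -> model 1
```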


