Effective prevention of semantic drift as angular distance in memory-less continual deep neural networks

12/16/2021
by   Khouloud Saadi, et al.

Lifelong machine learning, or continual learning, models attempt to learn incrementally by accumulating knowledge across a sequence of tasks; as a result, they learn better and faster. They are used in intelligent systems that must interact with humans or other dynamic environments, e.g., chatbots and self-driving cars. The memory-less approach is often used with deep neural networks: incoming information from new tasks is accommodated within the network's own architecture, allowing it to perform well on all seen tasks. Such models suffer from semantic drift, also known as the plasticity-stability dilemma. Existing models use Minkowski distance measures to decide which nodes to freeze, update, or duplicate, but these metrics separate nodes poorly because they are susceptible to high-dimensional sparse vectors. In our proposed approach, we instead use angular distance to evaluate the semantic drift in individual nodes, which provides better separation of nodes and thus a better balance between stability and plasticity. The proposed approach outperforms state-of-the-art models by maintaining higher accuracy on standard datasets.
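The idea of scoring per-node drift by angle rather than by a Minkowski norm can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `angular_distance` and `classify_nodes` helpers, the drift threshold, and the freeze/update rule are all hypothetical names and values chosen for the example.

```python
import numpy as np

def angular_distance(u, v):
    # Angle between two weight vectors, normalized to [0, 1].
    # Unlike Euclidean distance, this depends only on direction,
    # not on the magnitude of the (possibly sparse) vectors.
    cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_sim, -1.0, 1.0)) / np.pi

def classify_nodes(old_weights, new_weights, threshold=0.1):
    # For each node, compare its incoming-weight vector before and
    # after training on a new task: small angular drift means the
    # node's semantics are stable and it can keep being updated;
    # large drift means it should be frozen (or duplicated) to
    # protect knowledge from earlier tasks.
    decisions = []
    for u, v in zip(old_weights, new_weights):
        drift = angular_distance(u, v)
        decisions.append("update" if drift < threshold else "freeze")
    return decisions

# Toy demo: two nodes barely move, two change direction entirely.
rng = np.random.default_rng(0)
old = [rng.standard_normal(64) for _ in range(4)]
new = [w + 0.01 * rng.standard_normal(64) for w in old[:2]] + \
      [rng.standard_normal(64) for _ in range(2)]
print(classify_nodes(old, new))
```

Because the measure is purely directional, a node whose weights merely rescale during new-task training registers no drift, while a node whose weights rotate toward a new concept is flagged even if the Euclidean change is small.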

Related research:
- Centroid Distance Distillation for Effective Rehearsal in Continual Learning (03/06/2023)
- Class-Incremental Experience Replay for Continual Learning under Concept Drift (04/24/2021)
- Nonparametric Bayesian Structure Adaptation for Continual Learning (12/08/2019)
- New metrics for analyzing continual learners (09/01/2023)
- Training Networks in Null Space of Feature Covariance for Continual Learning (03/12/2021)
- Adaptive Reorganization of Neural Pathways for Continual Learning with Hybrid Spiking Neural Networks (09/18/2023)
- Learning Representations on the Unit Sphere: Application to Online Continual Learning (06/06/2023)
