Bayesian Nonparametric Weight Factorization for Continual Learning
Naively trained neural networks tend to experience catastrophic forgetting in sequential task settings, where data from previous tasks are unavailable. A number of methods, using various model expansion strategies, have been proposed recently as possible solutions. However, deciding how much to expand the model is left to the practitioner, and for simplicity a constant schedule is typically chosen, regardless of how complex the incoming task is. Instead, we propose a principled Bayesian nonparametric approach based on the Indian Buffet Process (IBP) prior, letting the data determine how much the model's complexity should grow. We pair this with a factorization of the neural network's weight matrices. Such an approach allows us to scale the number of factors of each weight matrix to the complexity of the task, while the IBP prior imposes weight-factor sparsity and encourages factor reuse, promoting positive knowledge transfer between tasks. We demonstrate the effectiveness of our method on a number of continual learning benchmarks and analyze how weight factors are allocated and reused throughout training.
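As a rough illustration of the idea described in the abstract, the sketch below builds a layer's weights from a pool of shared rank-1 factors, with a stick-breaking IBP draw selecting which factors a given task activates. The factorization form W_t = U diag(z_t) V^T, the function names, and the hyperparameters are assumptions made for illustration only; they are not the paper's exact parameterization or its inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def ibp_factor_probs(alpha, K_max=50):
    """Stick-breaking construction of IBP factor-activation probabilities.

    Hypothetical helper: pi_k = prod_{j<=k} v_j with v_j ~ Beta(alpha, 1),
    so later factors are progressively less likely to be switched on.
    """
    v = rng.beta(alpha, 1.0, size=K_max)
    return np.cumprod(v)

def task_weight_matrix(U, V, z):
    """Reconstruct one layer's weights from shared rank-1 factors.

    W_t = U diag(z_t) V^T, where z_t is a sparse binary vector (drawn via
    the IBP) selecting which factors this task uses. Factors reused across
    tasks model positive transfer; a new task may switch on previously
    unused factors, growing the effective model capacity with the data.
    """
    return (U * z) @ V.T

# Toy dimensions: a 64x32 layer factorized into at most K = 50 rank-1 factors.
d_out, d_in, K = 64, 32, 50
U = rng.normal(scale=0.1, size=(d_out, K))
V = rng.normal(scale=0.1, size=(d_in, K))

pi = ibp_factor_probs(alpha=3.0, K_max=K)   # per-factor activation probabilities
z_task = rng.binomial(1, pi)                # sparse binary factor selection for one task
W_task = task_weight_matrix(U, V, z_task)
print("active factors:", int(z_task.sum()), "| W shape:", W_task.shape)
```

In this toy setting the concentration parameter alpha controls how many factors a task tends to activate, which stands in for the paper's data-driven choice of how much to expand the model per task.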