
Learn to Bind and Grow Neural Structures

by Azhar Shaikh et al.

Task-incremental learning poses the challenging problem of learning new tasks continually without forgetting past knowledge. Many approaches address the problem by expanding the structure of a shared neural network as tasks arrive, but struggle to grow optimally without losing past knowledge. We present a new framework, Learn to Bind and Grow, which incrementally learns a neural architecture for a new task, either by binding with the layers of a similar task or by expanding layers that are more likely to conflict between tasks. Central to our approach is a novel, interpretable parameterization of the shared multi-task architecture space, which enables computing globally optimal architectures using Bayesian optimization. Experiments on continual learning benchmarks show that our framework performs comparably with earlier expansion-based approaches and can flexibly compute multiple optimal solutions with performance-size trade-offs.
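As a rough illustration of the bind-or-grow idea described above (not the paper's actual method, search procedure, or numbers), the architecture for a new task can be parameterized as a per-layer decision vector: each layer either binds to the corresponding layer of a similar previous task or grows a fresh task-specific layer. The hypothetical per-layer conflict scores, layer sizes, and accuracy proxy below are invented for the sketch; the paper searches this space with Bayesian optimization, whereas the toy code simply enumerates the 2^L configurations and keeps the Pareto front of the performance-size trade-off:

```python
import itertools

NUM_LAYERS = 4

# Hypothetical per-layer "conflict" scores between the new task and the
# task it would bind to. Purely illustrative values.
CONFLICT = (0.1, 0.2, 0.6, 0.8)

def all_configs(num_layers=NUM_LAYERS):
    # Every per-layer bind-or-grow decision vector.
    return list(itertools.product(("bind", "grow"), repeat=num_layers))

def model_size(config, layer_params=1000):
    # Only grown layers add new parameters to the shared network;
    # bound layers reuse existing ones.
    return sum(layer_params for d in config if d == "grow")

def toy_accuracy(config, conflict=CONFLICT):
    # Invented proxy: binding a layer that conflicts strongly between
    # tasks costs accuracy; growing that layer avoids the conflict.
    acc = 1.0
    for d, c in zip(config, conflict):
        if d == "bind":
            acc -= 0.1 * c
    return acc

def pareto_front(configs):
    # Keep configurations not dominated in (higher accuracy, smaller size).
    front = []
    for c in configs:
        a, s = toy_accuracy(c), model_size(c)
        dominated = any(
            toy_accuracy(o) >= a and model_size(o) <= s
            and (toy_accuracy(o) > a or model_size(o) < s)
            for o in configs
        )
        if not dominated:
            front.append((c, a, s))
    return front

front = pareto_front(all_configs())
for config, acc, size in sorted(front, key=lambda t: t[2]):
    print(config, round(acc, 3), size)
```

With these toy numbers, the front contains one configuration per size level, growing the highest-conflict layers first: the all-bind network is smallest but least accurate, the all-grow network is largest but avoids all conflicts, matching the multiple performance-size trade-off solutions mentioned in the abstract.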
