Learn to Bind and Grow Neural Structures

11/21/2020
by Azhar Shaikh, et al.

Task-incremental learning poses the challenging problem of learning new tasks continually without forgetting past knowledge. Many approaches address the problem by expanding the structure of a shared neural network as tasks arrive, but they struggle to grow optimally without losing past knowledge. We present a new framework, Learn to Bind and Grow, which learns a neural architecture for a new task incrementally, either by binding with layers of a similar task or by expanding layers that are more likely to conflict between tasks. Central to our approach is a novel, interpretable parameterization of the shared, multi-task architecture space, which enables computing globally optimal architectures using Bayesian optimization. Experiments on continual learning benchmarks show that our framework performs comparably to earlier expansion-based approaches and can flexibly compute multiple optimal solutions with performance-size trade-offs.
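The core per-layer decision described above can be sketched in a few lines. This is a minimal, illustrative sketch and not the paper's implementation: the names `plan_architecture`, `layer_conflicts`, and `threshold` are assumptions, standing in for the paper's parameterization of the shared architecture space (over which Bayesian optimization would then search, e.g. by tuning the threshold or per-layer choices directly).

```python
def plan_architecture(layer_conflicts, threshold=0.5):
    """For each layer of the shared network, decide whether the new task
    should bind to (reuse) the existing layer or grow a new task-specific
    copy, based on an estimated inter-task conflict score in [0, 1].

    Hypothetical sketch: conflict estimation and the threshold value are
    placeholders, not the paper's actual method.
    """
    plan = []
    for layer_idx, conflict in enumerate(layer_conflicts):
        # Low conflict: the tasks are similar at this depth, so share weights.
        # High conflict: sharing would interfere, so expand the network here.
        action = "bind" if conflict < threshold else "grow"
        plan.append((layer_idx, action))
    return plan


# Example: conflict scores for a 4-layer shared network.
print(plan_architecture([0.1, 0.3, 0.7, 0.9]))
```

A higher threshold yields smaller networks (more binding, more risk of interference), while a lower one grows more task-specific layers; sweeping this trade-off is one way multiple performance-size optima could arise.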


Related research

08/12/2020
Continual Class Incremental Learning for CT Thoracic Segmentation
Deep learning organ segmentation approaches require large amounts of ann...

02/17/2021
Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks
We propose firefly neural architecture descent, a general framework for ...

01/11/2023
Continual Few-Shot Learning Using HyperTransformers
We focus on the problem of learning without forgetting from multiple tas...

06/26/2023
Parameter-Level Soft-Masking for Continual Learning
Existing research on task incremental learning in continual learning has...

04/26/2022
Theoretical Understanding of the Information Flow on Continual Learning Performance
Continual learning (CL) is a setting in which an agent has to learn from...

12/23/2020
Efficient Continual Learning with Modular Networks and Task-Driven Priors
Existing literature in Continual Learning (CL) has focused on overcoming...

01/24/2022
PaRT: Parallel Learning Towards Robust and Transparent AI
This paper takes a parallel learning approach for robust and transparent...
