Learning to Grow Artificial Hippocampi in Vision Transformers for Resilient Lifelong Learning

03/14/2023
by Chinmay Savadikar, et al.

Lifelong learning without catastrophic forgetting (i.e., resiliency), as possessed by human intelligence, is entangled with sophisticated memory mechanisms in the brain, especially the long-term memory (LM) maintained by the Hippocampi. To a certain extent, Transformers have emerged as the counterpart "Brain" of Artificial Intelligence (AI), yet the LM component remains under-explored for lifelong learning settings. This paper presents a method of learning to grow Artificial Hippocampi (ArtiHippo) in Vision Transformers (ViTs) for resilient lifelong learning. Based on a comprehensive ablation study, the final linear projection layer in the multi-head self-attention (MHSA) block is selected for realizing and growing ArtiHippo. ArtiHippo is represented by a mixture of experts (MoEs), where each expert component is an on-site variant of the linear projection layer, maintained via neural architecture search (NAS) with a search space defined by four basic growing operations: skip, reuse, adapt, and new. The LM of a task consists of two parts: the dedicated expert components (as model parameters) at different layers of a ViT, learned via NAS, and the mean class-tokens (as stored latent vectors for measuring task similarity) associated with those expert components. For a new task, a hierarchical, task-similarity-oriented exploration-exploitation sampling based NAS is proposed to learn the expert components. The task similarity is measured by the normalized cosine similarity between the mean class-token of the new task and those of the old tasks. The proposed method is complementary to prompt-based lifelong learning with ViTs. In experiments, the proposed method is tested on the challenging Visual Domain Decathlon (VDD) benchmark and the recently proposed 5-Dataset benchmark, and it obtains consistently better performance than the prior art, with sensible ArtiHippo learned continually.
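To make the task-similarity measure concrete, below is a minimal sketch in PyTorch of computing a task's mean class-token from a frozen ViT backbone and comparing it to the stored mean class-tokens of previously learned tasks via normalized cosine similarity. All names (mean_class_token, task_similarities, sample_operation), the assumption that the backbone's forward pass returns token embeddings with the class token at index 0, and the min-max normalization are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def mean_class_token(vit, loader, device="cpu"):
    """Average the [CLS] token of a frozen ViT backbone over a task's data.

    Assumes `vit(x)` returns token embeddings of shape (B, N, D) with the
    class token at index 0 (a hypothetical interface, not the paper's code).
    """
    vit.eval()
    total, count = None, 0
    with torch.no_grad():
        for x, _ in loader:
            cls = vit(x.to(device))[:, 0, :]                    # (B, D) class tokens
            total = cls.sum(0) if total is None else total + cls.sum(0)
            count += cls.size(0)
    return total / count                                        # (D,) mean class token

def task_similarities(new_token, old_tokens):
    """Cosine similarity between the new task's mean class token and the
    stored mean class tokens of old tasks, rescaled to [0, 1].

    Min-max rescaling is one plausible normalization; the paper's exact
    scheme may differ.
    """
    sims = F.cosine_similarity(new_token.unsqueeze(0), old_tokens, dim=1)  # (T,)
    return (sims - sims.min()) / (sims.max() - sims.min() + 1e-8)

def sample_operation(sims, explore_prob=0.3):
    """Toy exploration-exploitation sampling over the four growing operations.

    With probability `explore_prob`, pick an operation uniformly (exploration);
    otherwise exploit the most similar old task, reusing its expert when the
    similarity is high and adapting it otherwise. Purely illustrative; the
    paper's hierarchical sampler is more involved.
    """
    ops = ["skip", "reuse", "adapt", "new"]
    if torch.rand(1).item() < explore_prob:
        return ops[torch.randint(len(ops), (1,)).item()], None
    best = int(torch.argmax(sims))
    return ("reuse" if sims[best] > 0.5 else "adapt"), best
```

Given such similarities, a hierarchical sampler could bias the per-layer choice among skip, reuse, adapt, and new toward reusing or adapting experts of highly similar old tasks and toward growing new experts otherwise; the toy sample_operation above only hints at that exploration-exploitation trade-off.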

