Efficient and robust multi-task learning in the brain with modular task primitives

05/28/2021
by Christian David Marton, et al.

In a real-world setting, biological agents do not have infinite resources to learn new things. It is thus useful to recycle previously acquired knowledge in a way that allows for faster, less resource-intensive acquisition of multiple new skills. Neural networks in the brain are likely not entirely re-trained with each new task, but how they leverage existing computations to learn new tasks is not well understood. In this work, we study this question in artificial neural networks trained on commonly used neuroscience paradigms. Building on recent work from the multi-task learning literature, we propose two ingredients: (1) network modularity, and (2) learning task primitives. Together, these ingredients form inductive biases we call structural and functional, respectively. Using a corpus of nine different tasks, we show that a modular network endowed with task primitives learns multiple tasks well while keeping parameter counts, and updates, low. We also show that the skills acquired with our approach are more robust to a broad range of perturbations than those acquired with other multi-task learning strategies. This work offers a new perspective on achieving efficient multi-task learning in the brain, and makes predictions for novel neuroscience experiments in which targeted perturbations are employed to explore solution spaces.
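
To make the two ingredients concrete, here is a minimal sketch of how a modular network with task primitives might be wired up. It assumes the primitives are pretrained recurrent modules that are frozen after pretraining, with each new task adding only a small trainable linear readout over the modules' outputs; the class names, sizes, and linear-combination scheme are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PrimitiveBank(nn.Module):
    """Bank of frozen recurrent modules, each assumed pretrained on one task primitive."""
    def __init__(self, n_modules=3, input_dim=4, hidden_dim=64):
        super().__init__()
        self.primitives = nn.ModuleList(
            nn.RNN(input_dim, hidden_dim, batch_first=True) for _ in range(n_modules)
        )
        # Freeze the primitives: they are reused across tasks, never retrained.
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        # Run each module on the shared input and concatenate their trajectories.
        outs = [rnn(x)[0] for rnn in self.primitives]
        return torch.cat(outs, dim=-1)  # (batch, time, n_modules * hidden_dim)

class TaskReadout(nn.Module):
    """Per-task head: the only trainable parameters when learning a new task."""
    def __init__(self, bank, out_dim=2):
        super().__init__()
        self.bank = bank
        feat_dim = sum(rnn.hidden_size for rnn in bank.primitives)
        self.readout = nn.Linear(feat_dim, out_dim)

    def forward(self, x):
        return self.readout(self.bank(x))

bank = PrimitiveBank()
task_a = TaskReadout(bank)                  # new tasks share the same frozen bank
x = torch.randn(8, 50, 4)                   # (batch, time, input_dim)
print(task_a(x).shape)                      # torch.Size([8, 50, 2])
print(sum(p.numel() for p in task_a.parameters() if p.requires_grad))  # readout only
```

In this sketch, each additional task costs only one linear layer's worth of parameters and gradient updates, which is the low-parameter, low-update regime the abstract describes.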

Related research

11/10/2021 · Multi-Task Neural Processes
Neural processes have recently emerged as a class of powerful neural lat...

07/29/2021 · Towards robust vision by multi-task learning on monkey visual cortex
Deep neural networks set the state-of-the-art across many tasks in compu...

10/10/2022 · Continual task learning in natural and artificial agents
How do humans and other animals learn new tasks? A wave of brain recordi...

07/20/2020 · Navigating the Trade-Off between Multi-Task Learning and Learning to Multitask in Deep Neural Networks
The terms multi-task learning and multitasking are easily confused. Mult...

02/13/2023 · SubTuning: Efficient Finetuning for Multi-Task Learning
Finetuning a pretrained model has become a standard approach for trainin...

05/18/2020 · Efficient Image Gallery Representations at Scale Through Multi-Task Learning
Image galleries provide a rich source of diverse information about a pro...

05/23/2023 · When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP
Multi-task learning (MTL) aims at achieving a better model by leveraging...
