A Useful Motif for Flexible Task Learning in an Embodied Two-Dimensional Visual Environment

06/22/2017
by   Kevin T. Feigelis, et al.

Animals (especially humans) have a remarkable ability to learn new tasks quickly and to switch between them flexibly. How brains support this ability is largely unknown, both neuroscientifically and algorithmically. One reasonable supposition is that modules drawing on an underlying general-purpose sensory representation are dynamically allocated on a per-task basis. Recent results from neuroscience and artificial intelligence suggest that the role of the general-purpose visual representation may be played by a deep convolutional neural network, and give some clues as to how task modules based on such a representation might be discovered and constructed. In this work, we investigate module architectures in an embodied two-dimensional touchscreen environment, in which an agent must learn by interacting with an environment that emits images and rewards, and accepts touches as input. This environment is designed to capture the physical structure of the task environments commonly deployed in visual neuroscience and psychophysics. We show that, in this context, very simple changes in the nonlinear activations used by such a module can significantly influence how quickly it learns visual tasks and how well suited it is to switching to new tasks.
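As a rough illustration only (not taken from the paper), the interface of such a touchscreen environment might look like the following Gym-style Python sketch. The class name, the specific task (touch the brighter of two squares), and the reward rule are hypothetical; the paper's actual tasks and reward scheme may differ.

import numpy as np

class TouchscreenEnv:
    """Minimal sketch of a 2D touchscreen environment: it emits an image
    observation and a scalar reward, and accepts (x, y) touches as input.
    The task here (touch the brighter of two squares) is purely illustrative."""

    def __init__(self, size=64, rng=None):
        self.size = size
        self.rng = rng or np.random.default_rng()
        self._target = None  # center of the rewarded square

    def reset(self):
        """Draw a new screen with two squares and return the image."""
        img = np.zeros((self.size, self.size), dtype=np.float32)
        centers = self.rng.integers(8, self.size - 8, size=(2, 2))
        for (cx, cy), brightness in zip(centers, (1.0, 0.5)):
            img[cy - 4:cy + 4, cx - 4:cx + 4] = brightness
        self._target = centers[0]  # the brighter square is rewarded
        return img

    def step(self, touch_xy):
        """Accept a touch at (x, y); return (next image, reward, done)."""
        dist = np.linalg.norm(np.asarray(touch_xy, dtype=float) - self._target)
        reward = 1.0 if dist < 6 else 0.0
        return self.reset(), reward, True  # one touch per trial

A learning module would interact with this loop in the usual way, e.g. obs = env.reset(); obs, reward, done = env.step((x, y)), with the touch coordinates produced from the image observation.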


