Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning

12/03/2021
by Grace W. Lindsay, et al.

Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high-dimensional input. To what extent these representations depend on the different learning objectives is largely unknown. Here we compare the representations learned by eight convolutional neural networks, all with the same ResNet architecture and trained on the same family of egocentric images, but embedded within different learning systems. Specifically, the representations are trained to guide action in a compound reinforcement learning (RL) task; to predict one or a combination of three task-related targets with supervision; or to optimize one of three different unsupervised objectives. Using representational similarity analysis, we find that the network trained with reinforcement learning differs most from the other networks. Using metrics inspired by the neuroscience literature, we further find that the RL-trained model has a sparse and high-dimensional representation in which individual images are represented by very different patterns of neural activity. Further analysis suggests these representations may arise to support long-term behavior and goal seeking in the RL agent. Our results provide insights into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches.
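
The two kinds of analysis named in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the authors' released code: it builds representational dissimilarity matrices (RDMs) from layer activations and compares two networks with a Spearman correlation (representational similarity analysis), and it estimates representational dimensionality with a participation ratio, one common neuroscience-inspired metric. The array shapes, placeholder random activations, and function names are assumptions made for illustration.

```python
# Minimal sketch of RSA and a dimensionality metric, assuming activations
# are available as (n_images, n_units) arrays for each trained network.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rdm(activations):
    """Representational dissimilarity matrix in condensed form:
    pairwise correlation distances between image representations."""
    return pdist(activations, metric="correlation")


def rsa_score(acts_a, acts_b):
    """Second-order similarity between two networks: Spearman
    correlation of their RDMs over the same image set."""
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho


def participation_ratio(activations):
    """Effective dimensionality of a representation:
    (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    cov = np.cov(activations, rowvar=False)
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)  # guard small negatives
    return eig.sum() ** 2 / (eig ** 2).sum()


# Example with random placeholders standing in for real layer activations
# of an RL-trained encoder and a supervised one on the same image batch.
rng = np.random.default_rng(0)
acts_rl = rng.standard_normal((100, 512))
acts_sup = rng.standard_normal((100, 512))
print("RSA (Spearman rho):", rsa_score(acts_rl, acts_sup))
print("Participation ratio (RL):", participation_ratio(acts_rl))
```

A higher participation ratio indicates that response variance is spread over more directions in activation space, which is the sense in which a representation would be called high-dimensional in the abstract above.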


Related research

03/30/2022
Investigating the Properties of Neural Network Representations in Reinforcement Learning
In this paper we investigate the properties of representations learned b...

07/26/2017
Learning Sparse Representations in Reinforcement Learning with Sparse Coding
A variety of representation learning approaches have been investigated f...

02/23/2021
Learning Sparse and Meaningful Representations Through Embodiment
How do humans acquire a meaningful understanding of the world with littl...

06/07/2021
A Computational Model of Representation Learning in the Brain Cortex, Integrating Unsupervised and Reinforcement Learning
A common view on the brain learning processes proposes that the three cl...

04/15/2020
Extending Unsupervised Neural Image Compression With Supervised Multitask Learning
We focus on the problem of training convolutional neural networks on gig...

11/16/2020
Towards Learning Controllable Representations of Physical Systems
Learned representations of dynamical systems reduce dimensionality, pote...

04/07/2020
How Do You Act? An Empirical Study to Understand Behavior of Deep Reinforcement Learning Agents
The demand for more transparency of decision-making processes of deep re...
