A Computational Model of Representation Learning in the Brain Cortex, Integrating Unsupervised and Reinforcement Learning

06/07/2021
by Giovanni Granato, et al.

A common view of learning processes in the brain proposes that the three classic learning paradigms – unsupervised, reinforcement, and supervised learning – take place in the cortex, the basal ganglia, and the cerebellum, respectively. However, dopamine outbursts, usually assumed to encode reward, are not limited to the basal ganglia but also reach prefrontal, motor, and higher sensory cortices. We propose that in the cortex the same reward-based trial-and-error processes might support the acquisition not only of motor representations but also of sensory representations. In particular, reward signals might guide trial-and-error processes that mix with associative learning processes to support the acquisition of representations that better serve downstream action selection. We tested the soundness of this hypothesis with a computational model that integrates unsupervised learning (Contrastive Divergence) and reinforcement learning (REINFORCE). The model was tested with a task requiring different responses to different visual images grouped into categories based on colour, shape, or size. The results show that a balanced mix of unsupervised and reinforcement learning leads to the best performance. Indeed, excessive unsupervised learning tends to under-represent task-relevant features, while excessive reinforcement learning tends to learn slowly at first and then to get stuck in local minima. These results stimulate future empirical studies on category learning directed to investigate similar effects in the extrastriate visual cortices. Moreover, they prompt further computational investigations of the possible advantages of integrating unsupervised and reinforcement learning processes.
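The core mechanism – a single representation layer shaped simultaneously by an unsupervised Contrastive Divergence signal and a reward-modulated REINFORCE signal – can be sketched in a few lines of code. The sketch below is illustrative rather than the authors' implementation: the network sizes, the toy categorisation task, the softmax readout, and the mixing coefficient alpha are assumptions introduced here only to show how the two weight updates can be blended.

```python
# Minimal sketch (not the paper's actual model): a single-layer RBM whose weights
# receive a weighted mix of an unsupervised Contrastive-Divergence (CD-1) update
# and a reward-modulated REINFORCE update from a softmax action readout.
# Sizes, alpha, and the toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_VIS, N_HID, N_ACT = 64, 32, 3            # visible units, hidden units, actions
W = rng.normal(0, 0.01, (N_VIS, N_HID))    # RBM weights (shared representation)
R = rng.normal(0, 0.01, (N_HID, N_ACT))    # readout weights (policy)
LR, ALPHA = 0.05, 0.5                      # learning rate; 1 = pure CD, 0 = pure RL

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One step of Contrastive Divergence (CD-1) on a binary visible vector v0."""
    h0_p = sigmoid(v0 @ W)
    h0 = (rng.random(N_HID) < h0_p).astype(float)
    v1_p = sigmoid(h0 @ W.T)
    h1_p = sigmoid(v1_p @ W)
    return np.outer(v0, h0_p) - np.outer(v1_p, h1_p)   # positive - negative phase

def reinforce_update(v0, target):
    """REINFORCE on the softmax readout; the reward also modulates the RBM weights."""
    h_p = sigmoid(v0 @ W)
    logits = h_p @ R
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = rng.choice(N_ACT, p=probs)
    reward = 1.0 if action == target else 0.0
    dlog = -probs                                      # grad of log pi(action) w.r.t. logits
    dlog[action] += 1.0
    dR = reward * np.outer(h_p, dlog)                  # policy-gradient update of the readout
    dh = reward * (R @ dlog) * h_p * (1.0 - h_p)       # reward-modulated signal to the hidden layer
    return np.outer(v0, dh), dR

# Toy task: three "categories" of noisy binary patterns; respond with the category index.
prototypes = (rng.random((N_ACT, N_VIS)) < 0.3).astype(float)
for step in range(2000):
    target = rng.integers(N_ACT)
    v0 = np.clip(prototypes[target] + (rng.random(N_VIS) < 0.05), 0, 1)
    dW_cd = cd1_update(v0)
    dW_rl, dR = reinforce_update(v0, target)
    W += LR * (ALPHA * dW_cd + (1.0 - ALPHA) * dW_rl)  # mixed unsupervised + reinforcement signal
    R += LR * dR
```

Pushing alpha toward 1 recovers purely associative (unsupervised) shaping of the representation, while pushing it toward 0 leaves the representation to be shaped by trial-and-error reward signals alone; the balance between these two extremes is the quantity the paper investigates.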


research
06/01/2023

Active Reinforcement Learning under Limited Visual Observability

In this work, we investigate Active Reinforcement Learning (Active-RL), ...
research
12/03/2021

Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning

Artificial neural systems trained using reinforcement, supervised, and u...
research
09/27/2021

From internal models toward metacognitive AI

In several papers published in Biological Cybernetics in the 1980s and 1...
research
10/18/2022

Simple Emergent Action Representations from Multi-Task Policy Training

Low-level sensory and motor signals in the high-dimensional spaces (e.g....
research
06/21/2017

Structure Learning in Motor Control: A Deep Reinforcement Learning Model

Motor adaptation displays a structure-learning effect: adaptation to a n...
research
08/25/2022

Light-weight probing of unsupervised representations for Reinforcement Learning

Unsupervised visual representation learning offers the opportunity to le...
research
01/21/2020

Unsupervisedly Learned Representations: Should the Quest be Over?

There exists a Classification accuracy gap of about 20 methods of genera...
