High-level Features for Resource Economy and Fast Learning in Skill Transfer

06/18/2021
by   Alper Ahmetoğlu, et al.
Abstraction is an important aspect of intelligence that enables agents to construct robust representations for effective decision making. In the last decade, deep networks have proven effective due to their ability to form increasingly complex abstractions. However, these abstractions are distributed over many neurons, making the reuse of a learned skill costly. Previous work either enforced the formation of abstractions, creating a designer bias, or used a large number of neural units without investigating how to obtain high-level features that may more effectively capture the source task. To avoid designer bias and unsparing resource use, we propose to exploit neural response dynamics to form compact representations for skill transfer. To this end, we consider two competing methods, based on (1) the maximum information compression principle and (2) the notion that abstract events tend to generate slowly changing signals, and apply them to the neural signals generated during task execution. Concretely, in our simulation experiments, we apply either principal component analysis (PCA) or slow feature analysis (SFA) to the signals collected from the last hidden layer of a deep network while it performs a source task, and use the resulting features for skill transfer to a new target task. We compare the generalization performance of these alternatives against two baselines: skill transfer with the full layer output and a no-transfer setting. Our results show that SFA units are the most successful for skill transfer. Both SFA and PCA require fewer resources than standard skill transfer, and many of the resulting units show a localized response reflecting end-effector-obstacle-goal relations. Finally, the SFA units with the lowest eigenvalues resemble symbolic representations that correlate strongly with high-level features such as joint angles, which might be thought of as precursors of fully symbolic systems.
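To make the two feature-extraction alternatives concrete, the sketch below shows a minimal linear version of each, applied to a matrix of hidden-layer activations recorded over time. This is an illustrative reconstruction, not the paper's implementation: the function names, the simple SVD-based whitening, and the synthetic signal in the usage example are all assumptions. Linear SFA reduces to whitening the signals and then taking the eigenvectors of the covariance of their temporal differences with the smallest eigenvalues (the "slowest" directions), which is where the low-eigenvalue units mentioned in the abstract come from.

```python
import numpy as np

def pca_features(X, n_components):
    """Project activations X (T x d, one row per timestep) onto the
    top principal components (maximum information compression)."""
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

def slow_features(X, n_components):
    """Minimal linear SFA: whiten X, then find unit-variance directions
    whose temporal derivative has minimal variance (slowest signals).
    Returns the slow features and their slowness eigenvalues."""
    X = X - X.mean(axis=0)
    # Whiten via eigendecomposition of the covariance matrix.
    U, s, _ = np.linalg.svd(np.cov(X.T))
    Z = X @ (U / np.sqrt(s))          # whitened signals, cov(Z) ~ I
    dZ = np.diff(Z, axis=0)           # temporal differences
    # Slowest directions = eigenvectors of cov(dZ) with the smallest
    # eigenvalues; np.linalg.eigh sorts eigenvalues in ascending order.
    evals, evecs = np.linalg.eigh(np.cov(dZ.T))
    return Z @ evecs[:, :n_components], evals[:n_components]
```

As a usage sketch, mixing a slow sine wave with a fast one into a 6-dimensional signal and running `slow_features(X, 2)` recovers the slow latent (up to sign) as the first output component, whereas PCA would instead rank directions by variance regardless of their timescale.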

