Intrinsically Motivated Acquisition of Modular Slow Features for Humanoids in Continuous and Non-Stationary Environments

01/17/2017
by   Varun Raj Kompella, et al.

A compact information-rich representation of the environment, also called a feature abstraction, can simplify a robot's task of mapping its raw sensory inputs to useful action sequences. However, in environments that are non-stationary and only partially observable, a single abstraction is unlikely to encode most of the variation. Learning multiple sets of spatially or temporally local, modular abstractions of the inputs would therefore be beneficial. How can a robot learn these local abstractions without a teacher? More specifically, how can it decide where and when to start learning a new abstraction? A recently proposed algorithm called Curious Dr. MISFA addresses this problem. The algorithm is based on two underlying learning principles: artificial curiosity and slowness. The former makes the robot self-motivated to explore, rewarding itself whenever it makes progress in learning an abstraction; the latter updates the abstraction by extracting slowly varying components from the raw sensory inputs. Curious Dr. MISFA's application is, however, limited to discrete domains constrained by a pre-defined state space, and design limitations make it unstable in certain situations. This paper presents a significant improvement that is applicable to continuous environments, is computationally less expensive, is simpler to use with fewer hyperparameters, and is stable in certain non-stationary environments. We demonstrate the efficacy and stability of our method in a vision-based robot simulator.
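The slowness principle mentioned above is the idea behind Slow Feature Analysis (SFA): find projections of the input whose outputs vary as slowly as possible over time while retaining unit variance. The following is a minimal illustrative sketch of linear SFA, not the paper's incremental algorithm; it solves the batch problem as a generalized eigenproblem, where the eigenvectors with the smallest eigenvalues give the slowest features. The function name and signal setup are our own for illustration.

```python
import numpy as np

def linear_sfa(x, n_features):
    """Extract the n_features slowest linear features from a signal x of shape (T, D).

    Slowness objective: minimize the variance of the temporal derivative of the
    projected signal, subject to the projected signal having unit variance.
    """
    x = x - x.mean(axis=0)                  # center the signal
    xdot = np.diff(x, axis=0)               # finite-difference temporal derivative
    A = xdot.T @ xdot / (len(x) - 1)        # covariance of the derivative
    B = x.T @ x / len(x)                    # covariance of the signal
    # Generalized eigenproblem A w = lambda B w; smallest lambda = slowest feature.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals.real)
    W = eigvecs[:, order[:n_features]].real
    return x @ W, W

# Usage: mix a slow and a fast sinusoid; SFA recovers the slow source first.
t = np.arange(1000)
slow = np.sin(2 * np.pi * t / 500)
fast = np.sin(2 * np.pi * t / 20)
sources = np.stack([slow, fast], axis=1)
mixed = sources @ np.array([[1.0, 0.5], [0.3, 1.0]])
y, W = linear_sfa(mixed, n_features=1)
```

The robot setting in the paper is harder: the features must be learned incrementally from a high-dimensional sensory stream rather than from a stored batch, which is what the incremental-SFA machinery inside Curious Dr. MISFA addresses.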

