PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations

05/27/2017
by Rico Jonschkowski, et al.

We propose position-velocity encoders (PVEs) which learn---without supervision---to encode images to positions and velocities of task-relevant objects. PVEs encode a single image into a low-dimensional position state and compute the velocity state from finite differences in position. In contrast to autoencoders, position-velocity encoders are not trained by image reconstruction, but by making the position-velocity representation consistent with priors about interacting with the physical world. We applied PVEs to several simulated control tasks from pixels and achieved promising preliminary results.
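The core mechanism described in the abstract — encoding a single image into a low-dimensional position and deriving the velocity from finite differences between consecutive encodings — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear map `W`, the 64x64 image size, and the 2-D position dimensionality are assumptions standing in for the learned encoder network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "encoder" mapping a flattened 64x64 image to a
# 2-D position. In PVEs this would be a learned network trained with
# priors about the physical world rather than image reconstruction.
W = rng.normal(scale=0.01, size=(2, 64 * 64))

def encode_position(image):
    """Encode a single grayscale image into a 2-D position state."""
    return W @ image.reshape(-1)

def encode_state(image_prev, image_curr, dt=1.0):
    """Full state: position from the current frame, velocity from
    the finite difference between consecutive position encodings."""
    p_prev = encode_position(image_prev)
    p_curr = encode_position(image_curr)
    v = (p_curr - p_prev) / dt
    return np.concatenate([p_curr, v])

# Two consecutive frames yield a 4-D state (2-D position + 2-D velocity).
frames = rng.random((2, 64, 64))
state = encode_state(frames[0], frames[1])
print(state.shape)
```

Because the velocity is computed from positions rather than encoded separately, the encoder itself only ever sees a single image at a time, which is what distinguishes this scheme from recurrent or stacked-frame state estimators.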
