Structured Policy Representation: Imposing Stability in Arbitrarily Conditioned Dynamic Systems

12/11/2020
by Julen Urain, et al.

We present a new family of deep neural network-based dynamic systems. The presented dynamics are globally stable and can be conditioned on an arbitrary context state. We show how these dynamics can be used as structured robot policies. Global stability is one of the most important and most straightforward inductive biases, as it allows us to impose reasonable behaviors outside the region covered by the demonstrations.
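
As a rough illustration of the idea, the sketch below builds a context-conditioned dynamic system that is globally stable by construction: a trivially stable latent system ż = -z is pulled back through an invertible, context-conditioned network, similar in spirit to the diffeomorphism-based construction of ImitationFlow listed below. The class names, layer choices, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ConditionedCoupling(nn.Module):
    """One invertible coupling layer whose scale and shift depend on the context."""

    def __init__(self, dim: int, context_dim: int, hidden: int = 64):
        super().__init__()
        self.dim = dim
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half + context_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (dim - half)),
        )

    def forward(self, x, context):
        half = self.dim // 2
        x_a, x_b = x[..., :half], x[..., half:]
        s, t = self.net(torch.cat([x_a, context], dim=-1)).chunk(2, dim=-1)
        y_b = x_b * torch.exp(torch.tanh(s)) + t  # strictly positive scale -> invertible
        return torch.cat([x_a, y_b], dim=-1)


class StableContextPolicy(nn.Module):
    """Context-conditioned dynamics x_dot = f(x, c), globally stable by construction:
    a linear latent system z_dot = -z is pulled back through a context-conditioned
    diffeomorphism z = phi(x; c)."""

    def __init__(self, dim: int, context_dim: int, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [ConditionedCoupling(dim, context_dim) for _ in range(n_layers)]
        )

    def phi(self, x, context):
        z = x
        for layer in self.layers:
            z = layer(z, context)
        return z

    def forward(self, x, context):
        # x_dot = J_phi(x; c)^{-1} * (-phi(x; c)). Because phi(.; c) is a
        # diffeomorphism for every context c, the global stability of the
        # latent system z_dot = -z transfers to the x-dynamics, whose
        # equilibrium is phi^{-1}(0; c).
        z = self.phi(x, context)
        J = torch.autograd.functional.jacobian(lambda state: self.phi(state, context), x)
        return torch.linalg.solve(J, -z)


# Example: roll out one Euler step of the context-conditioned dynamics.
policy = StableContextPolicy(dim=4, context_dim=3)
x = torch.randn(4)   # robot state
c = torch.randn(3)   # arbitrary context state
x_next = x + 0.01 * policy(x, c)
```

A full implementation would alternate the coupling masks for expressiveness and fit the model to demonstrations (e.g. by maximum likelihood or trajectory matching); the sketch only shows how conditioning and the stability guarantee fit together.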


Related research

03/26/2021 · Almost Surely Stable Deep Dynamics
We introduce a method for learning provably stable deep neural network b...

05/22/2023 · End-to-End Stable Imitation Learning via Autonomous Neural Dynamic Policies
State-of-the-art sensorimotor learning algorithms offer policies that ca...

03/30/2021 · Learning Deep Neural Policies with Stability Guarantees
Reinforcement learning (RL) has been successfully used to solve various ...

10/25/2020 · ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows
We introduce ImitationFlow, a novel Deep generative model that allows le...

09/20/2022 · LEMURS: Learning Distributed Multi-Robot Interactions
This paper presents LEMURS, an algorithm for learning scalable multi-rob...

03/27/2023 · The Quality-Diversity Transformer: Generating Behavior-Conditioned Trajectories with Decision Transformers
In the context of neuroevolution, Quality-Diversity algorithms have prov...

09/16/2023 · Learning a Stable Dynamic System with a Lyapunov Energy Function for Demonstratives Using Neural Networks
Autonomous Dynamic System (DS)-based algorithms hold a pivotal and found...
