Beyond Basins of Attraction: Evaluating Robustness of Natural Dynamics

06/21/2018
by Steve Heim et al.

It is commonly accepted that properly designing a system to exhibit favorable natural dynamics can greatly simplify designing or learning the control policy. However, it is still unclear what constitutes favorable natural dynamics and how to quantify their effect. Most studies of simple walking and running models have focused on the basins of attraction of passive limit cycles and the notion of self-stability. We instead emphasize the importance of stepping beyond basins of attraction. We present an approach based on viability theory to quantify robustness that is valid for the entire family of robust control policies. This allows us to evaluate the robustness inherent in the natural dynamics before designing the control policy or specifying a control objective. We illustrate this approach on simple spring-mass models of running and show previously unexplored advantages of a nonlinear leg stiffness. We believe that designing robots with robust natural dynamics is particularly important for learning control policies directly in hardware.
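As a rough illustration of the viability-based robustness measure described in the abstract, the sketch below grids a state-action space and iteratively prunes states from which no action keeps the system viable, then reports the size of the remaining viable set. The step map, grids, and failure thresholds are hypothetical placeholders (a toy one-dimensional return map), not the paper's spring-mass running model; a real analysis would substitute an apex-to-apex SLIP return map for the toy dynamics.

```python
# Minimal sketch of a gridded viability-kernel estimate, assuming a
# black-box step map. The toy dynamics below are a hypothetical stand-in,
# NOT the paper's spring-mass (SLIP) model.
import numpy as np

# Hypothetical grids: a 1-D "apex state" (e.g., normalized apex height)
# and a 1-D action (e.g., leg touchdown angle).
states = np.linspace(0.0, 1.0, 201)
actions = np.linspace(-0.5, 0.5, 101)

def step(x, a):
    """Toy apex-to-apex map: returns the next state, or None on failure.
    Placeholder only -- replace with a real SLIP stance integration."""
    x_next = x + a * (1.0 - x) - 0.1 * a**2
    if x_next <= 0.05 or x_next >= 1.0:  # falling or flying off: failure
        return None
    return x_next

def nearest_index(x):
    """Map a continuous state back onto the grid."""
    return int(np.argmin(np.abs(states - x)))

# Start by assuming every non-failed state is viable, then prune states
# from which NO action leads back into the current viable estimate.
viable = np.ones(len(states), dtype=bool)
changed = True
while changed:
    changed = False
    for i, x in enumerate(states):
        if not viable[i]:
            continue
        has_viable_action = False
        for a in actions:
            x_next = step(x, a)
            if x_next is not None and viable[nearest_index(x_next)]:
                has_viable_action = True
                break
        if not has_viable_action:
            viable[i] = False
            changed = True

# The measure (here, the fraction) of the viable set is one way to compare
# how robust different natural dynamics are before fixing any control policy.
print(f"viable fraction of state grid: {viable.mean():.2f}")
```

Because the pruning only asks whether some action exists at each state, the resulting set characterizes what any robust policy could achieve, which is the sense in which such a measure is policy-independent.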


