A modular framework for stabilizing deep reinforcement learning control

04/07/2023
by   Nathan P. Lawrence, et al.

We propose a framework for the design of feedback controllers that combines the optimization-driven and model-free advantages of deep reinforcement learning with the stability guarantees obtained by using the Youla-Kucera parameterization to define the search domain. Recent advances in behavioral systems theory allow us to construct a data-driven internal model; this enables an alternative realization of the Youla-Kucera parameterization based entirely on input-output exploration data. Using a neural network to express a parameterized set of nonlinear stable operators enables seamless integration with standard deep learning libraries. We demonstrate the approach on a realistic simulation of a two-tank system.
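The two ingredients named above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the block-Hankel matrix stands in for the behavioral (data-driven) internal model, and `StableQ` is one simple, hypothetical way to parameterize a set of stable nonlinear operators (spectral-norm rescaling of the state matrix); the first-order plant and all names here are assumptions for illustration.

```python
import numpy as np

def block_hankel(signal, depth):
    """Block-Hankel matrix of a (T, m) trajectory.

    In behavioral systems theory, the columns of this matrix span all
    length-`depth` trajectories of the data-generating system when the
    input is persistently exciting; this is what lets a data-driven
    internal model stand in for an explicit plant model.
    """
    T, m = signal.shape
    cols = T - depth + 1
    return np.column_stack([signal[i:i + depth].ravel() for i in range(cols)])

class StableQ:
    """Illustrative member of a parameterized set of stable nonlinear
    operators: x+ = A x + B e, u = tanh(C x), with A rescaled so its
    spectral norm is below 1. Any raw weights then give a stable
    operator, which is the property a Youla-Kucera search domain needs.
    (This parameterization is a hypothetical stand-in, not the paper's.)
    """
    def __init__(self, n_state=8, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        A = rng.standard_normal((n_state, n_state))
        self.A = 0.9 * A / np.linalg.norm(A, 2)   # enforce contraction
        self.B = rng.standard_normal((n_state, 1))
        self.C = rng.standard_normal((1, n_state))
        self.x = np.zeros((n_state, 1))

    def step(self, e):
        # One step of the stable recurrence driven by a scalar residual e.
        self.x = self.A @ self.x + self.B * e
        return float(np.tanh(self.C @ self.x))

# Exploration data through a simple stable plant y[k+1] = 0.8 y[k] + u[k]
# (hypothetical, for illustration only).
rng = np.random.default_rng(1)
T = 60
u = rng.standard_normal((T, 1))
y = np.zeros((T, 1))
for k in range(T - 1):
    y[k + 1] = 0.8 * y[k] + u[k]

H = block_hankel(np.hstack([u, y]), depth=4)   # data-driven internal model
Q = StableQ()
outputs = [Q.step(e) for e in y[:, 0]]          # Q acting on a residual signal
print(H.shape)                                  # (8, 57)
print(max(abs(o) for o in outputs) < 1.0)       # True: output stays bounded
```

Because stability is baked into the parameterization rather than checked after training, any gradient update to the raw weights of `StableQ` stays inside the stable set, which is what makes this style of search domain compatible with off-the-shelf deep RL training loops.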


Related research

- Faster Deep Q-learning using Neural Episodic Control (01/06/2018)
- Learning over All Stabilizing Nonlinear Controllers for a Partially-Observed Linear System (12/08/2021)
- Youla-REN: Learning Nonlinear Feedback Policies with Robust Stability Guarantees (12/02/2021)
- Stability-certified reinforcement learning: A control-theoretic perspective (10/26/2018)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks (03/03/2021)
- DeepTraffic: Driving Fast through Dense Traffic with Deep Reinforcement Learning (01/09/2018)
