Stability Analysis of Optimal Adaptive Control using Value Iteration with Approximation Errors

10/23/2017
by Ali Heydari, et al.

Adaptive optimal control using value iteration, initiated from a stabilizing control policy, is theoretically analyzed in terms of the stability of the system during the learning stage, without ignoring the effects of approximation errors. The analysis covers the system operated under any single (constant) resulting control policy as well as under an evolving (time-varying) control policy. A feature of the presented results is the estimation of the region of attraction: if the initial condition lies within this region, the entire trajectory remains inside it, and hence the function approximation results remain valid.
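To make the setting concrete, here is a minimal, hedged sketch of value iteration initiated from a stabilizing policy, on a linear-quadratic problem where every value-iteration iterate has the closed form V_i(x) = xᵀP_i x. All matrices and the initial gain below are hypothetical illustration values; the paper's setting (nonlinear dynamics, learning with function approximation errors) is more general than this exact, error-free recursion.

```python
import numpy as np

# Hypothetical discrete-time linear system x_{k+1} = A x + B u
# with running cost x'Qx + u'Ru (illustration values only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Step 1: a hand-picked stabilizing initial policy u = -K0 x, and its
# cost-to-go P0 from the policy-evaluation (Lyapunov) recursion.
K0 = np.array([[10.0, 10.0]])
Acl = A - B @ K0
P = np.zeros((2, 2))
for _ in range(500):
    P = Q + K0.T @ R @ K0 + Acl.T @ P @ Acl

# Step 2: value iteration V_{i+1}(x) = min_u [x'Qx + u'Ru + V_i(Ax + Bu)],
# which for quadratic V_i reduces to the Riccati-style update below.
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # greedy policy
    P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)

# The limit solves the discrete algebraic Riccati equation, and the final
# greedy policy is stabilizing: spectral radius of A - B K is below 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)
```

In this exact linear-quadratic case every iterate's greedy policy is stabilizing; the paper's contribution is establishing stability and a region of attraction when the iterates are only computed approximately.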


research
12/17/2014

Stabilizing Value Iteration with and without Approximation Errors

Adaptive optimal control using value iteration (VI) initiated from a sta...
research
05/20/2015

Convergence Analysis of Policy Iteration

Adaptive optimal control of nonlinear dynamic systems with deterministic...
research
12/18/2014

Theoretical and Numerical Analysis of Approximate Dynamic Programming with Approximation Errors

This study is aimed at answering the famous question of how the approxim...
research
04/18/2023

Feasible Policy Iteration

Safe reinforcement learning (RL) aims to solve an optimal control proble...
research
08/20/2021

Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control

In this paper we aim to provide analysis and insights (often based on vi...
research
06/16/2020

Reinforcement Learning Control of Robotic Knee with Human in the Loop by Flexible Policy Iteration

This study is motivated by a new class of challenging control problems d...
research
03/29/2022

Search Methods for Policy Decompositions

Computing optimal control policies for complex dynamical systems require...
