Theoretical and Numerical Analysis of Approximate Dynamic Programming with Approximation Errors

12/18/2014
by Ali Heydari, et al.

This study addresses the question of how the approximation errors incurred at each iteration of Approximate Dynamic Programming (ADP) affect the quality of the final result, given that the error at one iteration propagates into the next. To this end, the convergence of the Value Iteration scheme of ADP is investigated for deterministic nonlinear optimal control problems with undiscounted cost functions, while accounting for the errors in approximating the respective functions. Boundedness of the results around the optimal solution is established in terms of quantities that are known in a general optimal control problem and of assumptions that are verifiable. Moreover, since the approximation errors cause the results to deviate from optimality, sufficient conditions are derived for the stability of the system operated under the result obtained after a finite number of value iterations, along with an estimate of its region of attraction, in terms of a calculable upper bound on the control approximation error. Finally, the implementation of the method on an orbital maneuver problem is presented, through which the assumptions made in the theoretical developments are verified and the sufficient conditions are applied to guarantee stability and near optimality.
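The central object in the abstract is the value iteration recursion with a per-iteration approximation error. The sketch below illustrates that loop under stated assumptions: a hypothetical scalar nonlinear system, a quadratic undiscounted stage cost, and a polynomial value-function approximator. None of these choices come from the paper (whose numerical example is an orbital maneuver), so treat this as an illustration of the error mechanism, not the authors' implementation.

```python
# A minimal sketch of value iteration with per-iteration approximation
# error, in the spirit of the abstract. The dynamics f, stage cost U, and
# the polynomial approximator are illustrative assumptions, not the
# paper's orbital-maneuver example.
import numpy as np

def f(x, u):
    # Hypothetical deterministic nonlinear dynamics x_{k+1} = f(x_k, u_k).
    return 0.9 * x + 0.1 * x**3 + u

def U(x, u):
    # Undiscounted quadratic stage cost.
    return x**2 + u**2

xs = np.linspace(-1.0, 1.0, 201)   # sampled states
us = np.linspace(-1.0, 1.0, 101)   # candidate controls
coeffs = np.zeros(5)               # V_0(x) = 0, as polynomial coefficients

def V(x, c):
    # Polynomial value-function approximator.
    return np.polyval(c, x)

for k in range(50):
    # Exact Bellman backup at the samples: min_u [ U(x,u) + V_k(f(x,u)) ].
    targets = np.array([np.min(U(x, us) + V(f(x, us), coeffs)) for x in xs])
    # Refitting V_{k+1} by least squares introduces the approximation
    # error epsilon_k that the paper's bounds track across iterations.
    coeffs = np.polyfit(xs, targets, deg=4)

# Greedy control from the final approximate value function; the paper's
# sufficient conditions concern the control approximation error under
# which such a policy remains stabilizing.
def policy(x):
    return us[np.argmin(U(x, us) + V(f(x, us), coeffs))]
```

In this sketch the error at iteration k is simply the residual of the polynomial fit; the paper's contribution is to bound how such residuals accumulate across backups and to give verifiable conditions under which the policy extracted after finitely many iterations is stabilizing, with a computable estimate of its region of attraction.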


Related research

Convergence Analysis of Policy Iteration (05/20/2015)
Adaptive optimal control of nonlinear dynamic systems with deterministic...

Optimal Triggering of Networked Control Systems (12/17/2014)
The problem of resource allocation of nonlinear networked control system...

Stabilizing Value Iteration with and without Approximation Errors (12/17/2014)
Adaptive optimal control using value iteration (VI) initiated from a sta...

Stability Analysis of Optimal Adaptive Control using Value Iteration with Approximation Errors (10/23/2017)
Adaptive optimal control using value iteration initiated from a stabiliz...

Convergence of Finite Memory Q-Learning for POMDPs and Near Optimality of Learned Policies under Filter Stability (03/22/2021)
In this paper, for POMDPs, we provide the convergence of a Q learning al...

Convex Q-Learning, Part 1: Deterministic Optimal Control (08/08/2020)
It is well known that the extension of Watkins' algorithm to general fun...
