Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control

08/20/2021
by Dimitri Bertsekas, et al.

In this paper we aim to provide analysis and insights (often based on visualization) that explain the beneficial effects of on-line decision making on top of off-line training. In particular, through a unifying abstract mathematical framework, we show that the principal AlphaZero/TD-Gammon ideas of approximation in value space and rollout apply very broadly to deterministic and stochastic optimal control problems, involving both discrete and continuous search spaces. Moreover, these ideas can be effectively integrated with other important methodologies such as model predictive control, adaptive control, decentralized control, discrete and Bayesian optimization, neural network-based value and policy approximations, and heuristic algorithms for discrete optimization.
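
Since the abstract centers on approximation in value space and rollout, the following is a minimal sketch (not taken from the paper) of the generic rollout idea: one-step lookahead in which the cost-to-go of each candidate control is approximated by simulating a base heuristic to the end of the horizon. The problem data (scalar state, control set {-1, 0, 1}, quadratic costs) and the "do nothing" base policy are illustrative assumptions only, not the authors' formulation.

```python
# Minimal rollout sketch for a deterministic finite-horizon problem (illustrative only).

def base_policy_cost(state, k, horizon, dynamics, stage_cost, terminal_cost, base_policy):
    """Simulate the base heuristic from (state, k) to the horizon and sum its costs."""
    total = 0.0
    while k < horizon:
        u = base_policy(state, k)
        total += stage_cost(state, u, k)
        state = dynamics(state, u, k)
        k += 1
    return total + terminal_cost(state)


def rollout_control(state, k, horizon, controls, dynamics, stage_cost,
                    terminal_cost, base_policy):
    """One-step lookahead: pick the control minimizing stage cost plus the
    base policy's cost-to-go from the resulting next state."""
    best_u, best_q = None, float("inf")
    for u in controls(state, k):
        nxt = dynamics(state, u, k)
        q = stage_cost(state, u, k) + base_policy_cost(
            nxt, k + 1, horizon, dynamics, stage_cost, terminal_cost, base_policy)
        if q < best_q:
            best_u, best_q = u, q
    return best_u


if __name__ == "__main__":
    # Toy example (assumed data): drive a scalar state toward 0 with controls in {-1, 0, 1}.
    horizon = 10
    controls = lambda x, k: (-1, 0, 1)
    dynamics = lambda x, u, k: x + u
    stage_cost = lambda x, u, k: x * x + 0.1 * abs(u)
    terminal_cost = lambda x: 10.0 * x * x
    base_policy = lambda x, k: 0  # "do nothing" heuristic used for the cost-to-go estimate

    x = 5
    for k in range(horizon):
        u = rollout_control(x, k, horizon, controls, dynamics, stage_cost,
                            terminal_cost, base_policy)
        x = dynamics(x, u, k)
    print("final state under rollout:", x)
```

Even with a trivial base heuristic, the lookahead step steers the state toward 0, illustrating the cost-improvement property of rollout over its base policy that the paper builds on.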

Related research

Rollout Algorithms and Approximate Dynamic Programming for Bayesian Optimization and Sequential Estimation (12/15/2022)
Approximate Inference and Stochastic Optimal Control (09/20/2010)
Stability Analysis of Optimal Adaptive Control using Value Iteration with Approximation Errors (10/23/2017)
Adaptive Model Predictive Control by Learning Classifiers (03/13/2022)
Reinforcement Learning Methods for Wordle: A POMDP/Adaptive Control Approach (11/15/2022)
Targeted Adaptive Design (05/27/2022)
