Approximate Inference and Stochastic Optimal Control

09/20/2010
by Konrad Rawlik, et al.

We propose a novel reformulation of the stochastic optimal control problem as an approximate inference problem, demonstrating that such an interpretation leads to new practical methods for the original problem. In particular, we characterise a novel class of iterative solutions to the stochastic optimal control problem based on a natural relaxation of the exact dual formulation. These theoretical insights are applied to the Reinforcement Learning problem, where they lead to new model-free, off-policy methods for discrete and continuous problems.
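To make the control-as-inference idea concrete, the following is a minimal sketch of soft value iteration on a toy discrete MDP, where the exponentiated reward plays the role of a likelihood and the resulting policy is a Boltzmann distribution over soft Q-values. This is a standard illustration of the general framework, not the paper's specific iterative algorithm; the MDP (states, rewards, transitions) is invented for the example.

```python
import numpy as np

# Toy 2-state, 2-action MDP (all numbers are illustrative).
n_states, n_actions, gamma = 2, 2, 0.9
# P[s, a, s'] : transition probabilities
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
# R[s, a] : immediate rewards
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(n_states)
for _ in range(500):
    # Soft Bellman backup: V(s) = log sum_a exp(Q(s, a)),
    # replacing the hard max of classical value iteration.
    Q = R + gamma * P @ V                 # shape (n_states, n_actions)
    V_new = np.log(np.exp(Q).sum(axis=1))
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# "Posterior" policy: probability mass proportional to exp(Q).
pi = np.exp(Q - V[:, None])               # pi[s, a] = exp(Q(s,a) - V(s))
print(pi.sum(axis=1))                     # rows sum to 1
```

The log-sum-exp backup is what makes the connection to inference: it is the log-partition function of a distribution over actions, so "planning" becomes computing (approximate) posteriors rather than maximising.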
