Alice's Adventures in the Markovian World

07/21/2019
by Zhanzhan Zhao, et al.

This paper proposes an algorithm, Alice, that has no access to the physical law of the environment, which is in fact linear with stochastic noise, and that learns to make decisions directly online, without a training phase or a stabilizing policy as initial input. Rather than estimating the system parameters or the value functions online, the proposed algorithm generalizes Follow-the-Leader, one of the most fundamental online learning algorithms, to a linear Gauss-Markov process setting, adding a regularization term similar to the momentum term in gradient descent and a feasible online constraint inspired by Lyapunov's second theorem. The proposed algorithm can be viewed as a mirror optimization of model predictive control. Assuming only knowledge of the state-action alignment relationship and the ability to observe every state exactly, a no-regret proof is given for the case without state noise. For the general linear system with stochastic noise, the analysis provides a sufficient condition under which the no-regret guarantee holds. Simulations compare the performance of Alice with that of another recent work and demonstrate Alice's flexibility.
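The abstract describes Alice as a generalization of Follow-the-Leader (FTL) with a momentum-like regularizer. The sketch below shows plain FTL with such a penalty on a sequence of simple quadratic losses, only to illustrate the online-learning primitive being generalized; the loss form, the weight `lam`, and the closed-form update are assumptions for illustration and do not reproduce the paper's actual algorithm or its Lyapunov-inspired constraint.

```python
# Minimal sketch of Follow-the-Leader with a momentum-like regularizer.
# The quadratic losses l_t(u) = ||u - theta_t||^2 stand in for costs observed
# online; they are an assumed form, not the paper's control objective.
import numpy as np

rng = np.random.default_rng(0)
dim, T, lam = 3, 50, 0.5          # lam: assumed momentum-like penalty weight

thetas = rng.normal(size=(T, dim))  # loss parameters revealed one per round

u_prev = np.zeros(dim)
total_loss = 0.0
for t in range(T):
    if t == 0:
        u = np.zeros(dim)           # arbitrary first play
    else:
        # FTL step: minimize the sum of all past losses plus a momentum-like
        # penalty lam * ||u - u_prev||^2 discouraging abrupt changes.
        # For these quadratics the minimizer is a weighted average.
        u = (thetas[:t].sum(axis=0) + lam * u_prev) / (t + lam)
    total_loss += np.sum((u - thetas[t]) ** 2)  # play u, then observe theta_t
    u_prev = u

print(f"cumulative loss after {T} rounds: {total_loss:.2f}")
```

In this toy setting the regularizer only smooths consecutive plays; in the paper's control setting the analogous term, together with the Lyapunov-inspired feasibility constraint, is what keeps the online decisions stabilizing, a role the sketch does not attempt to capture.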
