Online Observer-Based Inverse Reinforcement Learning

11/03/2020
by Ryan Self, et al.

In this paper, a novel approach to the output-feedback inverse reinforcement learning (IRL) problem is developed by casting IRL for linear systems with quadratic cost functions as a state estimation problem. Two observer-based IRL techniques are developed, including a novel observer that re-uses previous state estimates via history stacks. Theoretical guarantees for convergence and robustness are established under appropriate excitation conditions, and simulations demonstrate the performance of the developed observers and filters under both noisy and noise-free measurements.
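
The abstract does not spell out the construction, but the general idea of treating linear-quadratic IRL as an estimation problem over stacked data can be sketched. The Python example below is a rough, hypothetical illustration, not the authors' algorithm: it assumes known dynamics (A, B), fixes R = I to remove the scale ambiguity of the cost, stacks inverse-Bellman (Hamiltonian) and stationarity equations from expert trajectories into a history stack, and solves a batch least-squares problem where the paper instead develops online observers with convergence guarantees. All matrices, gains, and initial conditions are made-up example values.

import numpy as np
from scipy.linalg import solve_continuous_are

def sym_phi(x, y):
    """Regressor such that x' S y = sym_phi(x, y) @ vech(S) for symmetric S."""
    n = len(x)
    return np.array([x[i] * y[j] if i == j else x[i] * y[j] + x[j] * y[i]
                     for i in range(n) for j in range(i, n)])

def unvech(v, n):
    """Rebuild a symmetric matrix from its upper-triangular parameters."""
    S = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            S[i, j] = S[j, i] = v[k]
            k += 1
    return S

# Hypothetical expert: LQR controller for known (A, B) and unknown cost Q.
# R is fixed to the identity to remove the scale ambiguity of the IRL problem.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q_true = np.diag([2.0, 1.0])
R = np.eye(1)
P_true = solve_continuous_are(A, B, Q_true, R)
K = np.linalg.solve(R, B.T @ P_true)           # expert feedback gain, u = -K x

n, m = A.shape[0], B.shape[1]
nP = n * (n + 1) // 2                          # free parameters in symmetric P
rows, rhs = [], []                             # "history stack" of equations

# Collect closed-loop expert trajectories from several initial conditions.
dt = 0.02
for x0 in ([1.0, -1.0], [0.5, 2.0], [-2.0, 0.3]):
    x = np.array(x0)
    for _ in range(200):
        u = -K @ x
        xdot = A @ x + B @ u
        # Hamiltonian along optimal trajectories:
        #   2 x'P xdot + x'Q x + u'R u = 0   (linear in vech(P), vech(Q))
        rows.append(np.concatenate([2.0 * sym_phi(x, xdot), sym_phi(x, x)]))
        rhs.append(-float(u @ R @ u))
        # Stationarity of the Hamiltonian in u:  B'P x + R u = 0
        for i in range(m):
            rows.append(np.concatenate([sym_phi(B[:, i], x), np.zeros(nP)]))
            rhs.append(-float((R @ u)[i]))
        x = x + dt * xdot                      # Euler step to the next sample

# Batch least squares over the stacked equations; the paper instead updates
# its estimates recursively online from a history stack of this kind.
theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
P_hat, Q_hat = unvech(theta[:nP], n), unvech(theta[nP:], n)
K_hat = np.linalg.solve(R, B.T @ P_hat)

print("expert gain K  :", K)
print("recovered gain :", K_hat)               # reproduces the expert gain
print("recovered Q    :\n", Q_hat)             # an equivalent cost, not unique

Because the inverse problem is not unique, the recovered (P, Q) pair is in general an equivalent cost that reproduces the expert's feedback gain rather than the true Q; this nonuniqueness is the subject of the first related paper listed below.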


Related research

10/28/2022
Nonuniqueness and Convergence to Equivalent Solutions in Observer-based Inverse Reinforcement Learning
A key challenge in solving the deterministic inverse reinforcement learn...

11/12/2020
Imposing Robust Structured Control Constraint on Reinforcement Learning of Linear Quadratic Regulator
This paper discusses learning a structured feedback control to obtain su...

02/25/2020
Human Apprenticeship Learning via Kernel-based Inverse Reinforcement Learning
This paper considers if a reward function learned via inverse reinforcem...

10/29/2019
Feedback Linearization for Unknown Systems via Reinforcement Learning
We present a novel approach to control design for nonlinear systems, whi...

09/24/2017
An Optimal Online Method of Selecting Source Policies for Reinforcement Learning
Transfer learning significantly accelerates the reinforcement learning p...

09/12/2023
Convergence of Gradient-based MAML in LQR
The main objective of this research paper is to investigate the local co...

12/21/2017
Multiagent-based Participatory Urban Simulation through Inverse Reinforcement Learning
The multiagent-based participatory simulation features prominently in ur...
