Unbiased Asymmetric Actor-Critic for Partially Observable Reinforcement Learning

05/25/2021
by Andrea Baisero, et al.

In partially observable reinforcement learning, offline training gives access to latent information, such as the system state, that is not available during online training or execution. Asymmetric actor-critic methods exploit such information by training a history-based policy via a state-based critic. However, many asymmetric methods lack a theoretical foundation and are evaluated only on limited domains. We examine the theory of asymmetric actor-critic methods that use state-based critics, and expose fundamental issues that undermine the validity of a common variant and its ability to address high partial observability. We propose an unbiased asymmetric actor-critic variant that exploits state information while remaining theoretically sound: it preserves the validity of the policy gradient theorem, and introduces no bias and relatively low variance into the training process. An empirical evaluation on domains that exhibit significant partial observability confirms our analysis, and shows that the unbiased asymmetric actor-critic converges to better policies, faster, or both, compared with symmetric and standard asymmetric actor-critic baselines.
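The core idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch implementation of an asymmetric advantage actor-critic update: the actor conditions only on the observable history (encoded with a GRU), while the critic also receives the latent state assumed available during offline training. Conditioning the critic on both the history embedding and the state, rather than the state alone, mirrors the unbiased variant described above; the specific architecture, dimensions, and names (HistoryActor, AsymmetricCritic, and so on) are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of an asymmetric advantage actor-critic update.
# Assumption: the unbiased critic conditions on BOTH the observable
# history and the latent state; all names and dimensions are illustrative.
import torch
import torch.nn as nn

OBS_DIM, STATE_DIM, HID, N_ACTIONS = 8, 12, 64, 4

class HistoryActor(nn.Module):
    """Policy conditioned on the observable history only (deployable online)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(OBS_DIM, HID, batch_first=True)
        self.head = nn.Linear(HID, N_ACTIONS)

    def forward(self, obs_seq):                 # obs_seq: (B, T, OBS_DIM)
        h, _ = self.rnn(obs_seq)                # history embeddings: (B, T, HID)
        return torch.distributions.Categorical(logits=self.head(h)), h

class AsymmetricCritic(nn.Module):
    """V(h, s): value of history-state pairs, usable only in offline training."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HID + STATE_DIM, HID), nn.Tanh(), nn.Linear(HID, 1))

    def forward(self, h, state):
        return self.net(torch.cat([h, state], dim=-1)).squeeze(-1)

actor, critic = HistoryActor(), AsymmetricCritic()
opt = torch.optim.Adam([*actor.parameters(), *critic.parameters()], lr=3e-4)

# Dummy batch of rollouts; in practice these come from the simulator,
# which exposes the latent states during offline training.
B, T = 16, 10
obs = torch.randn(B, T, OBS_DIM)
states = torch.randn(B, T, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (B, T))
returns = torch.randn(B, T)                     # e.g. discounted Monte-Carlo returns

dist, h = actor(obs)
values = critic(h.detach(), states)             # critic sees history AND state
advantage = (returns - values).detach()         # baseline from the asymmetric critic

policy_loss = -(dist.log_prob(actions) * advantage).mean()
critic_loss = (returns - values).pow(2).mean()

opt.zero_grad()
(policy_loss + critic_loss).backward()
opt.step()
```

Note the design choice: the history embedding fed to the critic is detached, so the critic loss does not backpropagate into the actor's recurrent encoder; only the policy gradient, with the asymmetric critic as baseline, trains the history representation.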


