Corruption-Robust Offline Reinforcement Learning

06/11/2021
by Xuezhou Zhang, et al.

We study adversarial robustness in offline reinforcement learning. Given a batch dataset consisting of tuples (s, a, r, s'), an adversary is allowed to arbitrarily modify an ϵ fraction of the tuples. From the corrupted dataset, the learner aims to robustly identify a near-optimal policy. We first show that a worst-case Ω(dϵ) optimality gap is unavoidable in linear MDPs of dimension d, even if the adversary only corrupts the reward element in a tuple. This contrasts with dimension-free results in robust supervised learning and with the best-known lower bound in the online RL setting with corruption. Next, we propose robust variants of the Least-Squares Value Iteration (LSVI) algorithm that utilize robust supervised learning oracles and achieve near-matching performance both with and without full data coverage. In the no-coverage case, the algorithm requires knowledge of ϵ to design the pessimism bonus. Surprisingly, this knowledge of ϵ is necessary: we show that adapting to an unknown ϵ is impossible. This again contrasts with recent results on corruption-robust online RL and implies that robust offline RL is a strictly harder problem.
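To make the pessimistic-LSVI idea concrete, here is a minimal sketch of least-squares value iteration on an offline dataset of (s, a, r, s') tuples in a linear MDP, where the value estimate is penalized by a pessimism bonus. The feature map `phi`, the bonus constant `c_bonus`, and the ϵ-dependent correction term are illustrative assumptions for this sketch, not the paper's exact algorithm or constants.

```python
import numpy as np

def pessimistic_lsvi(dataset, phi, actions, d, H, epsilon, c_bonus=1.0, lam=1.0):
    """Sketch of pessimistic LSVI on an offline dataset.

    dataset : list over steps h = 0..H-1; each entry is a list of (s, a, r, s_next)
    phi     : feature map phi(s, a) -> np.ndarray of shape (d,)
    actions : finite action set to maximize over
    epsilon : assumed fraction of corrupted tuples; inflates the bonus
    """
    weights, covs = [None] * H, [None] * H
    V_next = lambda s: 0.0  # terminal value is zero

    # Backward induction over the horizon
    for h in reversed(range(H)):
        Lambda = lam * np.eye(d)
        target = np.zeros(d)
        for (s, a, r, s_next) in dataset[h]:
            x = phi(s, a)
            Lambda += np.outer(x, x)              # ridge-regression covariance
            target += x * (r + V_next(s_next))    # regression target r + V_{h+1}
        Lambda_inv = np.linalg.inv(Lambda)
        w = Lambda_inv @ target                   # least-squares weights
        weights[h], covs[h] = w, Lambda_inv
        n = max(len(dataset[h]), 1)

        def Q(s, a, w=w, Li=Lambda_inv, n=n):
            x = phi(s, a)
            # Elliptical bonus plus an assumed O(d * epsilon) corruption
            # correction; pessimism means subtracting the bonus.
            bonus = c_bonus * (np.sqrt(x @ Li @ x) + d * epsilon / n)
            return float(x @ w - bonus)

        V_next = lambda s, Q=Q: max(Q(s, a) for a in actions)

    def policy(h, s):
        """Greedy action w.r.t. the pessimistic Q-estimate at step h."""
        w, Li = weights[h], covs[h]
        return max(actions,
                   key=lambda a: phi(s, a) @ w
                   - c_bonus * np.sqrt(phi(s, a) @ Li @ phi(s, a)))

    return policy
```

Note that the corruption correction in the bonus is the part that, per the abstract, requires knowing ϵ in the no-coverage setting; a robust regression oracle would additionally replace the plain least-squares fit above.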
