Safe Wasserstein Constrained Deep Q-Learning

02/07/2020
by Aaron Kandel, et al.

This paper presents a distributionally robust Q-Learning algorithm (DrQ) that leverages Wasserstein ambiguity sets to provide probabilistic out-of-sample safety guarantees during online learning. First, we follow past work by separating the constraint functions from the principal objective to create a hierarchy of machines within the constrained Markov decision process (CMDP). DrQ works within this framework by augmenting constraint costs with tightening offset variables obtained through Wasserstein distributionally robust optimization (DRO). These offset variables correspond to worst-case distributions of modeling error characterized by the TD-errors of the constraint Q-functions. This procedure allows the learned policy to safely approach the nominal constraint boundaries while retaining strong probabilistic out-of-sample safety guarantees. Using a case study of safe lithium-ion battery fast charging, we demonstrate dramatic improvements in safety and performance relative to a conventional deep Q-network (DQN).
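The core mechanism described above, tightening each constraint cost by an offset derived from a DRO bound over the constraint Q-function's TD-errors, can be sketched in a few lines. The following is a minimal illustration under assumed forms, not the paper's implementation: the function names (wasserstein_offset, safe_greedy_action, q_obj, q_con), the concentration-style radius for rho, and the least-violating fallback rule are all hypothetical choices made for this example.

```python
import numpy as np

def wasserstein_offset(td_errors, beta=0.95, rho=None):
    """Constraint-tightening offset from constraint-Q TD-error samples.

    Under a type-1 Wasserstein ambiguity set of radius rho centered on the
    empirical TD-error distribution, the worst-case expected error of a
    1-Lipschitz constraint estimate is bounded by the empirical mean plus
    rho.  The default radius below is an assumed concentration-style form
    that shrinks with sample size; it is not the paper's exact expression.
    """
    td_errors = np.asarray(td_errors, dtype=float)
    n = len(td_errors)
    if rho is None:
        rho = np.sqrt(2.0 * np.log(1.0 / (1.0 - beta)) / n)
    return td_errors.mean() + rho

def safe_greedy_action(q_obj, q_con, state, actions, limit, offset):
    """Greedy action selection against the tightened constraint.

    q_obj(s, a) estimates the objective value; q_con(s, a) estimates the
    constraint cost.  An action is treated as feasible only if its
    constraint estimate, inflated by the DRO offset, stays within the
    nominal limit.  If nothing is feasible, fall back to the
    least-violating action (an assumed fallback rule).
    """
    feasible = [a for a in actions if q_con(state, a) + offset <= limit]
    if not feasible:
        return min(actions, key=lambda a: q_con(state, a))
    return max(feasible, key=lambda a: q_obj(state, a))
```

Under the assumed radius, the offset relaxes as TD-error samples accumulate and the constraint Q-function becomes more trustworthy, which is what lets the policy approach the nominal constraint boundary over the course of training rather than remaining uniformly conservative.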
