Safety Considerations in Deep Control Policies with Probabilistic Safety Barrier Certificates

01/22/2020
by Tom Hirshberg, et al.

Recent advances in deep machine learning have shown promise in solving complex perception and control loops via methods such as reinforcement and imitation learning. However, guaranteeing safety for such learned deep policies remains a challenge due to issues such as partial observability and the difficulty of characterizing the behavior of neural networks. While much of the emphasis in safe learning has been placed on the training phase, it is non-trivial to guarantee safety at deployment or test time. This paper extends the work on Safety Barrier Certificates to guarantee safety with deep control policies despite uncertainty arising from perception and other latent variables. In particular, the proposed framework wraps around an existing deep control policy and generates safe actions by dynamically evaluating and, when necessary, modifying the actions produced by the embedded network. The framework uses control barrier functions to construct sets of control actions that are probabilistically safe; when the original action violates the safety constraint, a quadratic program minimally modifies it so that it lies in the safe set. Representations of the environment are built from Euclidean signed distance fields, which are then used to infer the safety of actions and to guarantee forward invariance. We evaluate this method in simulation in a drone-racing environment and show that it produces safer actions than a baseline that relies solely on imitation learning to generate control actions.
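As a minimal sketch of the kind of safety filter the abstract describes, the snippet below minimally modifies a nominal policy action via a quadratic program subject to a control-barrier-function constraint, with the barrier value h(x) taken to be an ESDF distance minus a safety margin. It assumes control-affine dynamics x' = f(x) + g(x)u and uses cvxpy; the function names, the deterministic (non-probabilistic) form of the constraint, and the solver choice are illustrative assumptions, not the paper's exact formulation.

import numpy as np
import cvxpy as cp

def cbf_qp_filter(u_nom, h, grad_h, f, g, alpha=1.0, u_max=1.0):
    """Project a nominal action onto the safe action set defined by the
    control barrier condition  Lf_h + Lg_h @ u + alpha * h >= 0,
    assuming control-affine dynamics  x' = f(x) + g(x) u.

    u_nom  : nominal action from the learned policy, shape (m,)
    h      : barrier value at the current state, e.g. ESDF distance minus a margin
    grad_h : gradient of the barrier w.r.t. the state, shape (n,)
    f, g   : drift term (n,) and input matrix (n, m) evaluated at the state
    """
    lf_h = grad_h @ f          # drift contribution to dh/dt
    lg_h = grad_h @ g          # control contribution to dh/dt
    u = cp.Variable(u_nom.shape[0])
    constraints = [lf_h + lg_h @ u + alpha * h >= 0,   # forward-invariance condition
                   cp.norm(u, "inf") <= u_max]         # actuator limits
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)), constraints)
    prob.solve()
    # fall back to the nominal action if the solver fails or the QP is infeasible
    return u.value if u.value is not None else u_nom

If the nominal action already satisfies the barrier condition, the QP returns it unchanged, so a filter of this kind only intervenes when the learned policy would otherwise drive the system out of the safe set.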
