Provably Safe Reinforcement Learning with Step-wise Violation Constraints
In this paper, we investigate a novel safe reinforcement learning problem with step-wise violation constraints. Our problem differs from existing works in that we consider stricter step-wise violation constraints and do not assume the existence of safe actions, which makes our formulation better suited to safety-critical applications, such as robot control and autonomous driving, that must ensure safety at every decision step and may not always have safe actions available. We propose a novel algorithm, SUCBVI, which guarantees $O(\sqrt{ST})$ step-wise violation and $O(\sqrt{H^3SAT})$ regret. Lower bounds are provided to establish the optimality of both the violation and regret guarantees with respect to $S$ and $T$. Moreover, we study a novel safe reward-free exploration problem with step-wise violation constraints. For this problem, we design an $(\varepsilon,\delta)$-PAC algorithm, SRF-UCRL, which achieves a nearly state-of-the-art sample complexity of $O\big((\frac{S^2AH^2}{\varepsilon}+\frac{H^4SA}{\varepsilon^2})(\log\frac{1}{\delta}+S)\big)$ and guarantees $O(\sqrt{ST})$ violation during exploration. The experimental results demonstrate the superior safety performance of our algorithms and corroborate our theoretical results.
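The abstract does not spell out SUCBVI's update rule. As a rough illustration of the kind of procedure involved, the sketch below runs UCB-style value iteration for reward alongside a confidence-bound estimate of future step-wise violations, which is used to screen actions. The environment interface, the bonus form, and the screening rule (including `tol` and `delta`) are assumptions made for illustration only, not the paper's exact algorithm.

```python
import numpy as np

def sucbvi_sketch(env, S, A, H, K, unsafe, delta=0.05, tol=0.1):
    """Illustrative tabular sketch; runs K episodes of horizon H.

    Assumed interface: env.reset() -> state, env.step(a) -> (next_state, reward).
    `unsafe` is a length-S boolean array marking unsafe states.
    """
    N = np.ones((S, A, S))            # transition counts (smoothed init)
    Rsum = np.zeros((S, A))           # summed observed rewards
    violations = 0
    for k in range(K):
        n = N.sum(axis=2)                          # visit counts per (s, a)
        P_hat = N / n[:, :, None]                  # empirical transitions
        r_hat = Rsum / n                           # empirical mean rewards
        bonus = np.sqrt(np.log(2 * S * A * H * (k + 1) / delta) / n)
        # Backward induction: V is an optimistic reward-to-go; W is an
        # optimistic (lower-bound) count of future unsafe steps, so that
        # under-explored actions are not blocked forever.
        V = np.zeros(S)
        W = np.zeros(S)
        pi = np.zeros((H, S), dtype=int)
        for h in reversed(range(H)):
            Q = np.minimum(r_hat + bonus + P_hat @ V, H)   # reward UCB
            U = np.maximum(P_hat @ W - bonus, 0.0)         # violation LCB
            for s in range(S):
                # Screening rule (an assumption): among actions whose
                # violation bound is near-minimal, pick the highest Q.
                allowed = np.flatnonzero(U[s] <= U[s].min() + tol)
                a = allowed[np.argmax(Q[s, allowed])]
                pi[h, s] = a
                V[s] = Q[s, a]
                W[s] = float(unsafe[s]) + U[s, a]
        # Execute the greedy policy and update statistics.
        s = env.reset()
        for h in range(H):
            a = pi[h, s]
            s_next, r = env.step(a)
            N[s, a, s_next] += 1
            Rsum[s, a] += r
            violations += int(unsafe[s_next])
            s = s_next
    return pi, violations
```

The key design point this sketch tries to convey is that, because safe actions are not assumed to exist, the learner cannot simply restrict itself to a certified-safe action set; instead it trades off an optimistic reward estimate against a confidence bound on step-wise violations at every decision step.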