Kim P. Wabersich



  • Linear model predictive safety certification for learning-based control

    While it has been repeatedly shown that learning-based controllers can provide superior performance, they often lack safety guarantees. This paper addresses this problem by introducing a model predictive safety certification (MPSC) scheme for polytopic linear systems with additive disturbances. The scheme verifies the safety of a proposed learning-based input and modifies it as little as necessary in order to keep the system within a given set of constraints. Safety is thereby related to the existence of a model predictive controller (MPC) providing a feasible trajectory towards a safe target set. A robust MPC formulation accounts for the fact that the model is generally uncertain in the context of learning, which allows proving constraint satisfaction at all times under the proposed MPSC strategy. The MPSC scheme can be used to expand any potentially conservative set of safe states for learning, and we propose an iterative technique for enlarging the safe set. Finally, a practical data-based design procedure for MPSC is proposed using scenario optimization.

    03/22/2018 ∙ by Kim P. Wabersich, et al.

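    The certification idea above — check a proposed learning-based input against a constraint set and modify it as little as necessary — can be illustrated in miniature. The following sketch uses an invented scalar system `x_next = x + u` with the interval constraint `|x| <= 1` (a one-dimensional polytope) and a one-step horizon; the actual MPSC scheme works with general polytopic linear systems, additive disturbances, and a multi-step robust MPC, none of which are reproduced here.

    ```python
    # Toy illustration of the minimal-modification safety-certification idea.
    # System, constraint set, and horizon are assumptions for this sketch,
    # not the paper's formulation.

    X_MAX = 1.0  # state constraint |x| <= X_MAX (a 1-D polytope)

    def is_safe(x):
        """|x| <= X_MAX is control invariant here: the backup input
        u = -x drives the state to the origin in one step."""
        return abs(x) <= X_MAX

    def certify(x, u_learn):
        """Return an input keeping the next state in the safe set while
        modifying the learning-based input u_learn as little as possible."""
        lo, hi = -X_MAX - x, X_MAX - x    # inputs with x + u in [-X_MAX, X_MAX]
        return min(max(u_learn, lo), hi)  # minimal modification = projection

    x = 0.8
    u_rl = 0.7                # aggressive learning-based input
    u = certify(x, u_rl)      # certified (possibly clipped) input
    assert is_safe(x + u)     # next state satisfies the constraint
    print(round(u, 3))        # 0.2: clipped just enough, not replaced
    ```

    In this scalar case the "minimal modification" reduces to clipping; in the paper's setting it is the solution of a robust MPC problem whose first input stays as close as possible to the proposed one.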

  • Safe exploration of nonlinear dynamical systems: A predictive safety filter for reinforcement learning

    Despite fast progress in Reinforcement Learning (RL), the transfer into real-world applications is challenged by safety requirements in the presence of physical limitations. This is largely because most RL methods do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems by introducing a predictive safety filter, which turns a constrained dynamical system into an unconstrained safe system, to which any RL algorithm can be applied `out-of-the-box'. The predictive safety filter receives the proposed learning input and decides, based on the current system state, whether it can be safely applied to the real system or whether it has to be modified first. Safety is thereby established by a continuously updated safety policy, which is computed according to a data-driven system model, supporting state- and input-dependent uncertainties in the prediction.

    12/13/2018 ∙ by Kim P. Wabersich, et al.

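    The filter-as-wrapper idea — accept the RL input if safety can still be certified, otherwise substitute a safe fallback — can be sketched for a toy nonlinear system. Everything below (the dynamics `x_next = x + DT*(sin(x) + u)`, the constraint `|x| <= 1`, the backup policy, the horizon) is an assumption for illustration; the paper's filter instead solves a predictive optimization with a data-driven model and state- and input-dependent uncertainty.

    ```python
    import math

    # Illustrative predictive safety filter for the invented system
    #   x_next = x + DT * (sin(x) + u),  constraint |x| <= 1.
    # Not the paper's algorithm: the dynamics, backup policy, and
    # horizon are toy assumptions.

    DT, X_MAX, HORIZON = 0.1, 1.0, 10

    def step(x, u):
        return x + DT * (math.sin(x) + u)

    def backup_policy(x):
        # cancels the sin(x) term and contracts the state toward 0
        return -math.sin(x) - 2.0 * x

    def certifiable(x):
        """Predict HORIZON steps under the backup policy; the state
        must satisfy the constraint along the whole prediction."""
        for _ in range(HORIZON):
            if abs(x) > X_MAX:
                return False
            x = step(x, backup_policy(x))
        return abs(x) <= X_MAX

    def safety_filter(x, u_rl):
        """Apply the RL input if the resulting state remains
        certifiable; otherwise fall back to the safety policy."""
        if certifiable(step(x, u_rl)):
            return u_rl
        return backup_policy(x)

    # Near the constraint boundary an aggressive input gets replaced:
    u = safety_filter(0.9, 2.0)
    assert u != 2.0 and abs(step(0.9, u)) <= X_MAX
    # Away from the boundary the RL input passes through unchanged:
    assert safety_filter(0.0, 0.5) == 0.5
    ```

    Because the filter only inspects the proposed input and the current state, the RL algorithm on the other side of it needs no modification — the "unconstrained safe system" seen by the learner is the plant composed with this filter.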