Verifying Controllers Against Adversarial Examples with Bayesian Optimization

02/23/2018
by Shromona Ghosh, et al.

Recent successes in reinforcement learning have led to the development of complex controllers for real-world robots. As these robots are deployed in safety-critical applications and interact with humans, it becomes critical to ensure safety in order to avoid causing harm. A first step in this direction is to test the controllers in simulation. To do this, we need to capture what we mean by safety and then efficiently search the space of all behaviors to check whether they are safe. In this paper, we present an active-testing framework based on Bayesian Optimization. We specify safety constraints using logic and exploit structure in the problem to test the system for adversarial counterexamples that violate the safety specifications. These specifications are defined as complex Boolean combinations of smooth functions on the trajectories and, unlike reward functions in reinforcement learning, are expressive and impose hard constraints on the system. In our framework, we exploit regularity assumptions on the individual functions in the form of a Gaussian Process (GP) prior and combine them into a coherent optimization framework using the problem structure. The resulting algorithm can provably verify complex safety specifications or, alternatively, find counterexamples. Experimental results show that the proposed method finds adversarial examples quickly.
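To make the idea concrete, below is a minimal sketch of GP-based falsification in the spirit of the abstract, not the paper's actual implementation: the unknown robustness of a safety specification is modeled as a function of environment parameters with a Gaussian Process, and a lower-confidence-bound rule searches for parameters that drive the robustness below zero (a counterexample). The `rollout_robustness` function, the toy search space, and the acquisition rule are placeholder assumptions; the paper additionally exploits the Boolean structure of the specification, which this sketch omits.

```python
# Sketch: Bayesian-optimization-style falsification of a safety specification.
# Assumes a hypothetical simulator/robustness metric (rollout_robustness).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def rollout_robustness(theta):
    """Placeholder: simulate the closed-loop system under environment
    parameters `theta` and return the robustness of the safety spec
    (negative value => the specification is violated)."""
    return np.sin(3.0 * theta[0]) + 0.5 * theta[1] ** 2  # toy stand-in

bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])  # parameter search space
dim = bounds.shape[0]

# Initial random design of experiments.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, dim))
y = np.array([rollout_robustness(x) for x in X])

# GP prior over the robustness function (regularity assumption).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)

for it in range(30):
    gp.fit(X, y)
    # Evaluate a random candidate pool and pick the minimizer of the
    # lower confidence bound (we are minimizing robustness).
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, dim))
    mu, sigma = gp.predict(cand, return_std=True)
    theta_next = cand[np.argmin(mu - 2.0 * sigma)]
    y_next = rollout_robustness(theta_next)
    X = np.vstack([X, theta_next])
    y = np.append(y, y_next)
    if y_next < 0.0:
        print(f"Counterexample found at iteration {it}: theta = {theta_next}")
        break
else:
    print(f"No violation found; minimum observed robustness = {y.min():.3f}")
```

In practice the candidate pool would be replaced by a proper acquisition optimizer, and the GP would be placed on the individual smooth functions of the specification rather than on the overall robustness, as the paper proposes.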


