Safe Predictors for Enforcing Input-Output Specifications

01/29/2020
by Stephen Mell et al.

We present an approach for designing correct-by-construction neural networks (and other machine learning models) that are guaranteed to be consistent with a collection of input-output specifications before, during, and after training. Our method involves designing a constrained predictor for each set of compatible constraints and combining the predictors safely via a convex combination of their predictions. We demonstrate our approach on synthetic datasets and an aircraft collision avoidance problem.
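The core idea can be sketched in a few lines. The example below is a hypothetical illustration (the region, bounds, and predictors are invented, not taken from the paper) of a single specification of the form "if the input lies in region A, the output must lie in [LO, HI]": a constrained predictor enforces the bound by construction via a sigmoid squashing, and a convex combination whose weight is exactly 1 on the region guarantees that only the constrained predictor contributes there.

```python
import numpy as np

# Assumed toy specification: if x is in A = [0, 1], output must be in [LO, HI].
LO, HI = 0.0, 1.0

def in_region(x):
    # Indicator for region A (an assumption for this sketch).
    return 0.0 <= x <= 1.0

def free_predictor(x):
    # Stand-in for an unconstrained learned model.
    return 2.0 * x - 0.5

def constrained_predictor(x):
    # Same underlying model, but its raw output is squashed into
    # [LO, HI] by a sigmoid, so the specification holds by construction.
    raw = free_predictor(x)
    return LO + (HI - LO) / (1.0 + np.exp(-raw))

def weight(x):
    # Convex weight: exactly 1 on region A, decaying smoothly outside,
    # so inside A only the constrained predictor contributes.
    if in_region(x):
        return 1.0
    d = min(abs(x - 0.0), abs(x - 1.0))
    return float(np.exp(-d))

def safe_predictor(x):
    # Convex combination of the constrained and free predictors.
    w = weight(x)
    return w * constrained_predictor(x) + (1.0 - w) * free_predictor(x)
```

Because `weight(x) == 1` whenever `in_region(x)` holds, the combined output on A equals the constrained predictor's output, which lies in [LO, HI] regardless of how the underlying model is trained; outside A the free predictor gradually takes over.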


