
Strengthened SDP Verification of Neural Network Robustness via Non-Convex Cuts

10/16/2020
by Ziye Ma et al.

There have been major advances in the design of neural networks, but they still cannot be deployed in many safety-critical systems due to the lack of efficient computational techniques for analyzing and certifying their robustness. Recently, various methods based on convex optimization have been proposed to address this issue. In particular, the semidefinite programming (SDP) approach has gained popularity for convexifying the robustness analysis problem. Since this approach is prone to a large relaxation gap, this paper develops a new technique to reduce the gap by adding non-convex cuts via disjunctive programming. The proposed method amounts to a sequential SDP technique. We analyze the performance of this method both theoretically and empirically, and show that it closes the gap as the number of cuts increases.
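The core phenomenon the abstract describes, a convex relaxation that is too loose to certify robustness, and a disjunctive tightening that recovers the exact bound by branching over the non-convex ReLU regions, can be illustrated with a deliberately tiny example. The sketch below is not the paper's SDP formulation: the weights are made up, and cheap interval bound propagation stands in for the convex relaxation. It certifies a lower bound on a toy network f(x) = relu(x + 1) + relu(-x + 1), which equals 2 everywhere on the box x in [-1, 1], yet the naive relaxation can only certify f >= 0; enumerating the ReLU activation patterns (the disjunction) makes each piece affine and recovers the exact certificate.

```python
import numpy as np

# Toy 1-D network f(x) = relu(x + 1) + relu(-x + 1) on the box x in [-1, 1].
# Weights are illustrative only, not taken from the paper.
W1 = np.array([1.0, -1.0])
b1 = np.array([1.0, 1.0])
w2 = np.array([1.0, 1.0])
lo, hi = -1.0, 1.0

# Loose convex bound: propagate the input interval layer by layer.
h_lo = np.minimum(W1 * lo, W1 * hi) + b1
h_hi = np.maximum(W1 * lo, W1 * hi) + b1
z_lo, z_hi = np.maximum(h_lo, 0.0), np.maximum(h_hi, 0.0)
relaxed_lower = np.minimum(w2 * z_lo, w2 * z_hi).sum()

# Disjunctive tightening: branch on each ReLU's phase (active/inactive).
# On every branch the network is affine, so the per-piece bound is exact.
best = np.inf
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    a, b = lo, hi
    for i, active in enumerate(pattern):
        # h_i = W1[i]*x + b1[i]; active means h_i >= 0, inactive h_i <= 0.
        thr = -b1[i] / W1[i]
        if (W1[i] > 0) == bool(active):
            a = max(a, thr)  # constraint reads x >= thr
        else:
            b = min(b, thr)  # constraint reads x <= thr
    if a > b:
        continue  # this activation pattern is infeasible on the box
    s = np.array(pattern, dtype=float)
    coeff = np.sum(s * w2 * W1)  # slope of the affine piece
    const = np.sum(s * w2 * b1)  # offset of the affine piece
    best = min(best, coeff * (a if coeff > 0 else b) + const)

print(relaxed_lower, best)  # loose bound 0.0 vs. exact certificate 2.0
```

The same gap-versus-branching trade-off drives the paper's method, except that the relaxation being tightened is an SDP and the cuts are added sequentially rather than by full enumeration, which would be exponential in the number of neurons.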


Related research:

- Tightened Convex Relaxations for Neural Network Robustness Certification (04/01/2020): In this paper, we consider the problem of certifying the robustness of n...
- Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers (02/22/2020): Convex relaxations are effective for training and certifying neural netw...
- IPBoost – Non-Convex Boosting via Integer Programming (02/11/2020): Recently non-convex optimization approaches for solving machine learning...
- Convex Optimisation for Inverse Kinematics (10/24/2019): We consider the problem of inverse kinematics (IK), where one wants to f...
- A Grothendieck-type inequality for local maxima (03/13/2016): A large number of problems in optimization, machine learning, signal pro...
- A Unified View of SDP-based Neural Network Verification through Completely Positive Programming (03/06/2022): Verifying that input-output relationships of a neural network conform to...