LinSyn: Synthesizing Tight Linear Bounds for Arbitrary Neural Network Activation Functions

01/31/2022
by   Brandon Paulsen, et al.

The most scalable approaches to certifying neural network robustness depend on computing sound linear lower and upper bounds for the network's activation functions. Current approaches are limited in that the linear bounds must be handcrafted by an expert and can be sub-optimal, especially when the network's architecture composes operations using multiplication, as in LSTMs and the recently popular Swish activation. The dependence on an expert prevents the application of robustness certification to developments in the state-of-the-art of activation functions, and furthermore the lack of tightness guarantees may give a false sense of insecurity about a particular model. To the best of our knowledge, we are the first to consider the problem of automatically computing tight linear bounds for arbitrary n-dimensional activation functions. We propose LinSyn, the first approach that achieves tight bounds for any arbitrary activation function, using only the mathematical definition of the activation function itself. Our approach uses an efficient heuristic to synthesize bounds that are tight and usually sound, and then verifies the soundness (adjusting the bounds if necessary) with the highly optimized branch-and-bound SMT solver dReal. Even though our approach depends on an SMT solver, we show that the runtime is reasonable in practice and, compared with the state of the art, our approach often achieves 2-5X tighter final output bounds and more than quadruple the certified robustness.
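To make the synthesize-then-verify flow described above concrete, below is a minimal Python sketch for a 1-D activation (Swish) on a box interval: a heuristic least-squares fit is shifted to bound the sampled points, and a second pass checks soundness and pads the offset if a violation is found. The function names are illustrative, and dense sampling stands in for the branch-and-bound check that LinSyn delegates to dReal; this is not the authors' implementation.

```python
# Sketch of the synthesize-then-verify idea, assuming a 1-D activation (Swish)
# on an interval. Dense sampling replaces the dReal SMT soundness check.
import numpy as np

def swish(x):
    return x / (1.0 + np.exp(-x))

def fit_linear_bounds(f, lo, hi, n_samples=200):
    """Heuristically fit linear lower/upper bounds a*x + b over [lo, hi]."""
    xs = np.linspace(lo, hi, n_samples)
    ys = f(xs)
    # Least-squares line through the samples, then shift it down/up until it
    # lies below/above every sample (tight, but only "usually sound").
    a, b = np.polyfit(xs, ys, 1)
    lower = (a, b - np.max(a * xs + b - ys))    # shift down below all samples
    upper = (a, b + np.max(ys - (a * xs + b)))  # shift up above all samples
    return lower, upper

def verify_and_adjust(f, bound, lo, hi, is_lower, n_check=20000, eps=1e-6):
    """Stand-in for the SMT-based check: densely sample for violations and
    pad the offset by the worst one found plus a small margin."""
    a, b = bound
    xs = np.linspace(lo, hi, n_check)
    gap = f(xs) - (a * xs + b)  # want gap >= 0 (lower bound) or <= 0 (upper)
    if is_lower:
        worst = np.min(gap)
        return (a, b + worst - eps) if worst < 0 else (a, b)
    worst = np.max(gap)
    return (a, b + worst + eps) if worst > 0 else (a, b)

if __name__ == "__main__":
    lo, hi = -3.0, 3.0
    lower, upper = fit_linear_bounds(swish, lo, hi)
    lower = verify_and_adjust(swish, lower, lo, hi, is_lower=True)
    upper = verify_and_adjust(swish, upper, lo, hi, is_lower=False)
    print("lower bound: %.4f * x + %.4f" % lower)
    print("upper bound: %.4f * x + %.4f" % upper)
```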


