Limitations of Piecewise Linearity for Efficient Robustness Certification

01/21/2023
by   Klas Leino, et al.

Certified defenses against small-norm adversarial examples have received growing attention in recent years; however, certified accuracies of state-of-the-art methods remain far below those of their non-robust counterparts, even though benchmark datasets have been shown to be well-separated at far larger radii than the literature typically attempts to certify. In this work, we offer insights that identify potential factors behind this performance gap. Specifically, our analysis reveals that piecewise linearity imposes fundamental limitations on the tightness of leading certification techniques. In practical terms, these limitations manifest as a greater capacity requirement for models that are to be certified efficiently, on top of the capacity needed to learn a robust boundary, which prior work has studied. We argue, however, that addressing the limitations of piecewise linearity by scaling up model capacity may introduce difficulties of its own, particularly regarding robust generalization. We therefore conclude by suggesting that developing smooth activation functions may be the way forward for advancing the performance of certified neural networks.
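To make the certification setting concrete, the following is a minimal sketch (not the paper's method) of the style of Lipschitz-based certificate whose tightness the abstract discusses: if a classifier has Lipschitz constant K with respect to the l2 norm, then a sufficient condition for robustness at radius eps is that the logit margin at the input exceeds sqrt(2) * K * eps. The `certify` helper and the toy linear "network" below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def certify(logits, lipschitz_const, eps):
    """Sufficient condition for l2 robustness at radius eps:
    the top-class margin must exceed sqrt(2) * K * eps."""
    top_two = np.sort(logits)[::-1][:2]
    margin = top_two[0] - top_two[1]
    return bool(margin > np.sqrt(2) * lipschitz_const * eps)

# Toy one-layer linear "network": its Lipschitz constant (w.r.t. l2)
# is exactly the spectral norm of its weight matrix, so the bound is
# as tight as this style of certificate can be.
W = np.array([[2.0, 0.0],
              [0.0, 1.0]])
K = np.linalg.norm(W, ord=2)   # spectral norm = 2.0
x = np.array([1.0, 0.0])
logits = W @ x                 # [2.0, 0.0], margin = 2.0

print(certify(logits, K, eps=0.1))  # margin 2.0 > sqrt(2)*2*0.1 -> True
print(certify(logits, K, eps=1.0))  # margin 2.0 < sqrt(2)*2*1.0 -> False
```

For piecewise-linear networks (e.g. with ReLU activations), K is typically only an upper bound on the true Lipschitz constant, which is one source of the looseness the paper analyzes.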

