Universal Approximation with Certified Networks

by Maximilian Baader et al.

Training neural networks to be certifiably robust is a powerful defense against adversarial attacks. However, while promising, state-of-the-art results with certified training remain far from satisfactory. Currently, it is very difficult to train a neural network that is both accurate and certified on realistic datasets and specifications (e.g., robustness). Given this difficulty, a pressing existential question arises: for a given dataset and specification, does there exist a network that is both accurate and certifiable with respect to them? While the empirical evidence suggests "no", we prove that for realistic datasets and specifications such a network does exist, and that its certification can be established by propagating lower and upper bounds of each neuron through the network (interval analysis) - the loosest yet most computationally efficient convex relaxation. Our result can be seen as a Universal Approximation Theorem for interval-certified ReLU networks. To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
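The interval analysis mentioned in the abstract propagates an elementwise box [lower, upper] through each layer: a linear layer maps the box's center and radius separately (the radius through the absolute value of the weight matrix), and ReLU, being monotone, maps the endpoints directly. A minimal NumPy sketch of this propagation, with an illustrative toy network (weights `W1`, `b1` and the input box are made up for the example, not taken from the paper):

```python
import numpy as np

def interval_linear(W, b, lower, upper):
    """Propagate the box [lower, upper] through y = W @ x + b.

    The center moves through the exact affine map; the radius is
    propagated through |W|, which gives the tightest box bound
    for an affine layer.
    """
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps the box endpoints elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy one-layer ReLU network (hypothetical weights for illustration).
W1 = np.array([[1.0, -1.0],
               [0.5,  0.5]])
b1 = np.array([0.0, -0.25])

# Input box: an L-infinity ball of radius eps around x0.
x0 = np.array([0.5, 0.5])
eps = 0.1
l, u = x0 - eps, x0 + eps

l, u = interval_relu(*interval_linear(W1, b1, l, u))
print(l, u)  # bounds on the network output over the whole input ball
```

Certification then amounts to checking a property (e.g., a margin between logits) against these output bounds: if it holds for the box, it holds for every input in the ball. The paper's result says that for realistic data, a ReLU network exists for which this loose box analysis already suffices.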





Related research:

- Abstract Universal Approximation for Neural Networks
- Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers
- On the Convergence of Certified Robust Training with Interval Bound Propagation
- Towards Stable and Efficient Training of Verifiably Robust Neural Networks
- The Fundamental Limits of Interval Arithmetic for Neural Networks
- A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
- The mathematics of adversarial attacks in AI – Why deep learning is unstable despite the existence of stable neural networks