Abstract Universal Approximation for Neural Networks

by Zi Wang et al.

With growing concerns about the safety and robustness of neural networks, a number of researchers have successfully applied abstract interpretation with numerical domains to verify properties of neural networks. Why do numerical domains work for neural-network verification? We present a theoretical result that demonstrates the power of numerical domains, namely, the simple interval domain, for the analysis of neural networks. Our main theorem, which we call the abstract universal approximation (AUA) theorem, generalizes the recent result by Baader et al. [2020] for ReLU networks to a rich class of neural networks. The classical universal approximation theorem says that, given a function f and any desired precision, there is a neural network that can approximate f to within that precision. The AUA theorem states that for any function f, there exists a neural network whose abstract interpretation is an arbitrarily close approximation of the collecting semantics of f. Further, the network may be constructed using any well-behaved activation function—sigmoid, tanh, parametric ReLU, ELU, and more—making our result quite general. The implication of the AUA theorem is that there exist provably correct neural networks: Suppose, for instance, that there is an ideal robust image classifier represented as a function f. The AUA theorem tells us that there exists a neural network that approximates f and for which we can automatically construct proofs of robustness using the interval abstract domain. Our work sheds light on the existence of provably correct neural networks, using arbitrary activation functions, and establishes intriguing connections between well-known theoretical properties of neural networks and abstract interpretation using numerical domains.
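To make the abstract concrete, here is a minimal sketch of interval abstract interpretation on a tiny fully connected ReLU network. The weights and the 2-2-1 architecture are illustrative assumptions, not taken from the paper; the point is only to show how the interval (box) domain mentioned above soundly propagates an input box through affine layers and activations.

```python
def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through x -> W @ x + b.
    # Affine maps are handled exactly: for each output neuron, pick the
    # bound-minimizing or -maximizing input endpoint according to weight sign.
    new_lo, new_hi = [], []
    for w_row, bias in zip(W, b):
        acc_lo = acc_hi = bias
        for w, l, h in zip(w_row, lo, hi):
            if w >= 0:
                acc_lo += w * l; acc_hi += w * h
            else:
                acc_lo += w * h; acc_hi += w * l
        new_lo.append(acc_lo); new_hi.append(acc_hi)
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints directly.
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

# Hypothetical 2-2-1 ReLU network with hand-picked weights.
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.5]

lo, hi = [0.0, 0.0], [1.0, 1.0]          # input box [0, 1]^2
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # sound bounds on the network's output over the whole input box
# -> [0.5] [3.0]
```

If a robustness property (e.g. "the output stays above 0") holds of the computed output interval, it is proved for every input in the box; the AUA theorem says that for a suitable network approximating f, such interval-based proofs can be made arbitrarily precise.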



Universal Approximation Theorem for Neural Networks

Is there any theoretical guarantee for the approximation ability of neur...

Abstract Neural Networks

Deep Neural Networks (DNNs) are rapidly being applied to safety-critical...

Neural Networks on Groups

Recent work on neural networks has shown that allowing them to build int...

Validation of ReLU nets with tropical polyhedra

This paper studies the problem of range analysis for feedforward neural ...

Universal Approximation with Certified Networks

Training neural networks to be certifiably robust is a powerful defense ...

Universal Approximation in Dropout Neural Networks

We prove two universal approximation theorems for a range of dropout neu...

On the Relative Expressiveness of Bayesian and Neural Networks

A neural network computes a function. A central property of neural netwo...