Abstract Universal Approximation for Neural Networks

07/12/2020
by Zi Wang, et al.

With growing concerns about the safety and robustness of neural networks, a number of researchers have successfully applied abstract interpretation with numerical domains to verify properties of neural networks. Why do numerical domains work for neural-network verification? We present a theoretical result that demonstrates the power of numerical domains, namely, the simple interval domain, for the analysis of neural networks. Our main theorem, which we call the abstract universal approximation (AUA) theorem, generalizes the recent result by Baader et al. [2020] for ReLU networks to a rich class of neural networks. The classical universal approximation theorem says that, given a function f and any desired precision, there is a neural network that approximates f to within that precision. The AUA theorem states that for any function f, there exists a neural network whose abstract interpretation is an arbitrarily close approximation of the collecting semantics of f. Further, the network may be constructed using any well-behaved activation function—sigmoid, tanh, parametric ReLU, ELU, and more—making our result quite general. The implication of the AUA theorem is that there exist provably correct neural networks: Suppose, for instance, that there is an ideal robust image classifier represented as a function f. The AUA theorem tells us that there exists a neural network that approximates f and for which we can automatically construct proofs of robustness using the interval abstract domain. Our work sheds light on the existence of provably correct neural networks, using arbitrary activation functions, and establishes intriguing connections between well-known theoretical properties of neural networks and abstract interpretation using numerical domains.
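To make the interval-domain analysis mentioned in the abstract concrete, here is a minimal sketch of interval abstract interpretation on a tiny feedforward ReLU network. The weights and network shape are hypothetical illustrations, not taken from the paper; the point is only that an input box propagates soundly through affine layers (via interval arithmetic) and monotone activations (applied endpoint-wise), yielding an over-approximation of the network's reachable outputs.

```python
def affine_interval(bounds, weights, biases):
    """Propagate input intervals through y = Wx + b with interval arithmetic.

    bounds  : list of (lo, hi) pairs, one per input
    weights : list of rows, one per output neuron
    biases  : list of scalars, one per output neuron
    """
    out = []
    for w_row, b in zip(weights, biases):
        lo = hi = b
        for (l, h), w in zip(bounds, w_row):
            # A positive weight maps lo -> lo and hi -> hi;
            # a negative weight swaps the endpoints.
            lo += w * l if w >= 0 else w * h
            hi += w * h if w >= 0 else w * l
        out.append((lo, hi))
    return out

def relu_interval(bounds):
    """ReLU is monotone, so it can be applied to each endpoint directly."""
    return [(max(0.0, l), max(0.0, h)) for l, h in bounds]

# Tiny 2-2-1 network with hypothetical weights.
x = [(-1.0, 1.0), (0.0, 2.0)]                                   # input box
h = relu_interval(affine_interval(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.5]))
y = affine_interval(h, [[1.0, 1.0]], [0.0])
print(y)  # a sound enclosure of every output the network can produce on x
```

A robustness proof in this style shows that the output interval computed for a perturbation box around an input stays on one side of a decision boundary; the AUA theorem says that for a suitable network approximating f, such interval proofs can be made to succeed.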


