Training ReLU networks to high uniform accuracy is intractable

05/26/2022
by Julius Berner et al.

Statistical learning theory provides bounds on the number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class. This accuracy is typically measured in terms of a generalization error, that is, an expected value of a given loss function. However, for several applications (for example, in a security-critical context or for problems in the computational sciences) accuracy in this sense is not sufficient. In such cases, one would like to have guarantees of high accuracy on every input value, that is, with respect to the uniform norm. In this paper we precisely quantify the number of training samples needed for any conceivable training algorithm to guarantee a given uniform accuracy on any learning problem formulated over target classes containing (or consisting of) ReLU neural networks of a prescribed architecture. We prove that, under very general assumptions, the minimal number of training samples for this task scales exponentially both in the depth and in the input dimension of the network architecture. As a corollary, we conclude that training ReLU neural networks to high uniform accuracy is intractable. In a security-critical context, this suggests that deep learning based systems are prone to being fooled by a possible adversary. We corroborate our theoretical findings with numerical results.
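To make the distinction between the two error notions concrete, the following sketch (an informal illustration, not an experiment or result from the paper; the target function, network size, and training setup are arbitrary choices) fits a small ReLU network to random samples of a one-dimensional function and then compares the mean error over the domain with the uniform, i.e. worst-case, error on a fine grid. The mean error can become small while the uniform error stays noticeably larger; the paper quantifies how many samples any algorithm would need to control the latter.

```python
# Hypothetical illustration (not the paper's experiment): mean vs. uniform error
# of a small ReLU network trained on random point samples.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Target function and training samples drawn uniformly from [0, 1].
target = lambda x: torch.sin(8 * torch.pi * x)
x_train = torch.rand(200, 1)
y_train = target(x_train)

# A small feedforward ReLU network of the kind the paper's bounds concern.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Standard training on the sampled points (expected-loss objective).
for _ in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

# Evaluate on a dense grid: the mean error approximates the generalization
# error, while the maximum approximates the error in the uniform norm.
x_grid = torch.linspace(0, 1, 10_000).unsqueeze(1)
with torch.no_grad():
    err = (model(x_grid) - target(x_grid)).abs()
print(f"mean error:    {err.mean().item():.4f}")
print(f"uniform error: {err.max().item():.4f}")
```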
