The mathematics of adversarial attacks in AI – Why deep learning is unstable despite the existence of stable neural networks

09/13/2021
by Alexander Bastounis, et al.

The unprecedented success of deep learning (DL) has made it the method of choice for classification problems. However, it is well established that the current DL methodology produces universally unstable neural networks (NNs). The instability problem has spurred an enormous research effort – with a vast literature on so-called adversarial attacks – yet no solution has been found. Our paper addresses why this is so, as we prove the following mathematical paradox: any training procedure based on training neural networks of a fixed architecture for classification problems will yield neural networks that are either inaccurate or unstable (if accurate) – despite the provable existence of both accurate and stable neural networks for the same classification problems. The key is that the stable and accurate neural networks must have dimensions that vary with the input; in particular, variable dimensions are a necessary condition for stability. Our result points to the paradox that accurate and stable neural networks exist, yet modern algorithms do not compute them. This raises the question: if the existence of neural networks with desirable properties can be proven, can one also find algorithms that compute them? There are cases in mathematics where provable existence implies computability, but will this be the case for neural networks? The answer is no: we demonstrate that neural networks can provably exist as approximate minimisers of standard optimisation problems with standard cost functions, yet no randomised algorithm can compute them with probability better than 1/2.
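The instability the abstract refers to can be illustrated with a toy example (not taken from the paper): a classifier that is accurate on an input x may flip its prediction under a perturbation that is tiny in norm. A minimal sketch with a hand-built linear classifier in NumPy, using a gradient-sign-style step; all names and numbers here are hypothetical:

```python
import numpy as np

# Toy linear classifier: predicts the class sign(w . x).
w = np.array([1.0, -1.0])

def classify(x):
    return 1 if np.dot(w, x) >= 0 else -1

# An input classified as +1, but sitting close to the decision boundary.
x = np.array([0.51, 0.50])

# An adversarial perturbation: tiny in norm, aligned against w,
# yet enough to flip the predicted class.
eps = 0.02
delta = -eps * np.sign(w)   # gradient-sign-style step
x_adv = x + delta

print(round(float(np.linalg.norm(delta)), 4))  # perturbation size: 0.0283
print(classify(x), classify(x_adv))            # prediction flips: 1 -1
```

The point of the paradox is that such flips are not an artefact of a badly chosen classifier: for fixed-architecture networks produced by training, accuracy and stability cannot be achieved simultaneously, even though stable and accurate networks provably exist.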

Related research:

- Can stable and accurate neural networks be computed? – On the barriers of deep learning and Smale's 18th problem (01/20/2021)
- What do AI algorithms actually learn? - On false structures in deep learning (06/04/2019)
- Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks (02/22/2021)
- On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error (01/04/2022)
- Connecting Lyapunov Control Theory to Adversarial Attacks (07/17/2019)
- Hacking Neural Networks: A Short Introduction (11/18/2019)
- Learning Robust Deep Equilibrium Models (04/25/2023)
