On The Robustness of a Neural Network

07/25/2017
by El Mahdi El Mhamdi, et al.

With the development of neural network-based machine learning and its use in mission-critical applications, voices are rising against the black-box aspect of neural networks, as it becomes crucial to understand their limits and capabilities. With the rise of neuromorphic hardware, it is even more critical to understand how a neural network, as a distributed system, tolerates failures of its computing nodes, the neurons, and of its communication channels, the synapses. Experimentally assessing the robustness of neural networks involves the quixotic venture of testing all possible failures on all possible inputs, which runs into a combinatorial explosion for the former and the impossibility of gathering all possible inputs for the latter. In this paper, we prove an upper bound on the expected error of the output when a subset of neurons crashes. This bound involves dependencies on the network parameters that can be seen as overly pessimistic in the average case: a polynomial dependency on the Lipschitz coefficient of the neurons' activation function, and an exponential dependency on the depth of the layer where a failure occurs. We back up our theoretical results with experiments illustrating the extent to which our predictions match the dependencies between the network parameters and robustness. Our results show that the robustness of neural networks to the average crash can be estimated without testing the network on all failure configurations or accessing the training set used to train it, both of which are practically impossible requirements.
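The Monte Carlo flavor of the experimental setup can be sketched as follows: instead of enumerating every failure configuration, one samples random inputs and random crashed neurons and averages the resulting output deviation. This is a minimal sketch, not the paper's actual code; the network architecture, weights, and the choice to model a crash as a zeroed activation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small feedforward network: sigmoid activations, random weights.
sizes = [8, 16, 16, 16, 4]
weights = [rng.normal(0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, crashed=None):
    """Forward pass; `crashed` is an optional (layer_index, neuron_indices)
    pair whose neurons output 0, modelling a crash failure at that layer."""
    a = x
    for i, W in enumerate(weights):
        a = sigmoid(W @ a)
        if crashed is not None and crashed[0] == i:
            a = a.copy()
            a[crashed[1]] = 0.0  # crashed neurons emit nothing
    return a

def avg_crash_error(layer, trials=200):
    """Estimate the expected output error when a single neuron in `layer`
    crashes, averaged over random inputs and random crash choices
    (Monte Carlo, not the infeasible exhaustive enumeration)."""
    errs = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, sizes[0])
        neuron = rng.integers(sizes[layer + 1])
        errs.append(np.linalg.norm(forward(x) - forward(x, (layer, [neuron]))))
    return float(np.mean(errs))

for layer in range(len(weights)):
    print(f"layer {layer}: avg output error = {avg_crash_error(layer):.4f}")
```

Comparing the per-layer averages against the paper's bound would show how its polynomial dependency on the Lipschitz coefficient and exponential dependency on failure depth relate to the empirically observed error.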


