When Neurons Fail

06/27/2017
by El Mahdi El Mhamdi, et al.

We view a neural network as a distributed system whose neurons can fail independently, and we evaluate its robustness in the absence of any (recovery) learning phase. We give tight bounds on the number of neurons that can fail without harming the result of a computation. To derive our bounds, we leverage the fact that neural activation functions are Lipschitz-continuous. Our bound is on a quantity we call the Forward Error Propagation, which captures how much error a neural network propagates when a given number of its components fail. Computing this quantity only requires looking at the topology of the network, whereas experimentally assessing the robustness of a network requires the costly exercise of testing all possible inputs against all possible failure configurations of the network, facing a discouraging combinatorial explosion. We distinguish the case of neurons that fail by stopping their activity (crashed neurons) from the case of neurons that fail by transmitting arbitrary values (Byzantine neurons). Interestingly, as we show in the paper, our bound easily extends to the case where synapses can fail. We show how our bound can be leveraged to quantify the effect of memory cost reduction on the accuracy of a neural network, and to estimate the amount of information any neuron needs from its preceding layer, thereby enabling a boosting scheme that prevents neurons from waiting for unnecessary signals. We finally discuss the trade-off between neural network robustness and learning cost.
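The paper defines the Forward Error Propagation precisely; the sketch below is only an illustration of the general idea that a topology-only bound can be computed by propagating a worst-case error vector forward through the layers, amplified at each step by the synaptic weights and the Lipschitz constant of the activation. The function name `forward_error_bound` and the specific amplification rule (absolute weights times Lipschitz constant) are assumptions made for this example, not the paper's formula.

```python
import numpy as np

def forward_error_bound(weights, lipschitz, init_error):
    """Illustrative sketch (not the paper's exact formula): propagate a
    worst-case per-neuron error bound forward through the network, using
    only its topology (the weight matrices) and the Lipschitz constant
    of the activation function.

    weights    : list of weight matrices; weights[l] has shape
                 (n_{l+1}, n_l) and maps layer l to layer l+1
    lipschitz  : Lipschitz constant of the activation (e.g. 1 for ReLU)
    init_error : error magnitudes injected at the input layer, e.g. by
                 crashed or Byzantine neurons
    """
    err = np.asarray(init_error, dtype=float)
    for W in weights:
        # A Lipschitz activation amplifies the weighted error coming
        # from the previous layer by at most its Lipschitz constant.
        err = lipschitz * np.abs(W) @ err
    return err

# Toy usage: a 3-2-1 network whose first input neuron is faulty and
# deviates by at most 0.5 from its correct output.
W1 = np.array([[0.2, -0.4, 0.1],
               [0.3,  0.1, -0.2]])
W2 = np.array([[0.5, -0.3]])
print(forward_error_bound([W1, W2], lipschitz=1.0, init_error=[0.5, 0.0, 0.0]))
```

No inference or test inputs are needed: the bound depends only on the weights, the activation's Lipschitz constant, and the assumed failure magnitudes, which is what makes it cheap compared to exhaustively testing failure configurations.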
