Information contraction in noisy binary neural networks and its implications

01/28/2021
by Chuteng Zhou, et al.

Neural networks have become the machine learning models of choice, achieving state-of-the-art performance on large-scale image classification, object detection and natural language processing tasks. In this paper, we consider noisy binary neural networks, in which each neuron has a non-zero probability of producing an incorrect output. Such noisy models arise in biological, physical and electronic contexts and constitute an important class of models relevant to the physical world. Intuitively, the number of neurons in such systems has to grow to compensate for the noise while maintaining the same level of expressive power and computational reliability. Our key finding is a lower bound on the required number of neurons in noisy neural networks, the first of its kind. To prove this lower bound, we take an information-theoretic approach and derive a novel strong data processing inequality (SDPI), which not only generalizes the Evans-Schulman result for binary symmetric channels to general channels, but also drastically improves tightness when used to estimate end-to-end information contraction in networks. Our SDPI applies to a variety of information processing systems, including neural networks and cellular automata. Applying the SDPI to noisy binary neural networks, we obtain our key lower bound and investigate its implications for network depth-width trade-offs; our results suggest a trade-off for noisy neural networks that is very different from the established understanding of noiseless networks. Furthermore, we apply the SDPI to study fault-tolerant cellular automata and obtain bounds on the error-correction overhead and the relaxation time. This paper offers a new understanding of noisy information processing systems through the lens of information theory.
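For readers less familiar with the terminology, the sketch below recalls the generic form of a strong data processing inequality and the classical contraction coefficient of the binary symmetric channel that underlies the Evans-Schulman argument. This is standard background in common notation, not a restatement of the paper's generalized SDPI or of its lower bound.

```latex
% Standard SDPI background (common notation; not the paper's exact statement).
% For a Markov chain U -> X -> Y, where Y is obtained by passing X through a
% noisy channel W, the ordinary data processing inequality gives
% I(U;Y) <= I(U;X). A *strong* DPI sharpens this with a channel-dependent
% contraction coefficient \eta(W) < 1:
\[
  I(U;Y) \;\le\; \eta(W)\, I(U;X), \qquad 0 \le \eta(W) < 1 .
\]
% For the binary symmetric channel BSC(\varepsilon), which flips its input bit
% with probability \varepsilon, the mutual-information contraction coefficient
% used in the Evans-Schulman analysis of noisy circuits is
\[
  \eta\bigl(\mathrm{BSC}(\varepsilon)\bigr) = (1 - 2\varepsilon)^2 .
\]
% Each noisy component thus multiplies the relevant mutual information by at
% most this factor, so information passing through a chain of such channels
% contracts geometrically; this contraction is the mechanism behind lower
% bounds on the size of reliable noisy circuits and neural networks.
```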

