Neural Networks Built from Unreliable Components

01/26/2013
by Amin Karbasi, et al.

Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed the reliable learning and retrieval of an exponential number of patterns. Both these and classical associative memories, however, have assumed internally noiseless computational nodes. This paper considers the setting in which the internal computations are themselves noisy. We characterize how, even when all components are noisy, the final error probability in recall can often be made exceedingly small, and we identify a threshold phenomenon governing when reliable recall remains possible. We also show how to optimize the parameters of the inference algorithm when the statistical properties of the internal noise are known.
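
To make the noisy-recall setting concrete, here is a minimal sketch (not the paper's algorithm) of graph-based recall when both the constraint nodes and the pattern neurons compute with bounded internal noise. It replaces the structured pattern set with a toy real-valued subspace, and uses a simple "correct the most suspicious coordinate" update rule; the parameters n, k, num_errors, and the noise bounds nu_check and nu_neuron are illustrative assumptions, not values from the paper.

import numpy as np

# Illustrative parameters (assumptions for this sketch, not taken from the paper).
n, k = 200, 50          # pattern length; dimension of the pattern subspace
m = n - k               # number of constraint (check) nodes
num_errors = 5          # external errors injected into the query
nu_check = 0.05         # bound on internal noise at constraint nodes
nu_neuron = 0.05        # bound on internal noise at pattern neurons

rng = np.random.default_rng(0)

# Stand-in for a structured pattern set: stored patterns lie in a random
# k-dimensional subspace of R^n, and W spans its orthogonal complement,
# so W @ x == 0 for every stored pattern x.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
basis = Q[:, :k]                  # subspace containing the patterns
W = Q[:, k:].T                    # (m x n) constraint matrix

x = basis @ rng.standard_normal(k)             # one stored pattern
e = np.zeros(n)
pos = rng.choice(n, size=num_errors, replace=False)
e[pos] = rng.choice([-1.0, 1.0], size=num_errors)
y = x + e                                      # corrupted query

for _ in range(50):
    # Constraint nodes compute the syndrome, corrupted by bounded internal noise.
    v = rng.uniform(-nu_check, nu_check, size=m)
    s = W @ y + v                 # equals W @ e + v, because W @ x == 0
    # Pattern neurons aggregate noisy feedback from the constraint nodes.
    u = rng.uniform(-nu_neuron, nu_neuron, size=n)
    f = W.T @ s + u
    j = int(np.argmax(np.abs(f)))
    if abs(f[j]) < 0.5:           # stopping threshold; tunable given the noise bounds
        break
    y[j] -= np.sign(f[j])         # correct the most suspicious coordinate by one unit

print("max residual error after recall:", np.max(np.abs(y - x)))

Sweeping nu_check and nu_neuron upward in a simulation like this is one way to observe the kind of threshold behaviour described above: recall stays essentially error-free up to a critical internal noise level and degrades beyond it.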

Related Research

03/13/2014
Noise Facilitation in Associative Memories of Exponential Capacity
Recent advances in associative memory design through structured pattern ...

02/05/2013
A Non-Binary Associative Memory with Exponential Pattern Retrieval Capacity and Iterative Learning: Extended Results
We consider the problem of neural association for a network of non-binar...

08/12/2019
Nonleaf Patterns in Trees: Protected Nodes and Fine Numbers
A closed-form formula is derived for the number of occurrences of matche...

09/28/2020
Discrimination of attractors with noisy nodes in Boolean networks
Observing the internal state of the whole system using a small number of...

12/23/2019
Layerwise Noise Maximisation to Train Low-Energy Deep Neural Networks
Deep neural networks (DNNs) depend on the storage of a large number of p...

02/06/2015
Computational and Statistical Boundaries for Submatrix Localization in a Large Noisy Matrix
The interplay between computational efficiency and statistical accuracy ...

07/24/2014
Convolutional Neural Associative Memories: Massive Capacity with Noise Tolerance
The task of a neural associative memory is to retrieve a set of previous...