Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses

09/18/2015
by Carlo Baldassi, et al.

We show that discrete synaptic weights can be efficiently used for learning in large-scale neural systems, and lead to unanticipated computational performance. We focus on the representative case of learning random patterns with binary synapses in single-layer networks. The standard statistical analysis shows that this problem is exponentially dominated by isolated solutions that are extremely hard to find algorithmically. Here, we introduce a novel method that provides analytical evidence for the existence of subdominant and extremely dense regions of solutions. Numerical experiments confirm these findings. We also show that the dense regions are surprisingly accessible by simple learning protocols, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions. These outcomes extend to synapses with multiple states and to deeper neural architectures. The large-deviation measure also suggests how to design novel algorithmic schemes for optimization based on local entropy maximization.
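To make the setting concrete, the following is a minimal, hypothetical Python sketch of the storage problem referred to in the abstract: N binary synapses (weights in {-1, +1}) must correctly classify P random patterns. The greedy single-flip search shown here is purely illustrative and is not the authors' method or the local-entropy scheme they propose; all names, sizes, and the stopping rule are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's algorithm): binary-perceptron storage
# of random patterns. We look for w in {-1,+1}^N such that every pattern
# satisfies y_mu * (w . x_mu) > 0, using a naive greedy single-flip search.
import numpy as np

rng = np.random.default_rng(0)

N, P = 101, 40                        # assumed sizes: synapses and patterns (alpha = P/N ~ 0.4)
X = rng.choice([-1, 1], size=(P, N))  # random +/-1 input patterns
y = rng.choice([-1, 1], size=P)       # random +/-1 target labels
w = rng.choice([-1, 1], size=N)       # binary synaptic weights, random start

def n_errors(w):
    """Number of patterns violating y * (w . x) > 0."""
    return int(np.sum(y * (X @ w) <= 0))

# Greedy zero-temperature search: flip the single synapse that most reduces
# the number of misclassified patterns; stop at a solution or a local minimum.
for _ in range(10_000):
    errs = n_errors(w)
    if errs == 0:
        break
    best_i, best_errs = None, errs
    for i in range(N):
        w[i] *= -1
        e = n_errors(w)
        w[i] *= -1
        if e < best_errs:
            best_i, best_errs = i, e
    if best_i is None:                # stuck: typical solutions are hard to reach this way
        break
    w[best_i] *= -1

print(f"remaining errors: {n_errors(w)} / {P}")
```

A plain search of this kind tends to get trapped, which is the algorithmic hardness the abstract alludes to; the paper's claim is that simple learning protocols succeed because they are attracted to the dense, subdominant clusters of solutions rather than to the exponentially more numerous isolated ones.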


