On the role of synaptic stochasticity in training low-precision neural networks

by Carlo Baldassi et al.

Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes. Here we show that a neural network model with stochastic binary weights naturally gives prominence to exponentially rare dense regions of solutions with a number of desirable properties such as robustness and good generalization performance, while typical solutions are isolated and hard to find. Binary solutions of the standard perceptron problem are obtained from a simple gradient descent procedure on a set of real values parametrizing a probability distribution over the binary synapses. Both analytical and numerical results are presented. An algorithmic extension aimed at training discrete deep neural networks is also investigated.
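The procedure described in the abstract can be sketched as follows: each binary synapse w_i = ±1 is given a real parameter θ_i, with mean weight m_i = tanh(θ_i), and plain gradient descent is run on the θ's through a margin-based loss of the mean network; the final binary solution is w = sign(θ). The sketch below is a minimal illustration under assumed choices (problem sizes, learning rate, hinge loss with target margin κ), not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy perceptron storage problem (sizes are illustrative, not from the paper).
N, P = 101, 20                          # inputs, random patterns
X = rng.choice([-1.0, 1.0], size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)

# Each binary synapse w_i = +/-1 has mean m_i = tanh(theta_i);
# gradient descent acts on the real parameters theta.
theta = 0.01 * rng.standard_normal(N)
lr, kappa = 0.5, 0.5                    # step size and target margin (assumed)

for epoch in range(5000):
    m = np.tanh(theta)                  # mean binary weight, in (-1, 1)
    margins = y * (X @ m) / np.sqrt(N)
    viol = margins < kappa              # patterns below the target margin
    if not viol.any():
        break
    # Hinge-loss gradient, chained through the tanh parametrization.
    grad_m = -(y[viol, None] * X[viol]).sum(axis=0) / np.sqrt(N)
    theta -= lr * grad_m * (1.0 - m**2) / P

w = np.sign(theta)                      # final binary solution
acc = float(np.mean(np.sign(X @ w) == y))
print(f"binary training accuracy: {acc:.2f}")
```

As θ grows during training, tanh saturates and the mean network m converges toward the binary vector sign(θ), which is why the binarized weights inherit the trained behavior.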



