A theory of learning with constrained weight-distribution

06/14/2022
by Weishun Zhong, et al.

A central question in computational neuroscience is how structure determines function in neural networks. Emerging high-quality, large-scale connectomic datasets raise the question of what general functional principles can be gleaned from structural information such as the distribution of excitatory/inhibitory synapse types and the distribution of synaptic weights. Motivated by this question, we developed a statistical mechanical theory of learning in neural networks that incorporates structural information as constraints. We derived an analytical solution for the memory capacity of the perceptron, a basic feedforward model of supervised learning, with a constraint on the distribution of its weights. Our theory predicts that the reduction in capacity due to the constrained weight distribution is related to the Wasserstein distance between the imposed distribution and the standard normal distribution. To test the theoretical predictions, we use optimal transport theory and information geometry to develop an SGD-based algorithm that finds weights which simultaneously learn the input-output task and satisfy the distribution constraint. We show that training in our algorithm can be interpreted as geodesic flows in the Wasserstein space of probability distributions. We further developed a statistical mechanical theory for teacher-student perceptron rule learning and asked how the student can best incorporate prior knowledge of the rule. Our theory shows that it is beneficial for the learner to adopt different prior weight distributions during learning, and that distribution-constrained learning outperforms unconstrained and sign-constrained learning. Our theory and algorithm provide novel strategies for incorporating prior knowledge about weights into learning, and reveal a powerful connection between structure and function in neural networks.
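To make the distribution-constrained setting concrete, below is a minimal, hypothetical Python sketch (not the authors' algorithm): it alternates perceptron-style SGD updates on a random classification task with a projection of the weights onto a prescribed weight distribution. The projection uses rank/quantile matching, which is the one-dimensional optimal-transport map between the empirical weight distribution and the target distribution. The function and parameter names (`train_constrained_perceptron`, `target_quantiles`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def train_constrained_perceptron(X, y, target_quantiles, lr=0.05, epochs=200, rng=None):
    """Illustrative sketch: perceptron-style SGD interleaved with a projection
    of the weights onto a prescribed weight distribution.

    target_quantiles: sorted array of length N giving samples (empirical
    quantiles) of the desired weight distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    P, N = X.shape
    w = rng.standard_normal(N) / np.sqrt(N)

    def project(w):
        # Replace the i-th smallest weight with the i-th target quantile:
        # this sorted coupling is the 1-D optimal-transport (Monge) map.
        ranks = np.argsort(np.argsort(w))
        return target_quantiles[ranks]

    for _ in range(epochs):
        for mu in rng.permutation(P):
            margin = y[mu] * (X[mu] @ w)
            if margin <= 0:            # update only on misclassified patterns
                w = w + lr * y[mu] * X[mu]
        w = project(w)                 # enforce the weight-distribution constraint
    return w

# Example usage: constrain the weights to a standard normal distribution.
rng = np.random.default_rng(0)
N, P = 200, 100
X = rng.standard_normal((P, N)) / np.sqrt(N)
y = rng.choice([-1.0, 1.0], size=P)
target = np.sort(rng.standard_normal(N))   # sorted target samples = empirical quantiles
w = train_constrained_perceptron(X, y, target, rng=rng)
```

The sorted-coupling projection is used here only because, in one dimension, it is the exact optimal-transport map between two sets of samples; the paper's actual algorithm, which combines optimal transport with information geometry and admits a geodesic-flow interpretation, is more sophisticated than this alternating scheme.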


