Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks

01/16/2021
by Maxwell Mbabilla Aladago, et al.

In contrast to traditional weight optimization in a continuous space, we demonstrate the existence of effective random networks whose weights are never updated. By selecting a weight among a fixed set of random values for each individual connection, our method uncovers combinations of random weights that match the performance of traditionally trained networks of the same capacity. We refer to our networks as "slot machines" where each reel (connection) contains a fixed set of symbols (random values). Our backpropagation algorithm "spins" the reels to seek "winning" combinations, i.e., selections of random weight values that minimize the given loss. Quite surprisingly, we find that allocating just a few random values to each connection (e.g., 8 values per connection) yields highly competitive combinations despite being dramatically more constrained compared to traditionally learned weights. Moreover, finetuning these combinations often improves performance over the trained baselines. A randomly initialized VGG-19 with 8 values per connection contains a combination that achieves 90% test accuracy on CIFAR-10. Our method also achieves an impressive performance of 98.1% on MNIST for neural networks containing only random weights.
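The selection mechanism described in the abstract can be sketched in a few lines of NumPy. This is an illustrative simplification, not the authors' implementation: the `SlotLayer` class, its names, and the straight-through-style score update are assumptions made for this example. Each connection keeps a small set of frozen random candidate values plus one learnable score per candidate, and the forward pass always uses the highest-scoring value on every connection.

```python
import numpy as np

rng = np.random.default_rng(0)

class SlotLayer:
    """Sketch of a 'slot machine' linear layer (illustrative, not the
    paper's code): k frozen random values per connection, and only the
    per-candidate scores are learned."""

    def __init__(self, in_dim, out_dim, k=8):
        # Candidate values are drawn once and never updated.
        self.values = rng.standard_normal((out_dim, in_dim, k))
        # Small random scores; argmax over the last axis picks the weight.
        self.scores = 0.01 * rng.standard_normal((out_dim, in_dim, k))

    def weight(self):
        # For every connection, select the candidate with the largest score.
        idx = self.scores.argmax(axis=-1)
        return np.take_along_axis(self.values, idx[..., None], axis=-1)[..., 0]

    def forward(self, x):
        return x @ self.weight().T

    def update(self, x, grad_out, lr=0.05):
        # Straight-through-style update (an assumption for this sketch):
        # each candidate's score is nudged by the gradient the loss would
        # send through that candidate's value, so better-fitting candidates
        # accumulate higher scores over time.
        grad_w = grad_out.T @ x                    # shape (out_dim, in_dim)
        self.scores -= lr * grad_w[..., None] * self.values

# Toy usage: "spin the reels" to fit a random linear target.
x = rng.standard_normal((64, 5))
w_true = rng.standard_normal((3, 5))
y = x @ w_true.T

layer = SlotLayer(in_dim=5, out_dim=3, k=8)
init_loss = float(((layer.forward(x) - y) ** 2).mean())
for _ in range(200):
    pred = layer.forward(x)
    grad_out = 2.0 * (pred - y) / len(x)           # grad of mean squared error
    layer.update(x, grad_out)
final_loss = float(((layer.forward(x) - y) ** 2).mean())
```

With only 8 frozen candidates per connection, the layer cannot reach the exact target weights, but the score updates quickly settle on the best available combination, mirroring the paper's observation that a few random values per connection already yield competitive selections.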


Related research

11/29/2019
What's Hidden in a Randomly Weighted Neural Network?
Training a neural network is synonymous with learning the values of the ...

01/04/2020
RPR: Random Partition Relaxation for Training Binary and Ternary Weight Neural Networks
We present Random Partition Relaxation (RPR), a method for strong quanti...

11/14/2020
AutoRWN: automatic construction and training of random weight networks using competitive swarm of agents
Random Weight Networks have been extensively used in many applications i...

06/30/2020
Training highly effective connectivities within neural networks with randomly initialized, fixed weights
We present some novel, straightforward methods for training the connecti...

12/08/2016
Learning in the Machine: Random Backpropagation and the Learning Channel
Random backpropagation (RBP) is a variant of the backpropagation algorit...

10/06/2021
CBP: Backpropagation with constraint on weight precision using a pseudo-Lagrange multiplier method
Backward propagation of errors (backpropagation) is a method to minimize...

11/16/2018
Residual Convolutional Neural Network Revisited with Active Weighted Mapping
In visual recognition, the key to the performance improvement of ResNet ...
