Proving the Lottery Ticket Hypothesis: Pruning is All You Need

02/03/2020
by Eran Malach, et al.

The lottery ticket hypothesis (Frankle and Carbin, 2018) states that a randomly-initialized network contains a small subnetwork that, when trained in isolation, can compete with the performance of the original network. We prove an even stronger hypothesis (as also conjectured in Ramanujan et al., 2019): for every bounded distribution and every target network with bounded weights, a sufficiently over-parameterized neural network with random weights contains a subnetwork with roughly the same accuracy as the target network, without any further training.
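
To make the claim concrete, here is a toy, brute-force sketch, not the paper's construction: it draws a tiny random two-layer ReLU network (never trained), enumerates every binary pruning mask, and keeps the subnetwork that best matches a fixed target network on sampled data. The target function, the 2-4-1 dimensions, and the sampling are arbitrary choices made purely for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Target network: a single ReLU unit, g(x) = relu(x1 - x2).
def target(X):
    return np.maximum(X @ np.array([1.0, -1.0]), 0.0)

# Random, over-parameterized two-layer net (2 -> 4 -> 1); its weights are never trained.
W = rng.uniform(-1.0, 1.0, size=(4, 2))   # first-layer weights
v = rng.uniform(-1.0, 1.0, size=4)        # second-layer weights

def masked_forward(X, w_mask, v_mask):
    """Evaluate the subnetwork obtained by zeroing out pruned weights."""
    h = np.maximum(X @ (W * w_mask).T, 0.0)
    return h @ (v * v_mask)

# Points on which the subnetwork is compared to the target.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = target(X)

best_err, best_masks = np.inf, None
# Brute-force search over all 2^(8+4) = 4096 binary pruning masks.
for bits in itertools.product([0.0, 1.0], repeat=12):
    w_mask = np.array(bits[:8]).reshape(4, 2)
    v_mask = np.array(bits[8:])
    err = np.mean((masked_forward(X, w_mask, v_mask) - y) ** 2)
    if err < best_err:
        best_err, best_masks = err, (w_mask, v_mask)

full_err = np.mean((masked_forward(X, np.ones((4, 2)), np.ones(4)) - y) ** 2)
print(f"MSE of full random network : {full_err:.4f}")
print(f"MSE of best pruned subnet  : {best_err:.4f}")
```

With only 12 random weights the best subnetwork found this way is usually a crude approximation; the theorem's content is that, as the random network's width and depth grow (polynomially in the relevant parameters), a subnetwork with error as small as desired is guaranteed to exist, and the paper's proof exhibits it constructively rather than by exhaustive search.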

Related research

03/05/2019  The Lottery Ticket Hypothesis at Scale
06/22/2020  Logarithmic Pruning is All You Need
06/25/2020  Data-dependent Pruning to find the Winning Lottery Ticket
06/12/2020  How many winning tickets are there in one DNN?
10/18/2021  Finding Everything within Random Binary Networks
06/09/2022  A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis
10/29/2022  Strong Lottery Ticket Hypothesis with ε-perturbation
