Iterative Magnitude Pruning as a Renormalisation Group: A Study in the Context of the Lottery Ticket Hypothesis

08/06/2023
by Abu-Al Hassan et al.

This thesis delves into Deep Neural Networks (DNNs), focusing on the Lottery Ticket Hypothesis (LTH). The LTH posits that large DNNs contain smaller, trainable subnetworks, termed "winning tickets", that can achieve performance comparable to the full model. A key process in the LTH, Iterative Magnitude Pruning (IMP), incrementally removes the smallest-magnitude weights, emulating stepwise learning in DNNs. Once these winning tickets are identified, we further investigate their "universality": that is, whether a winning ticket found for one specific problem also works well for other, similar problems. We also bridge the divide between IMP and Renormalisation Group (RG) theory in physics, promoting a more rigorous understanding of IMP.
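To make the pruning procedure concrete, the following is a minimal sketch of IMP with weight rewinding in PyTorch. The pruning fraction, number of rounds, the choice to prune only weight matrices, and the user-supplied `train_fn` hook are illustrative assumptions, not settings taken from the thesis.

```python
# A minimal IMP-with-rewinding sketch in PyTorch. The pruning fraction,
# number of rounds, and the user-supplied `train_fn` are illustrative
# assumptions, not settings taken from the thesis.
import copy
import torch
import torch.nn as nn


def iterative_magnitude_pruning(model: nn.Module, train_fn,
                                prune_frac: float = 0.2, rounds: int = 5):
    """Repeatedly train, prune the smallest-magnitude weights, and rewind."""
    init_state = copy.deepcopy(model.state_dict())  # initial weights, used for rewinding
    # One binary mask per weight matrix (biases are left unpruned here).
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train_fn(model, masks)  # caller trains the masked network to convergence

        with torch.no_grad():
            for name, param in model.named_parameters():
                if name not in masks:
                    continue
                # Magnitudes of the weights still alive under the current mask.
                alive = param.abs()[masks[name].bool()]
                if alive.numel() == 0:
                    continue
                # Drop the smallest prune_frac of the surviving weights.
                threshold = torch.quantile(alive, prune_frac)
                masks[name] = masks[name] * (param.abs() > threshold).float()

            # Rewind the surviving weights to their original initialisation:
            # the masked, rewound network is the candidate winning ticket.
            model.load_state_dict(init_state)
            for name, param in model.named_parameters():
                if name in masks:
                    param.mul_(masks[name])

    return model, masks
```

Each round trains the masked network, removes the smallest-magnitude surviving weights, and rewinds the remainder to their original initialisation, which corresponds to the winning-ticket identification described in the LTH literature.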


