Signing the Supermask: Keep, Hide, Invert

01/31/2022
by Nils Koster, et al.

The exponential growth in the number of parameters of neural networks over the past years has been accompanied by an increase in performance across several fields. However, due to their sheer size, the networks have become not only difficult to interpret but also problematic to train and to deploy in real-world applications, since hardware requirements have increased accordingly. Tackling both issues, we present a novel approach that either drops a neural network's initial weights or inverts their sign. Put simply, a network is trained by selecting and sign-inverting weights without changing their absolute values. Our contribution extends previous work on masking by additionally sign-inverting the initial weights and follows the findings of the Lottery Ticket Hypothesis. Through this extension and adaptations of initialization methods, we achieve a pruning rate of up to 99%, exceeding the performance of various baseline and previous models. Our approach has two main advantages. First, and most notably, signed Supermask models drastically simplify a model's structure while still performing well on the given task. Second, by reducing the neural network to its very foundation, we gain insights into which weights matter for performance.
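To make the idea concrete, the sketch below illustrates one plausible way to realize a signed Supermask layer: the initial weights are frozen, and only a real-valued score per weight is trained, from which a discrete mask in {-1, 0, +1} (invert, drop, keep) is derived. This is a minimal illustration under assumed details (PyTorch, a fixed drop threshold, a straight-through gradient estimator), not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SignedSupermaskLinear(nn.Module):
    """Illustrative linear layer whose frozen initial weights are only
    kept, dropped, or sign-inverted via a learned mask in {-1, 0, +1}."""

    def __init__(self, in_features, out_features, drop_threshold=0.05):
        super().__init__()
        # Frozen initial weights: never updated during training.
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features), requires_grad=False
        )
        nn.init.kaiming_normal_(self.weight)
        # Trainable real-valued scores; the discrete mask is derived from them.
        self.scores = nn.Parameter(torch.empty_like(self.weight).uniform_(-0.1, 0.1))
        # Assumed hyperparameter: scores with small magnitude map to mask value 0.
        self.drop_threshold = drop_threshold

    def forward(self, x):
        # Discretize: |score| below threshold -> 0 (drop),
        # otherwise sign(score) -> +1 (keep) or -1 (invert).
        mask = torch.sign(self.scores) * (self.scores.abs() > self.drop_threshold).float()
        # Straight-through estimator: use the discrete mask in the forward pass,
        # but route gradients to the continuous scores.
        mask = mask.detach() + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)


# Usage: only `scores` receives gradients; the initial weights stay untouched.
layer = SignedSupermaskLinear(784, 10)
out = layer(torch.randn(32, 784))
```

The design choice here mirrors the abstract: training never changes a weight's absolute value, it only decides whether that weight is used as initialized, removed, or used with flipped sign.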


Related research

03/09/2018
The Lottery Ticket Hypothesis: Finding Small, Trainable Neural Networks
Neural network compression techniques are able to reduce the parameter c...

09/09/2017
Deep Residual Networks and Weight Initialization
Residual Network (ResNet) is the state-of-the-art architecture that real...

09/17/2020
Holistic Filter Pruning for Efficient Deep Neural Networks
Deep neural networks (DNNs) are usually over-parameterized to increase t...

03/25/2019
MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning
In this paper, we propose a novel meta learning approach for automatic c...

08/19/2023
To prune or not to prune: A chaos-causality approach to principled pruning of dense neural networks
Reducing the size of a neural network (pruning) by removing weights with...

12/20/2022
Constructing Organism Networks from Collaborative Self-Replicators
We introduce organism networks, which function like a single neural netw...

05/03/2019
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
The recent "Lottery Ticket Hypothesis" paper by Frankle & Carbin showed ...
