LOFT: Finding Lottery Tickets through Filter-wise Training

10/28/2022
by Qihan Wang, et al.

Recent work on the Lottery Ticket Hypothesis (LTH) shows that large neural networks contain "winning tickets": sparse versions of the full model that can be trained independently to reach accuracy comparable to that of the full model. However, finding winning tickets requires pretraining the large model for at least several epochs, which becomes increasingly burdensome as the original network grows. In this paper, we explore how to efficiently identify the emergence of such winning tickets, and we use this observation to design efficient pretraining algorithms. For clarity of exposition, we focus on convolutional neural networks (CNNs). To identify good filters, we propose a novel filter distance metric that closely tracks model convergence. As our theory dictates, our filter analysis is consistent with recent findings on neural network learning dynamics. Motivated by these observations, we present the LOttery ticket through Filter-wise Training algorithm, dubbed LoFT. LoFT is a model-parallel pretraining algorithm that partitions convolutional layers by filters and trains them independently in a distributed setting, reducing memory and communication costs during pretraining. Experiments show that LoFT i) preserves and finds good lottery tickets, and ii) achieves non-trivial computation and communication savings while maintaining accuracy comparable to, or even better than, other pretraining methods.
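The core idea of filter-wise partitioning can be illustrated with a minimal sketch. The abstract does not specify how LoFT assigns filters to workers, so the contiguous split and the `partition_filters` helper below are illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np

def partition_filters(weights, num_workers):
    """Split a conv layer's filter bank (output-channel-first layout,
    shape (out, in, kh, kw)) into contiguous groups, one per worker.
    Hypothetical helper: LoFT's real assignment may differ."""
    groups = np.array_split(np.arange(weights.shape[0]), num_workers)
    return [weights[idx] for idx in groups]

# A conv layer with 8 filters of shape (3, 3, 3); 4 workers
# each independently train a 2-filter slice of the layer.
w = np.random.randn(8, 3, 3, 3)
sub_models = partition_filters(w, num_workers=4)
print([s.shape for s in sub_models])
```

Since each worker holds and updates only its own slice of the filter bank, per-worker memory and gradient-communication volume shrink roughly in proportion to the number of partitions, which is the source of the pretraining savings the abstract claims.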
