Structural Dropout for Model Width Compression

05/13/2022
by Julian Knodt, et al.

Existing ML models are known to be highly over-parameterized, using significantly more resources than required for a given task. Prior work has explored compressing models offline, for example by distilling knowledge from larger models into much smaller ones. This is effective for compression, but it does not give an empirical method for measuring how much a model can be compressed, and it requires additional training for each compressed model. We propose a method that, in a single training session, produces both the original model and a set of compressed models. The proposed approach is a "structural" dropout that prunes all elements of the hidden state above a randomly chosen index, forcing the model to learn an importance ordering over its features. Once this ordering is learned, unimportant features can be pruned at inference time while retaining most of the accuracy, significantly reducing the parameter count. In this work we focus on Structural Dropout for fully-connected layers, but the concept applies to any layer with unordered features, such as convolutional or attention layers. Structural Dropout requires no additional pruning or retraining, though it does require additional validation for each candidate hidden size. At inference time, a non-expert can select the memory versus accuracy trade-off that best suits their needs, across a wide range of models from highly compressed to more accurate.
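The core mechanism lends itself to a short sketch. The following PyTorch snippet illustrates the idea of zeroing out all hidden features above a randomly chosen cutoff during training; the module name, the min_width parameter, the uniform choice of cutoff, and the inverted-dropout-style rescaling are assumptions made for illustration, not the authors' exact implementation.

import torch
import torch.nn as nn

class StructuralDropout(nn.Module):
    """Zeroes out all features above a randomly chosen index during training,
    encouraging earlier features to carry the most important information."""

    def __init__(self, min_width: int = 1):
        super().__init__()
        self.min_width = min_width  # assumed lower bound on the kept width

    def forward(self, x, width=None):
        n = x.shape[-1]
        if self.training:
            # Sample a cutoff uniformly in [min_width, n]; features at or
            # above the cutoff are dropped for this forward pass.
            cutoff = int(torch.randint(self.min_width, n + 1, (1,)))
        else:
            # At inference, keep a user-chosen width (or the full width).
            cutoff = n if width is None else width
        mask = torch.zeros(n, device=x.device, dtype=x.dtype)
        mask[:cutoff] = 1.0
        # Rescale the kept features so the expected activation scale is
        # roughly preserved (assumed here, analogous to inverted dropout).
        return x * mask * (n / cutoff)

A usage sketch: during training the cutoff is re-sampled every forward pass, and at evaluation time a fixed width is selected.

sd = StructuralDropout()
x = torch.randn(8, 256)
sd.train()
y = sd(x)             # random cutoff each forward pass
sd.eval()
y = sd(x, width=64)   # keep only the first 64 features

Because the surviving features always occupy the leading indices, the corresponding rows and columns of the adjacent linear layers can be sliced away after training to realize the actual parameter reduction.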



Related Research

05/02/2022 · Triangular Dropout: Variable Network Width without Retraining
One of the most fundamental design choices in neural networks is layer w...

12/22/2014 · Learning Compact Convolutional Neural Networks with Nested Dropout
Recently, nested dropout was proposed as a method for ordering represent...

02/07/2020 · DropCluster: A structured dropout for convolutional networks
Dropout as a regularizer in deep neural networks has been less effective...

10/22/2018 · An Exploration of Dropout with RNNs for Natural Language Inference
Dropout is a crucial regularization technique for the Recurrent Neural N...

10/15/2021 · Differentiable Network Pruning for Microcontrollers
Embedded and personal IoT devices are powered by microcontroller units (...

05/17/2023 · Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt
Large Language Models (LLMs), armed with billions of parameters, exhibit...

07/08/2019 · ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning
End-to-end automatic speech recognition (ASR) models are increasingly la...
