Compressing Neural Networks with the Hashing Trick

04/19/2015
by Wenlin Chen, et al.

As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however, mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.
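To make the weight-sharing scheme concrete, here is a minimal NumPy sketch of a HashedNets-style fully connected layer. The class name HashedLayer, the multiplicative hash used to assign connections to buckets, and the precomputed index matrix are illustrative assumptions, not the paper's implementation; the paper evaluates a low-cost hash (xxHash) and a sign hash on the fly.

```python
import numpy as np

class HashedLayer:
    """Fully connected layer whose virtual weight matrix shares its entries
    through a hash function, in the spirit of HashedNets (sketch only)."""

    def __init__(self, n_in, n_out, n_params, seed=0):
        rng = np.random.default_rng(seed)
        # The small vector of "real" parameters; the n_in x n_out matrix
        # below is virtual and never stored as free parameters.
        self.w = 0.05 * rng.standard_normal(n_params)
        self.b = np.zeros(n_out)
        # Map every connection (i, j) to a bucket in [0, n_params) with a
        # cheap multiplicative hash (illustrative; the paper uses xxHash).
        ij = np.arange(n_in * n_out, dtype=np.uint64)
        buckets = (ij * np.uint64(2654435761) + np.uint64(seed)) % np.uint64(n_params)
        self.buckets = buckets.reshape(n_in, n_out).astype(np.int64)
        # A second hash assigns each connection a sign, which reduces the
        # bias introduced by colliding weights.
        signs = (ij * np.uint64(40503) + np.uint64(7)) % np.uint64(2)
        self.signs = np.where(signs == 0, 1.0, -1.0).reshape(n_in, n_out)

    def virtual_weights(self):
        # Expand the shared parameters into the full (virtual) weight matrix.
        # During backprop, the gradients of all connections hashed to the
        # same bucket accumulate into that single shared parameter.
        return self.signs * self.w[self.buckets]

    def forward(self, x):
        return x @ self.virtual_weights() + self.b

# Example: a 784 -> 1000 layer with 784,000 virtual connections stored in
# only 12,500 shared weights (roughly 1/64 of the original parameters).
layer = HashedLayer(784, 1000, n_params=12_500)
out = layer.forward(np.random.default_rng(1).standard_normal((32, 784)))
```

Note that this sketch precomputes and stores the bucket indices only for readability, which would negate the memory savings; in the paper the hash h(i, j) is computed on the fly during the forward and backward passes, so only the n_params shared weights (plus biases) ever need to be stored.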

Related research

Compressing Convolutional Neural Networks (06/14/2015)
Convolutional neural networks (CNN) are increasingly used in many areas ...

Functional Hashing for Compressing Neural Networks (05/20/2016)
As the complexity of deep neural networks (DNNs) trends to grow to absorb...

Learning Efficient Convolutional Networks through Irregular Convolutional Kernels (09/29/2019)
As deep neural networks are increasingly used in applications suited for...

From Hashing to CNNs: Training Binary Weight Networks via Hashing (02/08/2018)
Deep convolutional neural networks (CNNs) have shown appealing performan...

An efficient deep learning hashing neural network for mobile visual search (10/21/2017)
Mobile visual search applications are emerging that enable users to sens...

Generalized Key-Value Memory to Flexibly Adjust Redundancy in Memory-Augmented Networks (03/11/2022)
Memory-augmented neural networks enhance a neural network with an extern...

Multi-Task Zipping via Layer-wise Neuron Sharing (05/24/2018)
Future mobile devices are anticipated to perceive, understand and react ...
