Towards a Theoretical Understanding of Hashing-Based Neural Nets

12/26/2018
by Yibo Lin, et al.

Parameter reduction has been an important topic in deep learning due to the ever-increasing size of deep neural network models and the need to train and run them on resource-limited machines. Despite many efforts in this area, there have been no rigorous theoretical guarantees explaining why existing neural net compression methods should work. In this paper, we provide provable guarantees for some hashing-based parameter reduction methods in neural nets. First, we introduce a neural net compression scheme based on random linear sketching (which is usually implemented efficiently via hashing), and show that the sketched (smaller) network approximates the original network on all inputs drawn from any smooth and well-conditioned low-dimensional manifold. The sketched network can also be trained directly via back-propagation. Next, we study the previously proposed HashedNets architecture and show that the optimization landscape of one-hidden-layer HashedNets has a local strong convexity property similar to that of a standard fully connected neural network. We complement our theoretical results with empirical verification.
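To make the weight-sharing idea behind HashedNets concrete, below is a minimal NumPy sketch of a hashed fully connected layer: every entry of the virtual weight matrix is hashed to one of K shared parameters, so the layer stores K values instead of in_dim x out_dim. This is an illustrative sketch, not the paper's implementation; the function name hashed_layer_forward and the use of a seeded random bucket assignment in place of a real hash function are assumptions made here for brevity.

```python
import numpy as np

def hashed_layer_forward(x, real_weights, in_dim, out_dim, seed=0):
    """Forward pass of one hashed (weight-shared) fully connected layer.

    x            : (batch, in_dim) input batch
    real_weights : (K,) vector of shared parameters -- the compressed storage
    returns      : (batch, out_dim) pre-activations
    """
    K = real_weights.shape[0]
    # A seeded RNG stands in for a hash function h(i, j) -> {0, ..., K-1};
    # the bucket assignment is fixed across calls because the seed is fixed.
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, K, size=(in_dim, out_dim))
    W_virtual = real_weights[buckets]   # entries hashed to the same bucket share one weight
    return x @ W_virtual

# Usage: compress a 256 x 128 layer (32,768 virtual weights) into 1,024 parameters.
x = np.random.randn(8, 256)
theta = np.random.randn(1024) / np.sqrt(256)
print(hashed_layer_forward(x, theta, in_dim=256, out_dim=128).shape)  # (8, 128)
```

Training such a layer by back-propagation simply accumulates, for each shared parameter, the gradients of all virtual weight entries hashed to its bucket.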
