Predicting Parameters in Deep Learning

06/03/2013
by Misha Denil, et al.

We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature, it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the parameters of a network without any drop in accuracy.
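The core idea is that a feature's weights vary smoothly with their spatial positions, so a handful of observed "anchor" weights suffice to interpolate the rest. The sketch below illustrates this for a single 2D filter using kernel ridge regression over pixel coordinates; it is a minimal illustration, not the paper's exact setup, and the RBF kernel width sigma, the 25% anchor fraction, and the regularizer lam are assumptions chosen for the toy example.

import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Squared-exponential kernel over 2D pixel coordinates.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def predict_filter(w_full, frac=0.25, sigma=2.0, lam=1e-3, seed=0):
    # Observe a random subset of a filter's weights and predict the rest
    # with kernel ridge regression over the weights' (row, col) positions.
    h, w = w_full.shape
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    rng = np.random.default_rng(seed)
    obs = rng.choice(h * w, size=int(frac * h * w), replace=False)
    K_oo = rbf_kernel(coords[obs], coords[obs], sigma)
    K_ao = rbf_kernel(coords, coords[obs], sigma)
    alpha = np.linalg.solve(K_oo + lam * np.eye(len(obs)),
                            w_full.reshape(-1)[obs])
    return (K_ao @ alpha).reshape(h, w)

# Smooth toy filter: a Gaussian blob, reconstructed from 25% of its weights.
xs = np.linspace(-2, 2, 11)
f = np.exp(-(xs[:, None] ** 2 + xs[None, :] ** 2))
f_hat = predict_filter(f)
print("relative error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))

Because the toy filter is spatially smooth, the relative reconstruction error stays small even though three quarters of the weights are never observed, which is the redundancy the abstract refers to.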

Related research

10/11/2022 - Dataloader Parameter Tuner: An Automated Dataloader Parameter Tuner for Deep Learning Models
Deep learning has recently become one of the most compute/data-intensive...

03/07/2021 - Spectral Tensor Train Parameterization of Deep Learning Layers
We study low-rank parameterizations of weight matrices with embedded spe...

07/19/2017 - Sentence-level quality estimation by predicting HTER as a multi-component metric
This submission investigates alternative machine learning models for pre...

10/11/2022 - Deep learning model compression using network sensitivity and gradients
Deep learning model compression is an improving and important field for ...

12/07/2021 - A deep language model to predict metabolic network equilibria
We show that deep learning models, and especially architectures like the...

11/28/2018 - Predicting the Computational Cost of Deep Learning Models
Deep learning is rapidly becoming a go-to tool for many artificial intel...

05/10/2023 - Access-Redundancy Tradeoffs in Quantized Linear Computations
Linear real-valued computations over distributed datasets are common in ...
