
Efficient NTK using Dimensionality Reduction

10/10/2022
by Nir Ailon, et al.

Recently, the neural tangent kernel (NTK) has been used to explain the dynamics of parameter learning in neural networks in the large-width limit. Quantitative NTK analyses, however, prescribe network widths that are often impractical and incur high time and energy costs in both training and deployment. Using a matrix factorization technique, we show how to obtain guarantees similar to those of a prior analysis while reducing training and inference resource costs. Our result is especially significant when the data dimension of the input points is of the same order as the number of input points. More generally, our work suggests how to analyze large-width networks in which dense linear layers are replaced with a low-complexity factorization, thus reducing the heavy dependence on the large width.
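To make the factorization idea concrete, here is a minimal NumPy sketch of the general low-rank technique the abstract alludes to: a wide dense layer's weight matrix W (m x d) is replaced by a product U V of two thin matrices, so the first factor applied to the input acts as a dimensionality reduction. This is an illustrative assumption-laden sketch, not the paper's specific construction or its guarantees; all dimensions and the rank r below are arbitrary example values.

import numpy as np

rng = np.random.default_rng(0)

d, m, r = 512, 8192, 64  # input dim, (large) hidden width, factorization rank

# Dense wide layer: m*d parameters, O(m*d) multiply cost per input.
W = rng.normal(size=(m, d)) / np.sqrt(d)

# Factorized replacement W ~= U @ V: r*(m + d) parameters,
# O(r*(m + d)) multiply cost when computed right-to-left.
U = rng.normal(size=(m, r)) / np.sqrt(r)
V = rng.normal(size=(r, d)) / np.sqrt(d)

x = rng.normal(size=(d,))

dense_out = np.maximum(W @ x, 0.0)           # ReLU(W x)
factored_out = np.maximum(U @ (V @ x), 0.0)  # ReLU(U (V x)): V reduces x to r dims first

print(W.size, U.size + V.size)  # 4194304 vs. 557056 parameters

Note that the savings are largest exactly when both m and d are large, which matches the abstract's remark that the result matters most when the data dimension is of the same order as the number of input points.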


Related Research

10/24/2022 · Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization
Despite the outstanding performance of deep neural networks in different...

11/29/2022 · Infinite-width limit of deep linear neural networks
This paper studies the infinite-width limit of deep linear neural networ...

09/27/2019 · In-training Matrix Factorization for Parameter-frugal Neural Machine Translation
In this paper, we propose the use of in-training matrix factorization to...

02/01/2022 · Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization
Neural Tangent Kernel (NTK) is widely used to analyze overparametrized n...

05/10/2021 · Exploiting Elasticity in Tensor Ranks for Compressing Neural Networks
Elasticities in depth, width, kernel size and resolution have been explo...

06/06/2022 · Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime
We provide quantitative bounds measuring the L^2 difference in function ...

10/19/2021 · Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem
Given a target function f, how large must a neural network be in order t...