Efficient NTK using Dimensionality Reduction

by   Nir Ailon, et al.

Recently, the neural tangent kernel (NTK) has been used to explain the dynamics of learning the parameters of neural networks in the large-width limit. Quantitative NTK analyses, however, prescribe network widths that are often impractical and incur high time and energy costs in both training and deployment. Using a matrix factorization technique, we show how to obtain guarantees similar to those of a prior analysis while reducing training and inference resource costs. Our result becomes especially significant when the data dimension of the input points is of the same order as the number of input points. More generally, our work suggests how to analyze large-width networks in which dense linear layers are replaced with a low-complexity factorization, thus reducing the heavy dependence on the large width.
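To make the parameter-saving idea concrete, here is a minimal NumPy sketch of the generic technique the abstract describes: replacing a dense width-m linear layer with a rank-r factorization. The width m, rank r, and scaling choices below are illustrative assumptions, not the paper's actual construction or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 1024  # hidden width (illustrative choice)
r = 64    # factorization rank, r << m (illustrative choice)

# Dense hidden layer: m * m parameters.
W = rng.standard_normal((m, m)) / np.sqrt(m)

# Low-rank replacement W' = U @ V: only 2 * m * r parameters.
U = rng.standard_normal((m, r)) / np.sqrt(r)
V = rng.standard_normal((r, m)) / np.sqrt(m)

x = rng.standard_normal(m)

dense_out = W @ x           # O(m^2) multiply-adds
factored_out = U @ (V @ x)  # O(m * r) multiply-adds

print("dense params:   ", W.size)            # 1048576
print("factored params:", U.size + V.size)   # 131072
```

With these sizes the factored layer stores and multiplies through 2mr = 131,072 numbers instead of m² = 1,048,576, an 8x reduction, which is the kind of training- and inference-cost saving the abstract refers to.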


Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization

Despite the outstanding performance of deep neural networks in different...

Infinite-width limit of deep linear neural networks

This paper studies the infinite-width limit of deep linear neural networ...

In-training Matrix Factorization for Parameter-frugal Neural Machine Translation

In this paper, we propose the use of in-training matrix factorization to...

Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization

Neural Tangent Kernel (NTK) is widely used to analyze overparametrized n...

Exploiting Elasticity in Tensor Ranks for Compressing Neural Networks

Elasticities in depth, width, kernel size and resolution have been explo...

Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime

We provide quantitative bounds measuring the L^2 difference in function ...

Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem

Given a target function f, how large must a neural network be in order t...