
Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials

by Eshaan Nichani et al.

A recent goal in the theory of deep learning is to identify how neural networks can escape the "lazy training," or Neural Tangent Kernel (NTK), regime, in which the network is coupled with its first-order Taylor expansion at initialization. While the NTK is minimax optimal for learning dense polynomials (Ghorbani et al., 2021), it cannot learn features, and hence has poor sample complexity for learning many classes of functions, including sparse polynomials. Recent works have thus aimed to identify settings where gradient-based algorithms provably generalize better than the NTK. One such example is the "QuadNTK" approach of Bai and Lee (2020), which analyzes the second-order term in the Taylor expansion. Bai and Lee (2020) show that the second-order term can learn sparse polynomials efficiently; however, it sacrifices the ability to learn general dense polynomials. In this paper, we analyze how gradient descent on a two-layer neural network can escape the NTK regime by utilizing a spectral characterization of the NTK (Montanari and Zhong, 2020) and building on the QuadNTK approach. We first expand upon the spectral analysis to identify "good" directions in parameter space in which we can move without harming generalization. Next, we show that a wide two-layer neural network can jointly use the NTK and QuadNTK to fit target functions consisting of a dense low-degree term and a sparse high-degree term – something neither the NTK nor the QuadNTK can do on their own. Finally, we construct a regularizer which encourages our parameter vector to move in the "good" directions, and show that gradient descent on the regularized loss converges to a global minimizer which also has low test error. This yields an end-to-end convergence and generalization guarantee, with a provable sample complexity improvement over both the NTK and QuadNTK on their own.
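For intuition, the two regimes correspond to the first two terms of the Taylor expansion of the network around its initialization. A minimal sketch of this decomposition (the notation below is illustrative and not taken from the paper): writing the two-layer network as f(x; θ) and perturbing the parameters from initialization θ_0 by Δ,

\[
f(x;\theta_0+\Delta) \;\approx\; f(x;\theta_0)
\;+\; \underbrace{\langle \nabla_\theta f(x;\theta_0),\,\Delta\rangle}_{\text{linear term (NTK)}}
\;+\; \underbrace{\tfrac{1}{2}\,\Delta^{\top}\nabla^2_\theta f(x;\theta_0)\,\Delta}_{\text{quadratic term (QuadNTK)}}.
\]

The target functions in the abstract take the form f*(x) = p_low(x) + p_sparse(x), a dense low-degree polynomial plus a sparse high-degree polynomial; the linear (NTK) term is suited to fitting the former and the quadratic (QuadNTK) term to fitting the latter, which is why using the two jointly can beat either one alone.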




Neural Networks can Learn Representations with Gradient Descent

Significant theoretical work has established that in specific regimes, n...

Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks

Recent works have cast some light on the mystery of why deep nets fit an...

Learning One-hidden-layer neural networks via Provable Gradient Descent with Random Initialization

Although deep learning has shown its powerful performance in many applic...

Adversarial Robustness is at Odds with Lazy Training

Recent works show that random neural networks are vulnerable against adv...

Neural Networks Efficiently Learn Low-Dimensional Representations with SGD

We study the problem of training a two-layer neural network (NN) of arbi...

What Happens after SGD Reaches Zero Loss? –A Mathematical Framework

Understanding the implicit bias of Stochastic Gradient Descent (SGD) is ...