Neural Optimization Kernel: Towards Robust Deep Learning

06/11/2021
by Yueming Lyu, et al.

Recent studies show a close connection between neural networks (NNs) and kernel methods. However, most of these analyses (e.g., the NTK) focus on the influence of (infinite) width rather than depth, leaving a gap between theory and the practical network designs that benefit from depth. This paper first proposes a novel kernel family, the Neural Optimization Kernel (NOK). Our kernel is defined as the inner product between two T-step updated functionals in an RKHS w.r.t. a regularized optimization problem. Theoretically, we prove the monotonic descent property of our update rule for both convex and non-convex problems, and an O(1/T) convergence rate of our updates for convex problems. Moreover, we propose a data-dependent structured approximation of the NOK, which connects the training of deep NNs with kernel methods associated with the NOK. The resulting computational graph is a ResNet-type finite-width NN. Our structured approximation preserves the monotonic descent property and the O(1/T) convergence rate; that is, a T-layer NN performs T monotonic descent steps. Notably, we show that our T-layer structured NN with ReLU maintains an O(1/T) convergence rate w.r.t. a convex regularized problem, which explains the success of ReLU in training deep NNs from an architecture-optimization perspective. For unsupervised learning and the shared-parameter case, we show the equivalence between training the structured NN with GD and performing functional gradient descent in the RKHS associated with a fixed (data-dependent) NOK in the infinite-width regime. For finite NOKs, we prove generalization bounds. Remarkably, we show that overparameterized deep NNs (NOKs) can increase expressive power to reduce the empirical risk while simultaneously reducing the generalization bound. Extensive experiments verify the robustness of our structured NOK blocks.
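To make the layer-as-descent-step idea concrete, here is a minimal sketch (not the paper's exact NOK construction) of how a T-layer ReLU network can realize T monotone descent steps on a convex regularized problem: each layer applies one projected-gradient update for min_{z >= 0} 0.5*||x - D z||^2, where ReLU is exactly the Euclidean projection onto the nonnegative orthant. The dictionary D, step size eta, and dimensions below are hypothetical placeholders chosen for illustration.

    # Illustrative sketch (assumption: ReLU acts as the projection step of
    # projected gradient descent; this is one way a T-layer ReLU network can
    # perform T monotone descent steps with an O(1/T) rate on a convex problem).
    import numpy as np

    def relu(v):
        return np.maximum(v, 0.0)

    def structured_relu_net(x, D, T, eta=None):
        """Unroll T projected-gradient steps as a T-layer ReLU 'network'."""
        if eta is None:
            # 1/L step size, where L = spectral_norm(D)^2 is the gradient's
            # Lipschitz constant; this guarantees monotone descent.
            eta = 1.0 / np.linalg.norm(D, 2) ** 2
        z = np.zeros(D.shape[1])
        objectives = []
        for _ in range(T):                   # one "layer" per iteration
            grad = D.T @ (D @ z - x)         # gradient of the smooth term
            z = relu(z - eta * grad)         # ReLU = projection onto z >= 0
            objectives.append(0.5 * np.linalg.norm(x - D @ z) ** 2)
        return z, objectives

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 128))   # hypothetical dictionary
        x = rng.standard_normal(64)          # hypothetical input
        z, objs = structured_relu_net(x, D, T=20)
        # Monotonic descent: the objective never increases across layers.
        assert all(b <= a + 1e-9 for a, b in zip(objs, objs[1:]))
        print(objs[0], objs[-1])

Running the sketch shows the per-layer objective decreasing monotonically, mirroring the abstract's claim that a deeper (larger T) structured block corresponds to more descent steps; the paper's actual NOK blocks are data-dependent and trained, which this toy example does not capture.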


Related research

On the Equivalence between Neural Network and Support Vector Machine (11/11/2021)
Recent research shows that the dynamics of an infinitely wide neural net...

A priori generalization error for two-layer ReLU neural network through minimum norm solution (12/06/2019)
We focus on estimating a priori generalization error of two-layer ReLU n...

Weighted Neural Tangent Kernel: A Generalized and Improved Network-Induced Kernel (03/22/2021)
The Neural Tangent Kernel (NTK) has recently attracted intense study, as...

Training Over-parameterized Deep ResNet Is almost as Easy as Training a Two-layer Network (03/17/2019)
It has been proved that gradient descent converges linearly to the globa...

Neural Spectrum Alignment (10/19/2019)
Expressiveness of deep models was recently addressed via the connection ...

Extrapolation Towards Imaginary 0-Nearest Neighbour and Its Improved Convergence Rate (02/08/2020)
k-nearest neighbour (k-NN) is one of the simplest and most widely-used m...
