Vector-Valued Variation Spaces and Width Bounds for DNNs: Insights on Weight Decay Regularization

05/25/2023
by   Joseph Shenouda, et al.

Training deep neural networks (DNNs) via gradient descent to minimize a loss term plus the sum of squared weights corresponds to the common practice of training with weight decay. This paper provides new insights into this learning framework. We characterize the kinds of functions learned by training multi-output (vector-valued) ReLU neural networks with weight decay, extending previous characterizations that were limited to single-output (scalar-valued) networks. This characterization requires the definition of a new class of neural function spaces that we call vector-valued variation (VV) spaces. We prove, via a novel representer theorem, that neural networks (NNs) are optimal solutions to learning problems posed over VV spaces; in particular, solutions exist as vector-valued neural networks with widths bounded in terms of the number of training data. Next, via a novel connection to the multi-task lasso problem, we derive new and tighter bounds on the widths of homogeneous layers in DNNs, determined by the effective dimensions of the training-data embeddings into and out of those layers. This result sheds new light on the architectural requirements for DNNs. Finally, the connection to the multi-task lasso suggests a new approach to compressing pre-trained networks.
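As an illustrative sketch (not taken from the paper), the objective described above is simply a data-fit loss plus a squared-weight penalty. The minimal PyTorch snippet below, with hypothetical layer sizes and a made-up regularization strength, shows weight decay applied to a small multi-output ReLU network; note that PyTorch's weight_decay option penalizes all parameters, including biases, which may differ from the exact objective analyzed in the paper.

```python
# Minimal sketch: weight-decay training of a vector-valued ReLU network.
# All sizes and hyperparameters below are illustrative, not from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vector-valued regression data: 20 inputs in R^3, targets in R^2.
X = torch.randn(20, 3)
Y = torch.randn(20, 2)

# Single-hidden-layer ReLU network with a multi-output (vector-valued) head.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))

# weight_decay adds a squared-parameter penalty to the gradient, so the
# optimizer effectively minimizes MSE(model(X), Y) + lambda * ||parameters||^2.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()
```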


Related research

05/07/2021 · What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory
10/06/2022 · A Better Way to Decay: Proximal Gradient Training Algorithms for Neural Nets
09/18/2021 · Near-Minimax Optimal Estimation With Shallow ReLU Neural Networks
02/22/2022 · Explicit Regularization via Regularizer Mirror Descent
06/05/2016 · Bounds for Vector-Valued Function Estimation
06/10/2020 · Neural Networks, Ridge Splines, and TV Regularization in the Radon Domain
11/04/2019 · Persistency of Excitation for Robustness of Neural Networks
