On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces

05/22/2019
by Satoshi Hayakawa, et al.

Deep learning has been applied to a wide range of machine learning tasks and has often outperformed other common procedures such as kernel methods. To provide a better theoretical understanding of the reasons for its success, we analyze the performance of deep learning and other methods on a nonparametric regression problem with Gaussian noise. Whereas existing theoretical studies of deep learning have been based mainly on mathematical theories of well-known function classes such as Hölder and Besov classes, we focus on function classes with discontinuity and sparsity, which arise naturally in practice. To highlight the effectiveness of deep learning, we compare it with linear estimators, a representative class of shallow estimators. We show that the minimax risk of a linear estimator over the convex hull of a target function class does not differ from its minimax risk over the original class. This implies the suboptimality of linear methods over a simple but non-convex function class, on which deep learning can attain a nearly minimax-optimal rate. Beyond this extreme case, we consider function classes with sparse wavelet coefficients. On these classes, deep learning again attains the minimax rate up to logarithmic factors in the sample size, while linear methods remain suboptimal if the assumed sparsity is strong. We also point out that the parameter sharing of deep neural networks can substantially reduce the complexity of the model in our setting.
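For concreteness, the following is a minimal sketch of the standard setting behind these claims; the notation (f°, the class F, the risks R_n, and the linear-estimator form) is assumed here for illustration and is not quoted from the paper. Observations follow the nonparametric regression model

\[
  y_i = f^{\circ}(x_i) + \xi_i, \qquad \xi_i \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2), \qquad i = 1, \dots, n,
\]

where the unknown regression function f° belongs to a class \(\mathcal{F}\). The minimax risk over \(\mathcal{F}\), and its restriction to linear estimators (estimators of the form \(\hat{f} = \sum_{i=1}^{n} y_i \varphi_i\) with weight functions \(\varphi_i\) not depending on the responses), are

\[
  R_n(\mathcal{F}) = \inf_{\hat{f}} \sup_{f^{\circ} \in \mathcal{F}} \mathbb{E}\,\bigl\| \hat{f} - f^{\circ} \bigr\|_{L^2}^2,
  \qquad
  R_n^{\mathrm{lin}}(\mathcal{F}) = \inf_{\hat{f}\ \mathrm{linear}} \sup_{f^{\circ} \in \mathcal{F}} \mathbb{E}\,\bigl\| \hat{f} - f^{\circ} \bigr\|_{L^2}^2.
\]

In this notation, the convex-hull statement above reads \(R_n^{\mathrm{lin}}(\mathrm{conv}(\mathcal{F})) \asymp R_n^{\mathrm{lin}}(\mathcal{F})\): a linear method pays the price of the convex hull even when \(\mathcal{F}\) itself is much smaller, e.g. non-convex or sparse.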



Related research

research · 12/06/2020 · Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods
Establishing a theoretical analysis that explains why deep learning can ...

research · 08/12/2021 · Statistical Learning using Sparse Deep Neural Networks in Empirical Risk Minimization
We consider a sparse deep ReLU network (SDRN) estimator obtained from em...

research · 10/18/2018 · Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality
Deep learning has shown high performances in various types of tasks from...

research · 05/19/2020 · One Size Fits All: Can We Train One Denoiser for All Noise Levels?
When training an estimator such as a neural network for tasks like image...

research · 10/28/2019 · Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space
Deep learning has exhibited superior performance for various tasks, espe...

research · 05/07/2018 · DIRECT: Deep Discriminative Embedding for Clustering of LIGO Data
In this paper, benefiting from the strong ability of deep neural network...

research · 07/04/2022 · Minimax Optimal Deep Neural Network Classifiers Under Smooth Decision Boundary
Deep learning has gained huge empirical successes in large-scale classif...
