Realization of spatial sparseness by deep ReLU nets with massive data

12/16/2019
by Charles K. Chui et al.

The great success of deep learning poses urgent challenges for understanding its working mechanism and rationale. Depth, structure, and the massive size of the data are recognized as three key ingredients of deep learning. Most recent theoretical studies of deep learning focus on the necessity and advantages of the depth and structure of neural networks. In this paper, we aim at a rigorous verification of the importance of massive data in realizing the superior performance of deep learning. To approximate and learn spatially sparse and smooth functions, we establish a novel sampling theorem in learning theory that shows the necessity of massive data. We then prove that implementing classical empirical risk minimization on certain deep nets achieves the optimal learning rates derived in the sampling theorem. This perhaps explains why deep learning performs so well in the era of big data.
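
As a concrete illustration of the setting described above, the sketch below fits a deep ReLU net by empirical risk minimization to a spatially sparse target, i.e. a smooth function supported on a small region of [0, 1]^2, using a large synthetic sample. This is a minimal toy sketch, not the paper's construction: the bump target, the network depth and width, and all training hyperparameters are illustrative assumptions.

    # Minimal sketch (assumed setup, not the authors' construction):
    # ERM with a deep ReLU net on a spatially sparse, smooth target.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    d = 2  # input dimension (illustrative)

    def target(x):
        # Smooth bump supported on a ball of radius 0.2 around (0.5, 0.5):
        # nonzero only on a small region, exactly zero elsewhere
        # (this is what "spatial sparseness" means here).
        r2 = ((x - 0.5) ** 2).sum(dim=1)
        bump = torch.exp(-r2 / (0.04 - r2).clamp(min=1e-12))
        return torch.where(r2 < 0.04, bump, torch.zeros_like(r2))

    # "Massive data": a large i.i.d. sample from the uniform design.
    n = 50_000
    X = torch.rand(n, d)
    y = target(X) + 0.01 * torch.randn(n)  # noisy observations

    # Deep ReLU net; depth and width are illustrative choices.
    net = nn.Sequential(
        nn.Linear(d, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

    # Empirical risk minimization: minimize mean squared error on the sample,
    # here solved approximately by mini-batch gradient descent.
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for step in range(2_000):
        idx = torch.randint(0, n, (512,))
        loss = loss_fn(net(X[idx]).squeeze(-1), y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Held-out risk as a proxy for the generalization error.
    with torch.no_grad():
        Xt = torch.rand(10_000, d)
        print("test MSE:", loss_fn(net(Xt).squeeze(-1), target(Xt)).item())

On this toy problem the held-out error shrinks as the sample size n grows, which is the qualitative behavior that the paper's sampling theorem and learning rates quantify.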

Related research

04/01/2020  Depth Selection for Deep ReLU Nets in Feature Extraction and Generalization
Deep learning is recognized to be capable of discovering deep features f...

01/13/2020  Approximation smooth and sparse functions by deep neural networks without saturation
Constructing neural networks for function approximation is a classical a...

04/03/2019  Deep Neural Networks for Rotation-Invariance Approximation and Learning
Based on the tree architecture, the objective of this paper is to design...

04/28/2021  A Study of the Mathematics of Deep Learning
"Deep Learning"/"Deep Neural Nets" is a technological marvel that is now...

03/10/2018  Generalization and Expressivity for Deep Nets
Along with the rapid development of deep learning in practice, the theor...

02/15/2022  A Statistical Learning View of Simple Kriging
In the Big Data era, with the ubiquity of geolocation sensors in particu...

10/20/2022  Global Convergence of SGD On Two Layer Neural Nets
In this note we demonstrate provable convergence of SGD to the global mi...
