Statistical guarantees for sparse deep learning

12/11/2022
by Johannes Lederer, et al.

Neural networks are becoming increasingly popular in applications, but our mathematical understanding of their potential and limitations is still limited. In this paper, we further this understanding by developing statistical guarantees for sparse deep learning. In contrast to previous work, we consider different types of sparsity, such as few active connections, few active nodes, and other norm-based types of sparsity. Moreover, our theories cover important aspects that previous theories have neglected, such as multiple outputs, regularization, and ℓ2-loss. The guarantees have a mild dependence on network widths and depths, which means that they support the application of sparse but wide and deep networks from a statistical perspective. Some of the concepts and tools that we use in our derivations are uncommon in deep learning and, hence, might be of additional interest.
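
To make the sparsity types named in the abstract concrete, here is a minimal sketch, assuming a one-hidden-layer ReLU network: the ℓ2-loss plus either an ℓ1 penalty on individual weights (connection sparsity) or a group norm per hidden node (node sparsity). This is an illustration only, not the paper's exact estimator; the network shapes, data, and the regularization parameter lam are hypothetical.

```python
# Sketch of a regularized l2-loss objective with two sparsity types.
# All names, shapes, and values below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))   # 50 samples, 4 inputs
Y = rng.normal(size=(50, 2))   # 2 outputs (multiple outputs, as in the abstract)
W1 = rng.normal(size=(4, 8))   # input -> 8 hidden nodes
W2 = rng.normal(size=(8, 2))   # hidden -> output

def forward(X, W1, W2):
    """One hidden layer with ReLU activations."""
    return np.maximum(X @ W1, 0.0) @ W2

def connection_penalty(W1, W2):
    """l1 norm over all weights: encourages few active connections."""
    return np.abs(W1).sum() + np.abs(W2).sum()

def node_penalty(W1):
    """Group l2 norm per hidden node (one column of W1 per node):
    a zero column switches the node off, so this encourages few active nodes."""
    return np.linalg.norm(W1, axis=0).sum()

def objective(X, Y, W1, W2, lam=0.1, node=False):
    """l2-loss plus a sparsity-inducing penalty weighted by lam."""
    loss = np.sum((Y - forward(X, W1, W2)) ** 2)
    pen = node_penalty(W1) if node else connection_penalty(W1, W2)
    return loss + lam * pen

print(objective(X, Y, W1, W2))             # connection-sparse objective
print(objective(X, Y, W1, W2, node=True))  # node-sparse objective
```

Minimizing such an objective over the weights yields the kind of sparse estimator the guarantees are about; swapping the penalty changes which type of sparsity is promoted.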

Related research

Statistical Guarantees for Regularized Neural Networks (05/30/2020)
Neural networks have become standard tools in the analysis of data, but ...

Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks (05/09/2022)
Since statistical guarantees for neural networks are usually restricted ...

Reducing Computational and Statistical Complexity in Machine Learning Through Cardinality Sparsity (02/16/2023)
High-dimensional data has become ubiquitous across the sciences but caus...

Deep Learning Meets Sparse Regularization: A Signal Processing Perspective (01/23/2023)
Deep learning has been wildly successful in practice and most state-of-t...

Neural Computing (07/06/2021)
This chapter aims to provide next-level understanding of the problems of...

Layer Sparsity in Neural Networks (06/28/2020)
Sparsity has become popular in machine learning, because it can save com...

Sparsity by Redundancy: Solving L_1 with a Simple Reparametrization (10/03/2022)
We identify and prove a general principle: L_1 sparsity can be achieved ...
