Does a sparse ReLU network training problem always admit an optimum?

06/05/2023
by Quoc-Tung Le, et al.

Given a training set, a loss function, and a neural network architecture, it is often taken for granted that optimal network parameters exist, and a common practice is to apply available optimization algorithms to search for them. In this work, we show that the existence of an optimal solution is not always guaranteed, especially in the context of sparse ReLU neural networks. In particular, we first show that optimization problems involving deep networks with certain sparsity patterns do not always have optimal parameters, and that optimization algorithms may then diverge. Via a new topological relation between sparse ReLU neural networks and their linear counterparts, we derive – using existing tools from real algebraic geometry – an algorithm to verify that a given sparsity pattern suffers from this issue. Then, the existence of a global optimum is proved for every concrete optimization problem involving a shallow sparse ReLU neural network of output dimension one. Overall, the analysis is based on the investigation of two topological properties of the space of functions implementable as sparse ReLU neural networks: a best approximation property, and a closedness property, both in the uniform norm. This is studied both for (finite) domains corresponding to practical training on finite training sets, and for more general domains such as the unit cube. This allows us to provide conditions for the guaranteed existence of an optimum given a sparsity pattern. The results apply not only to several sparsity patterns proposed in recent works on network pruning/sparsification, but also to classical dense neural networks, including architectures not covered by existing results.
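
To make the non-attainment phenomenon concrete, here is a minimal illustration (a sketch of the general idea, not a construction taken from the paper, and using a linear rather than a ReLU network): with a fixed lower/upper-triangular sparsity pattern, products L·U of a lower-triangular and an upper-triangular 2x2 factor can approximate the swap matrix [[0,1],[1,0]] to arbitrary accuracy, yet no exact factorization with that support exists. The infimum of the squared error is therefore not attained, and any minimizing sequence of parameters must blow up. In the NumPy sketch below, the specific factors L, U and target P are chosen purely for illustration:

    import numpy as np

    # Target: the 2x2 swap (permutation) matrix. It admits no exact product L @ U
    # with L lower-triangular and U upper-triangular: (L@U)[0,0] = L[0,0]*U[0,0]
    # must be 0, but (L@U)[0,1] = L[0,0]*U[0,1] and (L@U)[1,0] = L[1,0]*U[0,0]
    # would then have to equal 1 simultaneously, which is impossible.
    P = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    for eps in [1.0, 1e-1, 1e-2, 1e-3]:
        # Feasible factors respecting the fixed triangular supports.
        L = np.array([[eps, 0.0],
                      [1.0, 1.0]])
        U = np.array([[1.0,  1.0 / eps],
                      [0.0, -1.0 / eps]])
        residual = np.linalg.norm(L @ U - P)            # equals eps here
        largest = max(np.abs(L).max(), np.abs(U).max()) # equals 1/eps here
        print(f"eps={eps:.0e}  ||LU - P||_F = {residual:.2e}  max |entry| = {largest:.1e}")

The residual shrinks like eps while the largest parameter grows like 1/eps: the loss can be driven toward its infimum only along a diverging sequence of parameters, mirroring the divergence described above for sparse ReLU training problems whose optimum does not exist.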
