The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality

05/18/2021
by Vincent Froese, et al.

Understanding the computational complexity of training simple neural networks with rectified linear units (ReLUs) has recently been a subject of intensive research. Closing gaps and complementing results from the literature, we present several results on the parameterized complexity of training two-layer ReLU networks with respect to various loss functions. After a brief discussion of other parameters, we focus on analyzing the influence of the dimension d of the training data on the computational complexity. We provide running time lower bounds in terms of W[1]-hardness for parameter d and prove that known brute-force strategies are essentially optimal (assuming the Exponential Time Hypothesis). In comparison with previous work, our results hold for a broad(er) range of loss functions, including ℓ^p-loss for all p∈[0,∞]. In particular, we extend a known polynomial-time algorithm for constant d and convex loss functions to a more general class of loss functions, matching our running time lower bounds also in these cases.
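To fix ideas, the training problem in question can be stated roughly as follows (this is a standard formulation of two-layer ReLU network training, given here only as an illustrative sketch; the paper's exact conventions may differ): given n data points (x_1, y_1), …, (x_n, y_n) ∈ ℝ^d × ℝ and a number k of hidden ReLU units, find weight vectors w_1, …, w_k ∈ ℝ^d, biases b_1, …, b_k ∈ ℝ, and output coefficients a_1, …, a_k ∈ ℝ minimizing the training error

Σ_{i=1}^{n} ℓ( Σ_{j=1}^{k} a_j · max(0, ⟨w_j, x_i⟩ + b_j) − y_i ),

where ℓ is the loss function, e.g., ℓ(z) = |z|^p for the ℓ^p-loss. The parameter studied here is the data dimension d.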
