Training Neural Networks is ∃ℝ-complete

02/19/2021
by Mikkel Abrahamsen, et al.

Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weights for the network such that the total error is below the threshold. We determine the algorithmic complexity of this fundamental problem precisely by showing that it is ∃ℝ-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, ∃ℝ is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP.
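For readers unfamiliar with ∃ℝ: an instance of the existential theory of the reals is a quantifier-free formula built from polynomial equations and inequalities with integer coefficients, and the question is whether some assignment of real values to the variables satisfies it. The toy instance below is our own illustrative example, not one taken from the paper:

    \exists x, y \in \mathbb{R} : \quad x^2 + y^2 = 1 \;\wedge\; x y > 0

which is satisfiable, for instance by x = y = 1/\sqrt{2}. The ∃ℝ-completeness result says that deciding whether the training error can be pushed below the given threshold is, up to polynomial-time reductions, exactly as hard as deciding such polynomial systems in general.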


