Reachability Is NP-Complete Even for the Simplest Neural Networks

08/30/2021
by   Marco Sälzer, et al.

We investigate the complexity of the reachability problem for (deep) neural networks: given a network and specifications on its inputs and outputs, is there a valid input for which the network computes a valid output? It was recently claimed that the problem is NP-complete for general neural networks and conjunctive input/output specifications. We repair some flaws in the original upper- and lower-bound proofs. We then show that NP-hardness already holds for restricted classes of simple specifications and for neural networks with just one layer, as well as for neural networks with minimal requirements on the occurring parameters.
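To make the decision problem concrete, here is a minimal sketch (not from the paper) of reachability for a one-layer ReLU network with conjunctive box specifications: the question is whether some input inside the input box is mapped into the output box. The network weights, the boxes, and the brute-force grid search below are hypothetical illustrations only; since the problem is NP-complete, no such naive search scales to real instances.

```python
# Illustrative sketch of the reachability question for a one-layer ReLU
# network y = ReLU(W x + b) with conjunctive (box) input/output specs.
# All concrete numbers here are made up for illustration.

def relu(x):
    return [max(0.0, v) for v in x]

def forward(W, b, x):
    # One layer: y = ReLU(W x + b)
    return relu([sum(w * xi for w, xi in zip(row, x)) + bi
                 for row, bi in zip(W, b)])

def in_box(v, box):
    # Conjunctive specification: every coordinate lies in its interval.
    return all(lo <= vi <= hi for vi, (lo, hi) in zip(v, box))

def reachable(W, b, in_spec, out_spec, steps=20):
    # Naive grid search over the 2-D input box; a witness input that
    # satisfies the output spec proves reachability. Exponential blow-up
    # in the input dimension is exactly why this does not scale.
    (lo0, hi0), (lo1, hi1) = in_spec
    for i in range(steps + 1):
        for j in range(steps + 1):
            x = [lo0 + (hi0 - lo0) * i / steps,
                 lo1 + (hi1 - lo1) * j / steps]
            if in_box(forward(W, b, x), out_spec):
                return x  # witness found
    return None  # no witness on this grid (inconclusive in general)

# Hypothetical instance: a positive case where a witness exists.
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -0.25]
witness = reachable(W, b,
                    in_spec=[(0.0, 1.0), (0.0, 1.0)],
                    out_spec=[(0.4, 1.0), (0.0, 0.3)])
```

Note that a complete decision procedure would branch over the active/inactive phases of each ReLU and solve a linear program per phase pattern, which is where the exponential worst case of the NP-hardness result shows up.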


