Reachability Is NP-Complete Even for the Simplest Neural Networks

by Marco Sälzer et al.

We investigate the complexity of the reachability problem for (deep) neural networks: does the network compute a valid output for some valid input? It was recently claimed that the problem is NP-complete for general neural networks and conjunctive input/output specifications. We repair some flaws in the original upper and lower bound proofs. We then show that NP-hardness already holds for restricted classes of simple specifications and for neural networks with just one layer, as well as for neural networks with minimal requirements on the occurring parameters.
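NP membership of this reachability problem is easy to see: a candidate input is a certificate, and verifying it takes one forward pass plus a check of the conjunctive input/output specification, both polynomial time. A minimal sketch, assuming a ReLU network given as weight/bias layers and specifications given as interval boxes (all names here are illustrative, not from the paper):

```python
def relu(v):
    # Component-wise ReLU activation.
    return [max(0.0, a) for a in v]

def forward(layers, x):
    # layers: list of (W, b) pairs; ReLU is applied after every layer.
    for W, b in layers:
        x = [sum(w * xj for w, xj in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        x = relu(x)
    return x

def satisfies(box, v):
    # Conjunctive specification: one interval constraint per dimension.
    return all(lo <= a <= hi for (lo, hi), a in zip(box, v))

def verify_certificate(layers, in_box, out_box, x):
    # Polynomial-time verification: x is valid and its output is valid.
    return satisfies(in_box, x) and satisfies(out_box, forward(layers, x))

# One-layer example network computing y = ReLU(x1 - x2).
layers = [([[1.0, -1.0]], [0.0])]
in_box = [(0.0, 1.0), (0.0, 1.0)]
out_box = [(0.5, 1.0)]
print(verify_certificate(layers, in_box, out_box, [1.0, 0.0]))  # True
print(verify_certificate(layers, in_box, out_box, [0.0, 1.0]))  # False
```

The hard direction is finding such a certificate: deciding whether *any* valid input exists is the NP-hard part, even for the one-layer networks considered here.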


