Exploiting Verified Neural Networks via Floating Point Numerical Error

03/06/2020
by Kai Jia, et al.

We show how to construct adversarial examples for neural networks with exactly verified robustness against ℓ_∞-bounded input perturbations by exploiting floating point error. We argue that any exact verification of real-valued neural networks must accurately model the implementation details of any floating point arithmetic used during inference or verification.
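
To illustrate the kind of discrepancy at issue, here is a minimal, self-contained sketch in Python. It is not the paper's networks or attack; the one-neuron model, weights, bias, and input below are hypothetical constants chosen only to make rounding visible. It compares a linear classifier's logit evaluated exactly over the rationals, the way an idealized real-valued verifier would model it, against the same computation performed in float32, the way a deployed implementation would run it; near the decision boundary the two can disagree on the predicted class.

# Hypothetical one-neuron classifier: predict class 1 iff w.x + b > 0.
# All constants are illustrative; they are chosen so that the float32
# rounding error is larger than the true decision margin.
from fractions import Fraction
import numpy as np

w = np.array([0.1, 0.2, -0.3], dtype=np.float32)  # stored float32 weights
b = np.float32(1e-8)                               # stored float32 bias
x = np.array([3.0, 3.0, 3.0], dtype=np.float32)    # candidate input

# Exact evaluation over the rationals of the *stored* float32 parameters,
# i.e. the real-valued network an idealized exact verifier reasons about.
exact = sum(Fraction(float(wi)) * Fraction(float(xi)) for wi, xi in zip(w, x))
exact += Fraction(float(b))

# Actual float32 inference: every multiply and add rounds to float32.
acc = np.float32(0.0)
for wi, xi in zip(w, x):
    acc = np.float32(acc + wi * xi)
logit = np.float32(acc + b)

print("exact logit  :", float(exact))   # about -1.2e-08  -> class 0
print("float32 logit:", float(logit))   # about +1.0e-08  -> class 1
print("same class?  :", (exact > 0) == (logit > 0))

A verifier that reasons over the reals would conclude this logit is negative, yet the float32 implementation reports a positive logit for the very same input. The size and sign of such discrepancies also depend on implementation details such as accumulation order and fused multiply-add, which is why exact verification must model the floating point arithmetic actually used.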

Related research

01/21/2021 · Deductive Verification of Floating-Point Java Programs in KeY
Deductive verification has been successful in verifying interesting prop...

05/20/2022 · Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness
Adversarial examples pose a security risk as they can alter a classifier...

05/07/2020 · Efficient Exact Verification of Binarized Neural Networks
Concerned with the reliability of neural networks, researchers have deve...

07/07/2017 · A Verified Certificate Checker for Floating-Point Error Bounds
Being able to soundly estimate roundoff errors in floating-point computa...

08/18/2021 · Verifying Low-dimensional Input Neural Networks via Input Quantization
Deep neural networks are an attractive tool for compressing the control ...

03/01/2022 · A Domain-Theoretic Framework for Robustness Analysis of Neural Networks
We present a domain-theoretic framework for validated robustness analysi...

01/07/2020 · Automatic generation and verification of test-stable floating-point code
Test instability in a floating-point program occurs when the control flo...
