Formal Verification of CNN-based Perception Systems

11/28/2018
by Panagiotis Kouvaros, et al.

We address the problem of verifying perception systems implemented as convolutional neural networks (CNNs). We define a notion of local robustness based on affine and photometric transformations, and we show that this notion cannot be captured by previously employed notions of robustness. The proposed method builds on reachability analysis for feed-forward neural networks and relies on MILP encodings of both the CNNs and the transformations in question. We present an implementation and discuss the experimental results obtained for a CNN trained on the MNIST data set.
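MILP encodings of ReLU networks, of the kind the abstract refers to, typically model each unstable ReLU neuron exactly with one binary variable and precomputed pre-activation bounds. A minimal sketch of that standard big-M encoding is below; the bounds `l`, `u` and the checker function are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Sketch of the standard big-M MILP encoding of a single ReLU neuron
# y = max(0, yhat), as used in MILP-based reachability analysis of
# feed-forward networks. Assumes precomputed pre-activation bounds
# l <= yhat <= u with l < 0 < u (a stably active or inactive neuron
# needs no binary variable and is encoded purely linearly).

def relu_bigm_holds(yhat, y, delta, l, u):
    """Check whether (yhat, y, delta) satisfies the exact encoding:
        y >= yhat
        y >= 0
        y <= yhat - l * (1 - delta)
        y <= u * delta
        delta in {0, 1}
    A MILP solver searches over these constraints; here we only
    check a candidate assignment against them.
    """
    eps = 1e-9  # tolerance for floating-point comparisons
    return (delta in (0, 1)
            and y >= yhat - eps
            and y >= -eps
            and y <= yhat - l * (1 - delta) + eps
            and y <= u * delta + eps)

def relu(yhat):
    return max(0.0, yhat)

# The encoding is exact on [l, u]: for every yhat there is a delta
# making the constraints hold with y = relu(yhat), and no other y works.
l, u = -3.0, 2.0
for yhat in [-3.0, -1.5, 0.0, 0.7, 2.0]:
    delta = 1 if yhat > 0 else 0
    assert relu_bigm_holds(yhat, relu(yhat), delta, l, u)
```

Photometric transformations such as a brightness shift compose with this naturally: a perturbed input `x' = x + beta` with `beta` bounded in an interval is itself a linear constraint, so the transformation and the network can live in one MILP.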

Related research

- 06/22/2017: An approach to reachability analysis for feed-forward ReLU neural networks
  We study the reachability problem for systems implemented as feed-forwar...

- 03/15/2018: Studying Invariances of Trained Convolutional Neural Networks
  Convolutional Neural Networks (CNNs) define an exceptionally powerful cl...

- 05/06/2018: Reachability Analysis of Deep Neural Networks with Provable Guarantees
  Verifying correctness of deep neural networks (DNNs) is challenging. We ...

- 03/25/2023: Verifying Properties of Tsetlin Machines
  Tsetlin Machines (TsMs) are a promising and interpretable machine learni...

- 01/30/2021: Enacted Visual Perception: A Computational Model based on Piaget Equilibrium
  In Maurice Merleau-Ponty's phenomenology of perception, analysis of perc...

- 04/26/2019: Robustness Verification of Support Vector Machines
  We study the problem of formally verifying the robustness to adversarial...

- 10/16/2020: Formal Verification of Robustness and Resilience of Learning-Enabled State Estimation Systems for Robotics
  This paper presents a formal verification guided approach for a principl...
