Work In Progress: Safety and Robustness Verification of Autoencoder-Based Regression Models using the NNV Tool

07/14/2022
by Neelanjana Pal, et al.

This work-in-progress paper introduces robustness verification for autoencoder-based regression neural network (NN) models, following state-of-the-art approaches for robustness verification of image classification NNs. Despite ongoing progress in developing safety and robustness verification methods for various deep neural networks (DNNs), robustness checking of autoencoder models has not yet been considered. We explore this open research space and bridge the gap by extending existing DNN robustness analysis methods to autoencoder networks. While autoencoder-based classification models behave much like image classification NNs, the functionality of regression models is distinctly different. We introduce two robustness evaluation metrics for autoencoder-based regression models: percentage robustness and un-robustness grade. We also modify the existing ImageStar approach, adjusting its variables to handle the specific input types of regression networks. The approach is implemented as an extension of NNV, then applied and evaluated on a dataset, with a case study experiment using the same dataset. To the authors' knowledge, this work-in-progress paper is the first to demonstrate reachability analysis of autoencoder-based NNs.
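The abstract names a "percentage robustness" metric but does not define it here. A minimal sketch of one plausible reading, assuming the metric is the fraction of test samples whose perturbed outputs are verified to stay within a tolerance of the nominal regression output (the per-sample verification criterion and the function name below are illustrative assumptions, not the paper's definition):

```python
def percentage_robustness(verified_flags):
    """Assumed metric: percent of samples verified robust.

    verified_flags[i] is True if the reachability analysis proved
    sample i robust under the chosen perturbation bound.
    """
    if not verified_flags:
        return 0.0
    return 100.0 * sum(verified_flags) / len(verified_flags)

# Toy example: 8 of 10 samples verified robust.
flags = [True] * 8 + [False] * 2
print(percentage_robustness(flags))  # → 80.0
```

Under this reading, a higher percentage indicates that more of the test set survives the specified input perturbation with acceptably small output deviation.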


