DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks

10/02/2017
by   Divya Gopinath, et al.

Deep neural networks have become widely used, obtaining remarkable results in domains such as computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, and bioinformatics, where they have produced results comparable to human experts. However, these networks can be easily fooled by adversarial perturbations: minimal changes to correctly classified inputs that cause the network to misclassify them. This phenomenon represents a concern for both safety and security, but it is currently unclear how to measure a network's robustness against such perturbations. Existing techniques are limited to checking robustness around a few individual input points, providing only very limited guarantees. We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations. The approach is data-guided, relying on clustering to identify well-defined geometric regions as candidate safe regions. We then utilize verification techniques to confirm that these regions are safe or to provide counter-examples showing that they are not safe. We also introduce the notion of targeted robustness which, for a given target label and region, ensures that the network does not map any input in the region to the target label. We evaluated our technique on the MNIST dataset and on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). For these networks, our approach identified multiple regions which were completely safe as well as some which were only safe for specific labels. It also discovered several adversarial perturbations of interest.
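The abstract describes a data-guided step that clusters correctly classified inputs and proposes geometric candidate safe regions, which are then handed to a verifier. The following is a minimal sketch of that clustering step only, assuming k-means clustering per label and ball-shaped regions (centroid plus radius); the function name `propose_regions` and all parameters are illustrative, not taken from the paper, and the verification step is only described in a comment.

```python
# Sketch: cluster same-label inputs and propose candidate safe regions
# as (label, centroid, radius) balls. Illustrative only; the actual
# clustering strategy and region geometry may differ from the paper's.
import numpy as np
from sklearn.cluster import KMeans


def propose_regions(inputs, labels, n_clusters=10):
    """Group correctly classified inputs by label, cluster each group,
    and return candidate regions as (label, centroid, radius) triples."""
    regions = []
    for label in np.unique(labels):
        points = inputs[labels == label]
        k = min(n_clusters, len(points))
        km = KMeans(n_clusters=k, n_init=10).fit(points)
        for c in range(k):
            members = points[km.labels_ == c]
            centroid = km.cluster_centers_[c]
            # Radius: distance from the centroid to its farthest member,
            # so the ball covers every clustered input point.
            radius = float(np.max(np.linalg.norm(members - centroid, axis=1)))
            regions.append((label, centroid, radius))
    return regions
```

Each candidate region would then be checked by a verification tool, which either proves that every input inside the region keeps the cluster's label (a safe region, or a targeted-robust region for a specific target label) or returns a counter-example, i.e. an adversarial perturbation within the region.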
