Towards Verifying Robustness of Neural Networks Against Semantic Perturbations

12/19/2019
by Jeet Mohapatra, et al.

Verifying the robustness of neural networks under a specified threat model is a fundamental yet challenging task. Current verification methods mainly focus on the L_p-norm-ball threat model on input instances, so robustness verification against semantic adversarial attacks that induce large L_p-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose Semantify-NN, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks. By simply inserting our proposed semantic perturbation layers (SP-layers) before the input layer of any given model, Semantify-NN remains model-agnostic, and any L_p-norm-ball-based verification tool can then be used to verify the model's robustness against semantic perturbations. We illustrate the principles of designing SP-layers and provide examples of semantic perturbations for image classification in the spaces of hue, saturation, lightness, brightness, contrast, and rotation. Experimental results on various network architectures and datasets demonstrate the superior verification performance of Semantify-NN over L_p-norm-based verification frameworks that naively convert semantic perturbations to L_p-norm balls. To the best of our knowledge, Semantify-NN is the first framework to support robustness verification against a wide range of semantic perturbations.
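To make the SP-layer idea concrete, here is a minimal sketch (assuming PyTorch; `BrightnessSPLayer` and `semantify` are hypothetical names for illustration, not the authors' implementation). It re-expresses the brightness perturbation from the abstract as a layer whose input is the one-dimensional semantic parameter, so an ordinary L_p-norm-ball verifier applied to that parameter certifies the original model against brightness shifts.

```python
# Minimal sketch of the SP-layer idea, not the authors' code. A brightness
# perturbation is folded into a network layer whose *input* is the scalar
# semantic parameter delta; an L_p-norm ball on delta (e.g. |delta| <= eps)
# can then be handed to any existing L_p-norm-ball verifier.
import torch
import torch.nn as nn

class BrightnessSPLayer(nn.Module):
    """Maps a semantic parameter delta to the perturbed image x + delta."""
    def __init__(self, x: torch.Tensor):
        super().__init__()
        # The clean image is stored inside the layer as a fixed constant.
        self.register_buffer("x", x)  # shape (1, C, H, W)

    def forward(self, delta: torch.Tensor) -> torch.Tensor:
        # delta has shape (batch, 1); broadcasting adds it to every pixel.
        return self.x + delta.view(-1, 1, 1, 1)

def semantify(model: nn.Module, x: torch.Tensor) -> nn.Module:
    # Prepending the SP-layer yields a network whose input space is the
    # semantic parameter itself; verifying it on |delta| <= eps certifies
    # the original model against brightness shifts of magnitude eps.
    return nn.Sequential(BrightnessSPLayer(x), model)
```

In this sketch the verifier treats delta like any other network input, which is why off-the-shelf L_p-norm-ball tools apply unchanged; richer perturbations such as hue or rotation would need correspondingly richer SP-layers, as the paper describes.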

