RoMA: a Method for Neural Network Robustness Measurement and Assessment

10/21/2021
by Natan Levy, et al.

Neural network models have become the leading solution for a large variety of tasks, such as classification, language processing, protein folding, and others. However, their reliability is heavily plagued by adversarial inputs: small input perturbations that cause the model to produce erroneous outputs. Adversarial inputs can occur naturally when the system's environment behaves randomly, even in the absence of a malicious adversary, and are a severe cause for concern when attempting to deploy neural networks within critical systems. In this paper, we present a new statistical method, called Robustness Measurement and Assessment (RoMA), which can measure the expected robustness of a neural network model. Specifically, RoMA determines the probability that a random input perturbation might cause misclassification. The method allows us to provide formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. Our approach can be applied to large-scale, black-box neural networks, which is a significant advantage compared to recently proposed verification methods. We apply our approach in two ways: comparing the robustness of different models, and measuring how a model's robustness is affected by the magnitude of input perturbation. One interesting insight obtained through this work is that, in a classification network, different output labels can exhibit very different robustness levels. We term this phenomenon categorial robustness. Our ability to perform risk and robustness assessments on a categorial basis opens the door to risk mitigation, which may prove to be a significant step towards neural network certification in safety-critical applications.
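The abstract leaves the statistical machinery to the paper itself, but the core black-box idea can be illustrated with a simple Monte-Carlo estimate: sample random perturbations of bounded magnitude around an input, query the model on each, and bound the misclassification probability. The sketch below is only an illustration in that spirit, not the authors' RoMA algorithm; `model_predict`, the toy linear model, the uniform L-infinity noise, and the Clopper-Pearson bound are all illustrative choices of mine, not taken from the paper.

```python
"""A minimal black-box sketch of statistical robustness estimation.

Everything here (the toy model, the noise model, the confidence bound)
is a hypothetical stand-in chosen for illustration, not RoMA itself.
"""

import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: a fixed random linear model over
# 10 classes. Any callable mapping an input to a label would do here.
W = rng.normal(size=(10, 784))


def model_predict(x: np.ndarray) -> int:
    """Return the predicted label for input x (black-box access only)."""
    return int(np.argmax(W @ x))


def misclassification_rate(x: np.ndarray, epsilon: float,
                           n_samples: int = 10_000,
                           alpha: float = 0.01):
    """Estimate P[label flips under a random perturbation of size <= epsilon].

    Returns the point estimate and a one-sided (1 - alpha) Clopper-Pearson
    upper confidence bound on the flip probability.
    """
    clean_label = model_predict(x)
    flips = 0
    for _ in range(n_samples):
        # Uniform L-infinity noise; RoMA's perturbation model may differ.
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if model_predict(x + delta) != clean_label:
            flips += 1
    p_hat = flips / n_samples
    # Exact binomial (Clopper-Pearson) upper bound on the true probability.
    if flips < n_samples:
        upper = float(beta.ppf(1 - alpha, flips + 1, n_samples - flips))
    else:
        upper = 1.0
    return p_hat, upper


if __name__ == "__main__":
    x = rng.normal(size=784)
    p_hat, upper = misclassification_rate(x, epsilon=0.05)
    print(f"estimated flip probability: {p_hat:.4f} "
          f"(99% upper bound: {upper:.4f})")
```

Running this per output label, rather than aggregated over all inputs, would give a crude analogue of the per-category ("categorial") robustness assessment the abstract describes.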


