RoMA: a Method for Neural Network Robustness Measurement and Assessment

by Natan Levy et al.

Neural network models have become the leading solution for a large variety of tasks, such as classification, language processing, protein folding, and others. However, their reliability is heavily plagued by adversarial inputs: small input perturbations that cause the model to produce erroneous outputs. Adversarial inputs can occur naturally when the system's environment behaves randomly, even in the absence of a malicious adversary, and are a severe cause for concern when attempting to deploy neural networks within critical systems. In this paper, we present a new statistical method, called Robustness Measurement and Assessment (RoMA), which can measure the expected robustness of a neural network model. Specifically, RoMA determines the probability that a random input perturbation might cause misclassification. The method allows us to provide formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. Our approach can be applied to large-scale, black-box neural networks, which is a significant advantage compared to recently proposed verification methods. We apply our approach in two ways: comparing the robustness of different models, and measuring how a model's robustness is affected by the magnitude of input perturbation. One interesting insight obtained through this work is that, in a classification network, different output labels can exhibit very different robustness levels. We term this phenomenon categorial robustness. Our ability to perform risk and robustness assessments on a categorial basis opens the door to risk mitigation, which may prove to be a significant step towards neural network certification in safety-critical applications.
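The core idea of measuring expected robustness — estimating the probability that a random input perturbation flips the model's prediction — can be illustrated with a simple Monte Carlo sketch. This is not the authors' exact RoMA procedure (which involves further statistical machinery); the toy linear `model`, `estimate_robustness` function, and sampling scheme below are illustrative assumptions:

```python
import numpy as np

def estimate_robustness(model, x, epsilon, n_samples=1000, rng=None):
    """Estimate the probability that a random perturbation of magnitude
    up to epsilon (per coordinate) changes the model's predicted label.
    Illustrative Monte Carlo sketch, not the exact RoMA algorithm."""
    rng = np.random.default_rng(rng)
    base_label = np.argmax(model(x))
    flips = 0
    for _ in range(n_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if np.argmax(model(x + delta)) != base_label:
            flips += 1
    return flips / n_samples

# Toy black-box "model": a linear two-class classifier over 2 features.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def model(x):
    return W @ x

# A confidently classified input: no small perturbation flips it.
x_far = np.array([1.0, 0.0])
p_far = estimate_robustness(model, x_far, epsilon=0.1, n_samples=500, rng=0)

# An input near the decision boundary: random noise flips it often.
x_near = np.array([0.05, 0.0])
p_near = estimate_robustness(model, x_near, epsilon=0.1, n_samples=500, rng=0)
```

Because the method only queries the model's outputs, it applies to large-scale black-box networks exactly as the abstract claims; the same loop, run per output label, would also expose the differing categorial robustness levels mentioned above.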


