PRoA: A Probabilistic Robustness Assessment against Functional Perturbations

07/05/2022
by Tianle Zhang et al.

In safety-critical deep learning applications, robustness measurement is a vital pre-deployment phase. However, existing robustness verification methods are not sufficiently practical for deploying machine learning systems in the real world. On the one hand, these methods attempt to claim that no perturbation can “fool” a deep neural network (DNN), which may be too stringent in practice. On the other hand, existing works rigorously consider only L_p-bounded additive perturbations in pixel space, although perturbations such as colour shifting and geometric transformations occur far more frequently in the real world. From this practical standpoint, we present a novel and general probabilistic robustness assessment method (PRoA), based on adaptive concentration inequalities, which measures the robustness of deep learning models against functional perturbations. PRoA provides a statistical guarantee on the probabilistic robustness of a model, i.e., the probability of failure encountered by the trained model after deployment. Our experiments demonstrate the effectiveness and flexibility of PRoA in evaluating probabilistic robustness against a broad range of functional perturbations, and PRoA scales well to various large-scale deep neural networks compared to existing state-of-the-art baselines. For reproducibility, we release our tool on GitHub: https://github.com/TrustAI/PRoA.
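To make the idea concrete, the sketch below illustrates one way such an assessment can work: failure indicators under randomly sampled functional perturbations are accumulated sequentially, and an anytime-valid (adaptive concentration) confidence bound is used to stop as soon as the failure probability is certified to lie below or above a tolerated threshold. This is a minimal sketch, not the authors' implementation: the callables `model_is_correct` and `sample_perturbation` are hypothetical, and the confidence radius follows one common adaptive Hoeffding-style form (after Zhao et al., 2016); the exact inequality used by PRoA may differ.

```python
import math

def adaptive_radius(n, delta):
    # Anytime-valid Hoeffding-style confidence radius for a Bernoulli mean,
    # one common adaptive concentration form (after Zhao et al., 2016).
    # ASSUMPTION: PRoA's exact bound may use different constants.
    return math.sqrt((0.6 * math.log(math.log(n) / math.log(1.1) + 1)
                      + math.log(24.0 / delta) / 1.8) / n)

def assess_probabilistic_robustness(model_is_correct, sample_perturbation,
                                    x, y, threshold=0.01, delta=0.05,
                                    n_min=100, n_max=100_000):
    # Sequentially estimate the model's failure probability on input (x, y)
    # under random functional perturbations, stopping once the adaptive
    # confidence interval separates from the tolerated threshold.
    # HYPOTHETICAL callables: `model_is_correct(image, label)` returns True
    # on a correct prediction; `sample_perturbation()` draws a random
    # perturbation function (e.g. a rotation or colour shift).
    failures = 0
    for n in range(1, n_max + 1):
        f = sample_perturbation()            # random functional perturbation
        failures += 0 if model_is_correct(f(x), y) else 1
        if n < n_min:
            continue
        p_hat = failures / n                 # empirical failure probability
        eps = adaptive_radius(n, delta)
        if p_hat + eps <= threshold:
            return "robust", p_hat, eps      # failure prob. <= threshold, w.p. >= 1 - delta
        if p_hat - eps >= threshold:
            return "not robust", p_hat, eps  # failure prob. >= threshold, w.p. >= 1 - delta
    return "undetermined", failures / n_max, adaptive_radius(n_max, delta)
```

Because the radius is valid simultaneously at every sample size, the procedure may peek at the running estimate after each sample and still retain its 1 - delta guarantee, which is what lets it terminate early on clearly robust or clearly fragile inputs.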

Related research

09/22/2021 · CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
In safety-critical machine learning applications, it is crucial to defen...

01/28/2022 · REET: Robustness Evaluation and Enhancement Toolbox for Computational Pathology
Motivation: Digitization of pathology laboratories through digital slide...

07/22/2022 · Training Certifiably Robust Neural Networks Against Semantic Perturbations
Semantic image perturbations, such as scaling and rotation, have been sh...

07/03/2018 · Adversarial Robustness Toolbox v0.2.2
Adversarial examples have become an indisputable threat to the security ...

10/21/2021 · RoMA: a Method for Neural Network Robustness Measurement and Assessment
Neural network models have become the leading solution for a large varie...

06/14/2022 · Adversarial Vulnerability of Randomized Ensembles
Despite the tremendous success of deep neural networks across various ta...

01/13/2021 · Random Shadows and Highlights: A new data augmentation method for extreme lighting conditions
In this paper, we propose a new data augmentation method, Random Shadows...
