Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation

03/02/2021
by Wei Dai, et al.

This paper adds to the fundamental body of work on benchmarking the robustness of deep learning (DL) classifiers. We introduce a new benchmarking methodology for evaluating the robustness of DL classifiers. We also introduce a new four-quadrant statistical visualization tool, covering minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking the robustness of DL classifiers. To measure the robustness of DL classifiers, we created 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting the experimental results, we first report that using two-factor perturbed images improves both the robustness and the accuracy of DL classifiers. The two-factor perturbations include (1) two digital perturbations (salt-and-pepper noise and Gaussian noise) applied in both orders, and (2) one digital perturbation (salt-and-pepper noise) and one geometric perturbation (rotation) applied in both orders. All source code, related image sets, preliminary data, and figures are shared on GitHub to support future academic research and industry projects. The web resources are located at https://github.com/caperock/robustai
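For readers who want a quick feel for the pipeline before visiting the repository, the sketch below is a minimal illustration, not the authors' released code: function names, noise levels, and the rotation angle are assumptions chosen for readability. It applies the two perturbation pairs in both orders and computes the four statistics shown in the quadrant visualization (minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation).

# Minimal sketch (illustrative only): two-factor perturbation of an image
# in both orders, plus the four summary statistics over per-set accuracies.
import numpy as np
from skimage.util import random_noise
from scipy.ndimage import rotate

def salt_pepper(img, amount=0.05):
    """Apply salt-and-pepper noise to an image with values in [0, 1]."""
    return random_noise(img, mode="s&p", amount=amount)

def gaussian(img, var=0.01):
    """Apply additive Gaussian noise to an image with values in [0, 1]."""
    return random_noise(img, mode="gaussian", var=var)

def rotate_img(img, angle=15):
    """Rotate the image by a fixed angle (degrees), keeping its shape."""
    return rotate(img, angle=angle, reshape=False, mode="nearest")

def two_factor_variants(img):
    """Build two-factor perturbed variants, applying each pair in both orders."""
    return {
        "sp_then_gaussian": gaussian(salt_pepper(img)),
        "gaussian_then_sp": salt_pepper(gaussian(img)),
        "sp_then_rotation": rotate_img(salt_pepper(img)),
        "rotation_then_sp": salt_pepper(rotate_img(img)),
    }

def four_quadrant_stats(accuracies):
    """Min, max, mean accuracy and coefficient of variation across benchmark sets."""
    acc = np.asarray(accuracies, dtype=float)
    return {
        "min_accuracy": acc.min(),
        "max_accuracy": acc.max(),
        "mean_accuracy": acc.mean(),
        "coefficient_of_variation": acc.std(ddof=1) / acc.mean(),
    }

if __name__ == "__main__":
    image = np.random.rand(64, 64)           # stand-in for a benchmark image
    print(list(two_factor_variants(image)))  # the four two-factor sets
    print(four_quadrant_stats([0.91, 0.87, 0.78, 0.83]))

Applying each perturbation pair in both orders matters because the operations do not commute (for example, rotating after adding salt-and-pepper noise smears the noisy pixels through interpolation); the spread in per-set accuracies that results is exactly what the coefficient of variation in the quadrant view summarizes.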

