RoBIC: A benchmark suite for assessing classifiers' robustness

02/10/2021
by   Thibault Maho, et al.

Many defenses have emerged alongside the development of adversarial attacks, and models must be evaluated against them objectively. This paper tackles this concern systematically by proposing a new parameter-free benchmark we coin RoBIC. RoBIC fairly evaluates the robustness of image classifiers using a new half-distortion measure. It gauges the robustness of a network against both white-box and black-box attacks, independently of its accuracy. RoBIC is faster than the other available benchmarks. We present the significant differences in the robustness of 16 recent models as assessed by RoBIC.
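The abstract does not spell out how the half-distortion is computed. Assuming it is the distortion budget at which the attack success rate crosses 1/2 (so that, given each image's minimal fooling distortion, it reduces to a median), a minimal sketch might look like this; the function name and input convention are illustrative, not the paper's API:

```python
from statistics import median

def half_distortion(min_distortions):
    """Estimate the distortion budget at which attack success reaches 1/2.

    min_distortions[i] is the smallest perturbation norm that flips the
    classifier's decision on image i (use math.inf for images the attack
    never fools).  The success rate at budget D is the fraction of images
    with min_distortions[i] <= D, so the 1/2 crossing is the median.
    """
    return median(min_distortions)
```

Under this reading, a lower half-distortion means the classifier is easier to fool: half of the test images are already misclassified at that perturbation budget.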

