Robustness Metrics for Real-World Adversarial Examples

11/24/2019
by Brett Jefferson, et al.

We explore metrics for evaluating how robust real-world adversarial attacks, in particular adversarial patches, are to changes in environmental conditions. We demonstrate how these metrics can be used to establish baseline model performance and uncover model biases, and then compare both against real-world adversarial attacks. We establish a custom score for an adversarial condition, adjusted for different environmental conditions, and examine how the score changes with respect to specific environmental factors. Lastly, we propose two use cases for the model's confidence distributions under each environmental condition.
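As a rough illustration of what an environment-adjusted adversarial score of this kind could look like, here is a minimal sketch in Python. The grouping of confidences by condition, the normalization by per-condition baseline confidence, and all function and variable names are assumptions made for illustration; this is not the scoring formula from the paper.

```python
import numpy as np

def baseline_confidence(clean_confidences):
    """Mean detection confidence on clean (unpatched) images for one environmental condition."""
    return float(np.mean(clean_confidences))

def adjusted_attack_score(clean_confidences, patched_confidences):
    """
    Drop in confidence caused by the patch, normalized by the baseline confidence
    in the same environmental condition, so that conditions where the model is
    already weak do not inflate the attack's apparent strength.
    """
    baseline = baseline_confidence(clean_confidences)
    attacked = float(np.mean(patched_confidences))
    if baseline <= 0:
        return 0.0  # the model never detected the object here; the patch gets no credit
    return (baseline - attacked) / baseline

# Hypothetical usage: detector confidences grouped by environmental condition.
conditions = {
    "overcast":  {"clean": [0.91, 0.88, 0.93], "patched": [0.42, 0.35, 0.50]},
    "low_light": {"clean": [0.55, 0.60, 0.48], "patched": [0.30, 0.25, 0.33]},
}
for name, conf in conditions.items():
    score = adjusted_attack_score(conf["clean"], conf["patched"])
    print(f"{name}: adjusted attack score = {score:.2f}")
```

Normalizing by the baseline measured in the same condition is one way to separate the effect of the patch from the model's own sensitivity to factors such as lighting or weather; comparing the full per-condition confidence distributions, rather than their means, would be in the spirit of the use cases mentioned above.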


Related research

02/07/2022
On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks
While the literature on security attacks and defense of Machine Learning...

06/17/2022
Detecting Adversarial Examples in Batches – a geometrical approach
Many deep learning methods have successfully solved complex tasks in com...

10/26/2019
Detection of Adversarial Attacks and Characterization of Adversarial Subspace
Adversarial attacks have always been a serious threat for any data-drive...

07/27/2023
NSA: Naturalistic Support Artifact to Boost Network Confidence
Visual AI systems are vulnerable to natural and synthetic physical corru...

08/07/2023
Exploring the Physical World Adversarial Robustness of Vehicle Detection
Adversarial attacks can compromise the robustness of real-world detectio...

02/10/2021
Enhancing Real-World Adversarial Patches with 3D Modeling Techniques
Although many studies have examined adversarial examples in the real wor...

12/22/2022
Aliasing is a Driver of Adversarial Attacks
Aliasing is a highly important concept in signal processing, as careful ...
