Focused Adversarial Attacks

05/19/2022
by Thomas Cilloni, et al.

Recent advances in machine learning show that neural models are vulnerable to minimally perturbed inputs, or adversarial examples. Adversarial algorithms are optimization problems that minimize the accuracy of ML models by perturbing inputs, often using a model's loss function to craft such perturbations. State-of-the-art object detection models have very large output manifolds due to the number of possible locations and sizes of objects in an image. As a result, their outputs are sparse, and optimization problems that operate over the full manifold incur a great deal of unnecessary computation. We propose instead to use a very limited subset of a model's learned manifold to compute adversarial examples. Our Focused Adversarial Attacks (FA) algorithm identifies a small subset of sensitive regions on which to perform gradient-based adversarial attacks. FA is significantly faster than other gradient-based attacks when a model's manifold is sparsely activated, and its perturbations are more effective than those of other methods under the same perturbation constraints. We evaluate FA on the COCO 2017 and Pascal VOC 2007 detection datasets.
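To make the core idea concrete, below is a minimal sketch of a "focused" gradient-based attack in PyTorch. It is not the authors' exact FA formulation, which the abstract does not fully specify: the detector interface (a callable returning per-candidate confidence scores), the top-k selection heuristic, and the PGD-style update with parameters eps, alpha, and steps are all illustrative assumptions. The point it demonstrates is restricting the loss, and hence the backward pass, to a small set of sensitive outputs instead of the full, sparsely activated output manifold.

```python
# Sketch of a focused gradient-based attack: the loss is computed only over
# the k most confident detector outputs, so gradient computation ignores the
# sparsely activated remainder of the output manifold.
# NOTE: the model interface and all hyperparameters below are hypothetical
# stand-ins, not the paper's exact FA algorithm.
import torch

def focused_attack(model, image, k=50, eps=8 / 255, alpha=1 / 255, steps=10):
    """PGD-style attack restricted to the k most confident outputs.

    model: callable mapping an image tensor to a 1-D tensor of per-candidate
           confidence scores -- a stand-in for a detector's objectness/class
           scores over its output manifold (assumed interface).
    image: input tensor with values in [0, 1].
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        scores = model(x_adv)  # (N,) candidate confidences
        # Focus step: keep only the k highest-scoring (most sensitive)
        # outputs; everything else is excluded from the loss and gradient.
        topk = scores.topk(min(k, scores.numel())).values
        loss = topk.sum()  # suppressing this lowers confident detections
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()  # descend to reduce confidence
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # L-inf budget
            x_adv = x_adv.clamp(0.0, 1.0)  # stay a valid image
        x_adv = x_adv.detach()
    return x_adv
```

Under this sketch's assumptions, the speedup comes from the top-k selection: when only a handful of candidate outputs are active, backpropagating a loss over those k scores avoids touching the rest of the detector's output space.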


Related research

06/02/2023  Adversarial Attack Based on Prediction-Correction
Deep neural networks (DNNs) are vulnerable to adversarial examples obtai...

05/26/2021  Intriguing Parameters of Structural Causal Models
In recent years there has been a lot of focus on adversarial attacks, es...

04/18/2019  Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks
Deep neural networks are vulnerable to adversarial attacks. Numerous eff...

12/18/2020  Efficient Training of Robust Decision Trees Against Adversarial Examples
In the present day we use machine learning for sensitive tasks that requ...

02/09/2022  Adversarial Detection without Model Information
Most prior state-of-the-art adversarial detection works assume that the ...

08/08/2018  Adversarial Geometry and Lighting using a Differentiable Renderer
Many machine learning classifiers are vulnerable to adversarial attacks,...

03/09/2020  Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world
An adversarial attack paradigm explores various scenarios for vulnerabil...
