Fooling Object Detectors: Adversarial Attacks by Half-Neighbor Masks

01/04/2021
by Yanghao Zhang, et al.

Although a great number of adversarial attacks have been proposed against deep-learning-based classifiers, how to attack object detection systems has rarely been studied. In this paper, we propose a Half-Neighbor Masked Projected Gradient Descent (HNM-PGD) based attack, which can generate strong perturbations that fool different kinds of detectors under strict constraints. We also applied the proposed HNM-PGD attack in the CIKM 2020 AnalytiCup Competition, where it was ranked within the top 1%. The code is available at https://github.com/YanghaoZYH/HNM-PGD.
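
The abstract does not spell out the algorithm, but as a rough, unofficial sketch of the PGD component it builds on, the code below confines a standard L-infinity PGD update to a binary pixel mask (PyTorch). The mask itself (the paper's Half-Neighbor construction), the detector_loss callable, and the step-size/budget values are placeholders assumed for illustration, not the authors' implementation; the actual code is in the linked repository.

import torch

def masked_pgd(image, mask, detector_loss, steps=50, alpha=2/255, eps=8/255):
    """Generic masked PGD sketch (assumed interface, not the paper's HNM-PGD).

    image:         tensor of shape (1, 3, H, W), values in [0, 1]
    mask:          binary tensor of shape (1, 1, H, W); 1 marks pixels that may change
    detector_loss: callable mapping a perturbed image to a scalar loss whose
                   increase degrades the detector's predictions (placeholder)
    alpha, eps:    illustrative step size and L-infinity budget, not the paper's values
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Forward pass on the masked perturbation and ascend the detector loss.
        loss = detector_loss(torch.clamp(image + delta * mask, 0, 1))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient-sign ascent step
            delta.clamp_(-eps, eps)              # project into the L-infinity ball
            delta *= mask                        # keep the perturbation inside the mask
        delta.grad.zero_()
    return torch.clamp(image + delta * mask, 0, 1)

In this sketch the mask restricts where pixels may be perturbed, which is the role the Half-Neighbor mask plays under the competition's constraints; how that mask is chosen is the paper's contribution and is not reproduced here.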

Related research

12/26/2020  Sparse Adversarial Attack to Object Detection
Adversarial examples have gained tons of attention in recent years. Many...

03/25/2023  Ensemble-based Blackbox Attacks on Dense Prediction
We propose an approach for adversarial attacks on dense prediction model...

05/26/2022  Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation
Adversarial attacks against deep learning-based object detectors have be...

05/28/2021  DeepMoM: Robust Deep Learning With Median-of-Means
Data used in deep learning is notoriously problematic. For example, data...

03/23/2021  RPATTACK: Refined Patch Attack on General Object Detectors
Nowadays, general object detectors like YOLO and Faster R-CNN as well as...

07/23/2023  Towards Generic and Controllable Attacks Against Object Detection
Existing adversarial attacks against Object Detectors (ODs) suffer from ...

02/21/2020  UnMask: Adversarial Detection and Defense Through Robust Feature Alignment
Deep learning models are being integrated into a wide range of high-impa...
