Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction

by Yantao Lu, et al.
Duke University
Syracuse University

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models. Although great effort has been devoted to transferability across models, surprisingly little attention has been paid to cross-task transferability, which reflects the real-world cybercriminal's situation, where an ensemble of different defense/detection mechanisms must be evaded all at once. In this paper, we investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, object detection, semantic segmentation, explicit content detection, and text detection. Our proposed attack minimizes the "dispersion" of an internal feature map, overcoming the limitation of existing attacks that require task-specific loss functions and/or probing of a target model. We evaluate on open-source detection and segmentation models, as well as four different computer vision tasks provided by the Google Cloud Vision (GCV) APIs, and show that our approach outperforms existing attacks, degrading the performance of multiple CV tasks by a large margin with only modest perturbations (L∞ = 16).
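The core idea — iteratively perturbing the input to reduce the standard deviation of an intermediate feature map, subject to an L∞ budget — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses a fixed random linear map as a stand-in for a CNN feature extractor (the paper attacks real backbones), and a PGD-style sign-descent loop on the dispersion objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a CNN feature extractor: a fixed linear map.
# (Hypothetical; the paper extracts features from a pretrained backbone.)
W = rng.standard_normal((64, 16))

def features(x):
    return W @ x

def dispersion(f):
    # "Dispersion" here: the standard deviation of the feature map.
    return f.std()

def dispersion_grad(x):
    # Analytic gradient of std(W x) with respect to x:
    # d std / d f_i = (f_i - mean(f)) / (n * std(f)), then chain through W.
    f = features(x)
    g_f = (f - f.mean()) / (f.size * f.std())
    return W.T @ g_f

def dispersion_reduction_attack(x, eps=16 / 255, alpha=2 / 255, steps=40):
    """PGD-style loop that minimizes feature dispersion under an
    L-infinity budget `eps` (a sketch of the dispersion-reduction idea)."""
    x_adv = x.copy()
    for _ in range(steps):
        g = dispersion_grad(x_adv)
        x_adv = x_adv - alpha * np.sign(g)        # descend on dispersion
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to L-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

x = rng.uniform(0.2, 0.8, size=16)
x_adv = dispersion_reduction_attack(x)
```

In a real attack, `features` would be the activations at a chosen layer of a pretrained network (obtained via a forward hook), and the gradient would come from automatic differentiation; the projection and budget (ε = 16/255) match the abstract's perturbation scale.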


