Houdini: Fooling Deep Structured Prediction Models

07/17/2017
by Moustapha Cisse et al.

Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored to the final performance measure of the task considered, even when that measure is combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation and semantic segmentation. In all cases, attacks based on Houdini achieve a higher success rate than those based on the traditional surrogates used to train the models, while using a less perceptible adversarial perturbation.
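To make the general idea concrete, the following is a minimal sketch of gradient-based adversarial example generation using the one-step fast gradient sign method on a toy logistic classifier. This illustrates the standard surrogate-loss attack that Houdini improves upon, not Houdini's task-loss surrogate itself; the model, weights, and data below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step L-infinity attack on a logistic classifier (generic FGSM,
    not the Houdini loss). For sigmoid + binary cross-entropy, the gradient
    of the loss with respect to the input is (p - y) * w."""
    p = sigmoid(x @ w + b)
    grad = (p - y) * w                 # d(loss)/dx
    return x + eps * np.sign(grad)     # move each coordinate by +/- eps

# Toy point, correctly classified as class 1 before the attack.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm(x, 1.0, w, b, eps=0.5)

print(sigmoid(x @ w + b) > 0.5)       # original prediction: class 1
print(sigmoid(x_adv @ w + b) > 0.5)   # adversarial prediction flips
```

Houdini's contribution is to replace the training surrogate (here, cross-entropy) with a surrogate of the task's true evaluation metric (e.g., word error rate for speech), while the gradient-ascent machinery stays the same.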


Related research:

- Unrestricted Adversarial Attacks for Semantic Segmentation (10/06/2019): Semantic segmentation is one of the most impactful applications of machi...
- On Evaluating the Adversarial Robustness of Semantic Segmentation Models (06/25/2023): Achieving robustness against adversarial input perturbation is an import...
- Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection (02/13/2018): In recent years, deep learning has shown performance breakthroughs in ma...
- Towards Resistant Audio Adversarial Examples (10/14/2020): Adversarial examples tremendously threaten the availability and integrit...
- On the Robustness of Semantic Segmentation Models to Adversarial Attacks (11/27/2017): Deep Neural Networks (DNNs) have been demonstrated to perform exceptiona...
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation (05/23/2021): Recent studies imply that deep neural networks are vulnerable to adversa...
- Constructing Semantics-Aware Adversarial Examples with Probabilistic Perspective (06/01/2023): In this study, we introduce a novel, probabilistic viewpoint on adversar...
