
Houdini: Fooling Deep Structured Prediction Models

by Moustapha Cisse et al.

Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel, flexible approach named Houdini for generating adversarial examples specifically tailored to the final performance measure of the task considered, even when that measure is combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation, and semantic segmentation. In all cases, attacks based on Houdini achieve higher success rates than those based on the traditional surrogate losses used to train the models, while using less perceptible adversarial perturbations.
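The core idea can be sketched concretely. Houdini replaces the training surrogate with a loss that couples the task's true performance measure to the model's score margin: the probability, under standard-normal noise added to that margin, that a candidate output out-scores the ground truth, multiplied by the actual task loss between the two. The snippet below is a minimal illustration of that surrogate with scalar scores; the function names and toy inputs are illustrative assumptions, not the paper's implementation (where scores and task losses come from a full structured-prediction network).

```python
import math

def gaussian_cdf(z: float) -> float:
    # Standard normal CDF, written via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def houdini_loss(score_true: float, score_pred: float, task_loss: float) -> float:
    """Houdini surrogate for one (input, target, candidate) triple.

    score_true: model score g(x, y) of the ground-truth output y
    score_pred: model score g(x, y_hat) of the candidate output y_hat
    task_loss:  the true task loss l(y_hat, y), e.g. word error rate
                or per-pixel segmentation error (illustrative here)

    The surrogate is P[g(x, y) - g(x, y_hat) < gamma] * l(y_hat, y)
    with gamma ~ N(0, 1), i.e. the chance that normal noise flips the
    margin, scaled by how costly that flip is for the task metric.
    """
    margin = score_pred - score_true
    return gaussian_cdf(margin) * task_loss
```

When the candidate and ground truth are tied (`margin = 0`) the probability factor is exactly 0.5, and it rises smoothly toward the full task loss as the candidate out-scores the truth. This smoothness is what makes the surrogate differentiable with respect to the input, so gradient-based perturbations can target the task metric directly.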
