
Houdini: Fooling Deep Structured Prediction Models

07/17/2017
by Moustapha Cisse, et al.

Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel, flexible approach named Houdini for generating adversarial examples specifically tailored to the final performance measure of the task considered, even when that measure is combinatorial and non-decomposable. We successfully apply Houdini to a range of applications, such as speech recognition, pose estimation, and semantic segmentation. In all cases, the attacks based on Houdini achieve a higher success rate than those based on the traditional surrogates used to train the models, while using less perceptible adversarial perturbations.
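Houdini's key ingredient, described in the full paper, is a surrogate that couples a stochastic margin term with the task loss: ℓ_H(θ, x, y) = P_{γ∼N(0,1)}[ g_θ(x, y) − g_θ(x, ŷ) < γ ] · ℓ(ŷ, y), where g_θ scores input-output pairs, ŷ is a competing output, and ℓ is the actual performance measure. The probability term equals Φ(g_θ(x, ŷ) − g_θ(x, y)), with Φ the standard normal CDF, so the surrogate is differentiable in the scores even when ℓ itself is combinatorial. The sketch below illustrates the idea for the simplest case: classification with a 0/1 task loss and a one-step FGSM-style update. The function names, the choice of the strongest wrong class as ŷ, and the epsilon value are illustrative assumptions, not the authors' implementation.

import torch
from torch.distributions import Normal

def houdini_loss(score_true, score_comp, task_loss):
    # P_{gamma ~ N(0,1)}[ g(x, y) - g(x, y_hat) < gamma ] * task_loss,
    # computed as Phi(g(x, y_hat) - g(x, y)) to keep it differentiable.
    return Normal(0.0, 1.0).cdf(score_comp - score_true) * task_loss

def fgsm_houdini(model, x, y, eps=0.03):
    # One gradient-ascent step on the Houdini surrogate, followed by the
    # sign trick of the fast gradient sign method (assumed attack recipe).
    x_adv = x.clone().detach().requires_grad_(True)
    scores = model(x_adv)                                   # (batch, classes)
    score_true = scores.gather(1, y.unsqueeze(1)).squeeze(1)
    # Use the strongest wrong class as y_hat; with a 0/1 task loss any
    # wrong label costs 1, so this is the untargeted case.
    with torch.no_grad():
        masked = scores.detach().clone()
        masked.scatter_(1, y.unsqueeze(1), float("-inf"))
        y_hat = masked.argmax(dim=1)
    score_comp = scores.gather(1, y_hat.unsqueeze(1)).squeeze(1)
    loss = houdini_loss(score_true, score_comp, task_loss=1.0).mean()
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

For the structured tasks in the paper, the same form applies with a different scorer and task loss, for example sequence scores with word error rate in speech recognition, or per-pixel scores with mean intersection-over-union in segmentation.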


Related research

10/06/2019 · Unrestricted Adversarial Attacks for Semantic Segmentation
02/13/2018 · Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection
07/18/2017 · APE-GAN: Adversarial Perturbation Elimination with GAN
10/14/2020 · Towards Resistant Audio Adversarial Examples
11/27/2017 · On the Robustness of Semantic Segmentation Models to Adversarial Attacks
05/23/2021 · Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation