LOTS about Attacking Deep Features

11/18/2016
by Andras Rozsa, et al.

Deep neural networks (DNNs) provide state-of-the-art performance on various tasks and are therefore widely used in real-world applications. In biometrics, DNNs are increasingly used to extract deep features, which recognition systems can use to enroll and recognize new individuals. However, deep neural networks suffer from a fundamental problem: they can unexpectedly misclassify examples formed by slightly perturbing correctly recognized inputs. Various approaches have been developed for generating these so-called adversarial examples, but they aim at attacking end-to-end networks. For biometrics, it is natural to ask whether systems using deep features are immune to, or at least more resilient to, such attacks than end-to-end networks. In this paper, we introduce a general technique called layerwise origin-target synthesis (LOTS) that can be efficiently used to form adversarial examples whose deep features mimic those of a target. We analyze and compare the adversarial robustness of the end-to-end VGG Face network with systems that use the Euclidean or cosine distance between gallery templates and extracted deep features. We demonstrate that iterative LOTS is very effective and show that systems utilizing deep features are easier to attack than the end-to-end network.
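The abstract describes iterative LOTS as perturbing an origin image so that its internal (deep) representation at a chosen layer approaches that of a target. The sketch below illustrates this idea in PyTorch; it is not the authors' code. The function name, the use of a Euclidean feature loss, the normalized gradient step, and all hyperparameters (step_size, max_iters, eps) are illustrative assumptions, and feature_fn stands for any differentiable feature extractor, e.g., a late layer of a face network such as VGG Face.

```python
# Hypothetical sketch of iterative LOTS: step the origin image so that its
# deep features at a chosen layer mimic those of the target image.
import torch


def iterative_lots(origin, target, feature_fn, step_size=1.0, max_iters=500, eps=1e-3):
    """Return a perturbed copy of `origin` whose deep features approach `target`'s.

    origin, target: image tensors of shape (1, C, H, W), values in [0, 255]
    feature_fn:     differentiable callable mapping an image to its deep features
    """
    with torch.no_grad():
        target_features = feature_fn(target)  # fixed feature template to mimic

    adv = origin.clone()
    for _ in range(max_iters):
        adv = adv.detach().requires_grad_(True)
        features = feature_fn(adv)
        # Euclidean loss between current and target deep features
        loss = 0.5 * torch.sum((target_features - features) ** 2)
        if loss.item() < eps:  # features matched closely enough
            break
        loss.backward()
        grad = adv.grad
        # normalized gradient step toward the target's feature representation
        adv = adv - step_size * grad / (grad.abs().max() + 1e-12)
        adv = torch.clamp(adv, 0.0, 255.0)  # keep a valid image
    return adv.detach()
```

In a feature-based recognition system of the kind the abstract compares against, such an attack would be counted as successful once the Euclidean or cosine distance between the adversarial image's deep features and the gallery template falls below the system's verification threshold; the threshold and the distance used here are assumptions for illustration.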

