Defending Against Adversarial Iris Examples Using Wavelet Decomposition

08/08/2019
by Sobhan Soleymani, et al.

Deep neural networks have shown impressive performance in biometric applications. However, that performance is at serious risk when they face carefully crafted input samples known as adversarial examples. In this paper, we present three defense strategies for detecting adversarial iris examples. All three denoise the input example in the wavelet domain, examining each wavelet sub-band and removing the sub-bands most affected by the adversary. The first strategy reconstructs multiple denoised versions of the input example by manipulating the mid- and high-frequency components of its wavelet-domain representation, and classifies the input according to the majority decision over these denoised versions. The second and third strategies denoise each wavelet sub-band individually and use the per-sub-band reconstruction error to determine which sub-bands are most likely affected by the adversary. We evaluate the proposed defenses against several attack scenarios and compare the results with five state-of-the-art defense strategies.
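The first defense strategy can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: it uses a single-level 2D Haar transform (the paper considers multiple wavelet sub-bands and decomposition levels), zeroing of detail sub-bands stands in for the paper's mid- and high-frequency manipulation, and `classifier` is a hypothetical stand-in for the trained iris network.

```python
import numpy as np
from collections import Counter

def haar2d(x):
    """Single-level 2D Haar decomposition of an even-sized image.

    Returns the approximation sub-band `a` (low frequency) and the
    horizontal/vertical/diagonal detail sub-bands `h`, `v`, `d`
    (mid/high frequency), each half the size of `x` per axis.
    """
    s00, s01 = x[0::2, 0::2], x[0::2, 1::2]
    s10, s11 = x[1::2, 0::2], x[1::2, 1::2]
    a = (s00 + s01 + s10 + s11) / 2
    h = (s00 - s01 + s10 - s11) / 2
    v = (s00 + s01 - s10 - s11) / 2
    d = (s00 - s01 - s10 + s11) / 2
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Inverse of haar2d: perfect reconstruction from the four sub-bands."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = (a + h + v + d) / 2
    out[0::2, 1::2] = (a - h + v - d) / 2
    out[1::2, 0::2] = (a + h - v - d) / 2
    out[1::2, 1::2] = (a - h - v + d) / 2
    return out

def denoise(x, keep=()):
    """Reconstruct `x` with only the detail sub-bands named in `keep`
    ("h", "v", "d"); the rest are zeroed, removing the frequency
    components an adversary is assumed to have perturbed most."""
    a, h, v, d = haar2d(x)
    zero = np.zeros_like(a)
    return ihaar2d(a,
                   h if "h" in keep else zero,
                   v if "v" in keep else zero,
                   d if "d" in keep else zero)

def majority_vote(classifier, x, keep_sets):
    """Classify several denoised versions of `x` and return the
    majority label, mirroring the first defense strategy."""
    votes = [classifier(denoise(x, keep)) for keep in keep_sets]
    return Counter(votes).most_common(1)[0][0]
```

The second and third strategies would instead compare each sub-band's reconstruction error (e.g. the norm of `x - denoise(x, keep)` for different `keep` sets) to flag the sub-bands most likely touched by the adversary; the same `denoise` helper supports that computation.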

Related research:

- 06/24/2021, Differential Morph Face Detection using Discriminative Wavelet Sub-bands ("Face recognition systems are extremely vulnerable to morphing attacks, i...")
- 10/29/2020, WaveTransform: Crafting Adversarial Examples via Input Decomposition ("Frequency spectrum has played a significant role in learning unique and ...")
- 02/16/2021, A Sub-band Approach to Deep Denoising Wavelet Networks and a Frequency-adaptive Loss for Perceptual Quality ("In this paper, we propose two contributions to neural network based deno...")
- 06/16/2021, Detection of Morphed Face Images Using Discriminative Wavelet Sub-bands ("This work investigates the well-known problem of morphing attacks, which...")
- 12/17/2020, On the Limitations of Denoising Strategies as Adversarial Defenses ("As adversarial attacks against machine learning models have raised incre...")
- 11/29/2021, Morph Detection Enhanced by Structured Group Sparsity ("In this paper, we consider the challenge of face morphing attacks, which...")
- 02/27/2017, Image Analysis Using a Dual-Tree M-Band Wavelet Transform ("We propose a 2D generalization to the M-band case of the dual-tree decom...")
