A Deep, Information-theoretic Framework for Robust Biometric Recognition

02/23/2019
by   Renjie Xie, et al.

Deep neural networks (DNNs) have become the de facto standard for modern biometric recognition solutions. A serious but still overlooked problem in these DNN-based recognition systems is their vulnerability to adversarial attacks: tiny changes to a DNN's input can cause large distortions in its output. Such distortions can lead to an unexpected match between a valid biometric and a synthetic one constructed by a strategic attacker, raising a security issue. In this work, we show how this issue can be resolved by learning robust biometric features through a deep, information-theoretic framework, which builds upon the recent deep variational information bottleneck method but is carefully adapted to biometric recognition tasks. Empirical evaluation demonstrates that our method not only offers stronger robustness against adversarial attacks but also achieves better recognition performance than state-of-the-art approaches.
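The deep variational information bottleneck the abstract refers to trains an encoder to keep only the information about the input that is useful for the recognition label, which is what makes the learned features less sensitive to small adversarial perturbations. As a rough sketch of that standard VIB objective (not the paper's exact, biometric-adapted implementation), the code below assumes a Gaussian encoder p(z|x) = N(mu, sigma^2) and a standard-normal prior, so the loss is the classification cross-entropy plus a beta-weighted closed-form KL compression term; all function and variable names here are illustrative:

```python
import numpy as np

def vib_loss(mu, log_var, logits, labels, beta=1e-3):
    """Sketch of the variational information bottleneck objective.

    mu, log_var : (batch, dim) parameters of the Gaussian encoder p(z|x)
    logits      : (batch, classes) classifier outputs q(y|z) for a sampled z
    labels      : (batch,) integer class labels
    beta        : weight of the compression (KL) term
    """
    # Prediction term E[-log q(y|z)], estimated with one sample of z,
    # via a numerically stable log-softmax.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()

    # Compression term KL(N(mu, sigma^2) || N(0, I)) in closed form:
    # 0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2).
    kl = 0.5 * (mu**2 + np.exp(log_var) - 1.0 - log_var).sum(axis=1).mean()

    return ce + beta * kl
```

Raising `beta` squeezes more nuisance information out of the representation, which is the knob this line of work tunes to trade recognition accuracy against robustness.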


Related research

On the Vulnerability of Capsule Networks to Adversarial Attacks (06/09/2019)
This paper extensively evaluates the vulnerability of capsule networks t...

Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks (08/12/2023)
Deep neural networks (DNNs) have gained prominence in various applicatio...

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks (11/26/2020)
Deep Learning is able to solve a plethora of once impossible problems. H...

AdvBiom: Adversarial Attacks on Biometric Matchers (01/10/2023)
With the advent of deep learning models, face recognition systems have a...

Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks (02/14/2018)
DNN is presenting human-level performance for many complex intelligent t...

On Adversarial Robustness of Deep Image Deblurring (10/05/2022)
Recent approaches employ deep learning-based solutions for the recovery ...

Defending Adversarial Examples via DNN Bottleneck Reinforcement (08/12/2020)
This paper presents a DNN bottleneck reinforcement scheme to alleviate t...
