Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification

01/10/2019
by Luiz G. Hafemann, et al.

The phenomenon of Adversarial Examples is attracting increasing interest from the Machine Learning community, due to its significant impact on the security of Machine Learning systems. Adversarial examples are similar (under a perceptual notion of similarity) to samples from the data distribution, yet "fool" a machine learning classifier. For computer vision applications, these are images with carefully crafted but almost imperceptible changes that are misclassified. In this work, we characterize this phenomenon under an existing taxonomy of threats to biometric systems, in particular identifying new attacks for Offline Handwritten Signature Verification systems. We conducted an extensive set of experiments on four widely used datasets: MCYT-75, CEDAR, GPDS-160 and the Brazilian PUC-PR, considering both a CNN-based system and a system using a handcrafted feature extractor (CLBP). We found that attacks that aim to get a genuine signature rejected are easy to generate, even in a limited knowledge scenario, where the attacker has access neither to the trained classifier nor to the signatures used for training. Attacks that get a forgery accepted are harder to produce, and often require a higher level of noise - in most cases, no longer "imperceptible", in contrast to previous findings in object recognition. We also evaluated the impact of two countermeasures on the success rate of the attacks and on the amount of noise required for generating successful attacks.
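To make the notion of "carefully crafted changes" concrete, the sketch below illustrates the gradient-sign idea behind many adversarial attacks on a toy logistic classifier. The model, weights, and feature vector are illustrative assumptions for exposition only; they are not the CNN-based or CLBP-based verifiers evaluated in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast Gradient Sign Method on a logistic model (illustrative sketch).

    For p = sigmoid(w.x + b) with cross-entropy loss, the gradient of the
    loss w.r.t. the input x is (p - y_true) * w; stepping eps in the sign
    of that gradient increases the loss, pushing x toward misclassification.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy "genuine signature" feature vector (hypothetical values), initially
# accepted as genuine (label 1) by the toy classifier.
w = np.array([1.0, -0.5, 0.8])
b = -0.2
x = np.array([0.6, 0.1, 0.5])

p_before = sigmoid(np.dot(w, x) + b)                 # > 0.5: accepted
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)
p_after = sigmoid(np.dot(w, x_adv) + b)              # < 0.5: rejected
```

The perturbation is bounded by eps in the max-norm, which is how the paper's notion of the "amount of noise" in an attack is commonly measured; attacks that get a genuine signature rejected (as here) turn out to need far less noise than attacks that get a forgery accepted.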

