Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning

06/05/2023
by Lucas Beerens, et al.

Deep neural networks are capable of state-of-the-art performance in many classification tasks. However, they are known to be vulnerable to adversarial attacks – small perturbations to the input that lead to a change in classification. We address this issue from the perspective of backward error and condition number, concepts that have proved useful in numerical analysis. To do this, we build on the work of Beuzeville et al. (2021). In particular, we develop a new class of attack algorithms that use componentwise relative perturbations. Such attacks are highly relevant in the case of handwritten documents or printed texts where, for example, the classification of signatures, postcodes, dates or numerical quantities may be altered by changing only the ink consistency and not the background. This makes the perturbed images look natural to the naked eye. Such “adversarial ink” attacks therefore reveal a weakness that can have a serious impact on safety and security. We illustrate the new attacks on real data and contrast them with existing algorithms. We also study the use of a componentwise condition number to quantify vulnerability.
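
The distinctive feature of these attacks is the componentwise constraint: each pixel may only be perturbed relative to its own magnitude, so a blank background stays blank and only the ink is altered. As a rough sketch of that idea (not the authors' algorithm; the function and parameter names are hypothetical), an iterative sign-gradient attack under the budget |delta_i| <= eps * |x_i| might look like this in PyTorch:

```python
# Illustrative componentwise relative attack: each pixel i may move by at
# most eps * |x_i|, so background pixels with value 0 are never changed.
import torch
import torch.nn.functional as F

def componentwise_attack(model, x, label, eps=0.2, steps=20, step_frac=0.1):
    """Iterative sign-gradient attack under the componentwise constraint
    |delta_i| <= eps * |x_i| (a relative, per-pixel budget)."""
    bound = eps * x.abs()                     # per-pixel perturbation budget
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), label)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # ascend the loss, then project back onto the componentwise box
            delta = delta + step_frac * bound * grad.sign()
            delta = torch.maximum(torch.minimum(delta, bound), -bound)
            delta = (x + delta).clamp(0.0, 1.0) - x   # keep valid pixel range
    return (x + delta).detach()

# Example usage (hypothetical model and data):
#   x_adv = componentwise_attack(net, img.unsqueeze(0), torch.tensor([7]))
#   print(net(x_adv).argmax(dim=1))   # may differ from the original label
```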

Related research

10/13/2020 · Towards Understanding Pixel Vulnerability under Adversarial Attacks for Images
Deep neural network image classifiers are reported to be susceptible to ...

08/23/2018 · Adversarial Attacks on Deep-Learning Based Radio Signal Classification
Deep learning (DL), despite its enormous success in many computer vision...

01/15/2018 · Sparsity-based Defense against Adversarial Attacks on Linear Classifiers
Deep neural networks represent the state of the art in machine learning ...

10/29/2020 · Can the state of relevant neurons in a deep neural networks serve as indicators for detecting adversarial attacks?
We present a method for adversarial attack detection based on the inspec...

08/16/2021 · Identifying and Exploiting Structures for Reliable Deep Learning
Deep learning research has recently witnessed an impressively fast-paced...

10/22/2020 · Adversarial Attacks on Binary Image Recognition Systems
We initiate the study of adversarial attacks on models for binary (i.e. ...

03/09/2023 · Learning the Legibility of Visual Text Perturbations
Many adversarial attacks in NLP perturb inputs to produce visually simil...