FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems

12/15/2020
by Lu Chen, et al.

Deep neural networks (DNNs) have significantly improved the accuracy of optical character recognition (OCR) and inspired many important applications. Unfortunately, OCR systems also inherit the vulnerability of DNNs to adversarial examples. Unlike colorful natural images, text images usually have clean backgrounds, so the adversarial examples generated by most existing attacks look unnatural and pollute the background severely. To address this issue, we propose the Fast Adversarial Watermark Attack (FAWA) against sequence-based OCR models in the white-box setting. By disguising the perturbations as watermarks, we make the resulting adversarial images appear natural to human eyes while achieving a perfect attack success rate. FAWA works with either gradient-based or optimization-based perturbation generation. In both letter-level and word-level attacks, our experiments show that, in addition to their natural appearance, FAWA's adversarial examples achieve a 100% attack success rate with 60% less perturbation on average. We further extend FAWA to support full-color watermarks, other languages, and even the OCR accuracy-enhancing mechanism.
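To make the attack loop concrete, below is a minimal sketch of the gradient-based variant described in the abstract, assuming a differentiable sequence OCR model (for example, a CRNN trained with CTC loss) in PyTorch. The names `ocr_model`, `watermark_mask`, `step_size`, and `max_iters` are illustrative assumptions, not the authors' implementation; the core idea is that the perturbation is confined to a watermark-shaped mask, so it reads as a watermark rather than background noise.

```python
# Minimal sketch of a FAWA-style, gradient-based targeted attack.
# Assumptions (not from the paper's code): ocr_model returns (T, N, C)
# log-probabilities suitable for CTC decoding; image is in [0, 1];
# watermark_mask is a 0/1 tensor shaped like image.
import torch
import torch.nn.functional as F

def watermark_attack(ocr_model, image, target, watermark_mask,
                     step_size=0.01, max_iters=200):
    """Perturb only the pixels under `watermark_mask` until the OCR
    model prefers the `target` label sequence (a 1-D LongTensor)."""
    adv = image.clone().detach()
    for _ in range(max_iters):
        adv.requires_grad_(True)
        log_probs = ocr_model(adv)                        # (T, N=1, C)
        # Targeted CTC loss: lower loss means the model is more likely
        # to decode the attacker's target transcription.
        loss = F.ctc_loss(
            log_probs, target.unsqueeze(0),
            input_lengths=torch.tensor([log_probs.size(0)]),
            target_lengths=torch.tensor([target.numel()]))
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Signed-gradient descent, masked so that perturbation
            # appears only inside the watermark region.
            adv = (adv - step_size * grad.sign() * watermark_mask)
            adv = adv.clamp(0.0, 1.0)
        adv = adv.detach()
    return adv
```

In the optimization-based variant the abstract mentions, the signed-gradient step would presumably be replaced by an optimizer (e.g., Adam) over the masked perturbation; in either case the mask is what keeps the adversarial image looking like a watermarked document.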

Related research

02/08/2020 · Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks
Optical character recognition (OCR) is widely applied in real applicatio...

08/17/2019 · Nesterov Accelerated Gradient and Scale Invariance for Improving Transferability of Adversarial Examples
Recent evidence suggests that deep neural networks (DNNs) are vulnerable...

02/15/2018 · ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction
With the excellent accuracy and feasibility, the Neural Networks have be...

10/31/2022 · Character-level White-Box Adversarial Attacks against Transformers via Attachable Subwords Substitution
We propose the first character-level white-box adversarial attack method...

06/18/2021 · Light Lies: Optical Adversarial Attack
A significant amount of work has been done on adversarial attacks that i...

02/03/2021 · IWA: Integrated Gradient based White-box Attacks for Fooling Deep Neural Networks
The widespread application of deep neural network (DNN) techniques is be...

05/31/2018 · Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data
We present a probabilistic framework for studying adversarial attacks on...
