Adaptive Adversarial Attack on Scene Text Recognition

07/09/2018
by Xiaoyong Yuan, et al.

Recent studies have shown that state-of-the-art deep learning models are vulnerable to inputs with small perturbations (adversarial examples). We observe two critical obstacles in adversarial attacks: (i) strong attacks require manual tuning of hyper-parameters, which makes constructing a single adversarial example time-consuming and impractical against real-time systems; (ii) most studies focus on non-sequential tasks, such as image classification and object detection, and only a few consider sequential tasks. Despite extensive research, the cause of adversarial examples remains an open problem, especially for sequential tasks. We propose an adaptive adversarial attack, called AdaptiveAttack, to speed up the generation of adversarial examples. To validate its effectiveness, we use the scene text recognition task as a case study of sequential adversarial examples. We further visualize the generated adversarial examples to analyze the cause of sequential adversarial examples. AdaptiveAttack achieves over a 99.9% success rate with a 3-6x speedup compared to state-of-the-art adversarial attacks.

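The abstract does not spell out AdaptiveAttack's update rule, but the idea of removing manual hyper-parameter tuning from a sequential attack can be illustrated with a hedged sketch: a targeted attack on a toy CTC-based recognizer (a stand-in for a scene text model) in which the constant balancing distortion against the attack loss is adjusted automatically from attack feedback. The TinyCRNN model, the multiplicative update of c, the blank-index convention, and all constants below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch (not the paper's algorithm): a targeted attack on a toy
# CTC-based recognizer that adapts its distortion/attack trade-off constant
# instead of relying on manually tuned hyper-parameters.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    """Stand-in scene text recognizer: conv features -> BiLSTM -> CTC logits."""
    def __init__(self, n_classes=37):                  # 36 symbols + blank (assumed)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=(2, 1), padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)))            # collapse image height
        self.rnn = nn.LSTM(64, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_classes)

    def forward(self, x):                               # x: (B, 1, H, W)
        f = self.conv(x).squeeze(2).permute(0, 2, 1)    # (B, T, 64)
        h, _ = self.rnn(f)
        return self.fc(h).log_softmax(-1)               # (B, T, n_classes)

def greedy_decode(log_probs):
    """Collapse repeats and drop blanks (index 0) from the argmax path."""
    path = torch.unique_consecutive(log_probs.argmax(-1)[0])
    return path[path != 0]

def adaptive_attack(model, image, target, n_steps=200, lr=0.01):
    """Make the recognizer read `target`; the constant c balancing distortion
    against CTC loss is adapted from attack feedback (assumed update rule)."""
    ctc = nn.CTCLoss(blank=0)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    c = 1.0                                             # adaptive trade-off weight
    for _ in range(n_steps):
        adv = (image + delta).clamp(0, 1)
        out = model(adv)                                # (B, T, C)
        T = out.shape[1]
        loss_ctc = ctc(out.permute(1, 0, 2), target,    # CTCLoss expects (T, B, C)
                       torch.tensor([T]), torch.tensor([target.numel()]))
        loss = delta.pow(2).sum() + c * loss_ctc
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Assumed adaptation: raise c while the target is not yet decoded,
        # lower it once the attack succeeds so the distortion can shrink.
        success = torch.equal(greedy_decode(out.detach()), target[0])
        c = c * 0.9 if success else c * 1.1
    return (image + delta).clamp(0, 1).detach()

# Illustrative usage with random weights and a fake 32x128 grayscale crop.
model = TinyCRNN().eval()
image = torch.rand(1, 1, 32, 128)
target = torch.tensor([[20, 5, 19, 20]])                # arbitrary label indices
adv_image = adaptive_attack(model, image, target)
```

The feedback-driven update of c is one plausible way to avoid per-example hyper-parameter search; the paper's actual adaptation strategy and its 3-6x speedup are described in the full text.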
