Fooling OCR Systems with Adversarial Text Images

02/15/2018
by Congzheng Song, et al.

We demonstrate that state-of-the-art optical character recognition (OCR) based on deep learning is vulnerable to adversarial images. Minor modifications to images of printed text, which do not change the meaning of the text to a human reader, cause the OCR system to "recognize" a different text where certain words chosen by the adversary are replaced by their semantic opposites. This completely changes the meaning of the output produced by the OCR system and by the NLP applications that use OCR for preprocessing their inputs.
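The attack can be read as a targeted optimization problem: find a small perturbation of the input image such that the OCR model decodes the adversary's chosen transcription instead of the printed one. Below is a minimal sketch of this idea against a differentiable, CTC-based recognizer of the kind used by modern deep-learning OCR engines. The `ocr_model` function, the character indexing, and all hyperparameters here are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a targeted adversarial attack on a CTC-based OCR model.
# Assumption: ocr_model maps a (1, H, W) image tensor to per-timestep
# log-probabilities of shape (T, 1, num_chars), with index 0 as the CTC blank.
import torch

def adversarial_text_image(ocr_model, image, target_ids,
                           steps=500, lr=0.01, c=1.0):
    """Optimize a perturbation delta so that ocr_model(image + delta)
    decodes to the attacker-chosen target transcription.

    image:      float tensor in [0, 1]
    target_ids: 1-D LongTensor of target character indices (all nonzero)
    c:          weight of the L2 penalty keeping the perturbation small
    """
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    ctc = torch.nn.CTCLoss(blank=0)
    target = target_ids.unsqueeze(0)                  # (1, target_len)
    target_len = torch.tensor([target_ids.numel()])
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)         # stay a valid image
        log_probs = ocr_model(adv)                    # (T, 1, num_chars)
        input_len = torch.tensor([log_probs.size(0)])
        # CTC loss pulls the decoding toward the adversary's target text;
        # the L2 term keeps the change visually minor to a human reader.
        loss = ctc(log_probs, target, input_len, target_len) \
               + c * delta.pow(2).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (image + delta).detach().clamp(0.0, 1.0)
```

In this framing, the penalty weight `c` trades off how reliably the target text decodes against how visible the noise is; the paper's central observation is that the perturbation can stay small enough that a human still reads the original words while the OCR output, and any downstream NLP application consuming it, sees words replaced by their semantic opposites.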


