Fooling OCR Systems with Adversarial Text Images

02/15/2018
by Congzheng Song, et al.

We demonstrate that state-of-the-art optical character recognition (OCR) based on deep learning is vulnerable to adversarial images. Minor modifications to images of printed text, which do not change the meaning of the text to a human reader, cause the OCR system to "recognize" a different text in which certain words, chosen by the adversary, are replaced by their semantic opposites. This completely changes the meaning of the output produced by the OCR system and by any NLP applications that use OCR to preprocess their inputs.
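The attack family the abstract describes works by taking small, gradient-guided steps in pixel space toward a target the adversary chooses. As a minimal sketch of the idea (not the paper's actual method or model), the snippet below runs a targeted FGSM-style attack against a toy linear character classifier; all names here (`W`, `fgsm_targeted`, `eps`, the 8x8 glyph size) are illustrative assumptions, not artifacts from the paper.

```python
import numpy as np

# Toy stand-in for a deep OCR character classifier: 8x8 glyph image,
# 26 letter classes, random linear weights. The real paper attacks a
# deep sequence model; this only illustrates the perturbation step.
rng = np.random.default_rng(0)
n_pixels, n_classes = 64, 26
W = rng.normal(size=(n_classes, n_pixels))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm_targeted(x, target, eps=0.05):
    """One targeted FGSM step: nudge pixels so the model reads `target`.

    For logits = W @ x, the gradient of the cross-entropy loss
    -log p[target] w.r.t. x is W.T @ (p - onehot(target)); stepping
    against its sign increases p[target] while bounding the per-pixel
    change by eps.
    """
    p = softmax(W @ x)
    onehot = np.zeros(n_classes)
    onehot[target] = 1.0
    grad = W.T @ (p - onehot)
    x_adv = x - eps * np.sign(grad)   # small, sign-bounded perturbation
    return np.clip(x_adv, 0.0, 1.0)   # stay in valid pixel range

x = rng.uniform(0, 1, n_pixels)       # stand-in glyph image
target = 7                            # class the adversary wants read
x_adv = x
for _ in range(10):                   # iterate a few small steps
    x_adv = fgsm_targeted(x_adv, target)

print("max pixel change:", np.abs(x_adv - x).max())
print("predicted class before/after:",
      int(np.argmax(W @ x)), int(np.argmax(W @ x_adv)))
```

In the paper's setting the same principle is applied to whole word images so that the recognized word flips to its semantic opposite while the image remains legible to humans; the per-pixel bound (`eps` per step here) is what keeps the change imperceptible.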
