Optical Character Recognition (OCR) is a widely adopted application for converting printed or handwritten images to text, and it has become a critical preprocessing component in text analysis pipelines such as document retrieval and summarization. OCR has improved significantly in recent years thanks to the wide adoption of deep neural networks (DNNs), and it is now deployed in many critical applications where its quality is vital. For example, photo-based ID recognition depends on OCR quality to automatically structure information into databases, and automatic trading sometimes relies on OCR to read news articles and determine their sentiment.
Unfortunately, OCR also inherits all the counter-intuitive security problems of DNNs. In particular, OCR models are vulnerable to adversarial examples, which are crafted by making human-imperceptible perturbations on original images with the intent of misleading the model. The wide adoption of OCR in real pipelines gives adversaries strong incentives to game it, for example by forging ID information or causing incorrect readings of metrics or instructions. Figures 2 and 3 in the evaluation section illustrate two real-world examples that attack an ID number and a financial report number. This paper provides a preliminary study of the feasibility of such OCR attacks.
Many prior works [Nguyen, Yosinski, and Clune2015, Goodfellow, Shlens, and Szegedy2014, Papernot et al.2016a, Szegedy et al.2013] have shown that, in traditional image classification tasks, changing the prediction of a DNN is practicable by applying carefully-designed perturbations (usually background noise) to original images. Recent projects, Adversarial Patch [Brown et al.2017] and LaVAN [Karmon, Zoran, and Goldberg2018], introduced the adversarial patch attack, which puts visible perturbations confined to a small region or location in the image.
However, these methods are not directly applicable to OCR attacks for the following three reasons:
First, the input image to OCR is a document on white paper with a spotless background. Any perturbation added by existing attacks therefore appears so obvious to human readers that it causes suspicion.
Second, in complex languages like Chinese, there are many characters (e.g., the dataset we use contains 5,989 unique characters). If an adversary wants to perform a targeted attack, i.e., changing one character to another specific one (target) in a sentence and meanwhile resulting in semantically meaningful recognition results, it requires a large number of perturbations that are too obvious to hide.
Third, instead of classifying characters individually, a modern OCR model is an end-to-end neural network that takes a variable-sized image as input and outputs a sequence of labels; in other words, it processes images line by line. This is usually called the sequential labeling task, which is harder to attack than image classification. It is insufficient to add perturbations to a single character; instead, the perturbations must span multiple characters. Also, because the OCR model is end-to-end, its internal feature representations depend on nearby characters (contexts), so perturbations attacking a single character must be designed with its context in mind.
In this preliminary study, we propose a new attack method, the WATERMARK attack, against modern OCR models. Watermarks are images or patterns commonly placed in document backgrounds, for example to mark a document proprietary, secret, or urgent, or simply as decoration. Similarly, in Asian countries documents often carry stamps to certify their authority. Human eyes are so accustomed to these watermarks that they tend to ignore them. In this paper, we generate natural watermark-style perturbations: we limit all perturbations to a small region of the image, i.e., a watermark, and within that bound we minimize the perturbation level. In comparison, classic adversarial examples spread noise all over the image. Our approach is similar to patch-based attacks [Brown et al.2017, Karmon, Zoran, and Goldberg2018], but unlike patches, which completely cover part of the image, watermarks do not hinder the text's readability and thus look more natural. [Heng, Zhou, and Jiang2018] disguised perturbations as shadows, exposure problems, or color problems, and [Hanwei Zhang2019] generated smooth noise by Laplacian smoothing, but none of these solve the clear-background challenge for OCR.
We focus on the white-box, targeted attack in this paper. That is, we assume adversaries have perfect knowledge of the DNN architecture and parameters (white-box model) and aim to produce specific recognition results (targeted attack). Given that many real OCR software packages are based on similar open-source OCR DNN models, we believe the white-box model, in addition to being a starting point, also has real-world applicability.
In summary, the WATERMARK attack is an adversarial attack on the OCR model: it attaches natural watermark-style noise, tricks the OCR model into outputting specific recognition results, and at the same time preserves the readability of the adversarial images. To some extent, the WATERMARK attack solves the clear-background problem.
As an evaluation, we performed the WATERMARK attack on a state-of-the-art open-source OCR model for the Chinese language, which uses a DenseNet + CTC neural network architecture. We used a dataset with 3.64 million images and 5,989 unique characters. With 158 original-target pairs, we show that the WATERMARK attack can generate human-eye-friendly adversarial samples with a high probability of success. Some WATERMARK adversarial examples even work on Tesseract OCR [Google2019] in a black-box manner.
Furthermore, we applied our method to real-world scenarios. In Figure 3, we used the WATERMARK method on a page of an annual report of a listed Chinese company and changed its semantics, while the image still looks natural to human readers.
The contributions of this paper include: 1) We propose the WATERMARK adversarial attack, which generates natural-to-human watermark-style perturbations targeting DNN-based OCR systems, and demonstrate a method to hide perturbations, which human eyes are accustomed to, within a watermark-bounded region. 2) Using difficult OCR cases (Chinese), we demonstrate the success rate of WATERMARK attacks compared to existing ones.
Background and Related Work
Optical Character Recognition (OCR)
Generally, the OCR pipeline, as shown in Figure 1, begins with line segmentation, which includes page layout analysis for locating the position of each line, de-skewing the image, and segmenting the input image into line images. After preprocessing (e.g., rescaling and normalization), the line images are fed into the recognition model, which outputs the recognition results.
There are two types of OCR models. 1) Character-based models are the classic approach [Smith2007]: the recognition model segments the image into per-character sub-images and classifies each sub-image into the most likely recognition result. Obviously, its performance relies heavily on character segmentation. 2) End-to-end models are a segmentation-free approach that recognizes an entire sequence of characters in a variable-sized image. [Bengio et al.1995, Espana-Boquera et al.2011] adopted sequential models, and [Breuel et al.2013, Wang et al.2012] utilize DNNs as the feature extractor for end-to-end sequential recognition. [Graves et al.2006] introduced a segmentation-free approach, connectionist temporal classification (CTC), which allows variable-sized input images and output results.
In end-to-end models, sequence labeling is the task of assigning a sequence of discrete labels to variable-sized sequential input data. In our case, the input is a variable-sized image $x$ and the output is a sequence of characters $y = (y_1, \dots, y_L)$ drawn from a predefined character set $\mathcal{A}$.
Connectionist Temporal Classification (CTC).
CTC is an alignment-free method for training DNNs on the sequential labeling task: it provides a loss that enables recognizing sequences without explicit segmentation while training DNNs. Therefore, many state-of-the-art OCR models use CTC as the model's loss function. Given the input image $x$, let $p = f(x) = (p^1, \dots, p^T)$ be the sequence of model $f$'s outputs, where $p^t$ is the probability distribution over the character set $\mathcal{A} \cup \{-\}$ (with $-$ the blank label) at step $t$.

CTC requires calculating the likelihood $p(y \mid x)$, which can hardly be measured directly from the model's probability distributions and the target sequence $y$. To settle this, CTC uses valid alignments of $y$: an alignment $\pi = (\pi_1, \dots, \pi_T)$ is valid if the target sequence $y$ can be obtained from it by removing all blanks and sequential duplicate characters (e.g., both [a, –, a, b, –] and [–, a, a, –, –, a, b, b] are valid alignments of [a, a, b]). The likelihood sums the probabilities of all valid alignments, denoted $\mathcal{B}^{-1}(y)$:
$p(y \mid x) = \sum_{\pi \in \mathcal{B}^{-1}(y)} \prod_{t=1}^{T} p^t_{\pi_t}$.

The negative log-likelihood of $y$ is the CTC loss function: $\mathcal{L}_{CTC}(x, y) = -\log p(y \mid x)$.
To obtain the most probable output sequence, a greedy path decoding algorithm can select the most probable alignment at each step. However, the greedy algorithm is not guaranteed to find the most probable labeling. A better method, beam search decoding, simultaneously keeps a fixed number of the most probable alignments at each step and chooses the most probable output from the top-alignment list.
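Greedy CTC decoding can be sketched in a few lines of numpy: pick the argmax label at each time step, then collapse repeats and remove blanks. This is an illustrative sketch, assuming the blank label occupies index 0 of the probability rows.

```python
import numpy as np

BLANK = 0  # assumption for this sketch: the CTC blank label is class 0

def ctc_greedy_decode(probs):
    """Greedy CTC decoding: take the most probable label at each step,
    then collapse sequential duplicates and drop blanks."""
    best_path = np.argmax(probs, axis=1)      # most probable alignment
    decoded, prev = [], None
    for label in best_path:
        if label != prev and label != BLANK:  # skip repeats and blanks
            decoded.append(int(label))
        prev = label
    return decoded
```

A blank between two identical labels (as in the [a, –, a] example above) correctly yields two output characters, since the blank resets the duplicate check.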
Attacking DNN-based computer vision tasks
Where to add perturbations?
Attacking DNN models is a popular topic in both the computer vision and security fields. Many projects focus on finding small-bounded perturbations, hoping that the bound keeps the perturbations visually imperceptible. FGSM [Goodfellow, Shlens, and Szegedy2014], L-BFGS [Szegedy et al.2013], DeepFool [Moosavi-Dezfooli, Fawzi, and Frossard2016], C&W [Carlini and Wagner2017], PGD [Madry et al.2017] and EAD [Chen et al.2018] all perform modifications at the pixel level by a small amount bounded by $\epsilon$.
Other attacks, such as JSMA [Papernot et al.2016a], C&W [Carlini and Wagner2017], Adversarial Patch [Brown et al.2017] and LaVAN [Karmon, Zoran, and Goldberg2018], perturb a small region of pixels in an image without bounding the pixel-level perturbations by $\epsilon$.
As we have mentioned, neither approach can hide perturbations from the normal human vision in OCR tasks, as a document with enough readability usually has a spotless background and vivid text, which is greatly different from natural RGB photos.
How to generate perturbations?
There are two types of methods to generate perturbations.
1) Gradient-based attacks add perturbations generated from the gradient with respect to the input pixels. Formally, for an $\epsilon$-bounded adversary, we compute an adversarial example $x'$ given an original image $x$ and target label $y^*$, where the perturbation bound $\epsilon$ is small enough to be indistinguishable to human observers.
Fast Gradient Sign Method (FGSM) [Goodfellow, Shlens, and Szegedy2014] is a one-step attack that obtains the adversarial image as $x' = x - \epsilon \cdot \mathrm{sign}(\nabla_x J(x, y^*))$: the original image takes a gradient-sign step of size $\epsilon$ in the direction that increases the probability of the target label $y^*$. It is efficient, but it only provides a coarse approximation of the optimal perturbations.
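The one-step update can be sketched directly in numpy. This is a minimal illustration, not the paper's implementation: `grad_loss_target` stands in for the model's back-propagated gradient of the loss with respect to the target label.

```python
import numpy as np

def fgsm_targeted(x, grad_loss_target, eps):
    """One-step targeted FGSM: step each pixel by eps AGAINST the sign of
    the gradient of the loss w.r.t. the target label, which increases the
    probability of the target, then keep pixels in the valid range."""
    x_adv = x - eps * np.sign(grad_loss_target(x))
    return np.clip(x_adv, 0.0, 1.0)
```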
Basic Iterative Method (BIM) [Kurakin, Goodfellow, and Bengio2016] takes multiple smaller steps, $x'_{t+1} = \mathrm{Clip}_{x,\epsilon}\{x'_t - \alpha \cdot \mathrm{sign}(\nabla_x J(x'_t, y^*))\}$, where $x'_t$ is the adversarial example produced at step $t$ and each result is clipped by the same bound $\epsilon$. BIM produces superior results to FGSM.
Momentum Iterative Method (MIM) [Dong et al.2018] extends BIM with a momentum term. MIM not only stabilizes the update directions but also escapes from poor local maxima during the iteration, and thus generates more transferable adversarial examples. Each iteration adjusts the update direction through the momentum term $g_{t+1} = \mu \cdot g_t + \nabla_x J(x'_t, y^*) / \|\nabla_x J(x'_t, y^*)\|_1$ and generates a new adversarial image $x'_{t+1} = \mathrm{Clip}_{x,\epsilon}\{x'_t - \alpha \cdot \mathrm{sign}(g_{t+1})\}$, where $\mu$ is the decay factor.
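The full MIM loop can be sketched as follows, again with a stand-in `grad_fn` in place of the model's back-propagated gradient. This is a schematic sketch of the update rule above, not the CleverHans implementation.

```python
import numpy as np

def mim_attack(x, grad_fn, eps, alpha, mu, steps):
    """Targeted MIM sketch: accumulate the L1-normalized gradient in a
    momentum term g, step against sign(g), and clip into the eps-ball."""
    x_adv = x.copy()
    g = np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv - alpha * np.sign(g)                # step toward target
        x_adv = np.clip(x_adv, x - eps, x + eps)          # stay in eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # valid pixel range
    return x_adv
```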
2) Optimization-based attacks directly solve the optimization problem of minimizing the distance between the original example and the adversarial example while yielding the incorrect classification.
Box-constrained L-BFGS [Szegedy et al.2013] finds adversarial examples by solving the box-constrained problem $\min_{x'} \; c \cdot \|x - x'\|_2^2 + J(x', y^*)$ subject to $x' \in [0, 1]^n$, where $J$ is the cross-entropy loss between the logit output and the target label. Although L-BFGS produces much smaller perturbations than gradient-based attacks, it is far less efficient.
C&W [Carlini and Wagner2017] is an $L_2$-oriented attack that can successfully break undefended and defensively distilled DNNs. Given the logits $Z$ of the model $f$, instead of applying cross-entropy as the loss function, the C&W attack designed a new loss function $\ell(x') = \max(\max_{i \neq y^*} Z(x')_i - Z(x')_{y^*}, -\kappa)$ solved by gradient descent, where $\kappa$ controls the confidence of the misclassification.
Defense methods against these attacks
People have proposed many practical defense methods against adversarial examples. Adversarial training [Tramèr et al.2017] improves the robustness of DNNs by injecting label-corrected adversarial examples into the training procedure and training a robust model that has a resistance to perturbations generated by gradient-based methods. Defensive distillation [Papernot et al.2016b] defends against adversarial perturbations using the distillation techniques [Hinton, Vinyals, and Dean2015] to retrain the same network with class probabilities predicted by the original network. There are also methods focusing on detecting adversarial samples [Xu, Evans, and Qi2017, Lu, Issaranon, and Forsyth2017, Grosse et al.2017, Feinman et al.2017].
Preliminaries. We assume that attackers have full knowledge of the threat model, such as the model architecture and parameter values. Given an input image $x \in \mathbb{R}^n$, where $n$ is the number of pixels in the original image, and an OCR model $f$, an adversarial image $x' \in \mathbb{R}^n$ yields the prediction result $y' = f(x')$. Given a target label $y^*$, the loss function of the target model with respect to the input image is $J(x, y^*)$. We further assume $J$ to be differentiable almost everywhere (sub-gradients may be used at discontinuities), because the gradient-descent approach is applicable to any DNN with a differentiable discriminant function.
Distance Metric. We define a distance metric to quantify the similarity between the original image $x$ and the corresponding adversarial image $x'$; it reflects the cost of the manipulations. The $L_p$-norm is a widely-used distance metric defined as $\|v\|_p = (\sum_{i=1}^{d} |v_i|^p)^{1/p}$ for a $d$-dimensional vector $v = x' - x$ and any $p \ge 1$. $L_1$ accounts for the total variation of the perturbations and serves as a popular convex surrogate of $L_0$, which measures the number of modified pixels. $L_2$ measures the standard Euclidean distance and is usually used to improve visual quality. $L_\infty$ measures the maximum change among all perturbed pixels of $x$ and $x'$.
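These metrics are straightforward to compute; a small numpy helper covering the $L_0$, $L_1$, $L_2$ and $L_\infty$ cases used in this paper:

```python
import numpy as np

def lp_norm(v, p):
    """L_p norms of a perturbation vector v = x' - x, as used to
    quantify perturbation cost."""
    v = np.ravel(v).astype(float)
    if p == 0:
        return float(np.count_nonzero(v))  # number of modified pixels
    if p == np.inf:
        return float(np.max(np.abs(v)))    # largest single-pixel change
    return float(np.sum(np.abs(v) ** p) ** (1.0 / p))
```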
Watermark attack to CTC-based OCR
In this paper, we propose the MIM-based WATERMARK attack on the CTC-based OCR model to generate adversarial examples. In this section, we first introduce how to integrate watermarks into MIM [Dong et al.2018], which yields the MIM-based WATERMARK attack method (WM) for generating adversarial examples satisfying an $L_\infty$-norm restriction in the targeted and non-targeted attack fashion. We then present several variants of WM under the $L_\infty$-norm bound. The generation pipeline of the WATERMARK adversarial attack is illustrated in Figure 1, and Table 1 shows adversarial examples generated by each method.
MIM-based Watermark attack (WM).
Watermarks occur widely in documents and files. Exploiting this popularity, we disguise perturbations as a watermark; that is, we restrict the manipulation region to a specific predefined watermark-shaped region $\mathcal{M}$.
To generate a targeted $\epsilon$-bounded adversarial example $x'$, we start from an initial image $x'_0 = x$ given an original image $x$. WM seeks the adversarial example by solving the constrained optimization problem of minimizing $J(x', y^*)$ subject to $\|x' - x\|_\infty \le \epsilon$ with perturbations confined to the watermark region $\mathcal{M}$, where $\epsilon$ is the size of the adversarial perturbations. We summarize WM in Algorithm 1. At each attacking iteration $t$, the attacker first feeds the adversarial example $x'_t$ to the OCR model and obtains the gradient through back-propagation. Then, to stabilize the update directions and escape from poor local maxima, the attacker updates the momentum term $g_{t+1}$ by accumulating the velocity vector in the gradient direction, as shown in Equation 3 in Algorithm 1. Last, the attacker updates the adversarial example $x'_{t+1}$ by applying the sign gradient restricted to $\mathcal{M}$ with a small step size $\alpha$, and clips the intermediate perturbations to keep them within the $\epsilon$-ball of $x$, as in Equation 4. The attacking iteration proceeds until the attack succeeds or reaches the maximum number of iterations $T$.
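A single WM iteration differs from plain MIM only in that the sign step is masked by the watermark region. A numpy sketch of one iteration, with `mask` a 0/1 array marking the watermark region $\mathcal{M}$ and `grad` the back-propagated gradient (both stand-ins for the real model quantities):

```python
import numpy as np

def wm_attack_step(x_adv, x, grad, g, mask, eps, alpha, mu):
    """One WM iteration: momentum update (cf. Eq. 3), then a masked sign
    step so only pixels inside the watermark region change (cf. Eq. 4)."""
    g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
    x_adv = x_adv - alpha * mask * np.sign(g)   # restrict to region M
    x_adv = np.clip(x_adv, x - eps, x + eps)    # eps-ball of the original
    return np.clip(x_adv, 0.0, 1.0), g
```

Pixels outside the mask never move, so the background stays spotless by construction.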
Table 1 compares, for each sample: the original image, MIM, WM, WMinit, WMneg, WMedge, the OCR output, and its English translation.
Variants of WM attack.
To present a more natural appearance of watermark-like perturbations, we design three variants of WM attack.
WMinit. The sign gradient is element-wise multiplied by the watermark region mask, so attackers only operate on pixels inside the watermark region $\mathcal{M}$. As the WM column of Table 1 shows, the perturbations of WM adversarial examples are not dense enough to form a complete, natural-looking watermark. Thus, to fill in the blanks of the watermark region, we start from an initial watermark-pasted image obtained by attaching a watermark of grayscale value $w$ to the original image at every watermark position except the text.
WMneg. The sign of the gradient can be −1 or +1 depending on the direction of gradient descent. When the gradient is positive, the pixel value increases, i.e., the pixel becomes whiter (the maximum grayscale value is the whitest color and the minimum the blackest). Otherwise, the pixel value decreases and the pixel becomes blacker. Whitening pixels in the text region makes the text fuzzy, and whitening an already clear white background is pointless. Thus, we keep only the negative gradient and discard the positive gradient: we generate WM noise but retain only the negative gradient during the attacking iterations. With this new constraint, the update step of the adversarial example, Equation 4, changes accordingly.
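The WMneg filtering can be sketched by zeroing out the whitening component of the sign gradient. Following the paper's convention that a positive gradient whitens a pixel, only the −1 entries (darkening steps) survive; `grad` and `mask` are again stand-ins for the model gradient and the watermark region:

```python
import numpy as np

def wmneg_step(x_adv, x, grad, mask, eps, alpha):
    """WMneg update sketch: keep only the negative (darkening) part of the
    sign gradient, so the white background is never brightened."""
    darken = np.minimum(np.sign(grad), 0.0)   # -1 where grad < 0, else 0
    x_adv = x_adv + alpha * mask * darken     # apply only darkening steps
    x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball of x
    return np.clip(x_adv, 0.0, 1.0)
```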
WMedge. A different way to add perturbations is to confine the watermark region to the area around the text edges, pretending to be printing defects.
WMedge is similar to WM. We define the watermark as the region of the text edge, which can be obtained by image erosion from morphological image processing. The erosion operation erodes the original image using a specified structuring element that determines the shape of the pixel neighborhood over which the minimum is taken. In our experiments, we use a rectangular structuring element as the kernel. We take the bolder text region after erosion as the watermark; thus, the text-edge-shaped watermark is defined as the eroded text region excluding the original text pixels.
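The edge-region construction can be sketched in plain numpy (a minimal grayscale erosion rather than OpenCV's `cv2.erode`, and a hypothetical threshold parameter to separate dark text from the white background):

```python
import numpy as np

def erode(img, k=3):
    """Grayscale erosion with a k x k rectangular structuring element:
    each output pixel is the neighborhood minimum, so dark text grows bolder."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def edge_watermark_mask(img, thresh=0.5, k=3):
    """WMedge region sketch: pixels that are text after erosion but
    background before it, i.e. a thin band hugging the text edges."""
    text = img < thresh            # original (dark) text pixels
    bold = erode(img, k) < thresh  # bolder text after erosion
    return bold & ~text
```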
Evaluation

In this section, we generate adversarial examples on the CTC-based OCR model and compare the performance of the basic MIM, WM, and the WM variants.
Threat model. We performed the WM attack on a DenseNet + CTC neural network (https://github.com/YCG09/chinese_ocr) trained on a Chinese text image dataset. DenseNet [Huang et al.2017] is a powerful DNN for visual recognition that can capture complex features, so we use DenseNet as the feature extractor and CTC [Graves et al.2006] as the loss function. In the test phase, the DenseNet + CTC OCR model achieved 98.3% accuracy on a validation dataset of 36,400 images. The Chinese text image dataset has 3.64 million images generated by varying the fonts, scale, grayscale, blur, and sketch of text from Chinese news articles. The character set has 5,989 unique characters, including Chinese and English characters, digits, and punctuation.
Attack setting. The same attack setting is applied across all experiments. Our setup is based on MIM's framework; we use the MIM implementation in the CleverHans package (https://github.com/tensorflow/cleverhans). Each attack runs up to a fixed maximum number of iterations, with an early-stopping criterion based on the attack result at each iteration. The $L_\infty$-norm of the perturbations is bounded by $\epsilon = 0.2$, with pixel values ranging from 0 (black) to 1 (white). For the initial watermark in WMinit, we set the grayscale value of the watermark to 0.3 and place the watermark at the center of the image by default. The watermarks use font size 30.
Choosing attacked candidates. To preserve semantic fluency in our OCR attack, we choose 691 pairs of antonym characters with high shape similarity. Given two characters $c_1$ and $c_2$, character similarity is defined as a weighted sum of three terms: the absolute difference in stroke counts between $c_1$ and $c_2$; the Levenshtein distance of their sijiao codes (an encoding approach for fast retrieval of Chinese characters); and the edit distance between their character images. The weights are chosen as 0.33, 0.33 and 0.34, respectively. This way, the adversarial attack requires as few adversarial perturbations as possible to fool the OCR system. We then match the selected antonym character pairs against the corpus of People's Daily in 1988 and choose 158 sentences containing selected characters that do not cause syntactic errors after substituting the corresponding antonym character. Last, we generate line images containing the chosen sentences.
Evaluation metrics. To quantify the perturbations of adversarial images $x'$ relative to benign images $x$, we measure MSE, PSNR and SSIM. Mean-squared error (MSE) measures the average squared difference between adversarial and original images, $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(x_i - x'_i)^2$. Peak signal-to-noise ratio (PSNR) is the ratio of the maximum possible power of a signal to the power of the distortion, $\mathrm{PSNR} = 10 \log_{10}(d^2 / \mathrm{MSE})$, where $d$ denotes the dynamic range of pixel intensities, e.g., $d = 255$ for an 8 bits/pixel image. The structural similarity index (SSIM) models the structural information of an image in terms of luminance, contrast and structure.
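The MSE and PSNR formulas above translate directly into numpy (SSIM is omitted here, as it needs windowed luminance/contrast statistics):

```python
import numpy as np

def mse(x, x_adv):
    """Mean-squared error between original and adversarial images."""
    return float(np.mean((x.astype(float) - x_adv.astype(float)) ** 2))

def psnr(x, x_adv, d=255.0):
    """Peak signal-to-noise ratio; d is the dynamic range of pixel
    intensities (255 for 8 bits/pixel, 1.0 for images scaled to [0, 1])."""
    m = mse(x, x_adv)
    return float("inf") if m == 0 else 10.0 * np.log10(d * d / m)
```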
To evaluate the efficiency of adversarial attacks, we calculate the attack success rate (ASR), the fraction of adversarial images that fool the DNN model (i.e., $f(x') \neq f(x)$); the targeted attack success rate (ASR*), the fraction of adversarial images recognized as the target label (i.e., $f(x') = y^*$); and the average time to generate the adversarial perturbations from the clean images.
Comparison of attacks on single character altering
We compare different methods on altering a single character. Table 1 shows successful adversarial examples generated by the different attack methods. Our intuitions are: 1) MIM generates human-perceptible, unnatural noise that dirties the background, spreads over the whole image, and harms structural similarity and image quality. 2) WM and its variants confine the noise to the watermark region, yielding a cleaner background and more plausible perturbations. 3) The watermark-style perturbations of WM are relatively light and do not look like a real watermark. 4) WMinit and WMneg look more realistic, with a darker and more complete watermark shape. 5) The perturbations of WMedge sit around the edges of the text, which makes the text look bolder, similar to printing or scanning defects.
Intuitively, we can see that WM family of attacks generate better visual quality (in terms of looking natural) images if the attack is successful.
We evaluate the attack performance of altering a single character using the corpus discussed above. In Table 2, we report the metrics above (MSE/PSNR/SSIM, ASR*/ASR), as well as the average time required to generate an attack sample. Our observations are: 1) Compared to MIM, WM, WMneg and WMedge obtain lower MSE, higher PSNR and higher SSIM, indicating that the noise level of a successful attack is indeed lower. 2) Due to the lower noise level, the attack success rates (ASR* and ASR) of WM and its variants are also lower than MIM's. We believe there are several reasons. First, in this preliminary study, we always choose a fixed watermark shape and location at the center of the original image; the fixed location severely limits what the adversary can do. As future work, we will allow multiple watermark shapes (e.g., different texts and logos) and different locations. 3) WMneg discards the positive gradient noise, which possesses some attack ability of its own; hence WMneg performs worse than WM, although it does generate visually more natural examples. 4) WMedge is a special case of WM that restricts the watermark region to the shape of the text edge, so it has no watermark-placement problem like the other WM-series methods. It achieves good ASR while preserving the naturalness of the perturbations, confirming that retaining all gradient noise is better than keeping only the negative gradient noise. 5) WM0 simply attaches the initial watermark to the original image, which evaluates the impact of the watermark by itself: after attaching an initial watermark, 37.9% and 17.1% of images are already misclassified by the DenseNet+CTC model and Tesseract OCR, respectively. Thus the watermark has intrinsic attacking properties. 6) The time to produce each adversarial example is similar across methods and within a reasonable range.
Thus, a practical strategy is to combine different attack methods to improve ASR.
Attack transferability to blackbox OCR
We want to see whether our adversarial examples can mislead other (black-box) models, a property commonly called transferability [Liu et al.2016, Papernot, McDaniel, and Goodfellow2016, Sharma and Chen2017, Papernot et al.2017].
We adopt the widely-used latest version of Tesseract OCR [Google2019] as the black-box model. We fed the adversarial samples generated by the attack methods above on the DenseNet+CTC model into the off-the-shelf Tesseract OCR and evaluated the recognition results (ASR*/ASR), shown in the last two columns of Table 2.
We find that all attacks produce transferable adversarial examples in terms of ASR. This may be because the noise indeed perturbs the intrinsic features of a character sequence across different models, or because Tesseract OCR simply cannot handle noise. However, ASR* drops significantly because the perturbations were trained on a different model.
Figure 2 and 3 show two real-world examples of WM. In Figure 2, using watermarks, we successfully altered the license number recognition results. Figure 3 shows an example of a paragraph of an annual financial report of a public company. By adding the AICS watermark, we altered all the revenue numbers in the recognition results.
Defense against these attacks
We evaluate the robustness of these attacks against common defense methods that preprocess the input images, and Table 3 summarizes the results.
Noise removal with local smoothing. Local smoothing uses nearby pixels to smooth each pixel. We use three local smoothing methods from OpenCV [OpenCVb]: average blur, median blur and Gaussian blur. We observe that 1) median blur is particularly effective at removing sparsely-occurring black and white pixels while preserving the edges of the text well; 2) median blur with kernel size 3 reduces ASR*, but blurs the text so much that OCR no longer works; 3) average smoothing with kernel size 2 and Gaussian smoothing with kernel size 3 perform similarly. Although MIM has a high ASR, it appears more sensitive to these deformations than WM and WMedge.
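For reference, median smoothing can be sketched in a few lines of numpy (a slow but self-contained stand-in for OpenCV's `cv2.medianBlur`):

```python
import numpy as np

def median_blur(img, k=3):
    """Median smoothing: replace each pixel with the median of its k x k
    neighborhood. Sparse salt-and-pepper-like perturbations are removed
    while text edges are preserved better than with averaging."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```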
Salt-and-pepper noise is a common countermeasure against adversarial examples. We find that it is particularly effective at decreasing ASR* (to 0), but at the cost of driving ASR up dramatically: the salt-and-pepper noise degrades overall image quality too much in exchange for removing the adversarial perturbations.
Image compression. We show that the adversarial examples can survive lossy image compression and decompression, using standard JPEG compression with the quality parameter set to 20. The WATERMARK attack has a better chance of surviving the compression process.
Watermark removal techniques. Inpainting is a common method for removing (real) watermarks. We use the inpainting method [Telea2004] implemented in OpenCV [OpenCVa], which requires the watermark mask as a prior and tries to recover the watermark region from the surrounding pixels. While inpainting eliminated the watermark, the text region overlapped with the watermark region, so the inpainting method also removed too many useful (text) pixels, causing OCR to fail completely; the text even lost readability to human eyes.
Conclusion and Future Work
Generating adversarial examples for OCR systems differs from normal computer vision tasks. We propose a method that successfully hides perturbations from human eyes while keeping them effective against modern sequence-based OCR, by disguising the perturbations as a watermark or as printing defects. We show that even with a preliminary implementation, our perturbations are effective, transferable, and deceiving to human eyes.
There are many future directions: allowing different watermark shapes and locations, attacking longer sequences, and adding semantics-based (language model) attacks to further improve attack effectiveness. The adversarial attack also calls for better defense methods beyond traditional image transformations.
Acknowledgements. This work is supported in part by the National Natural Science Foundation of China (NSFC) Grant 61532001 and the Zhongguancun Haihua Institute for Frontier Information Technology.
- [Bengio et al.1995] Bengio, Y.; LeCun, Y.; Nohl, C.; and Burges, C. 1995. Lerec: A nn/hmm hybrid for on-line handwriting recognition. Neural Computation 7(6):1289–1303.
- [Breuel et al.2013] Breuel, T. M.; Ul-Hasan, A.; Al-Azawi, M. A.; and Shafait, F. 2013. High-performance ocr for printed english and fraktur using lstm networks. In 2013 12th International Conference on Document Analysis and Recognition, 683–687. IEEE.
- [Brown et al.2017] Brown, T. B.; Mané, D.; Roy, A.; Abadi, M.; and Gilmer, J. 2017. Adversarial patch. arXiv preprint arXiv:1712.09665.
- [Carlini and Wagner2017] Carlini, N., and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), 39–57. IEEE.
- [Chen et al.2018] Chen, P.-Y.; Sharma, Y.; Zhang, H.; Yi, J.; and Hsieh, C.-J. 2018. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In Thirty-Second AAAI Conference on Artificial Intelligence.
- [Dong et al.2018] Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9185–9193.
- [Espana-Boquera et al.2011] Espana-Boquera, S.; Castro-Bleda, M. J.; Gorbe-Moya, J.; and Zamora-Martinez, F. 2011. Improving offline handwritten text recognition with hybrid hmm/ann models. IEEE transactions on pattern analysis and machine intelligence 33(4):767–779.
- [Feinman et al.2017] Feinman, R.; Curtin, R. R.; Shintre, S.; and Gardner, A. B. 2017. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410.
- [Goodfellow, Shlens, and Szegedy2014] Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- [Google2019] Google. 2019. Tesseract.
- [Graves et al.2006] Graves, A.; Fernández, S.; Gomez, F.; and Schmidhuber, J. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, 369–376. ACM.
- [Grosse et al.2017] Grosse, K.; Manoharan, P.; Papernot, N.; Backes, M.; and McDaniel, P. 2017. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280.
- [Hanwei Zhang2019] Zhang, H.; Avrithis, Y.; Furon, T.; and Amsaleg, L. 2019. Smooth adversarial examples. arXiv preprint arXiv:1903.11862.
- [Heng, Zhou, and Jiang2018] Heng, W.; Zhou, S.; and Jiang, T. 2018. Harmonic adversarial attack method. arXiv preprint arXiv:1807.10590.
- [Hinton, Vinyals, and Dean2015] Hinton, G.; Vinyals, O.; and Dean, J. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
- [Huang et al.2017] Huang, G.; Liu, Z.; Van Der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4700–4708.
- [Karmon, Zoran, and Goldberg2018] Karmon, D.; Zoran, D.; and Goldberg, Y. 2018. Lavan: Localized and visible adversarial noise. arXiv preprint arXiv:1801.02608.
- [Kurakin, Goodfellow, and Bengio2016] Kurakin, A.; Goodfellow, I.; and Bengio, S. 2016. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
- [Liu et al.2016] Liu, Y.; Chen, X.; Liu, C.; and Song, D. 2016. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770.
- [Lu, Issaranon, and Forsyth2017] Lu, J.; Issaranon, T.; and Forsyth, D. 2017. Safetynet: Detecting and rejecting adversarial examples robustly. In Proceedings of the IEEE International Conference on Computer Vision, 446–454.
- [Madry et al.2017] Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
- [Moosavi-Dezfooli, Fawzi, and Frossard2016] Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2574–2582.
- [Nguyen, Yosinski, and Clune2015] Nguyen, A.; Yosinski, J.; and Clune, J. 2015. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, 427–436.
- [OpenCVa] OpenCV. inpainting(cv.inpaint). https://docs.opencv.org/3.0-beta/modules/photo/doc/inpainting.html.
- [OpenCVb] OpenCV. Median filter(cv.medianBlur). https://docs.opencv.org/master/d4/d13/tutorial_py_filtering.html.
- [Papernot et al.2016a] Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z. B.; and Swami, A. 2016a. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), 372–387. IEEE.
- [Papernot et al.2016b] Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; and Swami, A. 2016b. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), 582–597. IEEE.
- [Papernot et al.2017] Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z. B.; and Swami, A. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, 506–519. ACM.
- [Papernot, McDaniel, and Goodfellow2016] Papernot, N.; McDaniel, P.; and Goodfellow, I. 2016. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.
- [Sharma and Chen2017] Sharma, Y., and Chen, P.-Y. 2017. Attacking the Madry defense model with L1-based adversarial examples. arXiv preprint arXiv:1710.10733.
- [Smith2007] Smith, R. 2007. An overview of the tesseract ocr engine. In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), volume 2, 629–633. IEEE.
- [Szegedy et al.2013] Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
- [Telea2004] Telea, A. 2004. An image inpainting technique based on the fast marching method. Journal of Graphics Tools 9(1):23–34.
- [Tramèr et al.2017] Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; and McDaniel, P. 2017. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204.
- [Wang et al.2012] Wang, T.; Wu, D. J.; Coates, A.; and Ng, A. Y. 2012. End-to-end text recognition with convolutional neural networks. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 3304–3308. IEEE.
- [Xu, Evans, and Qi2017] Xu, W.; Evans, D.; and Qi, Y. 2017. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155.