The advent of Industry 4.0 calls for the digitization of every aspect of industry, including the automation of business processes, business analytics, and the phasing out of manually driven processes. Business processes have evolved to store large volumes of scanned digital copies of paper documents, yet for many such documents the stored information must still be extracted via text recognition techniques. While these images are captured via camera or scanner, artifacts such as background noise, blur and faded text tend to creep in. In some scenarios, companies insert a watermark in their documents, which poses readability issues after scanning. Text recognition engines often suffer from the low quality of scanned documents: they are unable to read the documents properly and hence fail to correctly digitize the information they contain. In this paper, we attempt to denoise documents before they are sent to a text recognition network for reading, and propose a document cleaning suite based on generative adversarial training. This suite is trained for background noise removal, deblurring, watermark removal and defading, and learns a mapping from the distribution of noisy documents to the distribution of clean documents.
Background noise removal is the process of removing background noise, such as uneven contrast, see-through effects, interfering strokes, and background spots, from documents. Background noise is a problem for OCR performance because it makes it difficult to differentiate text from background. De-blurring is the process of removing blur from an image. Blur is a distortion of the image caused by factors such as camera shake or improper camera focus; it decreases the readability of the text in the document image and hence deteriorates OCR performance. Recent works on deblurring have focused on estimating blur kernels using techniques such as GANs, CNNs, dictionary-based priors, sparsity-inducing priors and hybrid non-convex regularizers. Watermark removal aims at removing the watermark from an image while preserving the text in the image. Watermarks are low-intensity images printed on photographs and books in order to prevent copying of the material, but after scanning they hinder the reading of the text of interest in documents. Inpainting techniques are used in the literature to recover the original image after detecting watermarks statistically. Defading is the process of recovering text that has lightened or faded over time, which usually happens in old books and documents; this too is detrimental to OCR performance. To remove all these artifacts that degrade the quality of documents and hinder readability, we formulate the document cleaning process as an image-to-image translation task, at which Generative Adversarial Networks (GANs) are known to give excellent performance.
However, given the limited availability of paired data, i.e., noisy documents and their corresponding cleaned versions, we propose to train CycleGAN on unpaired datasets of noisy documents. We train CycleGAN for denoising / background noise removal, deblurring, watermark removal and defading tasks. CycleGAN eliminates the need for a one-to-one mapping between images of the source and target domains through a two-step transformation of the source image: the source image is first mapped to an image in the target domain and then back to the source domain. We evaluate the performance of our document cleaning suite on synthetic and publicly available datasets and compare it against state-of-the-art methods. We use Kaggle's document dataset for denoising / background noise removal and the BMVC document deblurring dataset, both of which are publicly available online. No document dataset exists online for watermark removal or defading; therefore, we have synthetically generated document datasets for these two tasks and have made them public for the benefit of the research community. Overall, our contributions in this paper are as follows:
We propose a Document Cleaning Suite capable of cleaning documents via denoising / background noise removal, deblurring, watermark removal and defading to improve readability.
We propose the application of CycleGAN for translating a document from a noisy document distribution (e.g., with background noise, blur, watermarks or fading) to a clean document distribution in situations where paired data is scarce.
We synthetically created document datasets for watermark removal and defading by inserting logos as watermarks and applying fading techniques, respectively, to documents from the Google News dataset.
Section 4 provides details of the datasets, training setup and evaluation metric used, and discusses experimental results and comparisons that evaluate the effectiveness and superiority of CycleGAN for cleaning noisy documents. Section 5 concludes the paper.
2 Related Work
The Generative Adversarial Network (GAN) is an idea that has taken deep learning by storm. It employs adversarial training, which essentially means pitting two neural networks against each other: a generator and a discriminator, where the former aims to produce data that is indistinguishable from real data while the latter tries to distinguish between real and fake data. The process eventually yields a generator able to perform a plethora of tasks efficiently, such as image-to-image generation. Other notable applications where GANs have established their supremacy include representation learning, image editing, art generation and music generation.
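The adversarial objective described above can be made concrete with a toy example. The sketch below uses hypothetical one-dimensional "networks" with placeholder (untrained) weights, and computes the standard discriminator loss together with the non-saturating generator loss that would be minimized alternately during training:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D "networks": the generator maps noise z to a sample,
# the discriminator maps a sample to a probability of being real.
# The weights are illustrative placeholders, not trained values.
def generator(z, w=2.0, b=1.0):
    return w * z + b

def discriminator(x, w=0.5, b=-1.0):
    return sigmoid(w * x + b)

rng = np.random.default_rng(0)
x_real = rng.normal(4.0, 1.0, size=64)   # samples from the "real" distribution
z = rng.normal(0.0, 1.0, size=64)        # generator input noise
x_fake = generator(z)

# Discriminator loss: push D(x_real) -> 1 and D(G(z)) -> 0.
d_loss = -np.mean(np.log(discriminator(x_real)) +
                  np.log(1.0 - discriminator(x_fake)))

# Generator loss (non-saturating form): push D(G(z)) -> 1.
g_loss = -np.mean(np.log(discriminator(x_fake)))

print(round(float(d_loss), 3), round(float(g_loss), 3))
```

In practice each loss is minimized by gradient descent on its own network's parameters, alternating between the two players.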
Image-to-image translation is the task of mapping images in a source domain to images in a target domain, such as converting sketches into photographs or grayscale images into color images. The aim is to generate the target distribution given the source distribution. Prior work in the field of GANs, such as Conditional GAN, conditions the image produced by the generator on an input image, which allows for targeted translations. However, such GANs require a one-to-one mapping of images between the source and target domains, i.e., a paired dataset. In the case of documents, it is not always possible to have a cleaned document corresponding to each noisy document. This persuaded us to explore unpaired image-to-image translation methods, e.g., DualGAN, which uses dual learning, and CycleGAN, which makes use of a cycle-consistency loss to achieve unpaired image-to-image translation.
In this paper, we propose to apply CycleGAN to the document cleaning task. CycleGAN has two pairs of generators and discriminators: one pair converts the source domain into the target domain while the other converts the target domain back into the source domain. This bi-directional conversion enables a cycle-consistency loss, which ensures the effective conversion of an image from source to target and then back to source again. The transitivity encouraged by the cycle-consistency loss allows CycleGAN to perform well on unpaired image-to-image translation.
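To illustrate, the cycle-consistency idea can be sketched with stand-in generators. G_AB and G_BA below are hypothetical pixel transforms (not learned networks) chosen to be mutual inverses, so the cycle loss comes out near zero:

```python
import numpy as np

# Stand-in generators: in the real model these are deep networks;
# here they are simple invertible pixel transforms for illustration.
def G_AB(x):   # noisy -> clean (hypothetical)
    return np.clip(x * 1.1 - 0.05, 0.0, 1.0)

def G_BA(x):   # clean -> noisy (hypothetical)
    return np.clip((x + 0.05) / 1.1, 0.0, 1.0)

def cycle_consistency_loss(x_a, x_b):
    """L_cyc = E[|G_BA(G_AB(a)) - a|_1] + E[|G_AB(G_BA(b)) - b|_1]"""
    forward = np.mean(np.abs(G_BA(G_AB(x_a)) - x_a))
    backward = np.mean(np.abs(G_AB(G_BA(x_b)) - x_b))
    return forward + backward

rng = np.random.default_rng(1)
x_a = rng.uniform(0.1, 0.9, size=(8, 8))  # toy "noisy" image
x_b = rng.uniform(0.1, 0.9, size=(8, 8))  # toy "clean" image
loss = cycle_consistency_loss(x_a, x_b)   # ~0: the transforms invert each other
```

When the two generators are not inverses of each other, the loss is positive, and minimizing it is what ties the two translation directions together.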
There exists a substantial body of work on the deblurring of images. For example, DeblurGAN uses conditional GANs to deblur images, while Nah et al. use a multi-scale CNN to create an end-to-end system for deblurring. Ljubenovic et al. proposed a class-adapted dictionary-based prior for the image. There also exist methods that place a sparsity-inducing prior on the blurring filter, which allows for deblurring images containing different classes of content, such as faces and text, when they co-occur in a document. A non-convex regularization method was developed by Yao et al., which leveraged non-convex sparsity constraints on image gradients and blur kernels to improve kernel estimation accuracy. Jiao et al. use a CNN to classify the image into one of several degradative sub-spaces, and the corresponding blur kernel is then used for deblurring.
Few attempts have been made in the past to remove watermarks from images. Prior work proposed using image inpainting to recover the original image; the method developed by Xu et al., for example, detects the watermark using statistical methods and subsequently removes it using image inpainting. To the best of our knowledge, there is no prior work on the defading of images.
CycleGAN has proven its worth in scenarios where paired data, i.e., an image in the source domain with a corresponding image in the target domain, is scarce. This ability to learn image-to-image translations without a one-to-one mapping between the input and target domains persuades us to use it for a document cleaning suite, since clean documents corresponding to noisy documents are always in limited supply. To circumvent the issue of learning meaningful transformations from an unpaired dataset, CycleGAN uses a cycle-consistency loss, which requires that if an image is transformed from the source distribution to the target distribution and back again, we should recover a sample from the source distribution. This loss is incorporated in CycleGAN through two generators and two discriminators, as shown in Figure 1. The first generator, G_AB, maps an image x_A in the noisy domain A to an output image G_AB(x_A) in the target clean domain B. To ensure a meaningful relation between x_A and G_AB(x_A), the networks must learn features that allow the output to be mapped back to the original noisy input domain. This reverse transformation is carried out by the second generator, G_BA, which takes G_AB(x_A) as input and converts it back into an image in the noisy domain. A similar process transforms images in the clean domain B into the noisy domain A. As is evident in Figure 1, each discriminator takes two inputs: an original image in the source domain and an image produced by a generator. The discriminator's task is to distinguish between them, so that it can defeat the generator by rejecting the images the generator produces. While competing against the discriminator so that it stops rejecting its images, the generator learns to produce images very close to the original input images.
We use the same network architecture for CycleGAN as proposed in the original paper. The generator network consists of two convolutional layers with stride 2, several residual blocks, and two fractionally-strided (transposed) convolutional layers with stride ½. The discriminator network uses PatchGANs to classify overlapping patches of images as real or fake.
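For intuition about the PatchGAN, the following sketch computes the spatial size of its real/fake score grid. The layer layout used here (4x4 kernels, three stride-2 layers, then two stride-1 layers) is an assumption, taken from the 70x70 PatchGAN configuration commonly used with pix2pix and CycleGAN:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of a conv layer: floor((size + 2p - k)/s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def patchgan_output(size):
    """Output grid size of the assumed 70x70 PatchGAN: C64-C128-C256
    with stride 2, C512 with stride 1, then a final 1-channel conv
    with stride 1; all layers use 4x4 kernels and padding 1."""
    for stride in (2, 2, 2, 1, 1):
        size = conv_out(size, stride=stride)
    return size

# A 256x256 input yields a 30x30 grid of patch-level real/fake scores.
print(patchgan_output(256))
```

Each score in the grid judges one overlapping receptive-field patch of the input, rather than the whole image at once.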
4 Experimental Results and Discussion
This section is divided into the following subsections: Section 4.1 provides details of the datasets used for the document cleaning suite. In Section 4.2, we elaborate on the training details utilized to perform our experiments. Next, we give the performance evaluation metric in Section 4.3. Subsequently, Section 4.4 discusses the results obtained from the experiments we conducted and provides comparison with the baseline model i.e., Conditional GAN .
[Table 1: PSNR (in dB) comparison of Conditional GAN and CycleGAN on the four document cleaning tasks.]
4.1 Dataset Details
We used four separate document datasets, one each for background noise removal, deblurring, watermark removal and defading. Their details are given below:
Kaggle Document Denoising Dataset: This document denoising dataset hosted by Kaggle consists of noisy documents with noise in various forms, such as coffee stains, faded sun spots, dog-eared pages, and wrinkles. We use this dataset for training and evaluating CycleGAN on removing background noise from document images. We trained CycleGAN on a training set of 144 noisy documents and tested the trained network on a test set of 72 document images.
Document Deblurring Dataset: We used an artificially blurred document dataset available online to train CycleGAN to deblur documents. This dataset was created by taking documents from the CiteSeerX repository and processing them with various geometric transformations and two types of blur, i.e., motion blur and defocus blur, to make the noise look realistic. We used only a randomly sampled subset of 2000 documents from this dataset for training CycleGAN. For evaluation, the dataset provides a test set consisting of 16 sets of 100 documents, each set blurred with one of the 16 blur kernels used to create the training data.
Watermark Removal Dataset: As no publicly available dataset of watermarked document images exists, we generated our own synthetic watermark removal dataset. To create it, we first obtained text documents from the Google News dataset and approximately 21 logos from the Internet to serve as watermarks. We then pasted the logos onto the documents, making them transparent with varying alpha-channel values. We varied the position, size and transparency of the logos to introduce randomness into the watermarked documents and make them realistic. A training set of 2000 images and a test set of 200 images from this synthetic dataset were used in our experiments.
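The watermark insertion step amounts to alpha blending. The function below is a hypothetical, minimal version of the procedure (grayscale images, fixed placement) without the random position/size/opacity jitter described above:

```python
import numpy as np

def add_watermark(doc, logo, top, left, alpha=0.3):
    """Alpha-blend a logo patch onto a document image.
    Both images are 2-D float arrays in [0, 1]; `alpha` is the
    logo opacity (higher alpha = more visible watermark)."""
    out = doc.copy()
    h, w = logo.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1 - alpha) * region + alpha * logo
    return out

# Toy example: a white page with a dark square "logo" blended in.
doc = np.ones((16, 16))      # white page
logo = np.zeros((4, 4))      # dark logo
wm = add_watermark(doc, logo, top=6, left=6, alpha=0.3)
# Inside the watermark region: 0.7 * 1.0 + 0.3 * 0.0 = 0.7
```

In the actual dataset generation, `top`, `left`, the logo size and `alpha` would be drawn at random per document.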
Document Defading Dataset: Similar to the watermark removal dataset, we artificially generated faded documents from the Google News dataset by applying various dilation operations to document images. Here again, the training and test sets consisted of 2000 and 200 document images, respectively, for training and evaluating the performance of CycleGAN on defading.
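A minimal sketch of fading via grayscale dilation, assuming dark ink on a light background so that a 3x3 maximum filter shrinks and lightens strokes (the actual dilation operations used to build the dataset may differ):

```python
import numpy as np

def fade(img, iterations=1):
    """Simulate fading by grayscale dilation: each pixel takes the
    maximum of its 3x3 neighbourhood, so dark (ink) strokes shrink
    and lighten. img: 2-D float array, white background = 1.0."""
    out = img.copy()
    for _ in range(iterations):
        padded = np.pad(out, 1, mode="edge")
        # Nine shifted views of the padded image cover the 3x3 window.
        stacked = [padded[i:i + out.shape[0], j:j + out.shape[1]]
                   for i in range(3) for j in range(3)]
        out = np.max(stacked, axis=0)
    return out

# A single dark pixel on a white page vanishes after one pass.
page = np.ones((5, 5))
page[2, 2] = 0.0
faded = fade(page)
```

Repeated iterations fade progressively thicker strokes, which gives a simple knob for varying the severity of the synthetic degradation.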
4.2 Training Details
We use the same training procedure as adopted for CycleGAN in the original paper. A least-squares loss is used to train the network, as this loss is more stable and produces better-quality images. We update the discriminators using a history of generated images rather than only those produced by the latest generator, to reduce model oscillation. We use the Adam optimizer with a learning rate of 0.0002 and momentum of 0.5 to train CycleGAN on the noisy images. The network is trained for 12, 30, 12 and 8 epochs for background noise removal, deblurring, watermark removal and defading, respectively.
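The history buffer of generated images can be sketched as a simple pool, following the image-buffer strategy used by CycleGAN; the capacity of 50 and the 0.5 sampling probability below mirror commonly used defaults and are assumptions here:

```python
import random

class ImagePool:
    """History buffer of previously generated images: with probability
    0.5 the discriminator is shown an older fake instead of the newest
    one, which damps oscillation between generator and discriminator."""
    def __init__(self, capacity=50, seed=0):
        self.capacity = capacity
        self.images = []
        self.rng = random.Random(seed)

    def query(self, image):
        if len(self.images) < self.capacity:
            self.images.append(image)   # fill the pool first
            return image
        if self.rng.random() < 0.5:
            idx = self.rng.randrange(self.capacity)
            old, self.images[idx] = self.images[idx], image
            return old                  # store the new fake, return an old one
        return image                    # otherwise pass the new fake through

pool = ImagePool(capacity=2)
first = pool.query("fake_0")            # pool not full yet: returned as-is
```

At each discriminator update, the fake batch is routed through `query` before being scored, so the discriminator sees a mixture of recent and older generator outputs.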
For Conditional GAN, we use the same kernel size, stride and zero-padding for all convolutional and deconvolutional layers of the generator network. In the discriminator network, the first three convolutional and deconvolutional layers share one kernel size, stride and zero-padding configuration, while the last two layers use a different kernel size and stride. The network is trained using the Adam optimizer, with fixed weights on the adversarial loss and the perceptual loss, respectively. The network is trained for 5 epochs on each of the document cleaning tasks, i.e., background noise removal, deblurring, watermark removal and defading.
4.3 Evaluation Metric
We evaluate the performance of CycleGAN using Peak Signal-to-Noise Ratio (PSNR) (see http://www.ni.com/white-paper/13306/en/) as the image quality metric. PSNR is defined as the ratio between the maximum possible power of a signal and the power of the distorting noise that degrades the quality of its representation, and is usually expressed in terms of the mean squared error (MSE). Given a denoised image D of size m x n and a reference image I of the same size, PSNR is given as follows:

$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\big[D(i,j) - I(i,j)\big]^2, \qquad \mathrm{PSNR} = 20\log_{10}(\mathrm{MAX}_D) - 10\log_{10}(\mathrm{MSE})$$

where $\mathrm{MAX}_D$ represents the maximum possible pixel intensity value of image D. The higher the PSNR value, the better the image quality.
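A minimal PSNR implementation following this definition, assuming 8-bit images so that the maximum intensity is 255:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB:
    PSNR = 20*log10(MAX) - 10*log10(MSE). Higher is better."""
    mse = np.mean((reference.astype(np.float64) -
                   test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

a = np.full((8, 8), 100.0)
b = np.full((8, 8), 110.0)           # constant error of 10 -> MSE = 100
# PSNR = 20*log10(255) - 10*log10(100) = 48.13 - 20 = 28.13 dB
print(round(psnr(a, b), 2))
```

Identical images give infinite PSNR, so the `mse == 0` case is handled explicitly before taking the logarithm.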
We now present the results obtained on the document datasets using CycleGAN for document cleaning. Table 1 compares Conditional GAN and CycleGAN on the denoising, deblurring, watermark removal and defading tasks. As shown in Table 1, CycleGAN beats Conditional GAN on all of these document cleaning tasks. Row 1 of Table 1 gives the mean PSNR values of images denoised using Conditional GAN and CycleGAN: CycleGAN obtains a higher PSNR value than Conditional GAN on the Kaggle Document Denoising dataset. Similarly, the PSNR value of CycleGAN is better than that of Conditional GAN on the deblurring dataset. We also plot the PSNR comparison on the deblurring test set in Figure 2, which shows the superiority of CycleGAN over Conditional GAN. Rows 3 and 4 give the PSNR values for the watermark removal and defading tasks; here again, CycleGAN yields better image quality.
In this paper, we proposed and developed a Document Cleaning Suite based on CycleGAN that performs various document cleaning tasks such as background noise removal, deblurring, watermark removal and defading. It is often difficult to obtain a clean image corresponding to each noisy image, and simulating noise to train image-to-image translators does not generalize adequately to the real world. Instead, we trained a model to learn a mapping from an input distribution of images to an output distribution while preserving the essence of the image. We used CycleGAN because it has been shown to provide good results in such domain adaptation scenarios, where paired data, i.e., noisy images and their corresponding cleaned versions, is in limited supply. We demonstrated the effectiveness of CycleGAN on publicly available and synthetic document datasets, and the results show that it can clean up a variety of noise effectively.
-  Google News dataset. EMNLP 2011 Sixth Workshop on Statistical Machine Translation (2011), http://www.statmt.org/wmt11/translation-task.html#download
-  Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: Infogan: Interpretable representation learning by information maximizing generative adversarial nets. CoRR abs/1606.03657 (2016), http://arxiv.org/abs/1606.03657
-  Farahmand, A., Sarrafzadeh, A., Shanbehzadeh, J.: Document image noises and removal methods. Proceedings of the International MultiConference of Engineers and Computer Scientists 2013 1 (2013), http://www.iaeng.org/publication/IMECS2013/IMECS2013_pp436-440.pdf
-  Frank, A.: UCI Machine Learning Repository. Irvine, CA: University of California, School of Information and Computer Science. http://archive.ics.uci.edu/ml (2010)
-  Ganbold, G.: History document image background noise and removal methods. International Journal of Knowledge Content Development and Technology 5 (2015), http://ijkcdt.net/xml/05531/05531.pdf
-  Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative Adversarial Networks. ArXiv e-prints (Jun 2014)
-  Hradiš, M., Kotera, J., Zemčík, P., Šroubek, F.: Convolutional neural networks for direct text deblurring. In: Proceedings of BMVC 2015. The British Machine Vision Association and Society for Pattern Recognition (2015), http://www.fit.vutbr.cz/research/view_pub.php?id=10922
-  Isola, P., Zhu, J., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. CoRR abs/1611.07004 (2016), http://arxiv.org/abs/1611.07004
-  Javed, S.T., Fasihi, M.M., Khan, A., Ashraf, U.: Background and punch-hole noise removal from handwritten urdu text. In: 2017 International Multi-topic Conference (INMIC). pp. 1–6 (Nov 2017). https://doi.org/10.1109/INMIC.2017.8289451
-  Jiao, J., Sun, J., Satoshi, N.: A convolutional neural network based two-stage document deblurring. In: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). vol. 01, pp. 703–707 (Nov 2017). https://doi.org/10.1109/ICDAR.2017.120
-  Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., Matas, J.: Deblurgan: Blind motion deblurring using conditional adversarial networks. CoRR abs/1711.07064 (2017), http://arxiv.org/abs/1711.07064
-  Li, H., Zhang, Y., Zhang, H., Zhu, Y., Sun, J.: Blind image deblurring based on sparse prior of dictionary pair. In: Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012). pp. 3054–3057 (Nov 2012)
-  Lin, D., Fu, K., Wang, Y., Xu, G., Sun, X.: Marta gans: Unsupervised representation learning for remote sensing image classification. IEEE Geoscience and Remote Sensing Letters 14(11), 2092–2096 (Nov 2017). https://doi.org/10.1109/LGRS.2017.2752750
-  Liu, C., Szeliski, R., Kang, S.B., Zitnick, C.L., Freeman, W.T.: Automatic estimation and removal of noise from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence (2006), http://people.csail.mit.edu/celiu/denoise/denoise_pami.pdf
-  Liu, R.W., Li, Y., Liu, Y., Duan, J., Xu, T., Liu, J.: Single-image blind deblurring with hybrid sparsity regularization. In: 2017 20th International Conference on Information Fusion (Fusion). pp. 1–8 (July 2017). https://doi.org/10.23919/ICIF.2017.8009659
-  Ljubenovic, M., Zhuang, L., Figueiredo, M.A.T.: Class-adapted blind deblurring of document images. In: 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR). vol. 01, pp. 721–726 (Nov 2017). https://doi.org/10.1109/ICDAR.2017.123
-  Mirza, M., Osindero, S.: Conditional generative adversarial nets. CoRR abs/1411.1784 (2014), http://arxiv.org/abs/1411.1784
-  Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. CoRR abs/1612.02177 (2016), http://arxiv.org/abs/1612.02177
-  Peng, Y., Qi, J., Yuan, Y.: Cm-gans: Cross-modal generative adversarial networks for common representation learning. CoRR abs/1710.05106 (2017), http://arxiv.org/abs/1710.05106
-  Qin, C., He, Z., Yao, H., Cao, F., Gao, L.: Visible watermark removal scheme based on reversible data hiding and image inpainting. Signal Processing: Image Communication 60, 160–172 (2018). https://doi.org/https://doi.org/10.1016/j.image.2017.10.003, http://www.sciencedirect.com/science/article/pii/S0923596517301868
-  Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR abs/1511.06434 (2015), http://arxiv.org/abs/1511.06434
-  Wang, L., Gao, C., Yang, L., Zhao, Y., Zuo, W., Meng, D.: Pm-gans: Discriminative representation learning for action recognition using partial-modalities. CoRR abs/1804.06248 (2018), http://arxiv.org/abs/1804.06248
-  Xu, C., Lu, Y., Zhou, Y.: An automatic visible watermark removal technique using image inpainting algorithms. In: 2017 4th International Conference on Systems and Informatics (ICSAI). pp. 1152–1157 (Nov 2017). https://doi.org/10.1109/ICSAI.2017.8248459
-  Yao, Q., Kwok, J.T.: Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity. ArXiv e-prints (Jun 2016)
-  Yi, Z., Zhang, H., Tan, P., Gong, M.: Dualgan: Unsupervised dual learning for image-to-image translation. CoRR abs/1704.02510 (2017), http://arxiv.org/abs/1704.02510
-  Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR abs/1703.10593 (2017), http://arxiv.org/abs/1703.10593