Document image binarization is a fundamental problem in the field of document analysis. Although binarization seems quite easy for images with a uniform intensity distribution, it can be challenging in real-world scenarios where document images suffer from various degradations due to aging effects, inadequate maintenance, ink stains, faded ink, bleed-through, wrinkles, warping, and non-uniform variation of intensity and lighting conditions during document scanning.
Early works on binarization used various thresholding techniques, which involve finding a single (or multiple) appropriate threshold(s) for classifying pixels in the image as belonging to the foreground or the background. Recently, a few deep learning frameworks [5, 6] have also been applied to the binarization of document images. The objective here is not to predict a threshold but to directly output a binary mask segregating the foreground text from the background noise. These deep learning based models require a considerable amount of paired training data, and the publicly available binarization datasets are not sufficient to learn the various possible noise distributions (e.g., artifacts, stains, ink spills) that may occur in real-life situations.
In this paper, we intend to increase the utility of the available limited datasets by proposing a novel adversarial learning technique. The basic idea is to generate a new set of augmented images of high perceptual quality that combine the semantic content of a clean binary image with the noisy appearance of a degraded document image. Thus, the low-level distribution of visual features in an image is modified while the semantic content is maintained. In this way, we can generate multiple degraded versions of the same textual content with various noisy textures. For this purpose, we propose a Texture Augmentation Network (TANet) that superimposes the noisy appearance of the degraded document on the clean binary image. The output image is then passed through a Binarization Network called BiNet to recover the clean version of the document image. Both networks are jointly trained in an adversarial manner: TANet tries to generate harder adversarial training samples with many variations, while BiNet tries to learn from these hard augmentations for better performance. By jointly training the two networks, we enhance the adversarial robustness of our binarization model. A further advantage of this technique is that the system can learn from unpaired images. This is very useful for ancient historical documents, for which it is difficult to obtain the corresponding binary images. If the system supports an unpaired setting, the data collection process becomes much easier: a large number of unpaired images can be obtained with little effort by collecting document images and clean images independently from different sources. However, no previous work on document binarization has tried to utilize unpaired datasets. The system proposed in this paper can be trained with unpaired data.
The proposed framework is summarized in Figure 1. The main contributions of our study are as follows: (1) To the best of our knowledge, our work is the first attempt to use a generative adversarial model for the document binarization problem. (2) We propose a texture augmentation network that augments image datasets by generating adversarial examples online. (3) We employ an adversarial learning technique driven by a general GAN objective, where the GAN loss plays a complementary role in training TANet and BiNet jointly. It is also noteworthy that the method is able to learn from unpaired datasets.
In this section, we present the details of our proposed binarization model. The model consists of two networks: a Texture Augmentation Network (TANet) and a Binarization Network (BiNet). Given a clean document image, TANet tries to obtain a noisy version of that image by transferring the noisy texture of a reference document image comprising various degradations. BiNet, in turn, tries to binarize the newly generated noisy image.
Texture Augmentation Network. To combine the semantic content of a clean document image and the noisy texture of a degraded document image, the first step is to separate the content and texture representations explicitly. For this purpose, we employ a content encoder and a style encoder. Given a clean image $I_c$ and a noisy reference image $I_n$, the encoders learn to extract latent representations $z_c$ and $z_t$, respectively, by leveraging the conditional dependence of the content and texture images. Both encoders have the same configuration, with eight convolutional blocks.
After extracting the content and texture representations, we concatenate them to obtain a mixed representation. This is passed through a decoder network that maps the combined representation to an output image that has the same textual content as the clean image and the same texture as the noisy input. The decoder architecture is symmetrical to the encoders, with a series of deconvolution-BatchNorm-LeakyReLU up-sampling blocks and a tanh activation for the final output. The output and the clean image differ in appearance, but they share the same textual content. However, due to the down-sampling in the content encoder, only part of the input is retained, resulting in a significant information loss in each layer that cannot be recovered when generating the output image. To deal with this, we adopt skip-connections between the layers of the content encoder and the decoder: we concatenate the feature map of each down-sampling block in the content encoder with the corresponding feature map of the up-sampling block in the decoder. We represent TANet as
$$\hat{I}_n = \mathcal{T}(I_c, I_n; \theta_T),$$
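The mixing step above can be sketched as a toy NumPy example. The shapes, the number of blocks, and the slicing stand-in for strided convolutions are our own illustrative assumptions, not the paper's architecture; the point is only how the two latents are concatenated channel-wise while the content encoder's intermediate feature maps are retained for skip connections.

```python
import numpy as np

def encode(x, n_blocks=3):
    """Stand-in encoder: halve the spatial size per block and keep each
    intermediate feature map for later skip connections."""
    feats = []
    h = x
    for _ in range(n_blocks):
        h = h[:, ::2, ::2]        # placeholder for a strided convolution
        feats.append(h)
    return h, feats

content = np.random.rand(8, 32, 32)   # C x H x W clean-image features
texture = np.random.rand(8, 32, 32)   # C x H x W noisy-reference features

z_c, skips = encode(content)          # skips feed the decoder's up-sampling blocks
z_t, _ = encode(texture)

# Channel-wise concatenation of the content and texture latents.
z = np.concatenate([z_c, z_t], axis=0)
print(z.shape)   # (16, 4, 4): doubled channels, spatially downsampled 2^3 times
```

In the real network the decoder would consume `z` and, at each up-sampling stage, concatenate the matching entry of `skips` before deconvolving.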
where $\hat{I}_n$ is the generated output and $\theta_T$ denotes the parameters of TANet. The image generated by TANet should satisfy the following constraints: (1) it should look real and be indistinguishable from real-world noisy, degraded document images; (2) it should have a similar texture appearance to the degraded reference document image $I_n$; (3) it should have the same textual content as the clean document image $I_c$. To incorporate the above constraints into the training process of TANet, we adopt the following loss functions.
Adversarial loss. We use an adversarial objective to constrain the output to look similar to the reference document image. Assuming $I_n$ is sampled from a data distribution $p_n$ and $I_c$ is sampled from a distribution $p_c$, the loss is defined as
$$\mathcal{L}_{adv} = \mathbb{E}_{I_n \sim p_n}[\log D_T(I_n)] + \mathbb{E}_{I_c \sim p_c,\, I_n \sim p_n}[\log(1 - D_T(\mathcal{T}(I_c, I_n)))],$$
where the discriminator $D_T$ tries to distinguish the output image from the degraded reference image.
Style loss. The adversarial loss captures the overall structure, but it is sometimes not enough to capture the fine details of the texture. We therefore use an additional style loss to ensure the successful transfer of texture from the reference image to the clean document image. Following [7, 8], we use the technique of matching Gram matrices, which captures the correlations between the feature responses extracted from certain layers of a pre-trained VGG-19 network. Mathematically, the Gram matrix $G^l_{ij}$ is the inner product between the vectorised feature maps $F^l_i$ and $F^l_j$ in layer $l$:
$$G^l_{ij} = \sum_k F^l_{ik} F^l_{jk},$$
where $N_l$ is the number of feature maps in layer $l$ and $F^l_{ik}$ is the activation of the $i$-th filter at position $k$ in layer $l$. We use five layers of the VGG-19 network ("conv1_1", "conv2_1", "conv3_1", "conv4_1", "conv5_1") to define our style loss $\mathcal{L}_{style}$.
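The Gram-matrix matching can be sketched as below. The array shapes are illustrative assumptions, and the per-layer normalisation constant follows the convention of Gatys et al. [7, 8]; in the actual model the features would come from the listed VGG-19 layers rather than random arrays.

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of an (N, H, W) stack of feature maps for one layer:
    G[i, j] = <F_i, F_j> over the vectorised spatial positions."""
    n, h, w = feats.shape
    f = feats.reshape(n, h * w)          # vectorise each feature map
    return f @ f.T

def layer_style_loss(feats_gen, feats_ref):
    """Squared Frobenius distance between Gram matrices, normalised as in
    Gatys et al. by (2 * N * H * W)^2."""
    n, h, w = feats_gen.shape
    g1, g2 = gram_matrix(feats_gen), gram_matrix(feats_ref)
    return np.sum((g1 - g2) ** 2) / (4.0 * n**2 * (h * w)**2)

f = np.random.rand(4, 8, 8)
print(layer_style_loss(f, f))   # 0.0 for identical feature stacks
```

The full style loss would sum `layer_style_loss` over the five chosen VGG-19 layers.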
Content loss. The generated images are required to have the same textual content as the clean document image. To incorporate this into our training process, we define a masked mean squared error that penalizes the differences between the pixels of the content image and the output image in the text region only:
$$\mathcal{L}_{content} = \frac{1}{N} \sum_{p} M(p)\,\big(I_c(p) - \hat{I}_n(p)\big)^2,$$
where $M$ is a binary mask with value 1 in the text region and 0 in the background, and $N$ is the number of text pixels. Thus, the total objective function to train TANet can be written as
$$\mathcal{L}_{TANet} = \mathcal{L}_{adv} + \lambda_s \mathcal{L}_{style} + \lambda_c \mathcal{L}_{content},$$
where $\lambda_s$ and $\lambda_c$ are weights that balance the multiple objectives.
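A minimal sketch of the masked content loss, assuming the images and the mask are NumPy arrays of the same shape (shapes and pixel values here are illustrative): because the mask zeroes out the background, the generator is penalised only where text pixels disagree and remains free to synthesise background texture.

```python
import numpy as np

def content_loss(clean, generated, mask):
    """Masked MSE: average squared error over text pixels (mask == 1) only."""
    diff = mask * (clean - generated) ** 2
    return diff.sum() / max(mask.sum(), 1)   # guard against an all-zero mask

clean = np.ones((4, 4))
gen = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1        # a 2x2 "text" region; those 4 pixels differ by 1

print(content_loss(clean, gen, mask))   # 1.0: averaged over the text region only
```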
Binarization Network. Having generated the noisy version of a clean document image, our system tries to recover the clean binarized image from the generated one through another network called BiNet. BiNet employs an image-to-image translation framework consisting of a generator and a discriminator. The objective is to train a generator network $G_B$ that takes the output of TANet and produces a binarized version of that image:
$$\hat{I}_b = G_B(\hat{I}_n; \theta_B),$$
where $\theta_B$ denotes the parameters of $G_B$. A discriminator network $D_B$ is used to determine how good the generator is at generating binarized images. We use network architectures for the generator and the discriminator similar to those in [20]. During training, the two networks compete against each other in a min-max game. The training objective can be defined as
$$\min_{G_B}\max_{D_B}\ \mathbb{E}_{I_b}[\log D_B(I_b)] + \mathbb{E}_{\hat{I}_n}[\log(1 - D_B(G_B(\hat{I}_n)))],$$
where $I_b$ denotes a ground-truth binary image.
Note that the training in this case follows the "paired" setting: for each input image to the network $G_B$, there is a corresponding ground-truth image. Thus, we can employ full supervision on the predicted binarization results by leveraging an $L_2$ pixel loss along with the adversarial loss.
The adversarial loss helps to obtain a sharper output image by de-noising the noisy input, whereas the $L_2$ loss helps to preserve the content. The final objective of BiNet is
$$\mathcal{L}_{BiNet} = \mathcal{L}_{adv}^{B} + \lambda\, \mathcal{L}_{L_2},$$
where $\lambda$ is a weight parameter. In the next section, we provide some salient details of the training process and discuss the appropriate weights.
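The BiNet objective can be sketched as follows. The discriminator outputs and the weight value `lam` are placeholder assumptions (not the paper's exact configuration), and the adversarial term is written in its standard binary cross-entropy form.

```python
import numpy as np

def binet_loss(d_real, d_fake, pred, gt, lam=100.0):
    """Combined BiNet objective (generator side, sketch):
    adversarial cross-entropy term plus a weighted L2 pixel loss.
    d_real / d_fake are discriminator probabilities in (0, 1)."""
    eps = 1e-8  # numerical safety for the logarithms
    adv = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    l2 = np.mean((pred - gt) ** 2)
    return adv + lam * l2

pred = np.array([0.0, 1.0, 1.0, 0.0])   # predicted binary mask (perfect here)
gt = np.array([0.0, 1.0, 1.0, 0.0])     # ground-truth binary mask
loss = binet_loss(np.array([0.9]), np.array([0.1]), pred, gt)
print(loss)   # only the adversarial term remains, since the L2 term is zero
```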
In this section, we discuss the datasets, training details, baseline methods, and experimental results for the evaluation of our proposed binarization model.
Datasets. For training and evaluating our model, we use several publicly available document datasets. A total of 9 datasets are used in this work: DIBCO 2009 [10], DIBCO 2011 [11], DIBCO 2013 [12], H-DIBCO 2010 [13], H-DIBCO 2012 [14], H-DIBCO 2014 [15], Bickley diary [16], PHIDB [17], and S-MS [18]. Of these, the DIBCO 2013 dataset is held out for testing, and the remaining datasets are used as the training set. First, we convert the images from these datasets into fixed-size patches. To increase the number of patches, we augment the training patches by rotating them by 90°, 180°, or 270°. A small part (10%) of the obtained image patches is used as an evaluation set, and the rest are used to train the model. The training set contains two sets of image patches: degraded documents and their binarized ground truths. Clean images are sampled from the binarized set, and reference images are sampled from the degraded document set in an unpaired manner.
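The rotation augmentation described above can be sketched as follows; patch sizes are illustrative assumptions.

```python
import numpy as np

def augment(patch):
    """Return the patch together with its 90°, 180°, and 270° rotations."""
    return [np.rot90(patch, k) for k in range(4)]   # k quarter-turns

patches = [np.random.rand(16, 16) for _ in range(10)]
augmented = [p for patch in patches for p in augment(patch)]
print(len(augmented))   # 40: each patch yields four orientations
```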
To train the model, we follow a stage-wise training protocol. First, TANet is trained for 10 epochs, after which it is able to generate noisy versions of the clean images. In the next stage, BiNet is trained on the generated noisy images for another 10 epochs. Finally, TANet and BiNet are fine-tuned together for around 30 epochs. We note that during the joint training, TANet tends to generate more challenging adversarial samples that are relatively hard for BiNet to binarize. This training strategy forces the model to learn various types of degradations, including noise and artifacts. Figure 2 illustrates the texture transfer process qualitatively. At test time, BiNet is used to obtain the binarized output of a given document image. Experiments are conducted on a server with 12 GB memory and a single Nvidia Tesla K80 GPU. The model is implemented using the TensorFlow library. The Adam optimizer with a learning rate of 0.0001 is used to train the model. We take $\lambda_s = 0.5$, $\lambda_c = 10$, and $\lambda = 100$ throughout the experiments. We use the following metrics to quantitatively compare the performance of our proposed model with state-of-the-art algorithms and some baselines: F-measure, pseudo-F-measure ($F_{ps}$), distance reciprocal distortion (DRD), and peak signal-to-noise ratio (PSNR).
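For illustration, two of these metrics can be computed as below for binary masks (treating 1 as text and 0 as background, an assumption of this sketch); DRD and pseudo-F-measure require distance and weight maps and are omitted.

```python
import numpy as np

def f_measure(pred, gt):
    """Harmonic mean of precision and recall over text pixels (value 1)."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def psnr(pred, gt, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(peak**2 / mse)

gt = np.array([[1, 1], [0, 0]])
pred = np.array([[1, 0], [0, 0]])      # one text pixel missed
print(f_measure(pred, gt))             # precision 1.0, recall 0.5 -> 2/3
```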
Baselines: We define the following baselines.
U-Net: A simple encoder-decoder network with skip-connections [19]. Its architecture is the same as the generator unit of our BiNet. Note that this network is trained in a paired setting: for each input image, there is a corresponding ground truth. An $L_2$ pixel loss is used to train the complete model.
Pix2pix: An image-to-image translation framework inspired by [20]. The network resembles the BiNet part of our system and is trained using an adversarial loss and an $L_2$ loss on paired data.
CycleGAN: A baseline based on the cycle-consistent image translation framework of [21]. The network utilizes unpaired data to train the model.
From the quantitative results, we can see that our proposed method delivers the best results on all four evaluation metrics. We also obtain a low DRD score, which implies that our method is superior with respect to visual distortion. U-Net and Pix2pix perform moderately well, but CycleGAN obtains poor results compared to the others. Like CycleGAN, our method utilizes unpaired data, but the main binarization network (BiNet) of our model learns from paired samples that are created internally in our system. Thus, we can impose full supervision on the BiNet part, which helps to generate high-quality results. In the CycleGAN method, by contrast, there is no scope to impose such full supervision.
In this paper, we revisited the problem of document binarization by introducing a new adversarial learning technique that increases the utility of the available limited datasets. Noisy data augmentation is an integral part of our network and forces the model to learn robust representations of various types of document degradations from unpaired data. The experimental results also show that our method is superior to existing state-of-the-art frameworks.
-  N. Otsu, “A threshold selection method from gray-level histograms,” IEEE transactions on systems, man, and cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
-  J. Sauvola and M. Pietikäinen, “Adaptive document image binarization,” Pattern recognition, vol. 33, no. 2, pp. 225–236, 2000.
-  N. Phansalkar, S. More, A. Sabale, and M. Joshi, “Adaptive local thresholding for detection of nuclei in diversity stained cytology images,” in ICCSP. IEEE, 2011, pp. 218–220.
-  B. Gatos, I. Pratikakis, and S.J. Perantonis, “Improved document image binarization by using a combination of multiple binarization techniques and adapted edge information,” in ICPR. IEEE, 2008, pp. 1–4.
-  Q.N. Vo, S.H. Kim, H.J. Yang, and G. Lee, “Binarization of degraded document images based on hierarchical deep supervised network,” Pattern Recognition, vol. 74, pp. 568–586, 2018.
-  C. Tensmeyer and T. Martinez, “Document image binarization with fully convolutional neural networks,” in ICDAR. IEEE, 2017, vol. 1, pp. 99–104.
-  L. Gatys, A.S. Ecker, and M. Bethge, “Texture synthesis using convolutional neural networks,” in NIPS, 2015, pp. 262–270.
-  L.A. Gatys, A.S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in CVPR, 2016, pp. 2414–2423.
-  A. Konwer, A.K. Bhunia, A. Bhowmick, A.K. Bhunia, P. Banerjee, P.P. Roy, and U. Pal, “Staff line removal using generative adversarial networks,” arXiv preprint arXiv:1801.07141, 2018.
-  B. Gatos, K. Ntirogiannis, and I. Pratikakis, “ICDAR 2009 document image binarization contest (DIBCO 2009),” in ICDAR. IEEE, 2009, pp. 1375–1382.
-  I. Pratikakis, B. Gatos, and K. Ntirogiannis, “ICDAR 2011 document image binarization contest (DIBCO 2011),” in ICDAR. IEEE, 2011, pp. 1506–1510.
-  I. Pratikakis, B. Gatos, and K. Ntirogiannis, “ICDAR 2013 document image binarization contest (DIBCO 2013),” in ICDAR. IEEE, 2013, pp. 1471–1476.
-  I. Pratikakis, B. Gatos, and K. Ntirogiannis, “H-DIBCO 2010 handwritten document image binarization competition,” in ICFHR. IEEE, 2010, pp. 727–732.
-  I. Pratikakis, B. Gatos, and K. Ntirogiannis, “ICFHR 2012 competition on handwritten document image binarization (H-DIBCO 2012),” in ICFHR. IEEE, 2012, pp. 817–822.
-  K. Ntirogiannis, B. Gatos, and I. Pratikakis, “ICFHR 2014 competition on handwritten document image binarization (H-DIBCO 2014),” in ICFHR. IEEE, 2014, pp. 809–813.
-  F. Deng, Z. Wu, Z. Lu, and M.S. Brown, “Binarizationshop: a user-assisted software suite for converting old documents to black-and-white,” in Proceedings of the 10th annual joint conference on Digital libraries. ACM, 2010, pp. 255–258.
-  H.Z. Nafchi, S.M. Ayatollahi, R.F. Moghaddam, and M. Cheriet, “An efficient ground truthing tool for binarization of historical manuscripts,” in ICDAR. IEEE, 2013, pp. 807–811.
-  R. Hedjam, H.Z. Nafchi, R.F. Moghaddam, M. Kalacska, and M. Cheriet, “ICDAR 2015 contest on multispectral text extraction (MS-TEx 2015),” in ICDAR. IEEE, 2015, pp. 1181–1185.
-  O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241.
-  P. Isola, J. Zhu, T. Zhou, and A.A. Efros, “Image-to-image translation with conditional adversarial networks,” arXiv preprint, 2017.
-  J. Zhu, T. Park, P. Isola, and A.A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” arXiv preprint, 2017.
-  J. Bernsen, “Dynamic thresholding of gray-level images,” in Proc. Eighth Int’l conf. Pattern Recognition, Paris, 1986, 1986.
-  W. Niblack, An introduction to digital image processing, vol. 34, Prentice-Hall Englewood Cliffs, 1986.
-  B. Gatos, I. Pratikakis, and S.J. Perantonis, “An adaptive binarization technique for low quality historical documents,” in International Workshop on Document Analysis Systems. Springer, 2004, pp. 102–113.
-  B. Su, S. Lu, and C.L. Tan, “Robust document image binarization technique for degraded document images,” IEEE transactions on image processing, vol. 22, no. 4, pp. 1408–1417, 2013.
-  N.R. Howe, “Document binarization with automatic parameter tuning,” International Journal on Document Analysis and Recognition (IJDAR), vol. 16, no. 3, pp. 247–258, 2013.