Matting-based Dual Generative Adversarial Network for Document Image Super-Resolution with Text Label Supervision

by Yubao Liu, et al.

Although many methods have been proposed for natural image super-resolution (SR) and achieve impressive performance, they perform poorly on text images because they ignore the special properties of document images. In this paper, we propose a matting-based dual generative adversarial network (mdGAN) for document image SR. First, the input image is decomposed into text boundary, foreground and background layers using deep image matting. Then two parallel branches are constructed to recover the text boundary information and the color information respectively. Furthermore, to improve the restoration accuracy of characters in the output image, we use the input image's corresponding ground-truth text label as extra supervisory information to refine the two-branch networks during training. Experiments on real text images demonstrate that our method outperforms several state-of-the-art methods quantitatively and qualitatively.





1 Introduction

Single image super-resolution (SR) aims to magnify a low-resolution (LR) image into a high-resolution (HR) one. Generally speaking, there are three kinds of methods for image SR: interpolation-based methods [1, 3], reconstruction-based methods [10, 27] and exemplar-training-based methods [6, 9]. However, the SR problem is ill-posed because a multiplicity of solutions exists for any given LR pixel. Recently, deep neural networks [4, 12, 23, 16] have improved performance significantly, both quantitatively and qualitatively. However, these networks perform worse on text images because they do not take the special properties of text images into account. Different from natural images, text images contain more high-frequency information. In this paper, we focus on text image SR.

Text image SR aims to recover an HR image with sharp text boundaries. Traditional methods [21, 13] add extra constraints while solving this ill-posed problem. Nowadays, learning-based methods have achieved great performance on low-level vision tasks. ASRS [22] restores textual images via multiple coupled dictionaries and adaptive sparse representation selection. In [5], a CNN is proposed for text image SR and achieves remarkable performance. SRCNN [4] is used as a pre-processing step to improve OCR accuracy. In [17], a language-independent text SR model is constructed using a CNN, and it takes less time at test time than most natural image SR models. Unfortunately, the text boundaries in the HR images output by these methods are not sharp enough and exhibit some blur. Besides, some characters in the HR images may suffer from deformation when the resolution of the LR image is very low.

To address these issues, we propose a matting-based dual generative adversarial network (mdGAN) for text image SR. The contributions of this paper are as follows. First, a deep matting method is used to decompose the input text image into two color images (foreground and background) and a text boundary image. Two parallel branches are then constructed to restore the two color images and the text boundary image respectively. The advantage is that processing the color and text boundary separately helps reduce blurring artifacts. Second, the LR image's corresponding text label is used as extra supervisory information to reduce character deformation in the output HR image. To the best of our knowledge, we are the first to introduce text labels to improve restoration quality for image SR.
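The text-label supervision can be sketched as an auxiliary character-classification term added to a standard reconstruction loss. The function names and the weighting factor `lam` below are illustrative assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def char_classification_loss(logits, labels):
    """Cross-entropy between recognizer logits for each character
    of the SR output and the ground-truth text label."""
    probs = softmax(logits)
    n = logits.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

def total_loss(l_pixel, logits, labels, lam=0.1):
    # lam balances reconstruction vs. character fidelity (assumed value)
    return l_pixel + lam * char_classification_loss(logits, labels)
```

When the recognizer is confident and correct, the auxiliary term is near zero and the reconstruction loss dominates; deformed characters raise the term and push the branches toward legible output.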

The rest of this paper is organized as follows. The related work is presented in Section 2, followed by the details of our method in Section 3. Section 4 presents the experimental results. The conclusion is drawn in Section 5.

2 Related work

2.1 Image matting

Image matting estimates the foreground color layer, the background color layer and the alpha matte layer of an image. Many methods [19, 2, 20] suffer from high-frequency chunky and low-frequency smearing artifacts because they rely heavily on color-dependent propagation. Instead of relying primarily on color information, Xu et al. [26] learn the natural structure present in alpha mattes: an encoder-decoder takes the image and the corresponding trimap as input and predicts the alpha matte, which a second network then refines.

2.2 Generative adversarial networks

GANs have achieved great success in many low-level vision tasks, including image denoising, image deblurring and image super-resolution [23, 11, 25]. Despite this success, the training of GANs is known to be unstable, which strongly affects performance. Many works [18, 28, 7] have been proposed to stabilize GAN training dynamics and improve sample diversity.
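One widely used stabilizer, spectral normalization (which this paper later applies to its two branches), rescales each weight matrix by an estimate of its largest singular value obtained via power iteration. A minimal numpy sketch, with an arbitrarily chosen iteration count:

```python
import numpy as np

def spectral_norm(W, n_iters=50, eps=1e-12):
    """Divide W by an estimate of its largest singular value so that
    the resulting linear map has spectral norm (approximately) 1."""
    u = np.random.RandomState(0).randn(W.shape[0])
    for _ in range(n_iters):          # power iteration on W W^T
        v = W.T @ u
        v /= (np.linalg.norm(v) + eps)
        u = W @ v
        u /= (np.linalg.norm(u) + eps)
    sigma = u @ W @ v                 # estimated top singular value
    return W / sigma
```

In practice the iteration state is carried across training steps so that a single iteration per step suffices, as in the original formulation [15].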

3 The proposed method

The architecture of our proposed matting-based dual generative adversarial network (mdGAN) is shown in Fig. 2. It mainly consists of three steps: (1) decompose the input LR text image into two color layers (foreground and background) and one text boundary layer using a deep matting method; (2) recover each layer separately using two parallel branches and compose the outputs to generate the output HR image; (3) compute a character classification loss on the HR image based on an extra text label and use it to refine the two-branch networks of step 2.

3.1 Matting-based image decomposition

To reduce blurring artifacts in text image SR, we introduce image matting into our network. Image matting assumes that an image can be decomposed into a foreground layer, a background layer and an alpha matte layer. In this paper, the alpha matte is called the text boundary layer, and the foreground and background layers are called the foreground and background color layers respectively. An example of the three layers is shown in Fig. 3. The text boundary layer contains the soft edges of the text, while the foreground and background layers only contain the color information of the text and background. Processing the color and text boundary layers separately helps reduce blurring artifacts in text image SR. We use deep image matting [26] in our matting process.
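The decomposition above follows the standard matting (compositing) equation I = αF + (1 − α)B. A minimal numpy illustration with synthetic layers:

```python
import numpy as np

def compose(alpha, fg, bg):
    """Recombine foreground, background and alpha-matte layers
    into an image via the matting equation I = aF + (1 - a)B."""
    a = alpha[..., None]  # broadcast the matte over color channels
    return a * fg + (1.0 - a) * bg

# synthetic example: black text on white paper
alpha = np.zeros((4, 4)); alpha[1:3, 1:3] = 1.0   # text boundary layer
fg = np.zeros((4, 4, 3))                          # black text color layer
bg = np.ones((4, 4, 3))                           # white background layer
img = compose(alpha, fg, bg)                      # text region comes out black
```

Matting inverts this relation: it estimates alpha, fg and bg from img, which is what the deep matting network provides here.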

3.2 Parallel SR branches process stage

As shown in Fig. 2, the text boundary layer contains the major text shape information, while the foreground and background layers are smooth but carry more color information. Therefore, we adopt two parallel SR branches to recover the three layers. For the foreground and background layers, ESRGAN [25] is used as branch-1 due to its stronger supervision of color consistency. Owing to its strong ability to restore high-frequency details, SFTGAN [24] is adopted as branch-2 to recover the text boundary layer. To highlight the boundary information of the text boundary layer, Teager filtering [14] is applied to it before it is fed to branch-2. Since image matting has already separated the input LR image into color information and boundary information, this filtering does not affect the color information of the original image, so our method avoids artifacts such as color aliasing. Besides, to stabilize the training of the GANs and improve restoration quality, we apply spectral normalization (SN) [15] to the parameters of branch-1 and branch-2. Once the three HR layers (the text boundary layer alpha_HR and the color layers F_HR and B_HR) are obtained, the output HR image I_HR can be composed via the matting equation:

    I_HR = alpha_HR · F_HR + (1 − alpha_HR) · B_HR.
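The Teager filtering step can be illustrated with the 1-D discrete Teager energy operator, psi[n] = x[n]^2 − x[n−1]·x[n+1], which responds strongly at intensity transitions; the exact 2-D variant used in [14] may differ from this row-wise sketch:

```python
import numpy as np

def teager_1d(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    Endpoint values are copied from their interior neighbors."""
    psi = np.empty_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]
    return psi

def teager_rows(img):
    # a simple 2-D extension: apply the operator along each image row
    return np.apply_along_axis(teager_1d, 1, img)
```

On a linear ramp the operator is constant, while at step edges (such as text boundaries) it spikes, which is what makes it useful for highlighting boundary information before branch-2.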


4 Experimental results

In this section, we demonstrate our text image SR performance by conducting experiments on one published text image SR dataset and one dataset constructed by ourselves. Besides, ablation study is carried out to demonstrate the effectiveness of each part of our architecture.

4.1 Qualitative evaluation

We evaluate our method against several state-of-the-art image SR methods: bicubic, SRCNN [4], SRGAN [11], EDSR [12], ESRGAN [25] and RCAN [29]. All models of these methods are trained from scratch on the same training datasets adopted for our model. For ICDAR 2015, RMSE, PSNR and SSIM results are shown in Table 1. Since this dataset is intended to improve OCR accuracy, OCR accuracy after SR is also reported in Table 1. Our method outperforms the other methods by a large margin. Fig. 1 and Fig. 4 show visual comparison results: our method produces sharper text boundaries and fewer artifacts. For super TextSR 2018, the quantitative and qualitative comparisons are shown in Table 2 and Fig. 5.
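For reference, the RMSE and PSNR columns of Table 1 follow the standard definitions (assuming 8-bit images with peak value 255):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: 20 * log10(peak / RMSE)."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```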

4.2 Ablation study

To evaluate the effectiveness of each part of our architecture, Table 3 reports quantitative results for the baseline, with image matting only, with the character classification loss only, and with both. The quantitative results keep increasing as the matting and the character classification loss are added, which demonstrates the effectiveness of each part.

Method    RMSE    PSNR (dB)   SSIM    OCR accuracy (%)
Bicubic   19.04   23.50       0.879   60.64
SRCNN     7.24    33.19       0.981   76.10
SRGAN     7.10    33.51       0.987   76.80
EDSR      7.02    33.60       0.990   77.13
ESRGAN    6.92    33.68       0.992   77.59
RCAN      6.86    33.71       0.994   77.64
Ours      6.80    33.90       0.996   77.78

Table 1: Results of different methods on the ICDAR 2015 TextSR dataset.

5 Conclusion

In this paper, we propose mdGAN for text image SR. To reduce blurring artifacts in the output HR image, the input LR image is first decomposed into color and text boundary layers by deep image matting. These layers are then super-resolved separately by two parallel branches, whose outputs are composed to generate the output HR image. To reduce character deformation in the output HR image, a character classification loss and a deep similarity loss are used to refine the two branches. Besides, spectral normalization is adopted in both branches to improve restoration quality. Extensive experiments demonstrate that our method outperforms several state-of-the-art methods quantitatively and qualitatively.


  • [1] G. Anbarjafari and H. Demirel. Image super resolution based on interpolation of wavelet domain high frequency subbands and the spatial domain input image. ETRI journal, 32(3):390–394, 2010.
  • [2] D. Cho, Y.-W. Tai, and I. Kweon. Natural image matting using deep convolutional neural networks. In European Conference on Computer Vision, pages 626–643. Springer, 2016.
  • [3] S. Dai, M. Han, W. Xu, Y. Wu, Y. Gong, and A. K. Katsaggelos. Softcuts: a soft edge smoothness prior for color image super-resolution. IEEE Transactions on Image Processing, 18(5):969–981, 2009.
  • [4] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In European conference on computer vision, pages 184–199. Springer, 2014.
  • [5] C. Dong, X. Zhu, Y. Deng, C. L. Chen, and Y. Qiao. Boosting optical character recognition: A super-resolution approach. Computer Science, 2015.
  • [6] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution. Computer Graphics & Applications IEEE, 22(2):56–65, 2002.
  • [7] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In International Conference on Learning Representations, 2017.
  • [8] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [9] K. I. Kim and Y. Kwon. Example-based learning for single-image super-resolution. Proc Dagm, 5096:456–465, 2008.
  • [10] K. I. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image prior. IEEE transactions on pattern analysis & machine intelligence, (6):1127–1133, 2010.
  • [11] C. Ledig, Z. Wang, W. Shi, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, and A. Tejani. Photo-realistic single image super-resolution using a generative adversarial network. In Computer Vision and Pattern Recognition, pages 105–114, 2017.
  • [12] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee. Enhanced deep residual networks for single image super-resolution. In The IEEE conference on computer vision and pattern recognition (CVPR) workshops, volume 1, page 4, 2017.
  • [13] D. Ma and G. Agam. A super resolution framework for low resolution document image ocr. In Document Recognition and Retrieval XX, volume 8658, page 86580P. International Society for Optics and Photonics, 2013.
  • [14] C. Mancas-Thillou and M. Mirmehdi. Super-resolution text using the teager filter. Cmu Warp Machine Aug, pages 10–16, 2005.
  • [15] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
  • [16] J. Pan, S. Liu, D. Sun, J. Zhang, Y. Liu, J. Ren, Z. Li, J. Tang, H. Lu, Y.-W. Tai, et al. Learning dual convolutional neural networks for low-level vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3070–3079, 2018.
  • [17] R. K. Pandey and A. G. Ramakrishnan. Language independent single document image super-resolution using cnn for improved recognition. arXiv preprint arXiv:1701.08835, 2017.
  • [18] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. Computer Science, 2015.
  • [19] E. Shahrian, D. Rajan, B. Price, and S. Cohen. Improving image matting using comprehensive sampling sets. In Computer Vision and Pattern Recognition, pages 636–643, 2013.
  • [20] X. Shen, X. Tao, H. Gao, C. Zhou, and J. Jia. Deep automatic portrait matting. In European Conference on Computer Vision, pages 92–107. Springer, 2016.
  • [21] R. Walha, F. Drira, F. Lebourgeois, and A. M. Alimi. Super-resolution of single text image by sparse representation. In The Workshop on Document Analysis & Recognition, pages 22–29, 2012.
  • [22] R. Walha, F. Drira, F. Lebourgeois, C. Garcia, and A. M. Alimi. Resolution enhancement of textual images via multiple coupled dictionaries and adaptive sparse representation selection. International Journal on Document Analysis and Recognition (IJDAR), 18(1):87–107, 2015.
  • [23] X. Wang, K. Yu, C. Dong, and C. C. Loy. Recovering realistic texture in image super-resolution by deep spatial feature transform. arXiv preprint arXiv:1804.02815, 2018.
  • [24] X. Wang, K. Yu, C. Dong, and C. C. Loy. Recovering realistic texture in image super-resolution by deep spatial feature transform. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [25] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. C. Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In The European Conference on Computer Vision Workshops (ECCVW), September 2018.
  • [26] N. Xu, B. Price, S. Cohen, and T. Huang. Deep image matting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2970–2979, 2017.
  • [27] J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. IEEE transactions on image processing, 19(11):2861–2873, 2010.
  • [28] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5907–5915, 2017.
  • [29] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 286–301, 2018.