1 Prior: pixel-distribution features
In low-level vision problems, pixel-level features are the most important. We compare the histograms of images at various noise levels to investigate whether these pixel-level features exhibit a certain consistency. As Fig. 1 and Fig. 2 show, the pixel distributions of noisy images are more similar to one another at higher noise levels than at lower ones.
WIN infers noise-free images based on the learned pixel-distribution features. The higher the noise level, the more similar the pixel-distribution features become, so WIN can learn more pixel-distribution features from noisy images with higher-level noise. This is why WIN performs even better at higher noise levels, as verified in Section 3.
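As a minimal sketch of this observation (not part of the original experiments), the NumPy snippet below adds Gaussian noise at several levels to two very different synthetic images and compares their gray-level histograms via histogram intersection; the images, noise levels, and similarity measure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_histogram(img, sigma, bins=256):
    """Add zero-mean Gaussian noise of std sigma, clip to [0, 255],
    and return the normalized gray-level histogram."""
    noisy = np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)
    hist, _ = np.histogram(noisy, bins=bins, range=(0, 255))
    return hist / hist.sum()

def intersection(h1, h2):
    """Histogram intersection in [0, 1]; larger means more similar distributions."""
    return np.minimum(h1, h2).sum()

# Two synthetic "clean" images with very different pixel distributions.
img_a = np.tile(np.linspace(0.0, 255.0, 128), (128, 1))      # smooth gradient
img_b = 255.0 * ((np.indices((128, 128)).sum(0) // 16) % 2)  # checkerboard

for sigma in (10, 30, 50, 70):
    sim = intersection(noisy_histogram(img_a, sigma), noisy_histogram(img_b, sigma))
    print(f"sigma={sigma:2d}: histogram intersection = {sim:.3f}")
```

Under this setup the intersection tends to grow with sigma, consistent with the trend in Fig. 1 and Fig. 2.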
2 Wider Convolution Inference Strategy
In Fig. 3, we illustrate the architectures of WIN5, WIN5-R, and WIN5-RB.
The three proposed models share the same basic structure: L layers, with the same filter configuration in all convolution layers except the last. (a) Conv+BN+ReLU: for layers 1 to L-1, BN ioffe2015batch is added between Conv and ReLU nair2010rectified. (b) Conv+BN: in the last layer, a single filter is used to reconstruct the residual image. In addition, a shortcut connecting the input (data layer) with the output (last layer) is added to merge the input data with this residual as the final recovered image.
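The exact depth, channel count, and kernel size are given in Fig. 3; the PyTorch sketch below is only a hypothetical instantiation of the pattern just described (Conv+BN+ReLU blocks, a final Conv+BN, and an input-to-output skip), with illustrative hyperparameters rather than the authors' exact settings.

```python
import torch
import torch.nn as nn

class WINLike(nn.Module):
    """Wide-inference-style denoiser: Conv+BN+ReLU blocks, a final Conv+BN,
    and an input-to-output skip connection (residual learning).
    depth/channels/kernel below are illustrative assumptions."""

    def __init__(self, depth=5, channels=128, kernel=7):
        super().__init__()
        pad = kernel // 2
        layers = [nn.Conv2d(1, channels, kernel, padding=pad),
                  nn.BatchNorm2d(channels),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, kernel, padding=pad),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        # Last layer: Conv+BN only (no ReLU), reconstructing a one-channel map.
        layers += [nn.Conv2d(channels, 1, kernel, padding=pad),
                   nn.BatchNorm2d(1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # The skip connection merges the input with the learned residual.
        return x + self.body(x)

y = WINLike()(torch.randn(1, 1, 64, 64))  # e.g., a noisy grayscale patch
```

Following the naming in Fig. 3, WIN5-R would correspond to this sketch without the BN layers, and WIN5 to additionally dropping the skip connection.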
2.2 A Knowledge Base with Batch Normalization
In this work, we employ Batch Normalization (BN) to extract pixel-distribution statistics and to reserve the training data's means and variances in the network for denoising inference, rather than for its usual regularizing effect of improving the generalization of a learned model.
As a regularizer, BN keeps the data distribution the same as the input's: a Gaussian distribution. This distribution consistency between the input and the regularizer ensures that more pixel-distribution statistics can be extracted accurately. Integrating BN ioffe2015batch into more filters further preserves the prior information of the training set. Indeed, a number of state-of-the-art studies elad2006image ; joshi2009image ; xu2015patch have adopted image priors (e.g., distribution statistics) to achieve impressive performance.
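To make the reserved means and variances concrete: in standard BN implementations such as PyTorch's, each BN layer accumulates per-channel running statistics during training and reuses them at inference, which is exactly the kind of stored prior described above. A minimal illustration (shapes are arbitrary):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=128)  # one mean/variance pair per channel

# Training mode: batch statistics are used and the running estimates updated.
bn.train()
_ = bn(torch.randn(8, 128, 32, 32))

# The "knowledge base": per-channel statistics accumulated from training data.
print(bn.running_mean.shape, bn.running_var.shape)  # torch.Size([128]) each

# Inference mode: the stored statistics, not the test batch's, normalize input.
bn.eval()
_ = bn(torch.randn(1, 128, 32, 32))
```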
Can Batch Normalization work without a skip connection? In WINs, BN ioffe2015batch cannot work without the input-to-output skip connection and always overfits. In WIN5-RB's training, BN keeps the distribution of the input data consistent, and the skip connection not only introduces residual learning but also guides the network to extract the features the inputs have in common: the pixel distribution. Without the input data as a reference, BN can have negative effects by forcing every input to the same distribution, especially when the task is to output a pixel-level feature map. In DnCNN, the two BN layers of the first and last layers are removed, which reduces BN's negative effects to a certain degree. Meanwhile, DnCNN also emphasizes that a network's generalization ability relies largely on its depth.
As shown in Fig. 4, the learned priors (means and variances) are preserved in WINs as a knowledge base for denoising inference. When WIN has more channels to preserve more data means and variances, various combinations of these feature maps can cooperate with residual learning to infer noise-free images more accurately.
3 Experimental Results
Table 1: Average PSNR (dB) / SSIM on BSD200-test for BM3D dabov2009bm3d, RED-Net mao2016image, DnCNN zhang2016beyond, WIN5, WIN5-R, WIN5-RB-S, and WIN5-RB-B.
3.1 Quantitative Results
The quantitative results on the BSD200 test set are shown in Table 1 for several noise levels. Moreover, we compare the average PSNR of WIN5-RB-B, DnCNN, and BM3D at different noise levels on BSD200-test. As Fig. 5 shows, WIN5-RB-B (blind denoising), trained over a range of noise levels, outperforms BM3D dabov2009bm3d and DnCNN zhang2016beyond at all noise levels and is significantly more stable even at higher noise levels.
In addition, Fig. 5 shows that as the noise level increases, the performance gain of WIN5-RB-B grows, while the gain of DnCNN over BM3D changes little across noise levels. Compared with WINs, DnCNN is composed of even more layers embedded with BN. This observation indicates that the performance gain achieved by WIN5-RB comes mostly not from BN's regularization effect but from the learned pixel-distribution features and the relevant priors, such as the means and variances reserved in WINs. Both larger kernels and more channels make CNNs more likely to learn pixel-distribution features.
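For completeness, the reported PSNR is the standard 10 log10(peak^2 / MSE) in dB; a small evaluation sketch (assuming 8-bit grayscale arrays and, as one possible choice, scikit-image for SSIM) might look as follows.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(clean, denoised, peak=255.0):
    """PSNR in dB between two 8-bit images: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage with uint8 arrays `clean` and `denoised` of equal shape:
#   print(psnr(clean, denoised), ssim(clean, denoised, data_range=255))
```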
3.2 Visual Results
For visual results, we show images from two different datasets, BSD200-test and Set12, with several noise levels applied separately.
- One image from BSD200-test at noise levels 10, 30, 50, and 70.
- One image from Set12 at noise levels 10, 30, and 50.
- (1) D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society. Series B (Methodological), pages 99–102, 1974.
- (2) D. Ciregan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3642–3649. IEEE, 2012.
- (3) K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. BM3D image denoising with shape-adaptive principal component analysis. In SPARS'09 - Signal Processing with Adaptive Sparse Structured Representations, 2009.
- (4) C. Dong, Y. Deng, C. Change Loy, and X. Tang. Compression artifacts reduction by a deep convolutional network. In Proceedings of the IEEE International Conference on Computer Vision, pages 576–584, 2015.
- (5) M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing, 15(12):3736–3745, 2006.
- (6) C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE transactions on pattern analysis and machine intelligence, 35(8):1915–1929, 2013.
- (7) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
- (8) J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences, 79(8):2554–2558, 1982.
- (9) S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
- (10) V. Jain and S. Seung. Natural image denoising with convolutional networks. In Advances in Neural Information Processing Systems, pages 769–776, 2009.
- (11) N. Joshi, C. L. Zitnick, R. Szeliski, and D. J. Kriegman. Image deblurring and denoising using color priors. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1550–1557. IEEE, 2009.
- (12) D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi. Residual interpolation for color image demosaicking. In 2013 IEEE International Conference on Image Processing, pages 2304–2308. IEEE, 2013.
- (13) J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR Oral), June 2016.
- (14) A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
- (15) S. Z. Li. Markov random field modeling in image analysis. Springer Science & Business Media, 2009.
- (16) X.-J. Mao, C. Shen, and Y.-B. Yang. Image restoration using convolutional auto-encoders with symmetric skip connections. arXiv preprint arXiv:1606.08921, 2016.
- (17) D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 416–423. IEEE, 2001.
- (18) V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
- (19) Y. A. Rozanov. Markov random fields. In Markov Random Fields, pages 55–102. Springer, 1982.
- (20) K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- (21) J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
- (22) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
- (23) J. Xu, L. Zhang, W. Zuo, D. Zhang, and X. Feng. Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the IEEE International Conference on Computer Vision, pages 244–252, 2015.
- (24) M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833. Springer, 2014.
- (25) K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. arXiv preprint arXiv:1608.03981, 2016.