Unsupervised Despeckling

01/10/2018 · by Deepak Mishra, et al. · Indian Institute of Technology Delhi

Contrast and quality of ultrasound images are adversely affected by the excessive presence of speckle. However, being an inherent imaging property, speckle helps in tissue characterization and tracking. Thus, despeckling of ultrasound images requires the reduction of speckle extent without any oversmoothing. In this letter, we aim to address the despeckling problem using an unsupervised deep adversarial approach. A despeckling residual neural network (DRNN) is trained with an adversarial loss imposed by a discriminator. The discriminator tries to differentiate between the despeckled images generated by the DRNN and a set of high-quality images. Further, to prevent the developed DRNN from oversmoothing, a structural loss term is used along with the adversarial loss. Experimental evaluations show that the proposed DRNN is able to outperform the state-of-the-art despeckling approaches.


I Introduction

Speckle extent, or the size of speckle, in ultrasound (US) images is a function of the axial and lateral resolution of ultrasonic transducers. Transducers with small apertures are convenient to use; however, they produce images with large speckle extent due to their poor lateral resolution [1]. The reduction of speckle extent in such images is required to improve the image contrast and diagnostic quality [2]. However, excessive speckle filtering results in loss of the features carried by the speckle characteristics [3, 4]. Thus, despeckling of US images is essentially the reduction of speckle extent without losing the characteristic details of tissues.

The conventional approaches consider speckle as noise and try to reduce inter-pixel intensity variations [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. Most of these approaches suffer from oversmoothing and generate images with an artificial appearance. A few recent approaches, for example [3, 4, 16], prohibit filtering in high-echogenicity regions to reduce the oversmoothing effect. However, they end up performing little despeckling in images of organs containing large volumes of parenchymal tissue, such as the liver. Further, these approaches require precise tuning of the implementation parameters for every input image to obtain the desired results. Speckle reduction has also been modeled as an optimization problem in [17, 18, 19]. However, these approaches result in suboptimal performance due to oversimplified assumptions; for example, speckle is assumed to be multiplicative in nature, which is not entirely correct for US images [20].

Considering despeckling as an optimization problem, it can be solved using supervised or unsupervised learning. The supervised learning approaches appear to be impractical due to the unavailability of the required annotated training data. Hence, in this work, a deep unsupervised learning method is proposed to reduce speckle extent while preserving the characteristic details. A deep residual network (ResNet) [21], as a part of a generative adversarial network (GAN), is trained to produce the despeckled images. GANs have recently been used in several computer vision applications for natural images [22, 23, 24, 25, 26, 27]. However, their use with US images has been limited to simulated data generation [28]. In this work, a GAN is used for US image despeckling and, to the best of our knowledge, this is the first attempt to use a GAN for the processing of US images. Concurrent to this work, a GAN with Wasserstein distance and perceptual similarity for CT image denoising was proposed in [29]; however, the contrasting nature of the imaging modalities differentiates the two works.

The unsupervised training of the developed despeckling residual neural network (DRNN) is accomplished in two steps. The DRNN, which is used as the generator in the GAN framework, is first trained to reproduce its inputs. Later, it is combined with a discriminator network and trained for despeckling. A convolutional neural network (CNN) is used as the discriminator. Two sets of liver US images are created for GAN training. One set contains low-quality US images with high speckle extent, whereas the second set contains images with high contrast and considerably less speckle. During training, the generator network learns to reduce the speckle extent of the poor-quality images and to produce speckle characteristics similar to those of the better-quality set. The generator is trained with the adversarial loss imposed by the discriminator. However, to prevent the generator from copying data from the high-quality set and to maintain the characteristic details of the input, the adversarial loss is combined with a structural loss. The structural loss works as a regularizer and helps in producing the desired outputs.

The paper is organized as follows. Section II provides the details of the proposed method. Section III presents the experimental results and compares them with existing approaches. Finally, conclusions are presented in Section IV.

II Method

Fig. 1: Despeckling or generator network architecture. The first convolutional (Conv) layer contains 7×7 kernels and has 64 channels (ch). It is combined with batch normalization (Batch Norm) and rectified linear units (ReLU). The first layer is followed by a chain of six ResNet blocks and a final Conv layer.

Fig. 2: Discriminator network architecture. Strided convolutional layers are used in place of max-pooling layers, and the convolutional layers are combined with Leaky ReLU. The final output is a single value from the sigmoid activation.

In this work, US image despeckling is modeled as a single-image input-output problem where the objective is to obtain a high-quality, low speckle extent image ($\hat{x}$) given the speckle-contaminated low-quality US image ($x$). The GAN architecture used to accomplish this objective is explained below.

II-A Network Architecture

GANs combine generative and discriminative models to implicitly learn the distribution of the output class. The architecture of the generator ($G$) used in this work is shown in Fig. 1. It contains a convolutional layer with batch normalization and rectified linear non-linearities (ReLU), followed by a chain of six ResNet blocks [21] and a final convolutional layer producing the desired output. As shown in Fig. 1, each ResNet block contains three convolutional layers and a weighted shortcut connection between the input and output of the block. Each convolutional layer, except the final layer, has 64 channels and is combined with batch normalization and ReLU non-linearities.
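For concreteness, a minimal tf.keras sketch of such a generator is given below. The 7×7 first convolution with 64 channels, the six ResNet blocks of three convolutional layers each, and the final convolutional layer follow the description above; the 3×3 kernel size inside the blocks, the single-channel output, and the 1×1 convolution used to realize the weighted shortcut are assumptions rather than details taken from the paper.

```python
# A minimal sketch of the generator (DRNN), not the authors' exact implementation.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_bn_relu(x, channels, kernel_size):
    x = layers.Conv2D(channels, kernel_size, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def resnet_block(x, channels=64):
    """Three conv layers plus a weighted shortcut (assumed here to be a 1x1 conv)."""
    shortcut = layers.Conv2D(channels, 1, padding="same")(x)
    y = conv_bn_relu(x, channels, 3)   # 3x3 kernel size inside blocks is an assumption
    y = conv_bn_relu(y, channels, 3)
    y = conv_bn_relu(y, channels, 3)
    return layers.Add()([shortcut, y])

def build_generator(input_shape=(None, None, 1)):
    inp = layers.Input(shape=input_shape)
    x = conv_bn_relu(inp, 64, 7)                   # first 7x7 conv, 64 channels, BN + ReLU
    for _ in range(6):                             # chain of six ResNet blocks
        x = resnet_block(x, 64)
    out = layers.Conv2D(1, 3, padding="same")(x)   # final conv producing the despeckled image
    return Model(inp, out, name="DRNN_generator")

generator = build_generator()
```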

Similarly, the architecture of the discriminator ($D$) is shown in Fig. 2. The findings of [23] are adopted to build this network. All convolutional layers contain 3×3 kernels and are combined with leaky ReLU [30, 31] non-linearities. Further, strided convolutions, with a stride of 2, are used in place of pooling layers. The number of channels is doubled after each strided convolution to maintain a uniform computational complexity across the convolutional layers. A global average pooling layer [32] is used after the final convolutional layer to allow a flexible input size and to avoid the overfitting associated with conventional fully connected layers with a large number of neurons. At the end, a single-neuron fully connected layer with sigmoid activation is used to obtain the desired output.
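A corresponding sketch of the discriminator is given below, assuming four downsampling stages, 64 starting channels and a leaky ReLU slope of 0.2; none of these three values is specified in the text above.

```python
# A minimal sketch of the discriminator: 3x3 convs with leaky ReLU, stride-2 convs
# instead of pooling (channels doubled after each), global average pooling and a
# single sigmoid neuron. Stage count, base channels and slope are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_discriminator(input_shape=(None, None, 1), base_channels=64, num_stages=4):
    inp = layers.Input(shape=input_shape)
    x = inp
    channels = base_channels
    for _ in range(num_stages):
        x = layers.Conv2D(channels, 3, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
        x = layers.Conv2D(channels, 3, strides=2, padding="same")(x)  # strided conv replaces pooling
        x = layers.LeakyReLU(0.2)(x)
        channels *= 2                                                 # double channels after downsampling
    x = layers.GlobalAveragePooling2D()(x)        # flexible input size, fewer parameters than dense layers
    out = layers.Dense(1, activation="sigmoid")(x)  # single real/fake score
    return Model(inp, out, name="discriminator")

discriminator = build_discriminator()
```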

II-B Training

The generator is first trained to reproduce the input using an $\ell_1$ loss:

$\mathcal{L}_{\ell_1}(\theta_G) = \mathbb{E}_{x \sim X_L}\big[\,\lVert G(x;\theta_G) - x \rVert_1\,\big] \qquad (1)$

where $\theta_G$ represents the parameters of $G$, $\lVert\cdot\rVert_1$ is the $\ell_1$ norm and $\mathbb{E}[\cdot]$ represents the expectation.

$G$ is trained for 100 epochs on samples of size 192×192 obtained by randomly cropping the low-quality speckled images. The Adam optimizer [33] is used to optimize the parameters $\theta_G$. The batch size used for training is 16.
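A possible implementation of this identity pre-training step, continuing the generator sketch above, is shown below. The learning rate is a placeholder because its value is not recoverable from the text, and `random_crops` is a hypothetical helper.

```python
# Sketch of the identity pre-training of G: reproduce the input under an l1 loss.
import numpy as np
import tensorflow as tf

def random_crops(images, crop=192, n=16):
    """Randomly crop n patches of size crop x crop from a list of 2-D images."""
    out = []
    for _ in range(n):
        img = images[np.random.randint(len(images))]
        r = np.random.randint(0, img.shape[0] - crop)
        c = np.random.randint(0, img.shape[1] - crop)
        out.append(img[r:r + crop, c:c + crop, None])
    return np.stack(out).astype("float32")

# generator = build_generator() from the earlier sketch
generator.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # placeholder LR
                  loss="mae")  # mean absolute error, i.e. the l1 loss of Eq. (1) up to a constant

# low_quality_images: list of speckled liver images (assumed already loaded)
# for epoch in range(100):
#     batch = random_crops(low_quality_images, crop=192, n=16)
#     generator.train_on_batch(batch, batch)   # target == input (identity reproduction)
```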

Similarly, $D$ is trained to distinguish between samples drawn from the set of high-quality US images ($X_H$) and the despeckled images generated from the set of low-quality speckled US images ($X_L$). The binary cross-entropy loss used in the training is:

$\mathcal{L}_{D}(\theta_D) = -\,\mathbb{E}_{y \sim X_H}\big[\log D(y;\theta_D)\big] - \mathbb{E}_{x \sim X_L}\big[\log\big(1 - D(G(x);\theta_D)\big)\big] \qquad (2)$

where $\theta_D$ represents the parameters of $D$, which is trained for 5 epochs on samples of size 128×128. The training samples are randomly cropped in equal proportion from the high-quality US images and the despeckled images generated by $G$. As the output of $D$ is a value in $[0,1]$ resulting from the sigmoid activation, the target labels for the samples from $X_H$ and the despeckled samples are 1 and 0, respectively. $D$ is trained using the Adam optimizer with a batch size of 80.
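The discriminator pre-training can be sketched in the same spirit, continuing the discriminator sketch above; again the learning rate is a placeholder and `real_patches`/`fake_patches` are hypothetical arrays.

```python
# Sketch of the discriminator pre-training: real (high-quality) patches get label 1,
# despeckled patches generated by G get label 0, as described above.
import numpy as np
import tensorflow as tf

# discriminator = build_discriminator() from the earlier sketch
discriminator.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # placeholder LR
                      loss="binary_crossentropy")   # the cross-entropy loss of Eq. (2)

# real_patches: 128x128 crops from the high-quality set X_H
# fake_patches: 128x128 crops despeckled by G from the low-quality set X_L
# x = np.concatenate([real_patches, fake_patches])
# y = np.concatenate([np.ones(len(real_patches)), np.zeros(len(fake_patches))])
# discriminator.fit(x, y, batch_size=80, epochs=5)
```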

The initial training of $G$ and $D$ is followed by the training of the two networks in the GAN framework. The same input sizes as in the initial training, but half of the batch sizes, are used due to the memory limitation of the available hardware (Nvidia GeForce GTX 1080 GPU, 8 GB). $D$ is trained with the same loss as in (2), whereas $G$ is trained with a combination of adversarial and structural losses given by (3).

$\mathcal{L}_{G}(\theta_G) = -\,\mathbb{E}_{x \sim X_L}\big[\log D\big(G(x;\theta_G)\big)\big] + \lambda\,\mathcal{L}_{struct}(\theta_G) \qquad (3)$

The first term in (3) is the adversarial loss. In the presence of this loss, the parameters $\theta_G$ are optimized such that the generated images are close to the high-quality US images. Further, the minimization of the adversarial loss helps $G$ implicitly learn the features used by $D$ to distinguish the high-quality US images from the generated despeckled images. However, to avoid overestimation of these features, which may result in loss of input image characteristics, the structural loss $\mathcal{L}_{struct}$, downweighted by the parameter $\lambda$, is used. Recent works on despeckling [3, 4] show that the structural similarity index measure (SSIM) [34] is an important metric to evaluate the effect of overfiltering. Further, as reported in [35], a combination of $\ell_1$ and multi-scale SSIM (MS-SSIM) losses is useful for obtaining perceptually correct images. Thus, the structural loss is defined as the sum of the $\ell_1$ and MS-SSIM losses, where the $\ell_1$ loss is given by (1) and the MS-SSIM loss is defined as [35]:

$\mathcal{L}_{MS\text{-}SSIM}(\theta_G) = \mathbb{E}_{x \sim X_L}\big[\,1 - \mathrm{MS\text{-}SSIM}(x,\hat{x})\,\big] \qquad (4)$
$\mathrm{MS\text{-}SSIM}(x,\hat{x}) = \big[l_M(x,\hat{x})\big]^{\alpha}\cdot\prod_{j=1}^{M}\big[cs_j(x,\hat{x})\big]^{\beta_j} \qquad (5)$
$l_j(x,\hat{x}) = \dfrac{2\mu_x\mu_{\hat{x}} + C_1}{\mu_x^2 + \mu_{\hat{x}}^2 + C_1} \qquad (6)$
$cs_j(x,\hat{x}) = \dfrac{2\sigma_{x\hat{x}} + C_2}{\sigma_x^2 + \sigma_{\hat{x}}^2 + C_2} \qquad (7)$
$\mu_x = G_{\sigma_{g_j}} \ast x, \qquad \mu_{\hat{x}} = G_{\sigma_{g_j}} \ast \hat{x} \qquad (8)$
$\sigma_x^2 = G_{\sigma_{g_j}} \ast (x \odot x) - \mu_x^2, \qquad \sigma_{x\hat{x}} = G_{\sigma_{g_j}} \ast (x \odot \hat{x}) - \mu_x\mu_{\hat{x}} \qquad (9)$

where $\hat{x} = G(x;\theta_G)$ is the despeckled image generated by $G$. $G_{\sigma_{g_j}}$ is the Gaussian kernel used to calculate the means and variances of the input and output images, and $\sigma_{g_j}$ is the standard deviation of $G_{\sigma_{g_j}}$, where $j$ represents the size or scale of the kernel. "$\ast$" and "$\odot$" represent convolution and element-wise multiplication, respectively. The small constants $C_1$ and $C_2$ are added for stability. In this work, $C_1$ and $C_2$ are chosen to be 0.01.
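The generator loss of (3), with the structural term built from the $\ell_1$ and MS-SSIM losses of (1) and (4)-(9), could be sketched as follows. `tf.image.ssim_multiscale` is used as a stand-in for a hand-written MS-SSIM, and the weight $\lambda$ and the intensity range are assumptions.

```python
# Sketch of the combined generator loss: adversarial term plus l1 + MS-SSIM structural term.
import tensorflow as tf

def structural_loss(x, x_hat, max_val=1.0):
    # Sum of l1 loss and MS-SSIM loss (1 - MS-SSIM), as in the structural term above.
    l1 = tf.reduce_mean(tf.abs(x - x_hat))
    ms_ssim = tf.reduce_mean(tf.image.ssim_multiscale(x, x_hat, max_val=max_val))
    return l1 + (1.0 - ms_ssim)

def generator_loss(d_of_fake, x, x_hat, lam=0.1):  # lam (lambda) is an assumed value
    # Adversarial term: push D(G(x)) towards the "real" label.
    adv = tf.reduce_mean(tf.keras.losses.binary_crossentropy(
        tf.ones_like(d_of_fake), d_of_fake))
    return adv + lam * structural_loss(x, x_hat)
```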

The GAN is trained for 500 iterations. The parameters $\theta_G$ and $\theta_D$ are optimized by alternately minimizing $\mathcal{L}_G$ and $\mathcal{L}_D$. In a single GAN iteration, one of the networks is updated once whereas the other is updated 20 times. The Adam optimizer is used for both networks, with the initial learning rate of one network kept an order of magnitude lower than that of the other. In each iteration, 50% of the samples in a minibatch used by $D$ are randomly selected from the high-quality set and the remaining 50% are generated by $G$ from the high speckle extent low-quality images. However, instead of generating all of these 50% samples using the current $G$, 25% of the samples are obtained from a buffer of despeckled samples generated during previous iterations. The buffer is updated after every iteration by replacing old despeckled samples with current ones, equal in number to 25% of the minibatch size. As reported in [27], this buffering helps stabilize the GAN training. The networks are built in Keras [36].
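A rough sketch of one such alternating iteration, including the buffer of previously generated samples, is given below. `sample_high_quality` and `sample_low_quality` are hypothetical data-loading helpers, and the buffer capacity is an assumption; the minibatch split (50% real, 25% fresh, 25% buffered) follows the text.

```python
# Sketch of the buffered discriminator minibatch construction (buffering as in [27]).
import numpy as np

class FakeBuffer:
    """Keeps despeckled samples from earlier iterations for the discriminator."""
    def __init__(self, size=200):          # capacity is an assumption
        self.size, self.data = size, []

    def push_and_sample(self, fresh, n_old):
        # Return n_old buffered samples (or fresh ones if the buffer is still empty),
        # then store the fresh samples, keeping only the most recent `size` entries.
        old = ([self.data[i] for i in np.random.choice(len(self.data), n_old)]
               if len(self.data) >= n_old else list(fresh[:n_old]))
        self.data.extend(list(fresh))
        self.data = self.data[-self.size:]
        return np.stack(old)

buffer = FakeBuffer()
# for it in range(500):                                   # 500 GAN iterations
#     real = sample_high_quality(20)                      # 50% of D's minibatch of 40: real patches
#     fresh = generator.predict(sample_low_quality(10))   # 25%: current despeckled outputs
#     old = buffer.push_and_sample(fresh, n_old=10)       # 25%: buffered outputs from earlier iterations
#     d_batch = np.concatenate([real, fresh, old])
#     d_labels = np.concatenate([np.ones(20), np.zeros(20)])
#     ...  # update D and G with their respective losses, Eqs. (2) and (3)
```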

III Experiments and Results

III-A Data

The RAB6-D convex array transducer from GE Healthcare, with the larger side of its aperture measuring 63.6 mm, is used to acquire US images with low speckle extent. On the other hand, the US images with high speckle extent are acquired using the Philips X7-2t sector array probe, which has a considerably smaller aperture and transducer elements. Acquisition is done in accordance with the regulations prescribed for medical experiments and with the full consent of the subjects. Sample images from the two probes are shown in Fig. 3. Both US images in Fig. 3 show the liver of healthy subjects. However, the high speckle extent in the image from the X7-2t probe affects the tissue contrast and detectability. Given its smaller aperture, a reduction in speckle extent can increase the utility of the probe for applications like intra-operative navigation and control.

Fig. 3: Examples of (a) a low speckle extent image ($y$) acquired using the RAB6-D probe, and (b) a high speckle extent image ($x$) acquired using the X7-2t probe.

The initial trainings of $G$ and $D$ are performed using 12,000 and 4,000 samples, respectively, extracted from the images acquired using the two probes. For the GAN training, all the images acquired using the two probes are cropped to the respective input sizes using a moving window with 50% overlap. The total numbers of low-quality ($X_L$) and high-quality ($X_H$) samples available for training are 24,620 and 9,613, respectively.
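The 50%-overlap moving-window cropping can be sketched as follows; the handling of image borders (patches that do not fit are simply dropped) is an assumption.

```python
# Sketch of the moving-window patch extraction used to build the training sets.
import numpy as np

def sliding_patches(image, patch=192, overlap=0.5):
    """Extract patch x patch crops from a 2-D image with the given overlap."""
    step = int(patch * (1.0 - overlap))            # 50% overlap -> step of patch/2
    patches = []
    for r in range(0, image.shape[0] - patch + 1, step):
        for c in range(0, image.shape[1] - patch + 1, step):
            patches.append(image[r:r + patch, c:c + patch])
    return np.stack(patches) if patches else np.empty((0, patch, patch))

# Example: a 512x512 image yields a 4x4 grid of 192x192 crops spaced 96 pixels apart.
demo = sliding_patches(np.zeros((512, 512)), patch=192)
print(demo.shape)   # (16, 192, 192)
```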

Fig. 4: Despeckled outputs obtained using different approaches: (a) DPAD, (b) OBNLM, (c) SBF, (d) RTAD, (e) ADMSS, (f) EAGF, (g) EPPRSRAD, (h) Proposed. PDFs observed from the reference ($y$), input ($x$) and output ($\hat{x}$) are also shown. The proposed method results in the PDF closest to the reference.

III-B Experimental Results

The performance of the proposed method is compared with DPAD [9], OBNLM [13], SBF [5], RTAD [15], ADMSS [3], EAGF [2] and EPPRSRAD [4]. The implementation parameter values for DPAD, OBNLM, SBF and EAGF are obtained from [2]. For the remaining methods, the values are taken from [4]. To compare the performance, two parenchymal regions of similar dimensions are identified in the images shown in Fig. 3. The probability density functions (PDFs) of the pixel intensities inside the marked regions are used to qualitatively compare the speckle characteristics. The PDFs obtained from the low speckle extent image ($y$, Fig. 3(a)), the input image ($x$, Fig. 3(b)) and the despeckled images ($\hat{x}$) are shown in Fig. 4. The despeckled outputs are also shown in Fig. 4. The closest match to the desired PDF is produced by the proposed method. This observation is further validated by a quantitative comparison of the obtained PDFs. The KL divergence is used for this purpose and the observed values are listed in Table I. The smallest value of KL$_{ref}$ (divergence from the reference PDF) together with a reasonable value of KL$_{in}$ (divergence from the input PDF) obtained by the proposed method reflects its ability to produce the desired speckle characteristics.

Further, the speckle extent in the identified regions is measured using the speckle region's signal-to-noise ratio (SSNR) [4]. The SSNR value observed from the reference image ($y$) is 18.2451. The nearest value to this is obtained by EAGF; however, EAGF is unable to provide the desired speckle characteristics. In contrast, the proposed method gives an optimal trade-off, with a decent SSNR value and the desired speckle characteristics. Further, the proposed method is able to produce the despeckled output in less than 0.3 seconds on a GPU and in about 16.5 seconds on a CPU, which is considerably less than the time taken by approaches like OBNLM.

Method SSNR KL_in KL_ref
Input (no filter) 7.2990 0 0.7512
DPAD 9.7542 31.7018 31.5043
OBNLM 25.529 9.5010 0.9810
SBF 20.5514 8.8660 1.5440
RTAD 9.3614 1.9490 0.7325
ADMSS 7.3641 0.3242 0.7547
EAGF 19.3142 5.4180 1.0210
EPPRSRAD 7.5651 0.3428 0.7183
Proposed 12.016 3.0570 0.2107
TABLE I: Quantitative evaluation of despeckling approaches (KL_in: KL divergence of the output PDF from the input PDF; KL_ref: KL divergence of the output PDF from the reference PDF).
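For reference, the two measures reported in Table I can be computed as sketched below: SSNR as the ratio of the mean to the standard deviation of the intensities in a speckle region, and the KL divergences between intensity PDFs estimated as normalized histograms. The bin count, intensity range and KL direction are assumptions.

```python
# Sketch of the evaluation metrics used above: SSNR and KL divergence between PDFs.
import numpy as np

def ssnr(region):
    """Speckle region's signal-to-noise ratio: mean / std of pixel intensities."""
    region = region.astype(np.float64)
    return region.mean() / region.std()

def intensity_pdf(region, bins=256, value_range=(0, 255)):
    """Estimate the PDF of pixel intensities as a normalized histogram."""
    hist, _ = np.histogram(region, bins=bins, range=value_range)
    return hist / hist.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete PDFs; eps avoids log(0) and division by zero."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Usage: compare the output-region PDF against the input and reference regions.
# kl_in  = kl_divergence(intensity_pdf(output_region), intensity_pdf(input_region))
# kl_ref = kl_divergence(intensity_pdf(output_region), intensity_pdf(reference_region))
```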

IV Conclusion

In this paper, an unsupervised despeckling approach is presented. The combination of adversarial and structural losses used to train the DRNN results in despeckled images without loss of the characteristic details. Better qualitative and quantitative performance of the proposed approach compared to the state-of-the-art approaches is observed. The proposed work should motivate further exploration of unsupervised deep learning for US image applications.

References

  • [1] K. K. Shung and G. A. Thieme, “Ultrasonic scattering in biological tissues,” CRC press, 1992.
  • [2] D. Mishra, S. Chaudhury, M. Sarkar, and A. S. Soin, “Edge aware geometric filter for ultrasound image enhancement,” Annual Conference on Medical Image Understanding and Analysis, pp. 109–120, 2017.
  • [3] G. Ramos-Llorden, G. Vegas-Sanchez-Ferrero, M. Martin-Fernandez, C. Alberola-Lopez, and S. Aja-Fernandez, “Anisotropic diffusion filter with memory based on speckle statistics for ultrasound images,” IEEE Trans. Image Process., vol. 24, no. 1, pp. 345–358, 2015.
  • [4] D. Mishra, S. Chaudhury, M. Sarkar, A. S. Soin, and V. Sharma, “Edge probability and pixel relativity based speckle reducing anisotropic diffusion,” IEEE Trans. Image Process., 2017.
  • [5] P. C. Tay, C. D. Garson, S. T. Acton, and J. A. Hossack, “Ultrasound despeckling for contrast enhancement,” IEEE Trans. Image Process., vol. 19, no. 7, pp. 1847–1860, 2010.
  • [6] J.-S. Lee, “Speckle analysis and smoothing of synthetic aperture radar images,” Computer graphics and image processing, vol. 17, no. 1, pp. 24–32, 1981.
  • [7] D. T. Kuan, A. Sawchuk, T. C. Strand, P. Chavel et al., “Adaptive restoration of images with speckle,” IEEE Trans. Acoustics, Speech and Signal Process., vol. 35, no. 3, pp. 373–383, 1987.
  • [8] Y. Yu and S. T. Acton, “Speckle reducing anisotropic diffusion,” IEEE Trans. Image Process., vol. 11, no. 11, pp. 1260–1270, 2002.
  • [9] S. Aja-Fernández and C. Alberola-López, “On the estimation of the coefficient of variation for anisotropic diffusion speckle filtering,” IEEE Trans. Image Process., vol. 15, no. 9, pp. 2694–2701, 2006.
  • [10] Y. Yue, M. M. Croitoru, A. Bidani, J. B. Zwischenberger, and J. W. Clark, “Nonlinear multiscale wavelet diffusion for speckle suppression and edge enhancement in ultrasound images,” IEEE Trans. Med. Imag., vol. 25, no. 3, pp. 297–311, 2006.
  • [11] K. Krissian, C.-F. Westin, R. Kikinis, and K. G. Vosburgh, “Oriented speckle reducing anisotropic diffusion,” IEEE Trans. Image Process., vol. 16, no. 5, pp. 1412–1424, 2007.
  • [12] F. Zhang, Y. M. Yoo, L. M. Koh, and Y. Kim, “Nonlinear diffusion in Laplacian pyramid domain for ultrasonic speckle reduction,” IEEE Trans. Med. Imag., vol. 26, no. 2, pp. 200–211, 2007.
  • [13] P. Coupé, P. Hellier, C. Kervrann, and C. Barillot, “Nonlocal means-based speckle filtering for ultrasound images,” IEEE Trans. Image Process., vol. 18, no. 10, pp. 2221–2229, 2009.
  • [14] S. Balocco, C. Gatta, O. Pujol, J. Mauri, and P. Radeva, “SRBF: Speckle reducing bilateral filtering,” Ultrasound Med. Biol., vol. 36, no. 8, pp. 1353–1363, 2010.
  • [15] Y. Deng, Y. Wang, and Y. Shen, “Speckle reduction of ultrasound images based on Rayleigh-trimmed anisotropic diffusion filter,” Pattern Recognit. Lett., vol. 32, no. 13, pp. 1516–1525, 2011.
  • [16] C. Munteanu, F. C. Morales, and J. Ruiz-Alzola, “Speckle reduction through interactive evolution of a general order statistics filter for clinical ultrasound imaging,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 1, pp. 365–369, 2008.
  • [17] T. C. Aysal and K. E. Barner, “Rayleigh-maximum-likelihood filtering for speckle reduction of ultrasound images,” IEEE Transactions on Medical Imaging, vol. 26, no. 5, pp. 712–727, 2007.
  • [18] G. Aubert and J.-F. Aujol, “A variational approach to removing multiplicative noise,” SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
  • [19] J. M. Bioucas-Dias and M. A. Figueiredo, “Multiplicative noise removal using variable splitting and constrained optimization,” IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1720–1730, 2010.
  • [20] O. V. Michailovich and A. Tannenbaum, “Despeckling of medical ultrasound images,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, no. 1, pp. 64–78, 2006.
  • [21] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
  • [22] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [23] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.
  • [24] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” arXiv preprint arXiv:1609.04802, 2016.
  • [25] X. Wang and A. Gupta, “Generative image modeling using style and structure adversarial networks,” European Conference on Computer Vision, pp. 318–335, 2016.
  • [26] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” arXiv preprint arXiv:1703.10593, 2017.
  • [27] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, “Learning from simulated and unsupervised images through adversarial training,” The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [28] Y. Hu, E. Gibson, L.-L. Lee, W. Xie, D. C. Barratt, T. Vercauteren, and J. A. Noble, “Freehand ultrasound image simulation with spatially-conditioned generative adversarial networks,” Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment, pp. 105–115, 2017.
  • [29] Q. Yang, P. Yan, Y. Zhang, H. Yu, Y. Shi, X. Mou, M. K. Kalra, and G. Wang, “Low dose ct image denoising using a generative adversarial network with wasserstein distance and perceptual loss,” arXiv preprint arXiv:1708.00961, 2017.
  • [30] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” Proc. ICML, vol. 30, no. 1, 2013.
  • [31] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv preprint arXiv:1505.00853, 2015.
  • [32] M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv preprint arXiv:1312.4400, 2013.
  • [33] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
  • [35] H. Zhao, O. Gallo, I. Frosio, and J. Kautz, “Loss functions for image restoration with neural networks,” IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 47–57, 2017.
  • [36] F. Chollet et al., “Keras,” 2015.