Medical ultrasound (US) is a widespread imaging modality owing its popularity to cost efficiency, portability, speed, and lack of harmful ionizing radiation. Unfortunately, the quality of medical ultrasound scans is typically affected by a number of artifacts, some of which are rooted in the physics of ultrasound image formation. Thus, for example, the band-limited characteristics of acoustic beamforming are generally responsible for the reduced spatial resolution. Additionally, the coherent nature of ultrasound image formation makes the resulting scans suffer from the degrading effect of speckle noise, which tends to reduce image contrast and obscure diagnostically relevant details. As a result, ultrasound imaging has limited applicability to clinical tasks relying on tissue characterization.
The problem of speckle-noise contamination is traditionally addressed by means of image despeckling. Thus, over the past decade, the arsenal of image despeckling algorithms has been enhanced by particularly effective methods based on homomorphic denoising, non-local means (NLM), and BM3D filtering. Although some of these methods have already been integrated into high-end ultrasound scanners, their elevated computational requirements and relatively long run-times prevent their deployment on portable devices or in real-time applications.
Recently, convolutional neural networks (CNN) have been extensively used for solving various problems of image processing [4, 5], including some important medical imaging applications, among which are X-ray CT image reconstruction and denoising. In this paper, we demonstrate the applicability of CNN as a means for fast and accurate approximation of the image restoration results produced by advanced (yet, relatively slow) despeckling algorithms. We believe the availability of such solutions opens the door to using cutting-edge signal processing on low-cost (portable) devices and in real-time applications. In addition, we also demonstrate the applicability of CNN to the reconstruction of “CT-quality” ultrasound images from standard ultrasonograms.
The contributions of this paper are three-fold: Firstly, we propose an end-to-end CNN architecture for real-time despeckling of ultrasound images. Secondly, we show that our network successfully approximates the state-of-the-art despeckling algorithms while running substantially faster on both GPU and CPU, which makes it amenable to real-time applications. Finally, aiming to go beyond the quality of despeckling, we propose a method for training a CNN to convert regular US scans into corresponding “CT-like” images which feature substantially improved resolution and contrast, while preserving all the relevant anatomical and pathological information.
2.1 Restoration of medical ultrasound scans
The quintessential goal of ultrasound image despeckling is to reject the spurious variations of image intensity caused by speckle noise, while preserving the anatomical integrity and discernibility of diagnostic details. More often than not, methods of image despeckling take advantage of advanced denoising methodologies, properly adapted to cope with the peculiar statistical nature of speckle noise. In particular, the nowadays standard methods of image denoising based on multi-resolution analysis and sparse representations, Bayesian estimation and Markov random fields, diffusion-based filtering, and non-local means have all been attempted in application to image despeckling. In view of this fact, it is hardly surprising that image despeckling faces the same challenges as the more general methods of image denoising, chief among which is the fact that their effectiveness tends to increase pro rata with computational complexity. As a result, despite recent advances in computational technologies, many powerful despeckling and denoising tools are still unavailable for real-time applications. The most prominent examples of such methods include the NLM and BM3D filters, both of which have polynomial time complexity, even under approximative execution. Accordingly, the principal goal of this study has been to develop a CNN capable of accurately approximating the effect of the above filters at much shorter run-times.
In this work, all the proposed and reference methods are applied within the homomorphic filtering framework, which considers speckle noise to be multiplicative in nature. Thus, a typical homomorphic filter amounts to denoising of logarithmically transformed data, followed by exponentiation of the result thus obtained. Moreover, this type of despeckling has been demonstrated to be particularly effective when applied to the envelopes of deconvolved RF data, with the addition of an outlier-shrinkage procedure. While the deconvolution helps to substantially decorrelate the speckle noise, the outlier shrinkage Gaussianizes the noise behaviour by suppressing the “heavy tails” of the noise distribution produced by the log transform. The net effect of the above preprocessing is a considerable simplification of the statistical properties of the noise, which can now be treated as approximately white and Gaussian. Consequently, in what follows, all the tested methods are applied to the preprocessed data, with the particular denoising methods being total-variation (TV), NLM, and BM3D filtering.
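As a rough illustration, the homomorphic pipeline described above can be sketched as follows. A box filter stands in for the actual TV/NLM/BM3D denoiser, and the soft-clipping rule used for outlier shrinkage is a generic assumption, not the paper's exact procedure:

```python
import numpy as np

def outlier_shrinkage(log_img, k=3.0):
    """Clip log-domain samples beyond k standard deviations, suppressing the
    heavy tails introduced by the log transform (generic stand-in rule)."""
    mu, sigma = log_img.mean(), log_img.std()
    return np.clip(log_img, mu - k * sigma, mu + k * sigma)

def mean_filter(img, size=3):
    """Toy stand-in for a real denoiser (TV/NLM/BM3D): a simple box filter."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def homomorphic_despeckle(envelope, eps=1e-8):
    """Multiplicative speckle -> additive noise via log, denoise, exponentiate."""
    log_img = np.log(envelope + eps)       # speckle becomes approximately additive
    log_img = outlier_shrinkage(log_img)   # gaussianize the noise distribution
    denoised = mean_filter(log_img)        # plug in TV/NLM/BM3D here in practice
    return np.exp(denoised)
```

In practice the box filter would be replaced by one of the denoisers named above; the surrounding log/exp structure is what makes the filter homomorphic.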
2.2 Data Sources
In order to conduct numerical experiments under controllable conditions, in silico ultrasound datasets were used. Unfortunately, since training a CNN normally requires tens of thousands of images, generating such data by means of standard simulation software packages (such as, e.g., k-Wave or Field-II) was found to be infeasible. As a result, the in silico data were simulated using a fast simulation scheme, which allows one to properly model all the major US artifacts, including acoustic shadowing, dispersive attenuation, refraction-related effects, etc. The simulation parameters were set so as to mimic the characteristics of a standard linear array probe with a central frequency of 5 MHz and an approximate Q-factor of 0.5. The distributions of tissue density and acoustic velocity were deduced from the corresponding Hounsfield units of abdominal X-ray CT scans.
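For illustration, the deduction of tissue density from Hounsfield units could rely on the common linear calibration (water at 0 HU, air at about -1000 HU); the exact mapping used in the simulations may well differ, so the function below is only a plausible sketch:

```python
def hu_to_density(hu):
    """Approximate mass density [kg/m^3] from a Hounsfield-unit value, using
    the linear soft-tissue calibration: water (0 HU) -> 1000 kg/m^3,
    air (-1000 HU) -> ~0 kg/m^3. Illustrative only."""
    return 1000.0 * (1.0 + hu / 1000.0)
```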
2.3 CNN-based approximation
In our work, we use the in-phase/quadrature (IQ) images that are derived from radio-frequency (RF) images through the process of frequency demodulation. To achieve a fast approximation of the despeckling algorithms mentioned in Section 2.1, we propose training a CNN on a dataset of pairs of IQ images and the corresponding conventionally despeckled images. Going beyond despeckling, we aim to estimate the CT image given an ultrasound one by exploiting the fact that intensity in CT images (measured in Hounsfield units) is proportional to the tissue density.
Similarly, ultrasound imaging depends on the speed of sound and acoustic impedance, which are in turn directly influenced by the density of the tissue. Hence, X-ray CT and ultrasound imaging are affected by a common physical property of the tissue. Exploiting this fact, we propose to use a CNN to approximate a “CT-quality” image of the tissue given its corresponding ultrasound image.
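The RF-to-IQ frequency demodulation mentioned above can be sketched as follows: mix the RF signal down by the carrier (centre) frequency and low-pass filter the result. A generic windowed-sinc filter is assumed here; actual scanner front-ends use their own filter designs:

```python
import numpy as np

def rf_to_iq(rf, fs, fc, num_taps=64):
    """Demodulate an RF A-line into its complex baseband (IQ) representation.
    fs: sampling rate [Hz]; fc: probe centre frequency [Hz]."""
    t = np.arange(rf.shape[-1]) / fs
    baseband = rf * np.exp(-2j * np.pi * fc * t)   # shift spectrum down by fc
    # windowed-sinc low-pass (cutoff fc/2) removes the image at -2*fc
    cutoff = fc / 2.0
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff / fs * n) * np.hamming(num_taps)
    h /= h.sum()                                   # unity DC gain
    # factor 2 restores the envelope amplitude lost in the mixing step
    return 2 * np.convolve(baseband, h, mode="same")
```

The magnitude of the resulting complex signal is the envelope used throughout the despeckling pipeline, and its real and imaginary parts form the two input channels of the network.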
Experiments presented in this paper were conducted using a multi-resolution fully convolutional neural network (CNN) consisting of a down-sampling track followed by an up-sampling track. The network comprises convolutional layers with symmetric skip connections from each layer in the down-sampling track to its corresponding layer in the up-sampling track, similar to previously proposed symmetric-skip architectures. All the convolutions use kernels of the same fixed size, the non-linearities are ReLU, and training is performed on mini-batches using the Adam optimizer.
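As a toy illustration of this multi-resolution structure, the forward pass of a symmetric-skip encoder-decoder can be sketched in plain NumPy. Convolutions and learned weights are omitted for brevity; average pooling and nearest-neighbour upsampling stand in for the strided and transposed convolutions of the actual network:

```python
import numpy as np

def downsample(x):
    """2x2 average pooling (stand-in for a strided convolutional layer)."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample(x):
    """Nearest-neighbour 2x upsampling (stand-in for a transposed convolution)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def skip_network(x, depth=3):
    """Forward pass of a toy encoder-decoder with symmetric skip connections:
    each down-sampling-track activation is added back to the decoder
    activation at the same resolution."""
    skips = []
    for _ in range(depth):             # down-sampling track
        skips.append(x)
        x = downsample(x)
    for _ in range(depth):             # up-sampling track
        x = upsample(x) + skips.pop()  # symmetric skip connection
    return x
```

The skip connections let fine-scale detail bypass the bottleneck, which is what allows such networks to restore images without over-smoothing.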
3.1 Fast approximation of despeckling
Dataset. The US RF data (IQ images) were obtained by applying the simulator described in Section 2.2 to CT image patches extracted from the Cancer Imaging Archive (TCIA). The conventional despeckling algorithms (TV, BM3D, NLM) were then applied to the US IQ data to generate the corresponding despeckled patches.
Training. The dataset comprised 8200 samples of US IQ images paired with the corresponding despeckled images produced by the conventional despeckling algorithms mentioned above. The network architecture detailed in Section 2.3 was implemented with two input channels corresponding to the real and imaginary parts of the complex-valued IQ image, and the output being the corresponding despeckled image. Training was performed for a fixed number of mini-batch iterations.
Results. Post training, 420 pairs of US IQ images and their respective despeckled patches were set aside for evaluation. The performance of the trained network was compared to that of the conventional despeckling algorithms. The benchmarking was performed on both an Intel Xeon E5 CPU and an NVIDIA Titan X GPU. It should be noted that GPU implementations of the BM3D and NLM methods were not available for comparison.
The run-time results for each algorithm are summarized in Table 1, with their corresponding PSNR values in Table 2. It can be observed that the CNN approximations of the different despeckling algorithms achieved substantial run-time speed-ups on both the GPU and the CPU as compared to their off-the-shelf counterparts, while maintaining an accurate approximation of the original despeckling output. A visual inspection of the results, displayed in Figure 1, shows that the CNN, once trained, can accurately approximate well-engineered, time-consuming despeckling algorithms while providing the orders-of-magnitude speed-up needed for real-time applications.
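For reference, the PSNR figures reported here follow the standard definition; the exact peak convention used in the paper is an assumption, so the helper below defaults the peak to the reference image's maximum:

```python
import numpy as np

def psnr(reference, estimate, peak=None):
    """Peak signal-to-noise ratio in dB between a reference image and its
    estimate; `peak` defaults to the reference's maximum value."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")       # identical images
    if peak is None:
        peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)
```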
It is also important to note that the current results were obtained with minimal optimization of the network architecture, which suggests that a more compact network might achieve similar results in a shorter time.
3.2 Reconstructing ‘CT-quality’ images
Datasets. The CT image patches were extracted from the TCIA datasets with corresponding ultrasound IQ images generated using the simulator described in Section 2.2.
Training. Using the network architecture proposed in Section 2.3, training was performed on a dataset comprising 13,860 ultrasound IQ image patches (two channels) as input and the corresponding CT patches as output. Training was run for a fixed number of mini-batch iterations on an NVIDIA Titan X GPU.
Results. Results are presented in Figure 2. We compare the reconstructed CT images to the conventional and CNN-approximated despeckling results, and for all the images, the PSNR is reported with respect to the ground-truth CT (Figure 2). We can observe both visually and quantitatively that our reconstructed CT images are closer to the ground-truth CT than the despeckled ones.
Additionally, in order to verify that our network does not overfit the dataset and generalizes well to previously unseen data, we tested it on a different CT dataset obtained from TCIA. The average PSNR of the reconstructed CT-quality images with respect to the ground-truth CT closely resembled the performance on the training set.
We can observe in our results (Figure 2) that the trained network is able to recover the overall anatomy and the contrast accurately, except for the CT-specific noise, a feature that CT and ultrasound imaging do not share. A possible explanation is that the network learns the common manifold of the CT and US domains, but not intrinsic features such as modality-specific noise.
In this paper, we have shown that conventional despeckling algorithms can be approximated by a CNN and that doing so provides a major speed-up in run-times. Moreover, we have shown that a CNN can reconstruct a CT-quality image from US IQ images within the same run-time range. With that being said, the networks were trained and tested on pairs of CT images and the corresponding simulated ultrasound images. In practice, however, paired CT-US data are difficult to obtain. Our further research will concentrate on tackling this problem, possibly by developing an unsupervised/semi-supervised training framework.
-  Oleg V Michailovich and Allen Tannenbaum, “Despeckling of medical ultrasound images,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 53, no. 1, pp. 64–78, 2006.
-  Pierrick Coupé, Pierre Hellier, Charles Kervrann, and Christian Barillot, “Nonlocal means-based speckle filtering for ultrasound images,” IEEE Transactions on Image Processing, vol. 18, no. 10, pp. 2221–2229, 2009.
-  Yu Gan, Elsa Angelini, Andrew Laine, and Christine Hendon, “BM3D-based ultrasound image denoising via brushlet thresholding,” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). IEEE, 2015, pp. 667–670.
-  Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang, “Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising,” IEEE Transactions on Image Processing, 2017.
-  Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang, “Learning a deep convolutional network for image super-resolution,” in European Conference on Computer Vision. Springer, 2014, pp. 184–199.
-  Kyong Hwan Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, 2017.
-  Qingsong Yang, Pingkun Yan, Mannudeep K Kalra, and Ge Wang, “CT Image Denoising with Perceptive Deep Neural Networks,” arXiv preprint arXiv:1702.07019, 2017.
-  Nidhi Gupta, A. P. Shukla, and Suneeta Agarwal, “Despeckling of Medical Ultrasound Images: A Technical Review,” Int. Journal of Information Engg. and Electronic Business (IJIEEB), vol. 8, no. 3, 2016.
-  Oliver Kutter, Ramtin Shams, and Nassir Navab, “Visualization and GPU-Accelerated Simulation of Medical Ultrasound from CT Images,” Comput. Methods Prog. Biomed., vol. 94, no. 3, pp. 250–266, June 2009.
-  Xiao-Jiao Mao, Chunhua Shen, and Yu-Bin Yang, “Image restoration using convolutional auto-encoders with symmetric skip connections,” CoRR, vol. abs/1606.08921, 2016.
-  Kenneth Clark et al., “The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository,” Journal of Digital Imaging, vol. 26, no. 6, pp. 1045–1057, Dec 2013.