Synthetic Elastography using B-mode Ultrasound through a Deep Fully-Convolutional Neural Network

08/09/2019 · R. R. Wildeboer et al. · TU Eindhoven

Shear-wave elastography (SWE) permits local estimation of tissue elasticity, an important imaging marker in biomedicine. This recently-developed, advanced technique assesses the speed of a laterally-travelling shear wave after an acoustic radiation force "push" to estimate local Young's moduli in an operator-independent fashion. In this work, we show how synthetic SWE (sSWE) images can be generated based on conventional B-mode imaging through deep learning. Using side-by-side-view B-mode/SWE images collected in 50 patients with prostate cancer, we show that sSWE images can be generated with a pixel-wise mean absolute error of 4.8 kPa with respect to the original SWE. Visualization of high-level features through t-Distributed Stochastic Neighbor Embedding reveals a high degree of overlap between data from different scanners. Qualitatively, the sSWE results also seem to generalise to single B-mode acquisitions and other scanners. In the future, we envision sSWE as a reliable elasticity-related tissue typing strategy that is solely based on B-mode ultrasound acquisition.


I Introduction

Tissue elasticity is an important biomarker of cancer. Prostate cancer, for example, is characterized by increased stiffness [1], thyroid and liver nodules can be discriminated based on their elasticity [2, 3], and breast lesions, too, are typically diagnosed based on their elastic properties [4]. Elastography is also increasingly used to image musculoskeletal pathologies in, e.g., muscles, tendons, and ligaments [5]. Over the last few decades, this has spurred considerable advances in the development of elasticity imaging.

Ultrasound-based elasticity imaging, that is, ultrasound elastography, has played a major role in these developments [6]. So-called quasi-static ultrasound strain imaging allows for the relative assessment of tissue deformation due to externally applied stress, but as this stress is often delivered manually, the technique remains operator-dependent and limited to superficial organs. More recently, therefore, dynamic elastography techniques were developed in which tissue deformation induced by an acoustic radiation force "push" pulse is quantified to obtain more objective and reproducible measures of elasticity [7]. At present, the two main variants are acoustic radiation force impulse (ARFI) imaging and shear-wave elastography [8, 5]. The first method analyses tissue displacement resulting from a "push" pulse along the beam path, whereas the latter relies on the speed of transversally-travelling shear waves to estimate tissue elasticity. Tissue elasticity is quantified by the Young's modulus, that is, the ratio between stress and strain.

SWE requires advanced ultrafast acquisition schemes with frame rates of 1000 Hz to accurately assess tissue deformation and shear-wave dynamics [7, 9]. Moreover, ultrasound transducers have to be sufficiently equipped to allow for the generation of acoustic radiation force pulses as well as ultrafast imaging of the shear-wave displacements [10]. Although several techniques and sequences have been developed to enable SWE on commercial scanners, SWE cannot reach the frame rate of conventional B-mode ultrasound, as it requires long settling times and multiple "push" pulses to reliably generate an elastogram.

Realizing that conventional B-mode ultrasound assesses tissue echogenicity rather than tissue elasticity, we here hypothesize that both properties are linked through their mutual dependence on the underlying tissue structure. In this work, we exploit this link by designing a deep fully-convolutional neural network (DCNN) that is able to assess echogenic patterns in B-mode ultrasound that are useful for elasticity-related tissue typing (see Figure 1). Whereas deep-learning strategies were already proposed for estimation of speed of sound [11], extraction of strain images from radio frequency data [12], and for processing of conventional SWE sequences [13], we train our network to directly map B-mode ultrasound to the corresponding elasticity images obtained through SWE.

Fig. 1: Schematic implementation of conventional SWE and synthetic SWE.

II Materials and Methods

II-A Data Acquisition

At the Martini Clinic in Hamburg, supersonic shear imaging was performed with the Aixplorer ultrasound scanner (SuperSonic Imagine, Aix-en-Provence, France) in 50 patients diagnosed with prostate cancer. At least 3 image planes (basal, mid-gland, and apical orientation) were recorded per patient, with regions of interest (ROIs) that covered the entire prostate as well as smaller ROIs that only covered one side or a suspicious area. At least 9 images were obtained per patient. We extracted the Young's modulus data from the SWE acquisitions, as well as the corresponding estimation confidence. Pre-processing involved alignment of the side-by-side B-mode and SWE data, followed by downsampling onto a conveniently-scaled 96×64 grid. The B-mode images were subsequently normalized from 0 to 1. Likewise, the elastography data were normalized by 100 kPa so that clinically-relevant Young's moduli also scale from 0 to 1. Full-screen B-mode images were also obtained in roughly the same imaging planes.
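For concreteness, the pre-processing described above could look as follows. This is a minimal sketch assuming scikit-image for resampling; the function name, interpolation settings, and the clipping of stiff outliers are our own assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from skimage.transform import resize  # assumed resampling utility

def preprocess_pair(bmode, swe, grid=(96, 64)):
    """Downsample an aligned B-mode/SWE image pair onto the 96x64 grid and
    normalize both to [0, 1]; side-by-side alignment is assumed done upstream."""
    bmode = resize(bmode, grid, anti_aliasing=True)
    swe = resize(swe, grid, anti_aliasing=True)
    # B-mode: min-max normalization from 0 to 1.
    bmode = (bmode - bmode.min()) / (bmode.max() - bmode.min() + 1e-8)
    # SWE: normalize by 100 kPa so that clinically relevant Young's moduli
    # also scale from 0 to 1 (clipping stiffer outliers is an assumption).
    swe = np.clip(swe / 100.0, 0.0, 1.0)
    return bmode.astype(np.float32), swe.astype(np.float32)
```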

In order to establish the use of sSWE in a device that does not feature SWE itself, B-mode and quasi-static elastography recordings were performed in 10 patients at the Academic Medical Centre (University Hospital, Amsterdam) using an iU22 scanner (Philips Healthcare, Bothell, WA) equipped with a C10-3v probe. Quasi-static elastography (QSE) allows for the extraction of relative stiffness by assessment of tissue compression and decompression upon cyclic manual pressure exerted by the ultrasound operator [7]. These quasi-static elastograms allow a qualitative evaluation of sSWE.

Fig. 2: Schematic representation of the proposed DCNN architecture for the synthesis of shear-wave elastography from conventional B-mode ultrasound.

II-B Neural Network Architecture

We designed a DCNN that serves as an end-to-end nonlinear mapping function transforming 2D B-mode ultrasound images into 2D synthetic SWE images. To this end, we employ an encoder-decoder architecture in which a hierarchy of features is consecutively extracted from the B-mode data to yield a latent feature space. These features are subsequently used to construct an SWE image by a decoding network that approximately mirrors the encoding part. This type of network has been used frequently for image segmentation and reconstruction tasks [14, 15, 16]. Our encoder-decoder architecture was appended with direct "skip" connections from each encoder filter layer to its equally-sized decoder counterpart, as introduced in [17]. By transferring the encoder layer output across the latent space and concatenating it to the larger-scale model features during decoding, we enable our network to combine fine- and coarse-level information and generate higher-resolution SWE estimations. See Figure 2 for an overview of the DCNN architecture.

The convolutional layers of the proposed network comprised a bank of 2D 3×3-pixel convolutional filters (described by the filter weights) and biases, of which the results were subsequently passed through a non-linear activation function. Every convolutional layer maps its input to 32 feature maps. Leaky Rectified Linear Units (Leaky ReLUs; i.e., $f(x) = \max(x, \alpha x)$) with an $\alpha$-value of 0.1 were adopted as non-linear activation functions to minimize the risk of vanishing gradients [18].

Every two convolutional layers were followed by a 2×2 spatial max-pooling operation with a stride of 2, reducing the image dimensions by a factor of 2 and forcing the network to subsequently learn larger-scale features that are less sensitive to local variations. The max-pooling operation reduces a kernel of four pixels into one by projecting only the highest value onto the smaller grid [19]. In total, the encoder consists of 6 convolutional and 3 max-pooling layers mapping the input images into the latent space, which itself consists of 2 convolutional layers. With the decoder being a mirrored version of the encoder, appended with a final output layer, the network comprises a total of 204,385 trainable parameters. Max-pooling layers in the decoder are replaced by upsampling layers that restore the original image dimensions through nearest-neighbour interpolation. The final output layer consists of a sigmoid activation function that maps the network outputs to the normalized Young's modulus. The use of a sigmoid activation function makes the network most sensitive to values around 0.5; due to the normalization by 100 kPa, the network therefore focuses on the most clinically relevant Young's moduli, which lie in the range between 25 kPa and 75 kPa [20].
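A sketch of this architecture in Keras (the framework named in Section II-C) is given below. Layer counts follow the description above; the padding mode, the dropout placement (taken from Section II-C), and the exact skip wiring are assumptions, so the resulting parameter count will not match the reported 204,385 exactly.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters=32):
    """Two 3x3 convolutions, each followed by a Leaky ReLU (alpha = 0.1).
    Keras' "random_uniform" initializer matches the reported [-0.05, 0.05]."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same",
                          kernel_initializer="random_uniform")(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
    return x

def build_sswe_dcnn(input_shape=(96, 64, 1)):
    inp = layers.Input(shape=input_shape)

    # Encoder: 6 convolutional layers with a 2x2/stride-2 max-pool
    # (plus 50% dropout, cf. Section II-C) after every pair.
    skips, x = [], inp
    for _ in range(3):
        x = conv_block(x)
        skips.append(x)                      # stored for the skip connections
        x = layers.MaxPooling2D(2, strides=2)(x)
        x = layers.Dropout(0.5)(x)

    # Latent space: two further convolutional layers.
    x = conv_block(x)
    x = layers.Activation("linear", name="latent")(x)  # marker, used in Sec. II-D

    # Decoder: mirrored encoder with nearest-neighbour upsampling and
    # concatenation of the equally-sized encoder output (skip connection).
    for skip in reversed(skips):
        x = layers.UpSampling2D(2, interpolation="nearest")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x)

    # Sigmoid output yields the normalized Young's modulus in [0, 1].
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inp, out)
```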

II-C Training Strategy

Optimization of the trainable DCNN parameters was achieved through minimization of the root-mean-square prediction error (RMSE). Given a set of SWE images $\{y_n\}_{n=1}^{N}$ and corresponding B-mode images $\{x_n\}_{n=1}^{N}$, we iteratively update the parameters $\theta$ in our network $f_\theta$ such that the loss of the estimated sSWE images $\hat{y}_n = f_\theta(x_n)$ with regard to $y_n$ is minimized:

$$\mathcal{L}_{\mathrm{RMSE}} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} \left( y_n - \hat{y}_n \right)^2}, \qquad (1)$$

where the square is taken pixel-wise. In this formulation, $N$ is the number of training images.

Network parameters were learned by employing the stochastic optimization method Adam [21] over 2,500 epochs, using a mini-batch size of 64 training samples for each iteration. We chose a relatively small batch size for its looser memory requirements and lower risk of overfitting during the training phase. All filter weights were initialized by a random uniform kernel initializer over the range [-0.05, 0.05], and all biases were initialized to zero. An adaptive learning-rate reduction strategy was used to reduce the learning rate once the optimization reached a plateau for 10 epochs.
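In Keras terms, this training configuration might be set up roughly as follows; `build_sswe_dcnn` refers to the architecture sketch above, `x_train`/`y_train` stand for the pre-processed arrays of Section II-A, and the learning-rate reduction factor is an assumption (the text only specifies the 10-epoch plateau criterion).

```python
import tensorflow as tf

def rmse_loss(y_true, y_pred):
    """Root-mean-square error of Eq. (1), evaluated per mini-batch."""
    return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

model = build_sswe_dcnn()  # sketch from Section II-B
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=rmse_loss)

# Reduce the learning rate once the loss plateaus for 10 epochs.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="loss", patience=10, factor=0.5)  # factor is an assumption

# x_train, y_train: stacked pre-processed B-mode inputs and SWE labels.
model.fit(x_train, y_train, epochs=2500, batch_size=64, callbacks=[reduce_lr])
```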

Whereas B-mode data were available for the full image space, SWE values are only estimated in a certain region of interest. Moreover, SWE analysis provides a measure of estimation confidence and, usually, low-confidence values are displayed more transparently or not at all. We exploited this information by only propagating loss gradients for those pixels presenting an SWE label of sufficient quality (i.e., a confidence of at least 0.75).
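One way to implement this confidence masking is sketched below, under the assumption that the confidence map is supplied as a second channel of the label tensor (how the authors passed it is not specified); Keras' `sample_weight` mechanism would be an alternative.

```python
import tensorflow as tf

def masked_rmse(y_combined, y_pred, threshold=0.75):
    """RMSE restricted to pixels whose SWE confidence is at least 0.75, so
    that loss gradients only propagate from sufficiently reliable labels.
    y_combined stacks the SWE label and its confidence map as two channels."""
    y_true, conf = y_combined[..., :1], y_combined[..., 1:]
    mask = tf.cast(conf >= threshold, tf.float32)
    sq_err = tf.square(y_true - y_pred) * mask
    return tf.sqrt(tf.reduce_sum(sq_err) / (tf.reduce_sum(mask) + 1e-8))
```

Passing this in place of `rmse_loss` above requires the training labels to carry the confidence map as an extra channel.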

Generalizability was promoted through data augmentation, altering 90% of the mini-batch data before it was fed into the network [22]. Data augmentation entailed mirroring and cropping of the image, contrast reduction or amplification, random rotation by a maximum of 10 degrees, and full-image translation. All coordinate transformations were also applied to the SWE labels. Furthermore, we applied dropout after each max-pooling step to avoid overfitting [23]. This regularization method involves randomly removing (in our case 50% of the) nodes at each training epoch, while switching on all units during testing. As a consequence, inference is based on an approximate average of all these trained dropout networks [23], acting as an ensemble.
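The augmentation step could be sketched with SciPy as below; the translation and contrast ranges are our assumptions (only the 10-degree rotation limit and the 90% alteration rate are given), and random cropping is omitted for brevity.

```python
import numpy as np
import scipy.ndimage as ndi

def augment_pair(bmode, swe, p=0.9, rng=np.random):
    """Randomly alter 90% of samples; geometric transforms are applied
    identically to the SWE label, intensity changes to the B-mode only."""
    if rng.rand() > p:
        return bmode, swe
    if rng.rand() < 0.5:                          # mirroring
        bmode, swe = bmode[:, ::-1], swe[:, ::-1]
    angle = rng.uniform(-10, 10)                  # rotation up to 10 degrees
    bmode = ndi.rotate(bmode, angle, reshape=False, mode="nearest")
    swe = ndi.rotate(swe, angle, reshape=False, mode="nearest")
    shift = rng.uniform(-5, 5, size=2)            # full-image translation
    bmode = ndi.shift(bmode, shift, mode="nearest")
    swe = ndi.shift(swe, shift, mode="nearest")
    # Contrast reduction or amplification (B-mode only; range assumed).
    bmode = np.clip(bmode * rng.uniform(0.8, 1.2), 0.0, 1.0)
    return bmode, swe
```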

The model was implemented using Keras with the TensorFlow (Google, Mountain View, CA) back-end. Both for training and inference, we employed a Titan XP (NVIDIA, Santa Clara, CA).

II-D Validation Methodology

Prior to training, our dataset was divided into a training set of 40 patients (consisting of 360 transrectal side-by-side B-mode/SWE images with a varying region-of-interest size) and a test set of 10 patients (90 images). All images from the training-set patients were used to maximize the training input and reduce the impact of artefacts, whereas only the three full-prostate images of each test patient were used during testing to ensure that all prostate regions contributed equally to the validation. To evaluate the performance of the DCNN, both the RMSE and the mean absolute error (MAE) were monitored:

$$\mathrm{MAE} = \frac{1}{N} \sum_{n=1}^{N} \left| y_n - \hat{y}_n \right|. \qquad (2)$$

The RMSE was chosen as the loss function because it penalizes large errors more heavily than the similar MAE, and thus allows us to put more weight on the accurate estimation of occasionally-occurring lesions in otherwise low-to-medium-elasticity images. For validation we also considered the mean error (ME),

$$\mathrm{ME} = \frac{1}{N} \sum_{n=1}^{N} \left( y_n - \hat{y}_n \right), \qquad (3)$$

a measure that reflects a potential bias towards higher or lower Young's moduli.
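A NumPy sketch of these three validation measures, expressed in kPa by undoing the normalization of Section II-A, is given below; restricting the evaluation to high-confidence SWE pixels mirrors the training mask and is our assumption for the test set.

```python
import numpy as np

def evaluate(y_true, y_pred, conf=None, threshold=0.75):
    """Pixel-wise RMSE, MAE (Eq. 2) and ME (Eq. 3) in kPa; with this sign
    convention a negative ME indicates overestimated Young's moduli."""
    err = (y_true - y_pred) * 100.0   # undo the normalization by 100 kPa
    if conf is not None:
        err = err[conf >= threshold]  # keep high-confidence pixels only
    return {"RMSE": float(np.sqrt(np.mean(err ** 2))),
            "MAE": float(np.mean(np.abs(err))),
            "ME": float(np.mean(err))}
```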

Fig. 3: Examples from five test patients, with (a) B-mode ultrasound imaging, (b) shear-wave elastographic acquisition, and (c) corresponding synthetic SWE (sSWE) image by deep learning.

In order to study to what extent the higher-level features are independent of the machine used for the B-mode acquisition, we encoded both the B-mode images recorded with the Philips iU22 scanner and the B-mode images from the test set obtained with the original SuperSonic Aixplorer device. Subsequently, we examined the latent feature space through t-Distributed Stochastic Neighbor Embedding (t-SNE), a probabilistic approach to dimensionality reduction [24].
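A sketch of this analysis follows, reusing the "latent" marker layer from the architecture sketch (the layer name is our own label) and scikit-learn's t-SNE; the perplexity setting and the image-stack variable names are assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE
from tensorflow.keras import Model

# Sub-model that encodes B-mode images into the latent feature space.
encoder = Model(model.input, model.get_layer("latent").output)

# bmode_aixplorer / bmode_iu22: B-mode image stacks from the two scanners.
f_aix = encoder.predict(bmode_aixplorer).reshape(len(bmode_aixplorer), -1)
f_iu22 = encoder.predict(bmode_iu22).reshape(len(bmode_iu22), -1)

# Embed the pooled high-level features into two dimensions.
embedding = TSNE(n_components=2, perplexity=30).fit_transform(
    np.concatenate([f_aix, f_iu22], axis=0))
```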

Fig. 4: Examples of sSWE generalisation to full-screen B-mode acquisitions in the same test patients, with (a) B-mode ultrasound imaging, (b) corresponding shear-wave elastographic acquisition, and (c) corresponding synthetic SWE (sSWE) image by deep learning.

III Results

In Figure 3, sSWE examples from five test patients are depicted alongside the B-mode and corresponding SWE images. Over the test set, we reached an RMSE of 9.7 kPa, an ME of -1.7 kPa, and an MAE of 4.8 kPa. The negative ME reveals that the model is slightly biased towards higher SWE estimates. Qualitatively, tumour locations recognizable on SWE also seem to be well estimated by the sSWE. Outside of the prostate, the SWE as well as the sSWE are generally of lower quality. Once trained, the time needed to generate an sSWE image is on the order of 1 ms.

Fig. 5: Visualization of B-mode images from both the original SuperSonic Aixplorer ultrasound scanner and the Philips iU22 scanner encoded into high-level features by the DCNN. Reduction of the dimensionality was carried out through t-Distributed Stochastic Neighbor Embedding into two dimensions.

Using full-screen B-mode acquisitions of the same imaging planes, we demonstrate the ability of sSWE to generalise to B-mode images obtained outside the SWE module. These B-mode images exhibit a different resolution and contrast compared to the side-by-side B-mode images. Nevertheless, even though the probe was allowed to exert more pressure on the prostate, generally bringing the prostate closer into view, Figure 4 shows that using these images as input for the trained sSWE model yields results that compare well qualitatively with the corresponding SWE images. This suggests that the DCNN extracts higher-level features that are shared among transrectal B-mode images in general.

As can be appreciated in Figure 5, depicting the results of t-SNE of the latent feature space, there is only a slight difference in how data from the iU22 and Aixplorer US scanners are mapped into the resulting two-dimensional subspace. This suggests that the information encoded in the high-level features generally persists from acquisition to acquisition. Moreover, although the Philips ultrasound machine does not have an SWE option, Figure 6 demonstrates that stiff regions as revealed by sSWE correspond to those found by QSE, which was available on the device.

Fig. 6: Examples of sSWE results in a non-SWE ultrasound device, with (a) B-mode ultrasound imaging, (b) quasi-static elastographic acquisition, and (c) corresponding synthetic SWE (sSWE) image by deep learning.

IV Discussion

In this work, we describe and validate a DCNN architecture that provides robust generation of synthetic SWE images based on B-mode ultrasound. This approach is in line with other recently-proposed inter-modality image synthesis techniques, such as the generation of computed tomography from magnetic resonance images [16, 25, 26] or vice versa [27]. Validation in 30 full-prostate SWE images from 10 patients demonstrated a pixel-wise MAE of 4.8 kPa, less than 10% deviation in the clinically-relevant range of 0-70 kPa. Accordingly, it seems that B-mode ultrasound patterns harbour information that can be linked to tissue elasticity.

A major advantage of the proposed technique is that, once the DCNN is trained, generation of sSWE images is extremely fast. One can envision B-mode acquisitions being readily appended with sSWE in the future. Another major advantage is that the quality of the sSWE image depends only on the quality of the B-mode image, whereas SWE images are known to be sensitive to e.g. probe pressure, motion artefacts, and the region of interest [9].

In our example of the prostate, quick estimation of elastic properties would eventually not only support the assessment of potential disease, but also registration technology that takes into account mechanical properties [28] and the (automatic) identification of anatomical zones [29]. Moreover, sSWE features can potentially play an important role in the design of ultrasound-based computer-aided detection approaches for prostate cancer [30].

Nonetheless, these results are preliminary in the sense that only a small dataset from a specific organ and a limited number of machines has been taken into account. To provide more robust evidence for the proof-of-principle work presented in this paper, a larger and more varied SWE dataset containing different organs and acquisitions should be examined. The availability of a greater variety of data might also allow the training of a deeper network, which may result in more robust and potentially more accurate sSWE estimation. An in-depth study of SWE images that were incorrectly estimated might guide us towards more effective augmentation techniques or highlight the type of acquisitions that should be more abundant in the training set for future data collection. Furthermore, as we already found indications that sSWE might be generalisable to other ultrasound machines, the use of domain adaptation techniques to ensure high-quality, machine-independent sSWE can be envisaged [31]. As shown in Figure 5, the high-level feature values generally differ little between scanners, and minimal domain adaptation strategies could already enforce full overlap. To this end, for example, shift techniques could be utilized to adjust the mean and variance of the latent features.
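As an illustration of such a shift technique, the latent features of one scanner could be renormalized to the first- and second-order statistics of the other; this moment-matching sketch is our own example of the idea, not a method evaluated in the paper.

```python
import numpy as np

def match_latent_moments(feats_src, feats_ref, eps=1e-8):
    """Shift and scale source-scanner latent features so that their
    per-feature mean and variance match those of the reference scanner."""
    mu_s, sd_s = feats_src.mean(axis=0), feats_src.std(axis=0)
    mu_r, sd_r = feats_ref.mean(axis=0), feats_ref.std(axis=0)
    return (feats_src - mu_s) / (sd_s + eps) * sd_r + mu_r
```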

A possible extension of the proposed network could be the concurrent estimation of SWE confidence, which could be used to identify low-confidence regions due to shear-wave artefacts, such as signal voids in (pseudo)liquid lesions, or B-mode artefacts, such as shadowing or reverberation. In the future, an sSWE implementation could also be extended to predict elasticity-related parameters other than the Young's modulus, such as viscosity [32], which is considered an additional biomarker for cancer in e.g. the prostate [33]. At present, however, there is still a lack of accurate techniques that can assess tissue viscoelastic properties at the high spatial resolution needed for the development of such networks.

V Conclusion

In conclusion, we have proposed a DCNN architecture that generates synthetic SWE images based on B-mode ultrasound acquisitions. Although further validation of the method is still required, development of this technique opens the possibility of elasticity-like tissue characterisation without the need for complex SWE acquisition protocols. This would enable SWE-like analysis by basic US scanners, which could even be low-end systems.

VI Acknowledgements

This study has received funding from the Dutch Cancer Society (#UVA2013-5941) and a European Research Council Starting Grant (#280209), and was performed within the framework of the IMPULS2 program at the Eindhoven University of Technology in collaboration with Philips.

References

  • [1] J.-M. Correas, A.-M. Tissier, A. Khairoune, G. Khoury, D. Eiss, and O. Hélénon, “Ultrasound elastography of the prostate: State of the art,” Diagnostic and Interventional Imaging, vol. 94, no. 5, pp. 551–560, May 2013.
  • [2] F. Sebag, J. Vaillant-Lombard, J. Berbis, V. Griset, J. F. Henry, P. Petit, and C. Oliver, “Shear Wave Elastography: A New Ultrasound Imaging Mode for the Differential Diagnosis of Benign and Malignant Thyroid Nodules,” The Journal of Clinical Endocrinology & Metabolism, vol. 95, no. 12, pp. 5281–5288, Dec. 2010.
  • [3] R. G. Barr, “Shear wave liver elastography,” Abdominal Radiology, vol. 43, no. 4, pp. 800–807, 2018.
  • [4] J. M. Chang, W. K. Moon, N. Cho, A. Yi, H. R. Koo, W. Han, D.-Y. Noh, H.-G. Moon, and S. J. Kim, “Clinical application of shear wave elastography (SWE) in the diagnosis of benign and malignant breast diseases,” Breast Cancer Research and Treatment, vol. 129, no. 1, pp. 89–97, 2011.
  • [5] M. S. Taljanovic, L. H. Gimber, G. W. Becker, L. D. Latt, A. S. Klauser, D. M. Melville, L. Gao, and R. S. Witte, “Shear-Wave Elastography: Basic Physics and Musculoskeletal Applications,” RadioGraphics, vol. 37, no. 3, pp. 855–870, May 2017.
  • [6] R. M. S. Sigrist, J. Liau, A. El Kaffas, M. C. Chammas, and J. K. Willmann, “Ultrasound elastography: review of techniques and clinical applications,” Theranostics, vol. 7, no. 5, p. 1303, 2017.
  • [7] J.-L. Gennisson, T. Deffieux, M. Fink, and M. Tanter, “Ultrasound elastography: Principles and techniques,” Diagnostic and Interventional Imaging, vol. 94, no. 5, pp. 487–495, 2013.
  • [8] K. Nightingale, “Acoustic Radiation Force Impulse (ARFI) Imaging: a Review,” Current Medical Imaging Reviews, vol. 7, no. 4, pp. 328–339, Nov. 2011.
  • [9] P. Bouchet, J.-L. Gennisson, A. Podda, M. Alilet, M. Carrié, and S. Aubry, “Artifacts and Technical Restrictions in 2D Shear Wave Elastography,” Ultraschall in der Medizin, no. EFirst, 2018.
  • [10] A. P. Sarvazyan, O. V. Rudenko, S. D. Swanson, J. Fowlkes, and S. Y. Emelianov, “Shear wave elasticity imaging: a new ultrasonic technology of medical diagnostics,” Ultrasound in Medicine & Biology, vol. 24, no. 9, pp. 1419–1435, 1998.
  • [11] M. Feigin, D. Freedman, and B. W. Anthony, “A deep learning framework for single sided sound speed inversion in medical ultrasound,” arXiv preprint arXiv:1810.00322, 2018.
  • [12] S. Wu, Z. Gao, Z. Liu, J. Luo, H. Zhang, and S. Li, “Direct reconstruction of ultrasound elastography using an end-to-end deep neural network,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018, pp. 374–382.
  • [13] T. Ahmed and M. Hasan, “SHEAR-net: An End-to-End Deep Learning Approach for Single Push Ultrasound Shear Wave Elasticity Imaging,” arXiv preprint arXiv:1902.04845, 2019.
  • [14] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
  • [15] H. Noh, S. Hong, and B. Han, “Learning deconvolution network for semantic segmentation,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1520–1528.
  • [16] X. Han, “MR-based synthetic CT generation using a deep convolutional neural network method,” Medical Physics, vol. 44, no. 4, pp. 1408–1419, 2017.
  • [17] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” International Conference on Medical image computing and computer-assisted intervention, vol. 18, pp. 234–241, 2015.
  • [18] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proceedings of the International Conference on Machine Learning (ICML), 2013.
  • [19] M. Ranzato, F.-J. Huang, Y.-L. Boureau, and Y. LeCun, “Unsupervised learning of invariant feature hierarchies with applications to object recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
  • [20] O. Rouvière, C. Melodelima, A. Hoang Dinh, F. Bratan, G. Pagnoux, T. Sanzalone, S. Crouzet, M. Colombel, F. Mège-Lechevallier, and R. Souchon, “Stiffness of benign and malignant prostate tissue measured by shear-wave elastography: a preliminary study,” European Radiology, vol. 27, no. 5, pp. 1858–1866, 2017.
  • [21] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • [23] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • [24] L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, no. Nov, pp. 2579–2605, 2008.
  • [25] T. Huynh, Y. Gao, J. Kang, L. Wang, P. Zhang, J. Lian, and D. Shen, “Estimating CT image from MRI data using structured random forest and auto-context model,” IEEE Transactions on Medical Imaging, vol. 35, no. 1, pp. 174–183, 2016.
  • [26] J. M. Wolterink, A. M. Dinkla, M. H. F. Savenije, P. R. Seevinck, C. A. T. van den Berg, and I. Išgum, “Deep MR to CT synthesis using unpaired data,” in International Workshop on Simulation and Synthesis in Medical Imaging.   Springer, 2017, pp. 14–23.
  • [27] C.-B. Jin, W. Jung, S. Joo, E. Park, A. Y. Saem, I. H. Han, J. I. Lee, and X. Cui, “Deep CT to MR synthesis using paired and unpaired data,” arXiv preprint arXiv:1805.10790, 2018.
  • [28] R. R. Wildeboer, R. J. G. van Sloun, A. W. Postema, C. K. Mannaerts, M. Gayet, H. P. Beerlage, H. Wijkstra, and M. Mischi, “Accurate validation of ultrasound imaging of prostate cancer: a review of challenges in registration of imaging and histopathology,” Journal of Ultrasound, vol. 21, no. 3, pp. 197–207, 2018.
  • [29] R. J. van Sloun, R. R. Wildeboer, C. K. Mannaerts, A. W. Postema, M. Gayet, H. P. Beerlage, G. Salomon, H. Wijkstra, and M. Mischi, “Deep Learning for Real-time, Automatic, and Scanner-adapted Prostate (Zone) Segmentation of Transrectal Ultrasound, for Example, Magnetic Resonance Imaging–transrectal Ultrasound Fusion Prostate Biopsy,” European Urology Focus, vol. in press, 2019.
  • [30] G. Lemaître, R. Martí, J. Freixenet, J. C. Vilanova, P. M. Walker, and F. Meriaudeau, “Computer-Aided Detection and diagnosis for prostate cancer based on mono and multi-parametric MRI: A review,” Computers in Biology and Medicine, vol. 60, pp. 8–31, may 2015.
  • [31] M. Wang and W. Deng, “Deep visual domain adaptation: A survey,” Neurocomputing, vol. 312, pp. 135–153, 2018.
  • [32] R. van Sloun, R. Wildeboer, H. Wijkstra, and M. Mischi, “Viscoelasticity Mapping by Identification of Local Shear Wave Dynamics,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 64, no. 11, pp. 1666–1673, 2017.
  • [33] M. Zhang, P. Nigwekar, B. Castaneda, K. Hoyt, J. V. Joseph, A. di Sant’Agnese, E. M. Messing, J. G. Strang, D. J. Rubens, and K. J. Parker, “Quantitative Characterization of Viscoelastic Properties of Human Prostate Correlated with Histology,” Ultrasound in Medicine & Biology, vol. 34, no. 7, pp. 1033–1042, Jul. 2008.