The Role of Pleura and Adipose in Lung Ultrasound AI

by Gautam Rajendrakumar Gare, et al.

In this paper, we study the significance of the pleura and adipose tissue in AI analysis of lung ultrasound. We highlight their more prominent appearance when using high-frequency linear (HFL) rather than curvilinear ultrasound probes, with HFL revealing substantially greater pleural detail. We then compare the diagnostic utility of the pleura and adipose tissue using an HFL ultrasound probe. Masking the adipose tissue during training and inference, while retaining the pleural line and Merlin's-space artifacts such as A-lines and B-lines, improved the AI model's diagnostic accuracy.






1 Introduction

Point-of-care ultrasound (POCUS) is a non-invasive, real-time, bedside patient monitoring tool that is ideal for working with infectious diseases such as COVID-19. POCUS does not require the transport of critically ill contagious patients to a radiology suite, making POCUS an easy choice for obtaining serial imaging to closely monitor disease progression. This has led to a significant interest in developing AI approaches for the interpretation of lung ultrasound (LUS) imaging [1, 13, 19].

A healthy lung is filled with air, which results in poor visibility of the internal anatomy in ultrasound images. Typical clinical practice for pulmonary ultrasound does not try to image the internal tissue of the lung but rather focuses on artifacts (e.g., “B Lines”) that are physically generated at the pleural membrane line. Traditionally, lung ultrasound in the ICU uses a low frequency (1-5 MHz) curvilinear or phased array (i.e., echocardiography) probe, which provides relatively deep and wide imaging. This approach is excellent for penetrating soft-tissues superficial to the lung and is standard practice for detecting prominent B-lines. Unsurprisingly, most AI research has followed typical clinical practice and focused on curvilinear probes and B-lines [13, 18].

However, low-frequency curvilinear probes provide very poor detail of the pleural line. High-frequency linear (HFL) ultrasound probes (typically in the 5-15 MHz range) offer higher resolution of the pleural line, but they cannot image deeper than 6-10 cm. This reduced imaging depth is not a fundamental problem for lung ultrasound, except in cases of obesity or deep lung consolidation. The wavefronts generated by a linear-array probe travel in parallel and generally intersect the visceral pleura perpendicularly, and the acoustic reflection is displayed as a rectangular image. In contrast, the acoustic wavefront from a curvilinear probe travels radially and strikes the visceral pleura at various incident angles. In Fig. 1 we observe that the curvilinear probe shows more depth, whereas the linear probe shows the pleural line in better detail. HFL probes are especially suitable for the easily accessible L1 and R1 viewpoints, where their more detailed picture of the shallow lung's visceral pleural membrane can have diagnostic value.

Precise pleural line imaging may be more useful than deeper imaging penetration, even for pulmonary diseases that do not primarily manifest in the pleura. For example, Carrer et al.’s automated method [4] to extract the pleural line and assess lung severity achieved better results from a linear probe compared to those from a curvilinear probe. Neonatal lung ultrasound analysis, which has also been extensively researched [9], typically calls for an HFL probe for its improved lung surface image quality. Although recent large datasets on COVID-19 [2, 13, 18] predominantly contain images from lower frequency curvilinear and phased array probes, they also include a small portion of linear probe images based on their observation that a linear probe is better for pleura visualization [2, 16].

Since the HFL probe provides a better view of the pleural line, it can be a better option for COVID-19 lung ultrasound diagnosis. The SARS-CoV-2 virus that causes COVID-19 binds to ACE-2 receptors of epithelial cells lining the bronchi and alveoli, and of endothelial cells lining the pulmonary capillaries. The lung injury from COVID-19 involves interalveolar septae that perpendicularly abut the visceral pleura; therefore, the pleural line should be a focus of the investigation. B-lines, which radiate deeply below the pleural line, have been extensively reported on. Although they are visualized below the pleural line, they are merely reverberation artifacts that emanate from within it. Thickening and disruption of the pleural line are subtle signs of underlying lung pathology that are poorly visualized using a curvilinear probe. In some cases these pleural line abnormalities appear in the absence of B-lines, and such cases might therefore be misclassified if a curvilinear probe were used. We propose that there are important and clinically relevant anatomic details visible in HFL images that are lacking in curvilinear images. Focusing on the pleural line itself, where pathology manifests earliest, may yield clinically relevant information more directly than limiting interpretation to artifact appearance.

When performing lung ultrasound with any probe, acoustic waves first propagate through the keratinized layer of the outer skin, i.e., the epidermis. The sound then travels through fibrous tissue and capillary networks in the deeper dermis, and then through adipose tissue and muscle bundles covered in fibrous fascia. The acoustic wavefront must then traverse the 1-4 micron thick fibrous parietal pleura lining the inside of the chest cavity before reaching the lung's visceral pleura. The acoustic characteristics of each of these structures are affected by probe location and patient characteristics, including age, sex, anatomy, lean body mass, and fat mass. The linear probe shows not only the pleura but also the subcutaneous (SubQ) tissue structure in detail, which would otherwise occupy relatively few pixels with a curvilinear probe. This introduces an additional challenge: a purported lung-AI network could rely on the SubQ rather than the lung regions to make its diagnosis, since AI might learn associations between these soft-tissue structures and specific disease characteristics. For example, it is well established that obesity and older age are risk factors for severe COVID. It is therefore important that the training and testing of AI approaches to the diagnosis of lung diseases consider the impact of the subcutaneous tissues on diagnostic accuracy.

We can broadly categorize the regions that constitute the linear-probe ultrasound image into the subcutaneous region, the pleura, and Merlin's space (i.e., real and artifact pixels beneath the pleural line) [10]. In the following sections, we assess the diagnostic power of these regions by generating images that emphasize each region while masking out the others. We study the diagnostic ability of the subcutaneous region (subq), subcutaneous+pleura (subq+pleura), the pleural region (pleural), Merlin's region (merlin), and the pleural+Merlin's region (pleural+merlin). In addition, we explore masking out indirect adipose/obesity information implicitly encoded by the depth and curvature of the pleura: we straighten the overall bend of the pleural line and mask out the depth by shifting the pleura up to a fixed distance from the top of the image. Refer to Fig. 3 for sample masked images.
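As an illustration, this three-way split can be implemented with simple row-index masks once the pleural line's upper and lower boundaries are known for each image column. The following is a minimal NumPy sketch; the function and argument names are ours, not from the paper's code, and zero-filling the masked-out pixels is one plausible choice the paper does not pin down:

```python
import numpy as np

def mask_regions(image, pleura_top, pleura_bottom, region="pleural+merlin"):
    """Zero out pixels outside the requested region(s).

    image: (H, W) B-mode frame.
    pleura_top / pleura_bottom: per-column row indices (length W) of the
    pleural line's upper and lower boundaries, as produced by segmentation.
    Region names follow the paper's naming (subq, pleural, merlin, ...).
    """
    h, w = image.shape
    rows = np.arange(h)[:, None]                       # (H, 1), broadcasts over columns
    subq = rows < pleura_top[None, :]                  # above the pleural line
    pleura = (rows >= pleura_top[None, :]) & (rows <= pleura_bottom[None, :])
    merlin = rows > pleura_bottom[None, :]             # Merlin's space, below the line
    keep = {"subq": subq, "pleural": pleura, "merlin": merlin,
            "subq+pleural": subq | pleura,
            "pleural+merlin": pleura | merlin}[region]
    return np.where(keep, image, 0)
```

Because the pleural line is curved, the boundaries vary per column, so the masks are ragged rather than simple horizontal bands.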

2 Methodology

2.0.1 Problem Statement

Given an ultrasound B-mode scan clip, the task is to learn a function that maps the clip to the ultrasound severity-score labels defined by [13]. Because the pleural line produces distinct artifacts (A-lines, B-lines) when scattering ultrasound depending on the lung condition, the classification model should learn the underlying mappings between the pleural line, artifacts, and pixel values to make its predictions.

2.1 SubQ Masking

In lung ultrasound images, the SubQ tissue has more complicated structures than those in the lung region, and these structures might degrade the performance of AI-based diagnosis. The brighter and more complicated SubQ region elicits a larger response from the CNN layers than the lung region, yet it provides little information on the underlying lung disease and might even interfere with the performance of the deep neural network. To understand the role of the pleura and the adipose in COVID-19 diagnosis, we apply different maskings to the lung ultrasound images, dividing each image into SubQ, pleural line, and Merlin's region.

2.1.1 Pleural Line Segmentation

The pleural line separates the SubQ tissue from Merlin's region in the ultrasound image and usually appears as the lowest bright, wide horizontal line. To segment the pleural line, we first apply a 5x5 Gaussian filter to blur the image and reduce speckle noise. We then resize it to 150x150 to further reduce the influence of speckle on the segmentation. To find candidate pixels belonging to a bright horizontal line, we threshold the image based on its response to a Sobel filter along the y-axis with a 3x3 kernel, and then threshold it based on intensity. We select the thresholds in Eq. 1 and Eq. 2 after tuning. In each column, we then keep the lowest candidate point, use dilation to fill gaps along the line, and cluster the pixels into regions based on connectivity. We keep the region with the largest area and move the other regions to the same level as the largest region, by adding an offset to the y-coordinates of the candidate pixels in each of the other regions; the offset is the difference between the minimal y-coordinate of that region and that of the largest region. Finally, we fit a fourth-order polynomial curve to the candidate pixels and extend the curve by more than 10 pixels along the tangent line at each of its two endpoints.


where the thresholds in Eq. 1 and Eq. 2 are defined in terms of the raw image and of the image's response to the Sobel filter, respectively.

With the segmented pleural line, the region above this line is the selected SubQ region, and the region below this line is the selected Merlin’s region.
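A simplified NumPy sketch of this segmentation pipeline is given below. We keep only the core steps (vertical-gradient and intensity thresholding, lowest-candidate selection per column, and the fourth-order polynomial fit) and omit the Gaussian blur, 150x150 resize, dilation, and connected-component cleanup; the two threshold defaults merely stand in for the tuned values of Eq. 1 and Eq. 2, which are not reproduced here:

```python
import numpy as np

def segment_pleural_line(frame, sobel_thresh=0.1, intensity_thresh=0.5):
    """Return the per-column depth of the pleural line for a [0, 1] image.

    Simplified sketch: the paper additionally blurs, resizes, dilates, and
    keeps only the largest connected candidate region before curve fitting.
    """
    h, w = frame.shape
    # Vertical-gradient (Sobel-y, 3x3) response, computed on the interior.
    ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
    sobel_y = np.zeros_like(frame)
    for dy in range(3):
        for dx in range(3):
            sobel_y[1:h-1, 1:w-1] += ky[dy, dx] * frame[dy:h-2+dy, dx:w-2+dx]
    # Candidate pixels: bright AND on a strong vertical edge (thresholds
    # stand in for Eq. 1 and Eq. 2).
    cand = (np.abs(sobel_y) > sobel_thresh) & (frame > intensity_thresh)
    # In each column keep only the lowest (deepest) candidate pixel.
    xs, ys = [], []
    for c in range(w):
        rows = np.flatnonzero(cand[:, c])
        if rows.size:
            xs.append(c)
            ys.append(rows[-1])
    # Fourth-order polynomial fit through the surviving candidates
    # (needs at least 5 candidate columns to be well posed).
    coeffs = np.polyfit(xs, ys, deg=4)
    return np.polyval(coeffs, np.arange(w))
```

Evaluating the fitted polynomial over all columns also serves as the "extension along the tangent" for columns that had no candidate pixels.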

2.1.2 Pleural Line Straightening

We straighten and shift up the pleura in order to mask out the adipose/obesity information indirectly encoded in the curvature and depth of the pleura. In addition, different probe pressures create different appearances of the pleural line in the images, so we also want to eliminate the effect of this arbitrary variable. We therefore straighten the pleural lines while maintaining their local “bumps,” so that local pleural information is not lost. In practice, we crop the images 5 pixels above the pleural lines to preserve the information on the pleural lines and underneath them. We take the upper boundary of each segmented pleural line and fit a cubic function to it (we did not use a higher-order function because we would like to preserve the local information on the pleural lines). We then shift each column of the image upwards or downwards so that the cubic curve becomes a horizontal straight line (refer to Fig. 3 for a sample straightened image).
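A minimal sketch of the straightening step, assuming the upper boundary of the segmented pleural line is available as a per-column row index; the fixed distance from the top of the image is folded into a `margin` parameter here (names are ours):

```python
import numpy as np

def straighten_pleura(frame, upper_boundary, margin=5):
    """Fit a cubic trend to the pleural line's upper boundary, then shift
    each column so the fitted curve becomes a horizontal line `margin`
    pixels from the top. Local bumps (residuals from the cubic) survive.
    """
    h, w = frame.shape
    cols = np.arange(w)
    trend = np.polyval(np.polyfit(cols, upper_boundary, deg=3), cols)
    out = np.zeros_like(frame)
    for c in cols:
        shift = int(round(trend[c])) - margin   # positive: move column up
        col = frame[:, c]
        if shift >= 0:
            out[:h - shift, c] = col[shift:]
        else:
            out[-shift:, c] = col[:h + shift]
    return out
```

Because only the low-order cubic trend is removed, column-to-column irregularities of the pleural line are preserved in the straightened image.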

2.2 Data

Under IRB approval, we curated our own lung ultrasound dataset consisting primarily of linear probe videos. Our dataset consists of multiple ultrasound B-mode scans of L1 and R1 (left and right superior anterior) lung regions at depths ranging from 4cm to 6cm under different scan settings, obtained using a Sonosite X-Porte ultrasound machine. The dataset consists of ultrasound scans of 93 unique patients from the pulmonary ED during COVID-19, and some patients were re-scanned on subsequent dates, yielding 210 videos.

We use the same 4-level ultrasound severity scoring scheme as defined in [15], which is similarly used in [13]. Score 0 indicates a normal lung, with a continuous pleural line and horizontal A-line artifacts. Scores 1 to 3 signify an abnormal lung: score 1 indicates alterations in the pleural line with vertical B-line artifacts, score 2 indicates the presence of B-lines, and score 3 signifies confluent B-lines with large consolidations (refer to [17] for sample images corresponding to the severity scores). All manual labeling was performed by individuals with at least a month of training from a pulmonary ultrasound specialist. We have 27, 84, 75, and 24 videos labeled as scores 0, 1, 2, and 3, respectively.

Figure 1: Curvilinear (left) vs. linear (right) probe at the same L1 position during a single patient session. The curvilinear probe shows more depth, whereas the linear probe shows the pleural line in better detail.
Figure 2: ROC plots with AUC (macro-averaged) of the trained models for video-based lung-severity scoring.

2.2.1 Data Preprocessing

We perform dataset upsampling to address the class imbalance in the training data, wherein we upsample all minority-class labeled data to obtain a balanced training dataset [12]. All images are resized to 224x224 pixels using bilinear interpolation. We augment the training data using random horizontal (left-to-right) flipping and by scaling the image-pixel intensities by various factors.
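The upsampling step can be sketched as plain random duplication of minority-class clips until every severity score matches the largest class (the paper follows [12]; names here are illustrative):

```python
import random

def upsample_balanced(samples):
    """Duplicate minority-class clips so every label has as many training
    clips as the largest class. `samples` is a list of (clip_id, label).
    """
    by_label = {}
    for s in samples:
        by_label.setdefault(s[1], []).append(s)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)                           # originals
        balanced.extend(random.choices(group, k=target - len(group)))  # duplicates
    return balanced
```

Upsampling is applied only to the training folds, never to the validation fold or held-out test set, so the evaluation distribution stays untouched.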
2.3 Architecture

We carry out all our experiments on the TSM network [11] with a ResNet-18 (RN18) [6] backbone, a combination commonly used for video classification and benchmarking. The TSM module makes use of 2D CNNs with channel mixing along the temporal direction to infuse temporal information into the network. We use the bi-directional residual shift, with channels shifted in both directions, as recommended in [11]. The model is fed 18-frame input clips sampled from each video by dividing the video into 18 equal segments and selecting an equally spaced frame from each segment, beginning with a random start frame.
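The segment-based frame sampling can be sketched as below. This is one plausible reading of the sampling described above, with a single random start offset shared across the equally spaced segment frames:

```python
import random

def sample_clip_indices(num_frames, clip_len=18):
    """Pick `clip_len` frame indices by splitting the video into
    `clip_len` equal segments and taking one equally spaced frame per
    segment, offset by a random start frame within the first segment."""
    seg = num_frames / clip_len
    start = random.randrange(max(int(seg), 1))   # random offset, 0 .. seg-1
    return [min(int(i * seg) + start, num_frames - 1) for i in range(clip_len)]
```

The `min(..., num_frames - 1)` clamp keeps the sketch safe for clips shorter than 18 frames, where segments overlap.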

2.4 Training Strategy

2.4.1 Implementation

The network is implemented in PyTorch and trained using the stochastic gradient descent algorithm [3] with an Adam optimizer [8], to optimize the cross-entropy loss. The model is trained on an Nvidia Titan RTX GPU with a batch size of 8 for 50 epochs. We use the ReduceLROnPlateau learning-rate scheduler, which reduces the learning rate by a factor of 0.5 when the performance metric (accuracy) plateaus on the validation set. For the final evaluation, we pick the model with the highest validation-set accuracy to test on the held-out test set.
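The scheduler's behavior can be illustrated with a small plain-Python stand-in for PyTorch's ReduceLROnPlateau (the patience value here is illustrative; the text above only specifies the 0.5 factor and the accuracy metric):

```python
class ReduceOnPlateau:
    """Plain-Python sketch of the scheduling rule described above: halve
    the learning rate when validation accuracy stops improving for more
    than `patience` consecutive epochs."""

    def __init__(self, lr, factor=0.5, patience=5):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad_epochs = float("-inf"), 0

    def step(self, val_accuracy):
        if val_accuracy > self.best:                 # improvement: reset counter
            self.best, self.bad_epochs = val_accuracy, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:      # plateau: decay and reset
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

In the actual training loop, `step` would be called once per epoch with the validation accuracy, and the returned rate applied to the optimizer.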

2.4.2 Metrics

For the severity classification, we report accuracy and F1 score [1, 13]. We also report the receiver operating characteristic (ROC) curve along with its area under the curve (AUC) metric [7], taking a weighted average in which the weights correspond to the support of each class, and using the one-vs-rest approach for the multi-class setting [5].
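A NumPy sketch of the support-weighted one-vs-rest AUC described above; the rank-statistic AUC used here ignores score ties, which library implementations typically handle:

```python
import numpy as np

def auc_binary(scores, labels):
    """ROC AUC via the rank (Mann-Whitney) statistic; ties ignored."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def weighted_ovr_auc(probs, labels, n_classes=4):
    """One-vs-rest AUC per severity class, averaged with weights equal to
    each class's support. `probs` is (N, n_classes); `labels` is (N,)."""
    aucs, weights = [], []
    for k in range(n_classes):
        y = (labels == k).astype(int)
        aucs.append(auc_binary(probs[:, k], y))
        weights.append(y.sum())
    return np.average(aucs, weights=weights)
```

With equal class support this reduces to the macro average used for the ROC plots in Fig. 2.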

3 Experiments

We train the model on the various masked inputs and compare its performance in predicting video-based lung-severity score labels. We randomly split the dataset into a training set and a separate held-out test set with a 78%/22% split ratio, randomly selecting videos while retaining the same distribution across the lung-severity scores in both sets and ensuring no patient overlap between them. Using the training set, we perform 5-fold cross-validation to create training and validation folds. The training set is upsampled to address the class imbalance [12]. We report the resulting metrics on the held-out test set in the form of mean and standard deviation over the five independent cross-validation runs.
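The patient-wise split can be sketched as sampling whole patients into the held-out set, which guarantees no patient overlap; for brevity this sketch omits the stratification over severity scores described above (names are illustrative):

```python
import random

def patient_wise_split(videos, test_frac=0.22, seed=0):
    """Split (video_id, patient_id, score) tuples into train/test sets by
    assigning whole patients to the held-out set, so no patient appears
    in both. Severity-score stratification is omitted in this sketch."""
    patients = sorted({v[1] for v in videos})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, round(test_frac * len(patients)))
    test_patients = set(patients[:n_test])
    train = [v for v in videos if v[1] not in test_patients]
    test = [v for v in videos if v[1] in test_patients]
    return train, test
```

Splitting at the patient level (rather than the video level) matters because re-scanned patients contribute several highly correlated videos.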

Method                        | AUC of ROC      | Accuracy        | F1-score
original                      | 0.6553 ± 0.0425 | 0.4565 ± 0.0659 | 0.4381 ± 0.0701
subq                          | 0.6154 ± 0.0619 | 0.4130 ± 0.0645 | 0.3656 ± 0.0956
pleural                       | 0.7119 ± 0.0410 | 0.3783 ± 0.0525 | 0.3665 ± 0.0483
merlin                        | 0.7076 ± 0.0436 | 0.4261 ± 0.0295 | 0.4183 ± 0.0299
subq+pleural                  | 0.6303 ± 0.0560 | 0.3652 ± 0.0374 | 0.3204 ± 0.0381
pleural+merlin                | 0.7742 ± 0.0648 | 0.5261 ± 0.1178 | 0.5040 ± 0.1467
straightened pleural+merlin   | 0.7642 ± 0.0401 | 0.5348 ± 0.0928 | 0.5166 ± 0.1016
Table 1: Video-based 4-severity-level lung classification AUC of ROC, accuracy, and F1 scores (mean ± standard deviation) on the 93-patient HFL lung dataset. The best AUC is achieved by pleural+merlin; the best accuracy and F1 by straightened pleural+merlin.

4 Results and Discussions

Table 1 shows the mean and standard deviation of the video-based severity scoring metrics, obtained by evaluating the models from the five independent runs on the held-out test set. The two models with pleural+merlin input achieve the highest scores on all metrics, with the straightened version performing best overall. The accuracy with the pleural input is lower than with the subq input, yet combining the two gives the worst accuracy of all. This counter-intuitive result may arise because the subq and pleura represent distinct diagnostic characteristics that the AI struggles to model jointly without seeing their correlations in Merlin's space. Performance on the original image is inferior to the pleural+merlin image, perhaps because eliminating the subq complexity makes it easier for the model to focus on the lung region when making its diagnosis, as seen in Fig. 3. Individually, merlin has the best scores compared to subq and pleural. Combining pleural+merlin significantly improves the diagnostic accuracy of the model. The macro-averaged ROC plots and AUC of the trained models are shown in Fig. 2.

Fig. 3 depicts the various masked images of a frame from a test video, along with the Grad-CAM [14] visualization on the first video frame for the model trained on each respective input. We observe that both pleural+merlin models focused on the pleural line and B-line artifacts, whereas the original-image model focused on the SubQ region. The combination of pleural+merlin helped the model focus on B-line artifacts better than merlin alone. For this test video, all models except subq and subq+pleural correctly predicted the severity score, suggesting that the subq region was not informative for this diagnosis.

[Figure 3 panels, left to right: original, subq, subq+pleural, merlin, pleural+merlin; second row: Grad-CAM.]
Figure 3: Grad-CAM [14] visualization of layer 4 of the trained model on the various masked test images (B-mode grey). We observe that the model trained on pleural+merlin bases the predictions predominantly on the pleural line and B-line artifacts, whereas the original image trained model predominantly bases the predictions on the subcutaneous tissues above the pleural line.

5 Conclusion

We highlighted the potential advantages of an HFL probe over the commonly used curvilinear probe in pulmonary ultrasound. We discussed the significance of a well-imaged pleural line in addition to pleural artifacts such as B-lines and A-lines, suggesting that AI analysis of the pleural line using a linear probe could open new avenues for challenging diagnoses. We demonstrated the diagnostic characteristics of the subcutaneous, pleural, and Merlin's-space regions of linear-probe ultrasound. From our experiments, we conclude that masking out the subcutaneous region while retaining the detailed pleura along with Merlin's space yields better diagnostic performance.

5.0.1 Acknowledgements

This work was sponsored in part by US Army Medical contract W81XWH-19-C0083. We are pursuing intellectual property protection. Galeotti serves on the advisory board of Activ Surgical, Inc. He and Rodriguez are involved in the startup Elio AI, Inc.


  • [1] J. Born, G. Brändle, M. Cossio, M. Disdier, J. Goulet, J. Roulin, and N. Wiedemann (2020) POCOVID-Net: Automatic detection of COVID-19 from a new lung ultrasound imaging dataset (POCUS). arXiv preprint.
  • [2] J. Born, N. Wiedemann, M. Cossio, C. Buhre, G. Brändle, K. Leidermann, A. Aujayeb, M. Moor, B. Rieck, and K. Borgwardt (2021) Accelerating detection of lung pathologies with explainable ultrasound image analysis. Applied Sciences 11 (2).
  • [3] L. Bottou (2010) Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT 2010, pp. 177–186.
  • [4] L. Carrer, E. Donini, D. Marinelli, M. Zanetti, F. Mento, E. Torri, A. Smargiassi, R. Inchingolo, G. Soldati, L. Demi, F. Bovolo, and L. Bruzzone (2020) Automatic pleural line extraction and COVID-19 scoring from lung ultrasound data. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 67 (11), pp. 2207–2217.
  • [5] T. Fawcett (2006) An introduction to ROC analysis. Pattern Recognition Letters 27 (8), pp. 861–874.
  • [6] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [7] H. E. Kim, H. H. Kim, B. K. Han, K. H. Kim, K. Han, H. Nam, E. H. Lee, and E. K. Kim (2020) Changes in cancer detection and false-positive recall in mammography using artificial intelligence: a retrospective, multireader study. The Lancet Digital Health 2 (3), pp. e138–e148.
  • [8] D. P. Kingma and J. L. Ba (2015) Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR).
  • [9] H. Y. Liang, X. W. Liang, Z. Y. Chen, X. H. Tan, H. H. Yang, J. Y. Liao, K. Cai, and J. S. Yu (2018) Ultrasound in neonatal lung disease. Quantitative Imaging in Medicine and Surgery 8 (5), pp. 535–546.
  • [10] D. Lichtenstein (2017) Novel approaches to ultrasonography of the lung and pleural space: where are we now? Vol. 13, European Respiratory Society.
  • [11] J. Lin, C. Gan, and S. Han (2019) TSM: temporal shift module for efficient video understanding. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7082–7092.
  • [12] M. M. Rahman and D. N. Davis (2013) Addressing the class imbalance problem in medical datasets. International Journal of Machine Learning and Computing, pp. 224–228.
  • [13] S. Roy, W. Menapace, S. Oei, B. Luijten, E. Fini, C. Saltori, I. Huijben, N. Chennakeshava, F. Mento, A. Sentelli, E. Peschiera, R. Trevisan, G. Maschietto, E. Torri, R. Inchingolo, A. Smargiassi, G. Soldati, P. Rota, A. Passerini, R. J.G. Van Sloun, E. Ricci, and L. Demi (2020) Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound. IEEE Transactions on Medical Imaging 39 (8), pp. 2676–2687.
  • [14] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2016) Grad-CAM: visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision 128 (2), pp. 336–359.
  • [15] Simple, Safe, Same: Lung Ultrasound for COVID-19. ClinicalTrials.gov record.
  • [16] G. Soldati, A. Smargiassi, R. Inchingolo, D. Buonsenso, T. Perrone, D. F. Briganti, S. Perlini, E. Torri, A. Mariani, E. E. Mossolani, F. Tursi, F. Mento, and L. Demi (2020) Is there a role for lung ultrasound during the COVID-19 pandemic? Vol. 39, John Wiley and Sons Ltd.
  • [17] G. Soldati, A. Smargiassi, R. Inchingolo, D. Buonsenso, T. Perrone, D. F. Briganti, S. Perlini, E. Torri, A. Mariani, E. E. Mossolani, F. Tursi, F. Mento, and L. Demi (2020) Proposal for international standardization of the use of lung ultrasound for patients with COVID-19. Journal of Ultrasound in Medicine 39 (7), pp. 1413–1419.
  • [18] W. Xue, C. Cao, J. Liu, Y. Duan, H. Cao, J. Wang, X. Tao, Z. Chen, M. Wu, J. Zhang, H. Sun, Y. Jin, X. Yang, R. Huang, F. Xiang, Y. Song, M. You, W. Zhang, L. Jiang, Z. Zhang, S. Kong, Y. Tian, L. Zhang, D. Ni, and M. Xie (2021) Modality alignment contrastive learning for severity assessment of COVID-19 from lung ultrasound and clinical information. Medical Image Analysis 69, pp. 101975.
  • [19] Y. Zhang, H. Xue, M. Wang, N. He, Z. Lv, and L. Cui (2020) Lung ultrasound findings in patients with coronavirus disease (COVID-19). American Journal of Roentgenology 216 (1), pp. 80–84.
  • [19] Y. Zhang, H. Xue, M. Wang, N. He, Z. Lv, and L. Cui (2020) Lung Ultrasound Findings in Patients With Coronavirus Disease (COVID-19). 216 (1), pp. 80–84. External Links: Link, Document, ISSN 0361-803X Cited by: §1.