Adaptive Few-Shot Learning PoC Ultrasound COVID-19 Diagnostic System

09/08/2021
by Michael Karnes, et al.
The Ohio State University

This paper presents a novel ultrasound imaging point-of-care (PoC) COVID-19 diagnostic system. The adaptive visual diagnostics utilize few-shot learning (FSL) to generate encoded disease state models that are stored and classified using a dictionary of knowns. The novel vocabulary-based feature processing of the pipeline adapts the knowledge of a pretrained deep neural network to compress the ultrasound images into discriminative descriptions. The computational efficiency of the FSL approach enables high diagnostic deep learning performance in PoC settings, where training data is limited and the annotation process is not strictly controlled. The algorithm's performance is evaluated on the open-source COVID-19 POCUS Dataset to validate the system's ability to distinguish COVID-19, pneumonia, and healthy disease states. The results of the empirical analyses demonstrate the appropriate efficiency and accuracy for scalable PoC use. The code for this work will be made publicly available on GitHub upon acceptance.


I Introduction

Coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has rapidly become a global health emergency [23]. As of December 2020, the virus has spread to every country infecting more than 69 million people and resulting in 1.5 million deaths worldwide [8]. Insufficient medical resources have become a major challenge, especially in low-income countries. There is a critical need for fast, accessible and low-cost diagnostic tests in point-of-care (PoC) settings to stratify risk and efficiently allocate limited healthcare resources.

SARS-CoV-2 reverse transcriptase–polymerase chain reaction (RT-PCR) is the current diagnostic gold standard worldwide [18]. It has an estimated sensitivity of 75% and can take several days to obtain results [33, 9]. While chest X-ray is more widely available, its utility is limited by its low sensitivity [19]. Computed tomography (CT) has been viewed as an alternative for the diagnosis of COVID-19 [15]. However, due to CT's ionizing radiation and limited availability outside of large hospitals, it is not an optimal screening tool. A rapid, accurate, and inexpensive screening tool is required for appropriate triage and diagnosis of patients with suspected COVID-19.

The use of bedside lung ultrasound (LUS) is a common practice in a wide variety of clinical settings, including emergency departments and intensive care units [26]. There have been several studies evaluating the use of LUS in patients with suspected COVID-19 infection, with one reporting a sensitivity of 90%, higher than that of X-ray [11, 7]. LUS can help identify patients with COVID-19 who have been incorrectly diagnosed by RT-PCR and prevent further spread. In addition to screening, LUS has been shown to be an effective imaging modality for predicting the course, stratifying the risk, and monitoring the COVID-19 disease state [17]. The characteristic LUS findings of COVID-19 (a thickened or irregular pleural line, confluent B-lines, sub-pleural consolidations, and pleural effusions) show promise for trending clinical progression from onset to resolution [1]. Thus, LUS is a reliable, cost-effective, and easy-to-use tool for rapid triage, diagnosis, and early risk stratification of COVID-19.

The primary limitation of LUS diagnostics is the extensive training, experience, and expertise required for the accurate identification of disease characteristics [20, 25]. The ability to accurately interpret LUS images requires recognition of normal sonographic anatomy, normal variants, and pathology. As a result, LUS diagnostics fall short of their full potential in point-of-care (PoC) settings. This creates a need for additional technologies to aid healthcare providers in interpreting LUS images. Machine learning (ML) algorithms are one such technology.

Originating from the field of pattern recognition, ML provides the framework for extracting coherent patterns from high dimensional noisy data. This is especially true for deep neural networks (DNN). In 2012, the power of the DNN was established with the record-breaking performance of AlexNet, achieving a top-1 accuracy of 63.3% on a 1000-class classification problem [14]. This breakthrough was made possible by the collection of a large annotated dataset, ImageNet, with millions of images, and by developments in computational power. Shortly after, DNNs became larger and more complex, improving their performance each year, with a current record of 88.6% top-1 accuracy [10]. These results solidified the position of the DNN as a top approach for visual classification.

The high performance of DNN visual classification comes with a critical caveat: the training set must sufficiently represent the scenarios seen at test time. This means large annotated training sets, limited applicability, and unpredictable errors [27]. In response, there has been an effort to reduce training set sizes and improve generalization by transferring the knowledge of a DNN pretrained on a large dataset to novel applications, commonly referred to as transfer learning [30]. The primary advantage of this approach is that the parameters of the DNN are frozen while adaptive layers are trained, which significantly reduces the number of trained parameters and the number of required training examples. This is important for applying ML to ultrasound datasets. The availability of annotated ultrasound datasets is increasing, with some reaching tens of thousands of images; however, the majority contain fewer than 300 images [5].

The contribution of this work is the introduction of a novel LUS diagnostic system built on the few-shot learning (FSL) visual classification algorithm. The proposed system has low training requirements with as few as 8 images per class, while traditional DNN approaches require thousands. The results of this study demonstrate the ability of the FSL based system in extending the accessibility of rapid LUS diagnostics to resource limited clinics.

II Related Work

The proposed ultrasound COVID-19 diagnostic system is based on a SOTA DNN visual classification algorithm that significantly reduces the training requirements associated with traditional deep learning. DNNs are large regressed embedding transforms. In the classification problem, the DNN is trained to project the image space to the latent space that minimizes classification error. The learned transform filters within the network possess the network's knowledge domain. Many approaches have been taken to adapt the learned knowledge domain to novel tasks. This can take the form of fine-tuning [29, 28, 24], where the learned state is used to initialize the DNN and only the final layers of the network are trained [22], or direct transformations of the DNN knowledge domain to a novel task [2, 21]. The algorithm within the proposed system falls in the domain adaptation category, producing a direct transform of the DNN latent space to a targeted discriminative feature space.

FSL provides a framework for leveraging the knowledge domain of pretrained networks for novel tasks. In its basic form, high dimensional images are encoded into a metric feature space and then classified by their relations to learned reference points, as shown in Figure 1. FSL follows a long line of metric-based learning, with many recent works focusing on incorporating DNNs. One early example is the Siamese network architecture [13], further developed by the Matching network [32], where the DNN was trained to estimate the set-to-set probability. In 2017, Prototypical-Net [24] trained a DNN to directly generate a discriminative feature embedding space.

Fig. 1: Diagram of Few-Shot Visual Classification: The few-shot visual diagnostic task classifies a query image with respect to a small annotated set of reference images. The ultrasound images are imported from the handheld probe to the computer, encoded, and then classified by their distances to a dictionary of reference points. The clinician is presented with a report of each disease state probability, the distances from the reference points, and an attention heat map highlighting the regions of interest.

The application of ML in ultrasound diagnostics has been growing rapidly, but is still limited when compared to other imaging modalities. These pioneering studies address such tasks as tumor detection, fetal health monitoring, and cardiac monitoring [5, 16].

The onset of COVID-19 created a push for rapid pulmonary diagnostics. These developments have successfully utilized MRI, CT, and LUS images to detect COVID-19 characteristics in patients' lungs. To the best of our knowledge, there have been only four studies applying ML to LUS diagnostics [6, 31], and only two focused on the detection of COVID-19 [4, 3].

This work differs from the current SOTA in three major ways, owing to the direct consideration of clinical usage during development: primarily, the ability to adapt to individual practices and to present information in a way that is easily incorporated into the greater corpus of information used in the diagnostic process. These differences are: 1) we present a full ML LUS diagnostic system; 2) we incorporate a vocabulary-based FSL pipeline, enabling a significant reduction in training requirements; and 3) our system generates intuitively understandable distance-based classifications.

III Approach

This section presents the methodology behind the proposed approach. The flow chart of the algorithm is shown in Figure 2. First, the theory and problem formulation of FSL are presented. This is followed by an explanation of the feature extraction and classification processes, and the section concludes with the algorithm training process.

Fig. 2: Algorithm Overview: Few-shot visual classification comprises a feature extraction and a classification process. The image features are extracted using the pretrained DNN, then PCA-reduced and encoded with the learned vocabulary into a feature vector representation. The support set is used to generate a dictionary of class signatures consisting of the class centroid and covariance matrix. Query images are classified by their Mahalanobis distances to the dictionary signatures using linear discriminant analysis.

III-A Problem Formulation

Assume there is a dataset of images, $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is an image with label $y_i$. The FSL approach uses a sub-sample of $\mathcal{D}$ to create a set of support (reference) images, $\mathcal{S}$, and a set of query images to be classified, $\mathcal{Q}$. FSL requires a few assumptions on the relationships between these sets. $\mathcal{S}$ must be a sub-sample of $C$ classes, each with a specified number of samples, $K$. $\mathcal{S}$ and $\mathcal{Q}$ must be split such that every class in the query set is represented in the support set. The FSL objective is to find a function $f$ that best estimates the query set labels, $\hat{y} = f(x \mid \mathcal{S})$, with best defined as minimizing the classification error over $\mathcal{Q}$.
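The support/query split described above can be sketched as follows (the function name `make_episode` and the sampling details are illustrative assumptions, not from the paper):

```python
import numpy as np

def make_episode(X, y, n_classes, k_shot, rng=None):
    """Split a labeled dataset into a support set (k_shot references
    per sampled class) and a query set drawn from the same classes,
    matching the FSL requirement that query classes appear in support."""
    rng = np.random.default_rng(rng)
    classes = rng.choice(np.unique(y), size=n_classes, replace=False)
    support_idx, query_idx = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(y == c))
        support_idx.extend(idx[:k_shot])   # reference images
        query_idx.extend(idx[k_shot:])     # images to classify
    return (X[support_idx], y[support_idx]), (X[query_idx], y[query_idx])
```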

III-B Theory

Deep neural image classification networks are trained to estimate the class probabilities of a given image using a final output layer. This process generates learned feature extracting filters corresponding to the most discriminative features of the training set. Assuming the learned filters are sufficiently generalized, a query image can be effectively encoded in the network’s latent feature space.

$z = f_\theta(x)$   (1)

The class probabilities can also be viewed as a Gaussian mixture model (GMM) of class characteristics in the latent manifold [2]. This view treats the image as an instance of characteristics drawn from a GMM source. The DNN performs a series of linear kernel transformations. From the central limit theorem, it is known that a linear combination of Gaussian distributions generates a Gaussian distribution. Therefore, the DNN embedded features can be viewed as a GMM produced from the GMM of visual characteristics, with a class probability defined by:

$P(y = c \mid x) = \sum_{k} \pi_{c,k}\, \mathcal{N}\!\left(z \mid \mu_{c,k}, \Sigma_{c,k}\right)$   (2)

The proposed algorithm performs a series of linear transforms on the GMM of the latent manifold, preserving the mixture model throughout the process. Principal component analysis (PCA) calculates the dominant directions of the manifold through the eigenvectors of the covariance matrix. The result is a manifold oriented by decreasing variance and therefore decreasing entropy. Trimming the eigenvectors with the smallest eigenvalues reduces the GMM to the most informative distributions with the greatest entropy.

The k-means cluster vocabulary organizes the distribution structures into ’words’ across classes according to the prominent feature clusters seen in the latent features of the support set. Interpreting the latent manifold with the calculated vocabulary combines distributions into semantically relevant features based on their similarity to the ’words’ in the generated support set vocabulary.
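A minimal sketch of the PCA trimming and k-means vocabulary steps using scikit-learn (the dimensions, the cluster count, and the exp(-distance) soft assignment in `encode` are illustrative assumptions; the paper does not specify its exact encoding):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 64))        # stand-in for DNN latent activations

# PCA keeps the high-variance (high-entropy) directions; the rest are trimmed.
pca = PCA(n_components=16).fit(Z)
Z_red = pca.transform(Z)

# k-means over the reduced support features defines the vocabulary 'words'.
vocab = KMeans(n_clusters=8, n_init=10, random_state=0).fit(Z_red)

def encode(z_red, vocab):
    """Describe a reduced feature by its similarity to each vocabulary
    word (here an exp(-distance) weighting), then L2-normalize."""
    d = np.linalg.norm(vocab.cluster_centers_ - z_red, axis=1)
    w = np.exp(-d)
    return w / np.linalg.norm(w)

v = encode(Z_red[0], vocab)           # 8-dimensional vocabulary description
```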

The Mahalanobis distance transforms the GMM according to its relation to the learned class signatures. This process centers the GMM around the code word and scales it with the covariance matrix. Therefore, the Mahalanobis distances can also be considered as instances of a GMM and can be interpreted as the probability of the query image originating from the GMM of the reference class.

The final classification is completed using the linear discriminant analysis (LDA), which projects the class probabilities to the optimally discriminative space, maximizing interclass variance while minimizing the intraclass variance.

Feature Extraction

The proposed algorithm extracts activation features from the latent space of a pretrained DNN, MobileNet [12]. Let $f_\theta$ be the feature embedding function of the DNN. The feature embedding, $z_i$, of the image $x_i$ is generated by a forward pass through the network, shown in Equation 3, where $z_i$ are the activations within the network's latent space. The embedded features are then compressed using Equation 4. The latent features are reduced by the PCA transform, $W$, and interpreted by the vocabulary, $V$, calculated from the support set using k-means clustering. The resulting mean vector of the features is then normalized, creating the image feature vector $v_i$.

$z_i = f_\theta(x_i)$   (3)

$v_i = \overline{V(W z_i)} \,/\, \lVert \overline{V(W z_i)} \rVert$   (4)

Dictionary Generation

The class signatures (a.k.a. representative appearance models) of the dictionary are calculated from the image feature vectors $v_i$ in the support set $\mathcal{S}$. The support set provides examples for each class, giving multiple representations per class. The covariance of the class, $\Sigma_c$, is calculated from its support features. Class sub-representations are calculated from the k-means of the class manifold, producing feature clusters. The hierarchical code word representations are generated from the centroids of each cluster, $\mu_{c,k}$.
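The centroid-plus-covariance signature construction can be sketched as follows (the helper names and the small covariance regularization term are illustrative assumptions, added so the matrix stays invertible with few shots):

```python
import numpy as np

def class_signature(V_c):
    """Class signature from one class's support feature vectors V_c
    of shape (n, d): the centroid and a regularized covariance."""
    mu = V_c.mean(axis=0)
    cov = np.cov(V_c, rowvar=False) + 1e-6 * np.eye(V_c.shape[1])
    return mu, cov

def build_dictionary(V, y):
    """One (centroid, covariance) signature per class in the support set."""
    return {c: class_signature(V[y == c]) for c in np.unique(y)}
```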

Classification

The class of a query image is predicted by the distances of its signature to those in the dictionary. The distances are calculated using the Mahalanobis distance from each class signature $(\mu_c, \Sigma_c)$:

$d_c(v) = \sqrt{(v - \mu_c)^{\top} \Sigma_c^{-1} (v - \mu_c)}$   (5)

The final classification decision is made by linear discriminant analysis (LDA) of the query image distances to each dictionary signature.
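A sketch of this distance-based classification: the Mahalanobis distances to each dictionary signature form a feature vector, and LDA makes the final decision. The function name is illustrative; the commented lines show intended usage with hypothetical support/query variables:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def distance_features(V, dictionary):
    """Mahalanobis distance of every feature vector in V to each
    class signature (mu, cov) in the dictionary."""
    feats = []
    for v in V:
        row = [mahalanobis(v, mu, np.linalg.inv(cov))
               for mu, cov in dictionary.values()]
        feats.append(row)
    return np.array(feats)

# Intended usage (features and dictionary come from the earlier steps):
# lda = LinearDiscriminantAnalysis().fit(distance_features(V_support, D), y_support)
# y_hat = lda.predict(distance_features(V_query, D))
```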

Training

The proposed approach is unique in that it requires no DNN training. Instead, a series of linear transforms is trained on a small sample of reference images to project the MobileNet DNN embedded features to an optimally discriminative space. These transforms include the PCA reduction, k-means vocabulary, dictionary, and LDA separation. The PCA is pretrained on an unlabeled random sub-sample of the dataset, serving as a general context transform of the DNN response to a lower dimensional space, trimming low-activation neurons. The k-means, dictionary, and LDA are trained on the sub-sampled support set of reference images, $\mathcal{S}$.
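Under these assumptions, the whole training step reduces to fitting a handful of classical transforms, with the DNN left untouched; a sketch (the dimension and cluster counts are illustrative, and the LDA fit is omitted for brevity):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def fit_transforms(Z_unlabeled, Z_support, y_support, n_pca=16, n_words=8):
    """Train every learned component without touching the DNN:
    PCA on an unlabeled random sub-sample (general context transform),
    then the k-means vocabulary and the signature dictionary on the
    labeled support set."""
    pca = PCA(n_components=n_pca).fit(Z_unlabeled)
    S = pca.transform(Z_support)
    vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(S)
    dictionary = {}
    for c in np.unique(y_support):
        Vc = S[y_support == c]
        dictionary[c] = (Vc.mean(axis=0),
                         np.cov(Vc, rowvar=False) + 1e-6 * np.eye(n_pca))
    return pca, vocab, dictionary
```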

III-C COVID-19 POCUS Dataset

Performance evaluations are conducted on the COVID-19 POCUS Dataset [3], the largest publicly available dataset of its type, comprising PoC LUS images from COVID-19, pneumonia, and healthy patients. The dataset is split into LUS clips produced by linear and convex probes. The clips were collected from several sources, the primary ones being grepmed.com, thepocusatlas.com, butterflynetwork.com, and radiopaedia.org. The dataset is heterogeneous, originating from varying institutions and devices. The data is de-identified and no additional metadata, such as vitals or demographics, are provided. All image annotations are verified by medical professionals. In total, the linear clips contain 2,217 images: 1,457 normal, 315 pneumonia, and 445 COVID-19 frames. The convex clips contain 24,419 images: 11,646 normal, 4,585 pneumonia, and 8,188 COVID-19 frames. The proposed algorithm was evaluated on test data randomly selected and sequestered using a 20% split. Each image was normalized and resized to (224, 224).

IV Experiments

The objective of the experiment is to analyze the training requirements and classification performance of the system. This is done by analyzing the algorithm's classification performance with a varying number of reference images, from 8 to 64 per class. Three binary classification scenarios are considered: healthy v. COVID-19, healthy v. pneumonia, and pneumonia v. COVID-19. The algorithm was implemented in Python on a Linux OS with open-source libraries. All experiments were run on an Intel(R) Core(TM) i5-8600K CPU with 16 GB of RAM. The longest experimental case (using 64 samples) took 15 seconds to process. All evaluation metrics are calculated over 10 trials, each containing randomly selected training and test sets.

IV-A Results

The experimental performance was evaluated using receiver operating characteristic (ROC) curves, which show the system's sensitivity against its specificity. Note the ROC curves are plotted using 1-specificity for easier reading. Only results for linear ultrasound images are shown due to limited space.
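The ROC evaluation over repeated random trials can be sketched with scikit-learn (the trial count follows the experiment description; `score_fn` is a hypothetical stand-in for one full train-and-score run):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def mean_auc(score_fn, trials=10, seed=0):
    """Average the area under the ROC curve over repeated trials,
    each drawing its own random training/test split via score_fn."""
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(trials):
        y_true, y_score = score_fn(rng)
        fpr, tpr, _ = roc_curve(y_true, y_score)  # fpr is 1 - specificity
        aucs.append(auc(fpr, tpr))
    return float(np.mean(aucs))
```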

Fig. 3: Linear US ROC Curves: These plots show the ROC curves of the three classification scenarios: healthy v. COVID-19, healthy v. pneumonia, and pneumonia v. COVID-19. These plots contain curves for each considered number of training samples. Note these plots use 1-specificity.

Figure 3 shows the mean ROC curves for each experimental case. The plots are organized first by classification scenario (healthy v. COVID-19, healthy v. pneumonia, or pneumonia v. COVID-19) and then by the number of training samples per class (8, 16, 32, 64). The strongest trend is the increase in specificity with the number of training examples. Performance saturation is seen at 64 samples for all scenarios. The highest performance is seen in the healthy v. pneumonia case, which achieves high sensitivity with just 8 training samples, followed by pneumonia v. COVID-19 and then healthy v. COVID-19. These results show that detecting COVID-19 is a more challenging task than detecting pneumonia, but it can still be achieved with 64 samples per class.

Fig. 4: Attention Heat Maps: This is an example image of a COVID-19 attention heat map on a COVID-19 positive ultrasound image. The image is sampled in a grid of patches with each patch colored by its relative distance to the learned class signatures.

To increase decision understanding, the system generates an attention heat map highlighting the image regions that correspond to the algorithm's disease state decision. Only qualitative assessment is possible due to a lack of segmentation annotations. Figure 4 shows an ultrasound image of a COVID-19 infected lung with the attention heat map. The image is sampled in a grid of image tiles. The distance of each tile to the learned COVID-19 signature is denoted by its color, with red being the highest. This heat map highlights a sub-pleural consolidation.
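The patch-grid heat map construction can be sketched as follows (the patch size, the `encode_fn` stand-in, and passing a precomputed inverse covariance are illustrative assumptions):

```python
import numpy as np

def attention_heatmap(image, encode_fn, mu, cov_inv, patch=32):
    """Sample the image in a grid of patches and score each patch by
    its Mahalanobis distance to one learned class signature; the grid
    of distances is the attention heat map."""
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            tile = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            d = encode_fn(tile) - mu
            heat[i, j] = np.sqrt(d @ cov_inv @ d)
    return heat
```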

V Discussion

The purpose of these experiments is to assess the potential effectiveness of the COVID-19 ultrasound diagnostic system, evaluated by its ability to accurately predict disease state and to efficiently learn from limited samples. In medical decision making, the risks of type I and type II errors must be considered. The ROC curves display the performance trade-offs between higher sensitivity and specificity. The results of the experiments show that the algorithm is capable of reliably detecting COVID-19 symptoms in the lungs. The results also show that the algorithm is capable of reliably distinguishing COVID-19 symptoms from pneumonia.

The results of the experiments also demonstrate a significant reduction in training requirements with the capability of learning disease models with as few as 8 training samples per class in some scenarios. A high classification performance was seen in all scenarios with 64 samples per class. This capability opens the door for clinicians to adapt the algorithm to their environmental factors, such as differences in patient demographics, equipment, and operators.

The value of the algorithm's diagnostic performance depends on its ability to be incorporated into the larger clinical diagnostic process. This requires that the algorithm's diagnostic predictions be presented in an intuitively understandable manner. Qualitative assessment of the attention heat maps demonstrates the capability of highlighting relevant regions of interest. The combination of high prediction performance and intuitive displays delivers the aid of deep learning in a clinically viable way.

VI Conclusion

Rapid, accurate, and inexpensive COVID-19 detection is critically needed. This paper presents an adaptive PoC ultrasound COVID-19 diagnostic system based on a novel FSL visual classification algorithm. The system was designed with a specific focus on its incorporation into the clinical diagnostic process, requiring understandable outputs, adaptability, and reliability. The system takes less than 15 seconds to train on an Intel(R) Core(TM) i5-8600K CPU. The generated disease state models are compact, each requiring less than 1 MB of memory. The distance-based classifications provide intuitive interpretation of the system's predictions. The attention heat maps highlight the regions of the ultrasound images that are most responsible for the classification. The results show that the system is highly capable of accurately diagnosing COVID-19 and pneumonia disease states with as few as 64 training images per disease.

References

  • [1] ACR Recommendations for the use of Chest Radiography and Computed Tomography (CT) for Suspected COVID-19 Infection. External Links: Link Cited by: §I.
  • [2] P. Bateni, R. Goyal, V. Masrani, F. Wood, and L. Sigal (2020) Improved few-shot visual classification. External Links: 1912.03432 Cited by: §II, §III-B.
  • [3] J. Born, G. Brändle, M. Cossio, M. Disdier, J. Goulet, J. Roulin, and N. Wiedemann (2020) POCOVID-net: automatic detection of covid-19 from a new lung ultrasound imaging dataset (pocus). External Links: 2004.12084 Cited by: §II, §III-C.
  • [4] J. Born, N. Wiedemann, G. Brändle, C. Buhre, B. Rieck, and K. Borgwardt (2020) Accelerating covid-19 differential diagnosis with explainable ultrasound image analysis. External Links: 2009.06116 Cited by: §II.
  • [5] L. Brattain, B. Telfer, M. Dhyani, J. Grajo, and A. E. Samir (2018) Machine learning for medical ultrasound: status, methods, and future opportunities. Abdominal Radiology 43, pp. 786–799. Cited by: §I, §II.
  • [6] L. J. Brattain, B. A. Telfer, A. S. Liteplo, and V. E. Noble (2013) Automated b-line scoring on thoracic sonography. Journal of Ultrasound in Medicine 32 (12), pp. 2185–2190. External Links: Document, Link, https://onlinelibrary.wiley.com/doi/pdf/10.7863/ultra.32.12.2185 Cited by: §II.
  • [7] D. Convissar, L. E. Gibson, L. Berra, E. A. Bittner, and M. G. Chang (2020-05) Application of Lung Ultrasound During the Coronavirus Disease 2019 Pandemic: A Narrative Review. Anesthesia and Analgesia. External Links: ISSN 0003-2999, Link, Document Cited by: §I.
  • [8] COVID-19 situation update worldwide, as of 10 December 2020. External Links: Link Cited by: §I.
  • [9] H. Feng, Y. Liu, M. Lv, and J. Zhong (2020-04) A case report of COVID-19 with false negative RT-PCR test: necessity of chest CT. Japanese Journal of Radiology, pp. 1–2. External Links: ISSN 1867-1071, Link, Document Cited by: §I.
  • [10] P. Foret, A. Kleiner, H. Mobahi, and B. Neyshabur (2020) Sharpness-aware minimization for efficiently improving generalization. External Links: 2010.01412 Cited by: §I.
  • [11] S. L. Haak, I. J. Renken, L. C. Jager, H. Lameijer, and B. (. Y. v. d. Kolk (2020-11) Diagnostic accuracy of point-of-care lung ultrasound in COVID-19. Emergency Medicine Journal (en). Note: Publisher: BMJ Publishing Group Ltd and the British Association for Accident & Emergency Medicine Section: Original research External Links: ISSN 1472-0205, 1472-0213, Link, Document Cited by: §I.
  • [12] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. External Links: 1704.04861 Cited by: §III-B.
  • [13] G. R. Koch (2015) Siamese neural networks for one-shot image recognition. Cited by: §II.
  • [14] A. Krizhevsky, I. Sutskever, and G. Hinton (2012-01) ImageNet classification with deep convolutional neural networks. Neural Information Processing Systems 25, pp. . External Links: Document Cited by: §I.
  • [15] B. Li, X. Li, Y. Wang, Y. Han, Y. Wang, C. Wang, G. Zhang, J. Jin, H. Jia, F. Fan, W. Ma, H. Liu, and Y. Zhou (2020-04) Diagnostic value and key features of computed tomography in Coronavirus Disease 2019. Emerging Microbes & Infections 9 (1), pp. 787–793. External Links: ISSN 2222-1751, Link, Document Cited by: §I.
  • [16] S. Liu, Y. Wang, X. Yang, B. Lei, L. Liu, S. X. Li, D. Ni, and T. Wang (2019) Deep learning in medical ultrasound analysis: a review. Engineering 5 (2), pp. 261 – 275. External Links: ISSN 2095-8099, Document, Link Cited by: §II.
  • [17] D. T. Marggrander, F. Borgans, V. Jacobi, H. Neb, and T. Wolf (2020-10) Lung Ultrasound Findings in Patients with COVID-19. Sn Comprehensive Clinical Medicine, pp. 1–7. External Links: ISSN 2523-8973, Link, Document Cited by: §I.
  • [18] B. A. Oliveira, L. C. de Oliveira, E. C. Sabino, and T. S. Okay (2020) SARS-cov-2 and the covid-19 disease: a mini review on diagnostic methods. Revista do Instituto de Medicina Tropical de São Paulo 62. External Links: ISSN 0036-4665, Link, Document Cited by: §I.
  • [19] O. Peyrony, C. Marbeuf-Gueye, V. Truong, M. Giroud, C. Rivière, K. Khenissi, L. Legay, M. Simonetta, A. Elezi, A. Principe, P. Taboulet, C. Ogereau, M. Tourdjman, S. Ellouze, and J. Fontaine (2020-10) Accuracy of Emergency Department Clinical Findings for Diagnosis of Coronavirus Disease 2019. Annals of Emergency Medicine 76 (4), pp. 405–412. External Links: ISSN 0196-0644, Link, Document Cited by: §I.
  • [20] A. Pinto, A. Reginelli, L. Cagini, F. Coppolino, A. A. Stabile Ianora, R. Bracale, M. Giganti, and L. Romano (2013-07) Accuracy of ultrasonography in the diagnosis of acute calculous cholecystitis: review of the literature. Critical ultrasound journal 5 Suppl 1, pp. S11. External Links: Document, ISSN 2036-3176, Link Cited by: §I.
  • [21] J. Requeima, J. Gordon, J. Bronskill, S. Nowozin, and R. E. Turner (2019) Fast and flexible multi-task classification using conditional neural adaptive processes. Cited by: §II.
  • [22] A. Rosenfeld and J. K. Tsotsos (2020) Incremental learning through deep adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (3), pp. 651–663. External Links: Document Cited by: §II.
  • [23] A. Sharma, S. Tiwari, M. K. Deb, and J. L. Marty (2020-08) Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2): a global pandemic and treatment strategies. International Journal of Antimicrobial Agents 56 (2), pp. 106054. External Links: ISSN 0924-8579, Link, Document Cited by: §I.
  • [24] J. Snell, K. Swersky, and R. S. Zemel (2017) Prototypical networks for few-shot learning. External Links: 1703.05175 Cited by: §II, §II.
  • [25] G. Stasi and E. M. Ruoti (2015-Mar.) A critical evaluation in the delivery of the ultrasound practice: the point of view of the radiologist. Italian Journal of Medicine 9 (1), pp. 5–10. External Links: Link, Document Cited by: §I.
  • [26] L. J. Staub, R. R. Mazzali Biscaro, E. Kaszubowski, and R. Maurici (2019) Lung ultrasound for the emergency diagnosis of pneumonia, acute heart failure, and exacerbations of chronic obstructive pulmonary disease asthma in adults: a systematic review and meta-analysis. The Journal of Emergency Medicine 56 (1), pp. 53 – 69. External Links: ISSN 0736-4679, Document, Link Cited by: §I.
  • [27] J. Su, D. V. Vargas, and K. Sakurai (2019) One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation 23 (5), pp. 828–841. External Links: Document Cited by: §I.
  • [28] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales (2018) Learning to compare: relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1199–1208. Cited by: §II.
  • [29] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang (2016) Convolutional neural networks for medical image analysis: full training or fine tuning?. IEEE Transactions on Medical Imaging 35 (5), pp. 1299–1312. External Links: Document Cited by: §II.
  • [30] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, and C. Liu (2018) A survey on deep transfer learning. In Artificial Neural Networks and Machine Learning – ICANN 2018, V. Kůrková, Y. Manolopoulos, B. Hammer, L. Iliadis, and I. Maglogiannis (Eds.), Cham, pp. 270–279. Cited by: §I.
  • [31] S. K. Veeramani and E. Muthusamy (2016) Detection of abnormalities in ultrasound lung image using multi-level rvm classification. The Journal of Maternal-Fetal & Neonatal Medicine 29 (11), pp. 1844–1852. Note: PMID: 26135771 External Links: Document, Link, https://doi.org/10.3109/14767058.2015.1064888 Cited by: §II.
  • [32] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra (2017) Matching networks for one shot learning. External Links: 1606.04080 Cited by: §II.
  • [33] S. Woloshin, N. Patel, and A. S. Kesselheim (2020-08) False Negative Tests for SARS-CoV-2 Infection — Challenges and Implications. New England Journal of Medicine 383 (6), pp. e38. Note: Publisher: Massachusetts Medical Society _eprint: https://doi.org/10.1056/NEJMp2015897 External Links: ISSN 0028-4793, Link, Document Cited by: §I.