Estimating the volume of abnormalities is useful for evaluating disease progression and identifying patients at risk [1, 2, 11, 12]. For example, emphysema extent is useful for monitoring COPD [12] and predicting lung cancer risk [11].
One common approach to automating the volume estimation is to segment the target abnormalities and subsequently measure their volume. This requires expensive manual annotations, often making it infeasible to train and validate on large datasets. Another approach is to directly regress the volume estimate (or, equivalently, a proportion of the abnormal voxels in an image). This only needs relatively cheap weak labels (e.g. image-level visual scoring).
In this paper, we explore the weakly-labeled approach and consider it a learning from label proportions (LLP) problem [9]. LLP is similar to multiple instance learning (MIL) in that training samples are labeled group-wise. However, in MIL the label only signifies the presence of positive samples, whereas in LLP it is the proportion of positive samples in a group (i.e. a “bag”).
We propose a deep LLP approach for emphysema quantification that leverages proportion labels by incorporating prior knowledge about the nature of these labels. We consider a case where emphysema is graded region-wise using a common visual scoring system [11], in which grades correspond to intervals of the proportion of region tissue affected by emphysema. Our method consists of a custom loss for learning from intervals and an architecture specialized for LLP. This architecture has a hidden layer that segments emphysema, followed by a layer that computes its proportion in the input region.
Our architecture is similar to architectures proposed for MIL [3, 6]. The method of [3] learns to classify particles in high-energy physics from label proportions using a fully-connected network with one hidden layer. The method of [6] applies LLP to ice-water classification in natural images; it is, however, not end-to-end: it optimizes pixel labels and network parameters in an alternating fashion. In the case of image labeling, LLP can also be addressed more simply by using a CNN (e.g., [4, 5]) together with a regression loss (e.g., root mean square or RMS).
Our methodological contribution is that we propose the first (to our knowledge) end-to-end deep learning based LLP method for image labeling. We compare the proposed interval loss to RMS and our architecture to a conventional CNN (similar to [4, 5]). We perform the latter comparison in the MIL setting (when only emphysema presence labels are used for training) and in the LLP setting. Our application-wise contributions are three-fold. Firstly, we substantially outperform previous works [8, 7] in emphysema presence and extent prediction. Secondly, we achieve near-human performance level in these tasks. Thirdly, despite being trained only using emphysema proportions, our method generates emphysema segmentations that can be used to classify the spatial distribution of emphysema (paraseptal or centrilobular) at human level.
In both MIL and LLP scenarios, a dataset consists of bags of instances $B_i = \{x_{i1}, \dots, x_{im_i}\}$ (where $m_i$ is the number of instances in bag $i$). In MIL, each bag has a binary label signifying the presence of at least one positive instance. In LLP, this label is the proportion of positive instances in the bag. In our case, the bag label is an ordinal variable $y \in \{0, \dots, 5\}$ (with an interpretation of emphysema grade; grade 0 corresponds to the absence of emphysema). Values of $y$ correspond to intervals of proportion $(t_y, t_{y+1}]$, where $t$ is a vector of thresholds with the first element $t_0 = 0$ and the last element $t_6 = 1$.
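The correspondence between grades and proportion intervals can be sketched in code. This is an illustrative sketch, not the authors' implementation; the exact boundary values (e.g. 0.005 between grades 0 and 1) are assumptions based on the 0%, 1-5%, 6-25%, 26-50%, 51-75%, 76-100% scoring bands described in the Dataset subsection:

```python
import bisect

# Assumed threshold vector t_0..t_6 for grades 0..5; the placement of the
# boundaries between bands is an illustrative assumption.
THRESHOLDS = [0.0, 0.005, 0.055, 0.255, 0.505, 0.755, 1.0]

def grade_to_interval(grade):
    """Return the (lower, upper] proportion interval for an emphysema grade 0..5."""
    return THRESHOLDS[grade], THRESHOLDS[grade + 1]

def proportion_to_grade(p):
    """Map a proportion back to the ordinal grade 0..5 (upper bounds inclusive)."""
    # bisect_left finds the first threshold >= p (skipping t_0); the grade is
    # the index of the interval that contains p.
    return max(0, bisect.bisect_left(THRESHOLDS, p, lo=1) - 1)
```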
We call our proposed and baseline architectures “ProportionNet” and “GAPNet”, respectively (see Fig. 1). The first layers of these architectures are the same: they both take a 3D image of a lung region as input and convert it to a set of 3D feature maps $F$. The only difference between them is in how these feature maps are converted into the final output – a proportion $\hat{p}$.
ProportionNet first maps the features $F$ to a single 3D emphysema probability map $P$ and then averages the probabilities within a given region mask $M$ using “ProportionLayer” to obtain the emphysema proportion $\hat{p}$. When supervised with region label proportions, ProportionNet learns to classify every instance (an image patch, in our case) in such a way that the average label in the bag (i.e. the region) is close to the ground truth proportion.
GAPNet first pools the feature maps using a global average pooling (GAP) layer (thus aggregating instance features into bag features) and then combines these averages into the proportion prediction using a fully-connected layer. We also consider a variation of GAPNet where GAP is replaced by masked GAP (MGAP), which averages every feature map individually within the region, using $M$ as a mask.
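The difference between the two aggregation schemes can be illustrated with a minimal sketch (not the actual network code; volumes are flattened to 1D lists for brevity):

```python
def proportion_layer(prob_map, region_mask):
    """ProportionNet's final layer: average the emphysema probability inside
    the region mask, i.e. the predicted proportion of abnormal voxels."""
    inside = [p for p, m in zip(prob_map, region_mask) if m]
    return sum(inside) / len(inside)

def global_average_pooling(feature_maps):
    """GAPNet's pooling: average each feature map over all voxels, turning
    instance-level features into one bag-level feature vector."""
    return [sum(f) / len(f) for f in feature_maps]

def masked_gap(feature_maps, region_mask):
    """MGAP variant: average each feature map only within the region mask."""
    n = sum(region_mask)
    return [sum(v for v, m in zip(f, region_mask) if m) / n for f in feature_maps]
```

In the real networks, GAPNet's pooled vector is followed by a fully-connected layer, whereas ProportionLayer outputs the proportion directly.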
2.2 A Loss for Learning from Proportion Intervals (LPI)
A good LPI loss would be near-constant when the predicted proportion $\hat{p}$ is inside the ground truth interval and would increase as $\hat{p}$ goes outside the interval's boundaries. We propose a loss that approximates those properties: $\mathcal{L}_{\mathrm{LPI}}(\hat{p}, y) = \sum_{c=1}^{n_{\mathrm{cat}}-1} w_c \, \ell_c(\hat{p}, y)$, with $\ell_c(\hat{p}, y) = f(t_c - \hat{p})$ if $y \ge c$ and $\ell_c(\hat{p}, y) = f(\hat{p} - t_c)$ otherwise, where $f$ is a sharper version of the sigmoid function, $w_c$ are tunable weights and $n_{\mathrm{cat}}$ is the number of categories (see Fig. 2, left). The $c$-th term enforces that for images of grade $y \ge c$ the network predicts $\hat{p} > t_c$, and that images of grade $y < c$ get $\hat{p} < t_c$. A loss containing only the first term can be used as a MIL loss needing only binary labels ($t_1$ will be used to classify a bag into positive or negative).
In the case of ProportionNet, to the above loss we add a term enforcing the MIL assumption that in a negative bag ($y = 0$ means no emphysema) there are no positive instances, by penalizing high values in the emphysema probability map $P$ of grade-0 images.
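The LPI loss can be sketched as follows. This is a reconstruction under stated assumptions: the sharpness constant of `sharp_sigmoid` is illustrative, and `thresholds` is assumed to hold the full vector $t_0, \dots, t_{n_{\mathrm{cat}}}$:

```python
import math

def sharp_sigmoid(x, sharpness=50.0):
    # A "sharper version of the sigmoid": a steep transition around 0.
    # The sharpness value is an assumption for illustration.
    return 1.0 / (1.0 + math.exp(-sharpness * x))

def lpi_loss(p_hat, grade, thresholds, weights):
    """Sketch of the LPI loss: the c-th term pushes p_hat above t_c for
    images of grade >= c and below t_c otherwise; weights holds the w_c."""
    loss = 0.0
    # thresholds[1:-1] are the inner thresholds t_1 .. t_{ncat-1}
    for c, (t, w) in enumerate(zip(thresholds[1:-1], weights), start=1):
        if grade >= c:
            loss += w * sharp_sigmoid(t - p_hat)   # penalize p_hat < t_c
        else:
            loss += w * sharp_sigmoid(p_hat - t)   # penalize p_hat > t_c
    return loss
```

With these properties the loss is near-zero whenever `p_hat` falls inside the interval of `grade` and grows as it crosses the interval boundaries.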
3 Experimental Setting
3.0.1 Dataset and Preprocessing
Two low-dose CT scans (the baseline and a follow-up) were acquired from 1990 participants of the Danish Lung Cancer Screening Trial. Lungs were automatically segmented and divided into 6 regions (roughly corresponding to the lobes). The image resolution was and slice thickness was . In every region, emphysema extent and the predominant pattern (paraseptal, centrilobular, panlobular) were independently identified by two observers. The extent was assessed as a categorical grade ranging from 0 to 5, corresponding to 0%, 1-5%, 6-25%, 26-50%, 51-75% and 76-100% of emphysematous tissue, respectively (as in [11]). We only used images of the right upper region and the scores of one observer to train our networks (the interobserver agreement was highest in this region). For our experiments, we randomly sampled 7 training sets of 50, 75, 100, 150, 200, 300 and 700 subjects with validation sets of the same size, except that for the largest training set the validation set contained 300 subjects. The remaining images were assigned to the test sets. The sampling was stratified to ensure similar inter-rater agreement in the training, validation and testing sets.
Using the region masks, we cropped images to select the target region and set all voxels outside of this region to a constant of -800 HU. We used shifting and flipping in the axial plane to augment our training and validation sets.
3.0.2 Network Training and Application
All networks were trained using the Adadelta algorithm for a maximum of 150 epochs, with every epoch consisting of 600 batch iterations. The batch size was 3. The images were sampled in such a way that every batch contained one healthy image (grade 0), one grade 1 image and one image of grade 2 to 5 sampled uniformly at random (meaning that e.g. grade 5 images appeared with the same frequency as grade 2). This sampling strategy ensures that higher grade images, which are much rarer, are sufficiently represented. For our LPI loss we used thresholds slightly different from the ones defined by the scoring system (given in the “Dataset” subsection and illustrated in Fig. 2, left). This was because with the standard thresholds our method systematically underestimated the extent of emphysema in grade 3-5 regions, implying that these thresholds might be biased (they were not validated to correspond to real proportions). The weights of the loss were chosen to prioritize accurate emphysema presence classification and to account for the poorer inter-rater agreement for higher grade classification (one weight was set to 0.5).
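The batch sampling strategy above can be sketched as follows (an illustrative sketch, not the training code; `images_by_grade` is a hypothetical mapping from grade to the list of available images):

```python
import random

def sample_batch(images_by_grade, rng=random):
    """Build one batch of 3: a grade-0 image, a grade-1 image, and an image
    whose grade is drawn uniformly from 2-5, so that the rare high-grade
    images are sufficiently represented during training."""
    high_grade = rng.choice([2, 3, 4, 5])
    return [rng.choice(images_by_grade[0]),
            rng.choice(images_by_grade[1]),
            rng.choice(images_by_grade[high_grade])]
```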
4.0.1 Performance Metrics
We evaluated our networks using two metrics, averaged over the two annotators: 1) the area under the receiver operating characteristic curve (AUC) measuring discrimination between grade 0 (no emphysema) and grades 1-5, and 2) the average of the AUCs measuring discrimination of grades 1 vs. 2, 2 vs. 3, 3 vs. 4 and 4 vs. 5. These metrics represent emphysema presence and extent prediction performance, respectively. In Table 1 we report means and standard deviations of these metrics computed over multiple test sets.
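The two evaluation metrics can be sketched as follows (a minimal pure-Python Mann-Whitney AUC, not the evaluation code used in the experiments):

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney AUC: the probability that a positive example scores
    above a negative one (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def presence_auc(scores, grades):
    """Presence metric: discrimination of grade 0 vs. grades 1-5."""
    pos = [s for s, g in zip(scores, grades) if g >= 1]
    neg = [s for s, g in zip(scores, grades) if g == 0]
    return auc(pos, neg)

def extent_auc(scores, grades):
    """Extent metric: average of the adjacent-grade AUCs
    (1 vs. 2, 2 vs. 3, 3 vs. 4 and 4 vs. 5)."""
    aucs = []
    for g in (1, 2, 3, 4):
        lo = [s for s, gr in zip(scores, grades) if gr == g]
        hi = [s for s, gr in zip(scores, grades) if gr == g + 1]
        aucs.append(auc(hi, lo))
    return sum(aucs) / len(aucs)
```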
In Table 2, we use different metrics to be able to compare with other methods. Intraclass correlation (ICC) was computed between the predictions of a method converted to interval midpoints and the average interval midpoints of the two raters (same as in [8]). Spearman's ρ was computed between raw predictions and the averaged midpoints of the raters. AUC was computed with respect to the maximum of the presence labels of the raters (as in [7]).
4.0.2 Learning from Emphysema Presence Labels (MIL)
First, we trained GAPNet and ProportionNet for 75 epochs using the MIL version of our loss (for ProportionNet, extended with the negative-bag term). These losses only need binary presence labels, which makes this a MIL problem. ProportionNet outperformed GAPNet in both presence and extent prediction by a large margin when trained on the small sets (see Table 1). When trained on the medium and large sets, ProportionNet was similar to GAPNet in presence detection and better in extent estimation by 2-3% of mean AUC.
To understand the contribution of region masking to the performance of ProportionNet, we also trained MGAPNet, in which GAP was replaced by region-masked GAP (on our small sets only, due to limited computational resources). MGAPNet performed better than GAPNet in both presence and extent prediction. ProportionNet still substantially outperformed MGAPNet.
4.0.3 Learning from Emphysema Proportion Labels (LLP)
We fine-tuned the GAPNet and ProportionNet previously trained in the MIL setting (see the previous subsection) for another 75 epochs using the full LPI loss (for ProportionNet, with the negative-bag term). ProportionNet outperformed GAPNet in both presence and extent prediction in all cases, except for the medium sets, on which the presence detection performance of both networks was the same.
We also compared with the more conventional RMS loss. We trained GAPNet from scratch for 150 epochs with the RMS loss to regress emphysema scores (not proportions, as in that case there would be relatively little cost for confusing the 0% and 1-5% grades) using the largest training set. RMS did substantially worse than the LPI loss and worse than ProportionNet in both presence (AUC 0.94) and extent (AUC 0.72) prediction (see also Table 2).
Table 1: Emphysema presence and extent prediction performance (mean AUC and standard deviation) of GAPNet and ProportionNet trained on the small (50, 75, 100 subjects), medium (150, 200, 300 subjects) and large (700 subjects) training sets, in the MIL and LLP settings.
4.0.4 Comparison to Other Methods and Human Experts
The method of [8] is an LLP method trained using extent labels, while [7] is a MIL method (trained using only presence labels) based on logistic regression. To compare with each of these methods, we chose a split having the same number of images or fewer for training and validation (100 and 700 subjects). We also evaluated several traditional densitometric methods and report the best result (LAA%-950). As can be seen from Table 2, ProportionNet and GAPNet substantially outperformed densitometry and the methods of [8] and [7].
When compared with the expert raters, ProportionNet trained using the largest training set achieves ICCs of 0.84 and 0.81 between its predictions and raters’ annotations, whereas the inter-rater ICC is 0.83. It is slightly worse than the second rater in predicting the first rater’s emphysema presence labels (sensitivity 0.92 vs. 0.93 when specificity is 0.9) and is as good as the first rater in predicting the second rater’s labels (sensitivity 0.73, specificity 0.98).
Table 2: Comparison of our networks with densitometry (LAA%-950) and the machine learning approaches of [8] and [7] (which use the same dataset), for training sets of 100 and 700 subjects. “LLP” stands for training using extent labels and “MIL” for training using presence labels. “RU” and “LU” stand for the right and left upper regions. Metrics used are ICC, Spearman's ρ and AUC.
4.0.5 Emphysema Pattern Prediction
The most common emphysema patterns are centrilobular and paraseptal (around 90% of cases in the upper regions). Paraseptal emphysema is located adjacent to the lung pleura, whereas centrilobular emphysema can be anywhere in the lungs. We designed a simple feature to discriminate between the two, given an emphysema segmentation: the ratio between the foreground volume near the region boundary and the foreground volume inside the region (see Fig. 2). We computed this feature using segmentations produced by ProportionNet trained on the largest training set. On the test set, we obtained AUC 0.89 using the first rater (sensitivity 0.65 and specificity 0.95, same as the inter-rater values) and AUC 0.92 using the second rater as the ground truth (sensitivity 0.61 and specificity 0.96 vs. inter-rater 0.61 and 0.91). This performance is thus on a par with both raters.
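The boundary-ratio feature can be sketched as follows. This is an illustrative 2D sketch under assumptions: the actual feature is computed on 3D volumes, and the one-pixel margin defining “near the boundary” is hypothetical:

```python
def boundary_ratio(segmentation, region_mask, margin=1):
    """Fraction of segmented (emphysema) pixels lying within `margin` pixels
    of the region boundary. High values suggest paraseptal (pleura-adjacent)
    emphysema; low values suggest centrilobular emphysema."""
    h, w = len(region_mask), len(region_mask[0])

    def near_boundary(i, j):
        # A region pixel is "near the boundary" if some pixel within the
        # margin window falls outside the region (or outside the grid).
        for di in range(-margin, margin + 1):
            for dj in range(-margin, margin + 1):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not region_mask[ni][nj]:
                    return True
        return False

    foreground = [(i, j) for i in range(h) for j in range(w)
                  if segmentation[i][j] and region_mask[i][j]]
    if not foreground:
        return 0.0
    near = sum(near_boundary(i, j) for i, j in foreground)
    return near / len(foreground)
```

Thresholding this ratio then gives the paraseptal vs. centrilobular classification.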
5 Discussion and Conclusion
We compared two architectures for MIL and LLP (ProportionNet and GAPNet) under fair conditions: the only differences were in the few final layers that aggregate instance features into bag label predictions. ProportionNet outperformed GAPNet in both the MIL and LLP settings. We can attribute this to two factors. Firstly, from our comparison between GAPNet and MGAPNet we learned that region masking is beneficial, probably because it acts as a location prior and makes compensating for variable region sizes unnecessary. However, it was not the main contributor to the performance boost. The second factor is that ProportionNet in combination with the LPI loss reflects the prior assumptions of our problem better. When ProportionNet is trained using our MIL loss, the assumption is that even a very small pathological area makes the image positive. When it is trained using our LLP loss and proportion labels, the network is guided on approximately how much of the abnormality is in the images. This loss also captures the interval nature of our labels better, as it allows for different predictions for same-grade images. The RMS loss, for example, tries to map all examples of one grade onto a single value, whereas in reality same-grade images often have different proportions of emphysema. This is a probable reason for LPI outperforming RMS.
We are aware of only one work [10] that performed a fair comparison of different network architectures for MIL. In their case, a GAPNet-like network performed better than a ProportionNet-like network. We think that to achieve a regularization effect using ProportionNet, it is crucial to select a pooling strategy and a loss that match the prior assumptions of the target problem well.
Another important advantage of ProportionNet over GAPNet is that it localizes the target abnormality. In our case, the localization was good enough to classify the spatial distribution of emphysema with human-level accuracy.
While in this work we focused on emphysema quantification, we expect the proposed architecture and loss to be beneficial in other problems as well. ProportionNet can be a good regularizer for learning from visual scores related to the volume of abnormalities. It might be a good fit for estimating the volume of intracranial calcification [1] and lung abnormalities [2]. Our LPI loss can be useful whenever labels have an interval nature.
This research is financed by the Netherlands Organization for Scientific Research (NWO) and COSMONiO.
1. Bos, D., Portegies, M.L., van der Lugt, A., Bos, M.J., Koudstaal, P.J., Hofman, A., Krestin, G.P., Franco, O.H., Vernooij, M.W., Ikram, M.A.: Intracranial carotid artery atherosclerosis and the risk of stroke in whites: the Rotterdam study. JAMA Neurology 71(4), 405–411 (2014)
2. De Jong, P.A., Tiddens, H.A.: Cystic fibrosis–specific computed tomography scoring. Proceedings of the American Thoracic Society 4(4), 338–342 (2007)
3. Dery, L.M., Nachman, B., Rubbo, F., Schwartzman, A.: Weakly supervised classification in high energy physics. JHEP 2017(5), 145 (2017)
4. Dubost, F., Bortsova, G., Adams, H., Ikram, A., Niessen, W.J., Vernooij, M., De Bruijne, M.: GP-Unet: Lesion detection from weak labels with a 3D regression network. In: MICCAI 2017, pp. 214–221 (2017)
5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR 2016, pp. 770–778. IEEE (2016)
6. Li, F., Taylor, G.: Alter-CNN: An approach to learning from label proportions with application to ice-water classification. In: NIPSW (2015)
7. Ørting, S.N., Petersen, J., Thomsen, L.H., Wille, M.M.W., De Bruijne, M.: Detecting emphysema with multiple instance learning. In: ISBI (2018)
8. Ørting, S.N., Petersen, J., Wille, M.M.W., Thomsen, L.H., De Bruijne, M.: Quantifying emphysema extent from weakly labeled CT scans of the lungs using label proportions learning. In: Proc. of the Sixth International Workshop on Pulmonary Image Analysis (2016)
9. Patrini, G., Nock, R., Rivera, P., Caetano, T.: (Almost) no label no cry. In: NIPS 2014, pp. 1–9 (2014)
10. Wang, X., Yan, Y., Tang, P., Bai, X., Liu, W.: Revisiting multiple instance neural networks. Pattern Recognition 74, 15–24 (2018)
11. Wille, M.M.W., Thomsen, L.H., Petersen, J., De Bruijne, M., Dirksen, A., Pedersen, J.H., Shaker, S.B.: Visual assessment of early emphysema and interstitial abnormalities on CT is useful in lung cancer risk analysis. European Radiology 26(2), 487–494 (2016)
12. Wille, M.M.W., Thomsen, L.H., Dirksen, A., Petersen, J., Pedersen, J.H., Shaker, S.B.: Emphysema progression is visually detectable in low-dose CT in continuous but not in former smokers. European Radiology 24(11), 2692–2699 (2014)