Retinal Vessel Segmentation under Extreme Low Annotation: A Generative Adversarial Network Approach

09/05/2018 ∙ by Avisek Lahiri, et al.

Contemporary deep learning based medical image segmentation algorithms require hours of annotation labor by domain experts. These data hungry deep models perform sub-optimally in the presence of limited amounts of labeled data. In this paper, we present a data efficient learning framework using the recent concept of Generative Adversarial Networks; this allows a deep neural network to perform significantly better than its fully supervised counterpart in the low annotation regime. The proposed method is an extension of our previous work with the addition of a new unsupervised adversarial loss and a structured prediction based architecture. To the best of our knowledge, this work is the first demonstration of an adversarial framework based structured prediction model for medical image segmentation. Though generic, we apply our method to the segmentation of blood vessels in retinal fundus images. We experiment with an extreme low annotation budget (0.8 - 1.6% of the annotated samples used by contemporary deep models). On the DRIVE and STARE datasets, the proposed method outperforms our previous method and other fully supervised benchmark models by significant margins, especially with a very low number of annotated examples. In addition, our systematic ablation studies suggest some key recipes for successfully training GAN based semi-supervised algorithms with an encoder-decoder style network architecture.


I Introduction

With the relatively recent breakthrough in large scale object recognition by Krizhevsky et al. [17], Convolutional Neural Networks (CNN) and ‘deep learning’ (DL) have achieved unprecedented success in numerous computer vision applications such as object detection [33], semantic segmentation [30], video understanding [54] and visual question-answering [3], to list a few. Inspired by the flexibility of CNNs in adapting to novel computer vision problems, a recent surge of interest has been instigated among the medical image processing community to leverage the rich feature learning and representation prowess of CNNs. In recent years, CNNs have been applied in numerous medical image and video understanding pipelines such as segmenting areas of interest from medical images [31, 27, 28, 35, 22] and sequences [21], medical video understanding [48], reconstruction [15], and anomalous region detection [8, 50, 37]. The list is by no means exhaustive; readers are encouraged to refer to [24] for a detailed survey on applications of DL in medical image analysis.

However, the success of DL comes at a price. CNN models are significantly complex, with millions of trainable parameters. For example, popular architectures such as AlexNet [17] and VGG-Net [42] have 60 million and 138 million parameters respectively. Such gigantic deep architectures easily overfit on small training datasets, achieving low training error but manifesting high test error. Curating a manually annotated dataset is both time consuming and costly. Even though the current trend for natural computer vision problems is to annotate large scale data with mechanical turks [5], annotating medical data often requires domain specific experts. This motivates the need for methods to train CNNs with a limited amount of annotated data. Recent regularization techniques such as dropout [44] and batch normalization [12] have shown promise in preventing overfitting; however, a small dataset with regularization during training can easily lead to underfitting, wherein, during the training phase itself, a CNN is unable to approximate the input to output functional mapping appreciably, thereby manifesting high error rates on both training and testing data. Fine-tuning a pre-trained CNN (in most cases pre-trained for object recognition on ImageNet) for specific medical imaging tasks [47] is the current approach for training a CNN with limited annotated data. Though promising, fine-tuning methods train a CNN by annotating only a fraction of the available data, while the remaining unannotated data remains unused.

In this paper, we try to address the central question: ‘Can we learn from both unannotated and annotated data?’ We build upon our previous work [18] on semi-supervised learning, which leverages Generative Adversarial Networks (GAN) [11]. To demonstrate the effectiveness of the proposed method, we select a specific application - the task of segmenting blood vessels in retinal fundus images. The motivation of this paper is not to present yet another supervised deep model for retinal vessel segmentation. The main objective is to perform segmentation using as little as 0.8-1.6% of the annotated samples used by contemporary deep models. For example, the methods of [7, 19, 23] all use around 60,000 training patches, whereas in this work we aim to utilize only 500-1000 annotations. The proposed framework is completely generic, and our findings and recommendations can be embraced in any other medical image domain in which the aim is to perform a discriminative task with a limited amount of labeled data but a plethora of unlabeled data. Our contributions in this paper can be summarized as:

Fig. 1: Visualization of the training of the discriminator and generator networks for the proposed semi-supervised learning. Here, we show the vanilla version of generator training as presented in [11], wherein the generator is trained to create images that maximize their log likelihood of belonging to the real class as determined by the discriminator (by minimizing $\mathcal{L}_G$, Eq. 8). Discriminator training consists of two parts: (a) with a limited number of labeled examples, given an input patch, it generates a segmentation output to minimize $\mathcal{L}_{labeled}$ (Eq. 3); (b) on unlabeled examples, the discriminator predicts the domain of origin (real/fake) when fed with patches from the real dataset (minimizing $\mathcal{L}_{unlabeled}$: Eq. 5) and fake patches created by the generator (minimizing $\mathcal{L}_{fake}$: Eq. 4).
  • We add an unsupervised loss function to our previously proposed GAN based semi-supervised framework [18]. This enables learning from both unlabeled and labeled data under a multitask objective setting.

  • We extend the ‘center-pixel’ (CP) prediction framework of [18] to a ‘structured-prediction’ (SP) setting by posing segmentation as a multi-label inference problem.

  • Several architectural and optimization recommendations (Sec. V-C) are provided for successfully training an SP based semi-supervised GAN framework. To the best of our knowledge, this is the first demonstration of adversarial semi-supervised learning under an SP framework for a segmentation application in medical images.

  • For working with a low number of annotated examples, our studies reveal an important trade-off between the number of annotations and the diversity of annotations (Sec. V-D).

  • Evaluations (area under the ROC curve) on the DRIVE and STARE datasets reveal that a) incorporation of the unsupervised loss function boosts the performance of our previous method and b) the proposed SP based method significantly outperforms current benchmark supervised models, specifically with extremely few annotated examples (Sec. V-E).

II Related Work

Traditional methods for blood vessel segmentation in fundus images can be classified into unsupervised and supervised paradigms. The former type of method hard-codes the local properties of the structures to be detected into the algorithm. Exemplary works from this group leverage the concepts of line detectors [34], co-occurrence matrices [51], co-linearly aligned difference-of-Gaussian filters [4] and active contour models [56], to list a few. In supervised methods, an image patch is paired with a ground truth annotation patch and a learning algorithm is deployed to learn the mapping from image space to label space. Ridge features with a nearest neighbor classifier [45], Gabor wavelets with Bayesian classifiers [43], and morphological operations with Gaussian mixture model classifiers [39] are some of the exemplary works in this genre. A more detailed review of these traditional methods has been documented in [9].

Liskowski and Krawiec [23] first demonstrated the efficacy of CNNs for retinal vessel segmentation by posing segmentation as a two class classification problem, wherein the blood vessel pixels constitute the positive class and the background or non-vessel pixels constitute the negative class. Following [23], there has been a series of efforts towards DL based retina segmentation. One genre of effort [19, 38] is to first pre-train a deep denoising stacked autoencoder [52] in an unsupervised setting and then fine-tune the model with labeled examples. Other approaches [7, 10, 26] are analogous to the overall framework of [23], wherein a CNN or an ensemble of CNNs is trained for vessel detection. These algorithms are usually trained on a huge number of annotated patches; for example, [23] and [7] were each trained on tens of thousands of patches (around 60,000; cf. Sec. I). Contrary to this trend, we are interested in working with as few as 500 annotated samples.

III Method

III-A Generative Adversarial Networks

We begin by reviewing the concept of Generative Adversarial Networks (GAN) [11]. A GAN consists of two parametrized deep neural networks, viz., a generator, $G$, and a discriminator, $D$. The task of the generator is to yield an image, $G(z)$, with a latent vector, $z$, as input. $z$ is sampled from a known prior distribution, $p_z(z)$; a common choice [11] is a simple distribution such as a uniform or standard normal. The discriminator is pitted against the generator to distinguish real samples (sampled from the data distribution, $p_{data}$) from fake/generated samples. Specifically, the discriminator and generator play the following min-max game on the value function $V(D, G)$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big] \qquad (1)$$

With enough capacity, on convergence, $G$ fools $D$, which is then reduced to random guessing [11]. In the medical imaging literature, GANs have recently been used for generating patho-realistic images [6, 29], fully supervised segmentation [14, 20], and denoising [53, 55]. Our work has a different motivation; following the work in [40], we wish to extend GANs for semi-supervised learning to formulate a data efficient learning paradigm for biomedical images.

III-B Semi-supervised Learning using GANs

In the usual supervised setting, a classifier has $K$ output nodes when classifying an input $x$ into one of $K$ classes. It outputs a vector of unnormalized logits, $l_1, l_2, \ldots, l_K$, which can be turned into normalized class probabilities with the softmax operation: $p_{model}(y = i \mid x) = \exp(l_i) / \sum_{k=1}^{K} \exp(l_k)$ gives the probability of the class label $y$ being class $i$ for the input $x$. The cross entropy loss between the original class labels and the predicted class distribution, $p_{model}(y \mid x)$, is used to train the discriminative model.

To perform semi-supervised learning using GANs, we need to augment the classifier (which is also the discriminator, $D$, in our case) with one more output node. This extra node, for class $y = K+1$, corresponds to the class of fake/generated samples coming from the generator network, $G$. $p_{model}(y = K+1 \mid x)$ can be seen as the probability of $x$ belonging to the fake class; this corresponds to the $1 - D(x)$ term in Equation 1. This enables us to update the parameters of the discriminator even with unlabeled real data, by maximizing $\log(1 - p_{model}(y = K+1 \mid x))$. Merging all of these, we finally have three components in the discriminator loss used to update the parameters, $\theta_D$, of the discriminator:

$$\mathcal{L}_D = \mathcal{L}_{labeled} + \mathcal{L}_{fake} + \mathcal{L}_{unlabeled} \qquad (2)$$

where,

$$\mathcal{L}_{labeled} = -\,\mathbb{E}_{x, y \sim p_{data}(x, y)}\big[\log p_{model}(y \mid x,\, y \le K)\big] \qquad (3)$$
$$\mathcal{L}_{fake} = -\,\mathbb{E}_{z \sim p_z(z)}\big[\log p_{model}(y = K+1 \mid G(z))\big] \qquad (4)$$
$$\mathcal{L}_{unlabeled} = -\,\mathbb{E}_{x \sim p_{data}(x)}\big[\log\big(1 - p_{model}(y = K+1 \mid x)\big)\big] \qquad (5)$$

The three components of the loss function are:

  • $\mathcal{L}_{labeled}$ is the usual cross entropy loss, in which the target is to maximize the predicted probability of the correct class label.

  • $\mathcal{L}_{fake}$ encourages the classifier to place high probability on the fake class when the input is a fake/generated sample.

  • $\mathcal{L}_{unlabeled}$ penalizes the classifier if it gives high probability to the fake class when the input is a real unlabeled sample.

It is to be noted that the formulation in our previous work [18] did not include the $\mathcal{L}_{unlabeled}$ component, and thus it was not possible to learn the parameters of the discriminator/classifier on the large amount of unlabeled real data. The two tasks of assigning a class label to an image and determining whether the image is real/fake share some low level feature representation commonalities, such as the identification of textures and structural regularities. Thus, setting up Equations 3 and 5 in a multi-task learning setting enables more meaningful gradient updates while training the discriminator. In Fig. 1 we visualize the components of the loss functions used for training the discriminator.
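To make the three loss terms concrete, the following minimal PyTorch-style sketch (an illustration under stated assumptions, not the authors' implementation; the discriminator interface, tensor shapes and epsilon are assumptions) evaluates Eqs. 3-5 for a discriminator with $K+1$ per-pixel output channels.

```python
import torch
import torch.nn.functional as F

def discriminator_losses(disc, x_lab, y_lab, x_unlab, x_fake, num_classes=2):
    """Three-term semi-supervised discriminator loss (Eqs. 2-5), sketched.

    disc(x) is assumed to return per-pixel logits of shape (B, K+1, H, W),
    where channel index K is the extra 'fake' class.
    """
    K = num_classes

    # (3) supervised cross entropy on the few labeled patches
    logits_lab = disc(x_lab)                                 # (B, K+1, H, W)
    loss_labeled = F.cross_entropy(logits_lab, y_lab)

    # (4) generated patches should be assigned to the fake class K
    p_fake = F.softmax(disc(x_fake), dim=1)[:, K]            # P(y = K+1 | x)
    loss_fake = -torch.log(p_fake + 1e-8).mean()

    # (5) real unlabeled patches should NOT be assigned to the fake class
    p_fake_unlab = F.softmax(disc(x_unlab), dim=1)[:, K]
    loss_unlabeled = -torch.log(1.0 - p_fake_unlab + 1e-8).mean()

    return loss_labeled + loss_fake + loss_unlabeled         # (2)
```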

III-C Center Pixel v/s Structured Prediction

There are two major paradigms for patchwise segmentation of medical images, namely center-pixel prediction (CP) and structured prediction (SP).

Let $\mathcal{X}$ be the domain of patches sampled from the dataset, such that any $x \in \mathcal{X}$ has resolution $W \times W$, where $W$ is the patch resolution and is usually much smaller than the full image resolution. Also, let $\mathcal{Y}$ be the corresponding label space for $\mathcal{X}$, such that for a given patch, $x_k$, we have its corresponding label map, $y_k$; $y_k(i, j)$ is the label at location $(i, j)$ of patch $x_k$. In the case of center pixel prediction, the objective is to learn a parametrized (by $\theta$) functional mapping, $f_\theta: \mathcal{X} \rightarrow [0, 1]$. Essentially this means that, given an image patch, the function returns a single scalar value predicting the probability that the center pixel of that patch belongs to the foreground or ‘vessel’ class. $\theta$ is optimized according to,

$$\theta^{*} = \arg\min_{\theta} \sum_{k} \mathcal{L}_{CE}\Big(f_{\theta}(x_k),\; y_k\big(\tfrac{W}{2}, \tfrac{W}{2}\big)\Big) \qquad (6)$$

where $\mathcal{L}_{CE}$ denotes the cross entropy loss. This was the procedure we followed in [18].

In contrast, structured prediction learns a parametrized function, $f_\theta: \mathcal{X} \rightarrow [0, 1]^{W \times W}$. $\theta$ is optimized as,

$$\theta^{*} = \arg\min_{\theta} \sum_{k} \sum_{i, j} \mathcal{L}_{CE}\big(f_{\theta}(x_k)(i, j),\; y_k(i, j)\big) \qquad (7)$$

In this methodology, for a given image patch, the function simultaneously predicts the probability of all pixels in the patch belonging to the ‘vessel’ class, instead of just the center pixel.
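As a hedged illustration of the two paradigms (the model names and shapes are hypothetical, not the authors' code), the sketch below contrasts the center-pixel objective of Eq. 6 with the structured-prediction objective of Eq. 7.

```python
import torch
import torch.nn.functional as F

W = 48  # patch resolution

def center_pixel_loss(cp_model, x, y):
    """Eq. 6: cp_model(x) -> (B,) probability that the CENTER pixel is vessel."""
    p_center = cp_model(x)                       # (B,)
    y_center = y[:, W // 2, W // 2].float()      # label of the center pixel only
    return F.binary_cross_entropy(p_center, y_center)

def structured_prediction_loss(sp_model, x, y):
    """Eq. 7: sp_model(x) -> (B, W, W) vessel probability for EVERY pixel."""
    p_map = sp_model(x)                          # (B, W, W)
    return F.binary_cross_entropy(p_map, y.float())
```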

Layer | Operation        | Input Resolution | Input Channels | Output Channels | Kernel Size
C1    | Conv + Dropout   | 48×48            | 1              | 32              | 3×3
C2    | Conv             | 48×48            | 32             | 32              | 3×3
P1    | Pool             | 48×48            | 32             | 32              | 2×2
C3    | Conv + Dropout   | 24×24            | 32             | 64              | 3×3
C4    | Conv             | 24×24            | 64             | 64              | 3×3
P2    | Pool             | 24×24            | 64             | 64              | 2×2
C5    | Conv + Dropout   | 12×12            | 64             | 64              | 3×3
U1    | Upsample (2X)    | 12×12            | 64             | 64              | -
Con1  | Concat (U1 + C4) | 24×24            | 128            | 128             | -
C6    | Conv + Dropout   | 24×24            | 128            | 64              | 3×3
C7    | Conv             | 24×24            | 64             | 64              | 3×3
U2    | Upsample (2X)    | 24×24            | 64             | 64              | -
Con2  | Concat (U2 + C2) | 48×48            | 96             | 96              | -
C8    | Conv + Dropout   | 48×48            | 96             | 32              | 3×3
C9    | Conv             | 48×48            | 32             | 32              | 3×3
C10   | Conv + Softmax   | 48×48            | 32             | 2               | 1×1
TABLE I: Network architecture of our U-Net discriminator

III-D Generator Training

In its vanilla form, the generator network can be trained by following the zero-sum min-max formulation of [11], i.e., by minimizing the generator loss function, $\mathcal{L}_G$, according to,

$$\mathcal{L}_G = -\,\mathbb{E}_{z \sim p_z(z)}\big[\log D(G(z))\big] \qquad (8)$$

In this objective, the generator tries to generate samples that fool the discriminator into placing high log likelihood on fake samples belonging to the real class. This vanilla formulation is shown in Fig. 1. However, for semi-supervised learning in a GAN setting, the findings in [40] suggest ‘feature matching’ as a preferred method over the normal GAN loss for training the generator. The key idea behind feature matching is that, for a successful generator, the expected intermediate activations within the discriminator over a mini-batch should be the same for the real and fake classes. This is because, ultimately, it is the cascade of these intermediate representations that compels the discriminator to assign a given sample to the real or fake class. Let $f_l(x) \in \mathbb{R}^{H \times W \times C}$ be an intermediate representation from layer $l$ of the discriminator; $H$, $W$ and $C$ being the height, width and channel count of the representation. The feature matching loss for the generator is defined as,

$$\mathcal{L}_{FM} = d\Big(\mathbb{E}_{x \sim p_{data}(x)}\big[f_l(x)\big],\; \mathbb{E}_{z \sim p_z(z)}\big[f_l(G(z))\big]\Big) \qquad (9)$$

where $d(\cdot, \cdot)$ is any distance metric. This loss captures the distance between the expected intermediate representations at layer $l$ for a batch of real and fake samples. In our case, we experimented with an $L$-norm distance as the choice of $d$.
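A minimal sketch of Eq. 9 follows (illustrative only; `disc_features` is a hypothetical handle to the chosen intermediate layer of the discriminator, and the Euclidean norm is one possible choice of $d$).

```python
import torch

def feature_matching_loss(disc_features, x_real, x_fake):
    """Eq. 9: match batch-mean activations of one intermediate discriminator
    layer between real and generated patches.

    disc_features(x) is assumed to return the activation of the chosen
    layer l (e.g. one of the concatenation layers), shape (B, C, H, W).
    """
    f_real = disc_features(x_real).mean(dim=0)   # E_x [ f_l(x) ]
    f_fake = disc_features(x_fake).mean(dim=0)   # E_z [ f_l(G(z)) ]
    return torch.norm(f_real - f_fake, p=2)      # one choice of d(., .)
```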

III-E Joint Training of Generator and Discriminator

We follow the joint iterative optimization setup of [11] to simultaneously train the generator and discriminator networks. For the discriminator update, we keep the generator fixed and the parameters of the discriminator are updated based on $\mathcal{L}_D$ (Eq. 2). Conversely, during the generator update, we fix the discriminator and update the parameters of the generator based on $\mathcal{L}_G$; $\mathcal{L}_G$ can be either the vanilla GAN loss of Eq. 8 or the feature matching loss of Eq. 9.
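The alternating update can be sketched as below (a self-contained illustration under assumptions, not the authors' code: `disc` is assumed to return per-pixel logits together with one intermediate feature map, and the names of the optimizers and tensors are hypothetical).

```python
import torch
import torch.nn.functional as F

def train_step(gen, disc, opt_g, opt_d, x_lab, y_lab, x_unlab, K=2, z_dim=100):
    """One alternating update (Sec. III-E): D minimizes Eq. 2, G minimizes Eq. 9."""
    z = torch.randn(x_unlab.size(0), z_dim, device=x_unlab.device)

    # ---- discriminator update (generator frozen) ----
    x_fake = gen(z).detach()                                   # stop gradients into G
    logits_lab, _ = disc(x_lab)
    logits_unlab, _ = disc(x_unlab)
    logits_fake, _ = disc(x_fake)
    p_fake_on_fake = F.softmax(logits_fake, dim=1)[:, K]
    p_fake_on_real = F.softmax(logits_unlab, dim=1)[:, K]
    loss_d = (F.cross_entropy(logits_lab, y_lab)               # Eq. 3
              - torch.log(p_fake_on_fake + 1e-8).mean()        # Eq. 4
              - torch.log(1 - p_fake_on_real + 1e-8).mean())   # Eq. 5
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # ---- generator update (discriminator frozen): feature matching, Eq. 9 ----
    _, f_real = disc(x_unlab)
    _, f_fake = disc(gen(z))
    loss_g = torch.norm(f_real.mean(0) - f_fake.mean(0), p=2)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```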

IV Implementation Details

IV-A Practical Realization of Discriminator

As discussed in Sec. III-B, for a semi-supervised GAN with $K$ classes, the modified discriminator needs to have $K+1$ output nodes to account for the extra ‘FAKE’ class. However, as pointed out in [40], having $K+1$ output nodes is an over-parametrization because subtracting a general function, $f(x)$, from each of the logits, i.e., $l_i(x) \leftarrow l_i(x) - f(x)$, does not change the softmax evaluation. Thus we can even fix $l_{K+1}(x) = 0$, in which case $\mathcal{L}_{labeled}$ becomes the usual supervised cross entropy loss with $K$ output nodes and the discriminator output can be written as $D(x) = \frac{Z(x)}{Z(x) + 1}$, where $Z(x) = \sum_{k=1}^{K} \exp(l_k(x))$. For more details, readers are directed to [40]. This trick enables the discriminator network to be a usual deep neural network with $K$ output nodes (in the case of classification) or, as in our case, $K$ output channels (each channel representing pixel-wise class probability).
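The trick amounts to one line of code; the sketch below (hedged illustration, function name is ours) recovers the real/fake probability $D(x)$ from the $K$ class logits of the segmentation network.

```python
import torch

def real_probability_from_k_logits(logits_k):
    """With the fake logit fixed to 0 [40], D(x) = Z(x) / (Z(x) + 1),
    where Z(x) = sum_k exp(l_k(x)).

    logits_k: (B, K, H, W) per-pixel class logits of the U-Net discriminator.
    Using logsumexp keeps the computation numerically stable:
    sigmoid(log Z) = Z / (Z + 1).
    """
    log_Z = torch.logsumexp(logits_k, dim=1)   # (B, H, W)
    return torch.sigmoid(log_Z)
```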

We adopt the state-of-the-art encoder-decoder based U-Net architecture proposed in [36]. The U-Net model consists of an encoder section which creates a bottleneck, starting from the original image patch, through a series of convolutional layers with dropout and pooling. In the decoder section, we regain the original resolution through upsampling and deconvolutional layers. In between, there are skip connections that concatenate lower and higher order features and allow an easier flow of gradients. The detailed architecture is shown in Table I. Unless otherwise stated, we use dropout [44] with a keep probability of 0.8. A leaky ReLU activation with a negative slope of 0.2 is used after every convolution.
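For reference, the following compact PyTorch sketch re-implements the layout of Table I under our reading of it (layer numbering, padding and the pooling/activation choices are assumptions; this is not the authors' released code, and softmax is applied inside the loss rather than in the network).

```python
import torch
import torch.nn as nn

class UNetDiscriminator(nn.Module):
    """Sketch of the U-Net discriminator of Table I: 48x48 single-channel
    patch in, per-pixel 2-class logits out. Average pooling, leaky ReLU
    (slope 0.2) and dropout (drop prob. 0.2) follow Secs. IV-A and V-C."""
    def __init__(self, n_classes=2):
        super().__init__()
        def conv(i, o, drop=False):
            layers = [nn.Conv2d(i, o, 3, padding=1), nn.LeakyReLU(0.2, True)]
            if drop:
                layers.append(nn.Dropout2d(0.2))
            return nn.Sequential(*layers)
        self.c1, self.c2 = conv(1, 32, True), conv(32, 32)     # 48x48
        self.c3, self.c4 = conv(32, 64, True), conv(64, 64)    # 24x24
        self.c5 = conv(64, 64, True)                           # 12x12 bottleneck
        self.c6, self.c7 = conv(128, 64, True), conv(64, 64)   # after Con1
        self.c8, self.c9 = conv(96, 32, True), conv(32, 32)    # after Con2
        self.c10 = nn.Conv2d(32, n_classes, 1)                 # 1x1 conv (C10)
        self.pool = nn.AvgPool2d(2)
        self.up = nn.Upsample(scale_factor=2)

    def forward(self, x):                                      # x: (B, 1, 48, 48)
        e1 = self.c2(self.c1(x))                               # 48x48, 32 ch
        e2 = self.c4(self.c3(self.pool(e1)))                   # 24x24, 64 ch
        b = self.c5(self.pool(e2))                             # 12x12, 64 ch
        d1 = self.c7(self.c6(torch.cat([self.up(b), e2], 1)))  # Con1 -> 24x24
        d2 = self.c9(self.c8(torch.cat([self.up(d1), e1], 1))) # Con2 -> 48x48
        return self.c10(d2)                                    # (B, 2, 48, 48) logits
```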

IV-B Generator Architecture

For realizing the generator, we follow the principles in [32]. First, the 100-dimensional latent vector $z$ is passed through a linear layer and reshaped to a spatial resolution of $\frac{W}{8} \times \frac{W}{8}$, where $W$ is the input patch resolution of the discriminator ($W$ = 48 in our case). Then, we follow up with three transposed convolutional layers (also commonly known as deconvolutional layers) [25] to increase the resolution by 2X in each step, to finally reach $W \times W$. Each layer is followed by a ReLU non-linearity, except the last layer, which uses a tanh non-linearity to scale output values to the range [-1, 1].
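A DCGAN-style sketch of this generator is given below (channel widths, kernel sizes and the initial 6×6 map are illustrative assumptions consistent with three 2X upsampling steps from W/8 to W; not the authors' implementation).

```python
import torch.nn as nn

class Generator(nn.Module):
    """100-D noise -> (W/8 x W/8) feature map -> three stride-2 transposed
    convolutions -> (1 x W x W) patch in [-1, 1] (Sec. IV-B sketch)."""
    def __init__(self, z_dim=100, w=48, base=128):
        super().__init__()
        self.base = base
        self.w0 = w // 8                                     # 6 for 48x48 patches
        self.fc = nn.Linear(z_dim, base * self.w0 * self.w0)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(base, base // 2, 4, stride=2, padding=1),
            nn.ReLU(True),
            nn.ConvTranspose2d(base // 2, base // 4, 4, stride=2, padding=1),
            nn.ReLU(True),
            nn.ConvTranspose2d(base // 4, 1, 4, stride=2, padding=1),
            nn.Tanh(),                                       # outputs in [-1, 1]
        )

    def forward(self, z):
        h = self.fc(z).view(-1, self.base, self.w0, self.w0)
        return self.deconv(h)                                # (B, 1, 48, 48)
```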

IV-C Optimization

We use mini-batch stochastic gradient descent with the Adam optimizer [16] to train both the generator and the discriminator network, using the same learning rate for both. The batch size is kept at 64, and training usually progresses for 50 epochs in about 10 hours.

Method              | 0.5K | 1K   | 2K   | 3K    (annotated patches)
Lahiri et al. [18]  | 0.82 | 0.84 | 0.85 | 0.81
Proposed (CP)       | 0.86 | 0.88 | 0.89 | 0.90
TABLE II: Comparison of AUC on the DRIVE dataset between our previous method [18] and the current method with the additional unsupervised loss (Eq. 5). Here we use the center pixel (CP) model of [18].

V Experiments

V-A Datasets and Preprocessing

We conduct experiments on the DRIVE [46] and STARE (available at: http://cecas.clemson.edu/~ahoover/stare/) datasets. The DRIVE dataset has a clear demarcation of training and test sets, with 20 images in each. Such a breakup is not provided for STARE. Following recent practice [23], we adopt a 1-held-out strategy, where we randomly select 1 image for testing and use the remaining 19 as the training set. Results reported on STARE are the average of 20 such trials.

The retinal images are converted to gray scale. It has been shown in [19] that the green channel in color fundus imaging is the most discriminative for segmenting blood vessels. Following this, the green channel is given more weight in the RGB to gray scale conversion. The contrast of the fundus images is improved using Contrast Limited Adaptive Histogram Equalization (CLAHE) and the effect of non-uniform illumination is mitigated. Further, gamma adjustment improves segmentation performance. Patches of resolution 48×48 are then extracted from the images.
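A possible realization of this pipeline is sketched below with OpenCV (the channel weights, CLAHE parameters, gamma value and patch stride are illustrative assumptions, not the authors' exact settings).

```python
import numpy as np
import cv2

def preprocess_fundus(rgb, gamma=1.2):
    """Green-weighted grayscale conversion, CLAHE and gamma adjustment (Sec. V-A).
    `rgb` is assumed to be an (H, W, 3) array in RGB channel order, 0-255."""
    rgb = rgb.astype(np.float32)
    # weight the green channel more heavily than a standard RGB->gray conversion
    gray = 0.15 * rgb[..., 0] + 0.70 * rgb[..., 1] + 0.15 * rgb[..., 2]
    gray = np.clip(gray, 0, 255).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)                                  # contrast enhancement
    gray = (255.0 * (gray / 255.0) ** gamma).astype(np.uint8) # gamma adjustment
    return gray

def extract_patches(img, size=48, stride=48):
    """Tile the preprocessed image into size x size patches."""
    H, W = img.shape
    return [img[i:i + size, j:j + size]
            for i in range(0, H - size + 1, stride)
            for j in range(0, W - size + 1, stride)]
```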

V-B Benefit of Unsupervised Loss

One of the major extensions in this paper over our prior work [18] is the addition of the unsupervised loss, $\mathcal{L}_{unlabeled}$ (Eq. 5). ‘But, does the unsupervised loss help?’ To investigate this, we first experimented with the center pixel prediction method and the exact same network architecture as in [18] on DRIVE. From Table II, it can be seen that the addition of $\mathcal{L}_{unlabeled}$ significantly and consistently outperforms our previous framework at different levels of annotation budget.

V-C GAN Hacks

Finding the Nash equilibrium of a zero-sum min-max game such as a GAN is difficult (often resulting in oscillations) with stochastic gradient descent updates, and remains an open issue within the GAN community. In this section, we present a detailed ablation study of various aspects of stabilizing GAN training, with a basic U-Net as the baseline. Since the U-Net model is an essential component of many recent medical imaging applications, our findings in this section can serve as a guideline for any GAN based application which deploys a U-Net at its core. In Table III we report the AUC on the DRIVE test set when training with 1K labeled samples under feature matching and vanilla GAN objectives, comparing the efficacy of different GAN stabilization techniques. Similar trends were also observed for STARE.

                               Feature Matching [40]            Vanilla GAN [11]
Max Pool                       0.66   0.68   0.72   0.70   0.69   0.62
Average Pool                   0.81   0.83   0.80   0.84   0.77   0.75
Instance Norm + Average Pool   0.84   0.86   0.87   0.87   0.82   0.79
Weight Norm + Average Pool     0.87   0.89   0.86   0.92   0.89   0.84
TABLE III: Ablation study of two genres of GAN training (feature matching v/s vanilla GAN) for semi-supervised learning. Results show AUC on DRIVE with 1K labeled patches and the U-Net architecture (refer to Table I for architecture details). Different choices of pooling and kernel weight normalization are reported. For feature matching, different choices of the discriminator layer at which feature statistics are matched (convolutional and concatenation layers; see Sec. V-C3) are explored, while for vanilla GAN, the domain prediction (real/fake) is taken from the last Softmax layer.

V-C1 Max Pool v/s Average Pool

In an encoder-decoder architecture like the U-Net, it is common to use Max Pool operations for spatial reduction of intermediate feature layers. This results in sparse gradient operations, which have been shown to hamper GAN training [32]. Specifically, a Max Pool operator, $P_{max}$, operating on a $k \times k$ receptive field, $\mathcal{N}_k(i, j)$, around a given location, $(i, j)$, of a feature map, $F$, returns the maximum value in that neighborhood:

$$P_{max}(F)(i, j) = \max_{(m, n) \in \mathcal{N}_k(i, j)} F(m, n) \qquad (10)$$

Instead of using Max Pool, we benefited from using Average Pooling, which also achieves spatial reduction but with dense gradient operations. In line with the notation of Eq. 10, we define the Average Pool operator, $P_{avg}$, as,

$$P_{avg}(F)(i, j) = \frac{1}{k^2} \sum_{(m, n) \in \mathcal{N}_k(i, j)} F(m, n) \qquad (11)$$

In Table III we see that Average Pool results in a drastic improvement in performance compared to Max Pool. This observation holds for training both the feature matching and vanilla variants of the GAN. Based on our findings, we thus recommend the use of Average Pool over Max Pool in U-Net like architectures when training GANs.
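In code, the swap is a one-line change in each encoder stage; the sketch below (hypothetical helper, channel counts and dropout rate follow Table I and Sec. IV-A but are otherwise our assumptions) shows one such stage.

```python
import torch.nn as nn

def encoder_block(in_ch, out_ch, use_avg_pool=True, p_drop=0.2):
    """One U-Net encoder stage (Conv + Dropout, Conv, Pool) as in Table I.
    Swapping MaxPool2d for AvgPool2d keeps the 2X spatial reduction but
    yields dense gradients, which we found important for GAN training."""
    pool = nn.AvgPool2d(2) if use_avg_pool else nn.MaxPool2d(2)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Dropout2d(p_drop),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        pool,
    )
```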

V-C2 Normalization

Normalization of intermediate activations/weights plays a decisive role in the success of GAN training. Since the advent of ‘DCGAN’ [32], Batch Normalization (BN) [13] has become the de facto choice for normalizing the activations of a deep network during GAN training. While BN indeed speeds up the training of GANs, recent works, especially in the domain of style transfer, recommend the use of Instance Normalization (IN) [49] for better GAN training. Our initial experiments also showed better efficacy of IN over BN, and thus in Table III we report the performance of models trained with IN + Average Pooling. IN + Max Pool did not show any significant improvement over Max Pool alone, which bolsters the observation that sparse gradient operations such as Max Pool are detrimental for GAN training. We further improved performance by adopting the recent Weight Normalization (WN) technique proposed by Salimans et al. [41] (implementation available at: https://github.com/TimSalimans/weight_norm). For a linear layer,

$$y = \mathbf{w} \cdot \mathbf{x} + b \qquad (12)$$

where $\mathbf{w}$ is the weight vector, $\mathbf{x}$ the input vector and $b$ a scalar bias, WN re-parametrizes $\mathbf{w}$ with a vector $\mathbf{v}$ and a trainable scalar, $g$, according to,

$$\mathbf{w} = \frac{g}{\lVert \mathbf{v} \rVert}\, \mathbf{v} \qquad (13)$$

As shown in [41], decoupling the norm of the weight vector, $g$, from its direction, $\frac{\mathbf{v}}{\lVert \mathbf{v} \rVert}$, helps in faster (and better) convergence of stochastic gradient descent optimization. Our GAN training also benefited from WN. In Table III we report performance with WN + Average Pooling; this combination gives the best performance across all our experiments and, unless otherwise stated, should be taken as the default setting wherever we use structured prediction with the U-Net in all further experiments.
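In PyTorch, the re-parametrization of Eq. 13 can be applied layer by layer with the built-in `weight_norm` wrapper; a minimal sketch (our illustration, not the authors' TensorFlow-based setup):

```python
import torch.nn as nn
from torch.nn.utils import weight_norm

# Wrapping a layer re-parametrizes its kernel as w = g * v / ||v|| (Eq. 13);
# the direction v and magnitude g are then learned jointly by SGD/Adam.
conv = weight_norm(nn.Conv2d(64, 64, kernel_size=3, padding=1))
fc = weight_norm(nn.Linear(128, 64))
```

In our setting, every convolutional layer of the U-Net discriminator would be wrapped this way.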

V-C3 Selecting layer(s) for Feature Matching

In their original implementation (available at: https://github.com/openai/improved-gan), Salimans et al. [40] used the penultimate layer of the discriminator for matching features between batches of real and fake samples. We hypothesize that for low level vision tasks, matching features at such deep layers of a network is not a prudent approach. For cases in which the end task is plain classification, such as in [40], it makes sense to focus only on higher order features from the deeper parts of the network, since features essential for classification are agnostic to local perturbations. But in our case, the fully convolutional discriminator is responsible for semantic segmentation - assigning a class label to each pixel of a patch. This requires low level information along with high level features. In fact, our initial experiments with feature matching on the penultimate layer of the discriminator yielded the worst AUC performance. For an initial investigation, we trained separate models, each matching features at a different layer of the discriminator (refer to Table I for the network layers). In Table III we report the AUC values on the DRIVE dataset for those models. It appears that selecting extremely shallow or deep layers hurts performance; it is prudent to match intermediate layers for our low level vision task. As a further step, we experimented with matching the layers at the points of concatenation, Con1 and Con2. We achieved the best performance when matching features at a concatenation layer, which combines an encoder feature map (C) with its upsampled decoder counterpart (U). Henceforth, our results (Tables IV, V, Figures 2, 3) are reported with this setting. It can also be seen that training the generator with the original GAN formulation [11] hurts semi-supervised performance, and thus moving forward we experimented only with the feature matching paradigm. The best practices learnt so far on DRIVE were also carried over to the STARE experiments, unless otherwise stated.

Fig. 2: AUC (Y-axis) on the DRIVE and STARE datasets at different budgets (0.5K, 1K) of annotated samples, conglomerated from different numbers of training images (X-axis). It is observed that models trained with a lower budget of annotated patches sampled from a larger pool of images perform better than models trained with more annotations from a smaller pool of images. This recommends diversity of samples over quantity of samples, specifically when designing low annotation deep learning models.

V-D More Patches or More Images ?

Till now, we have been referring only to the total number of patches as the annotation budget constraint. However, when working in such a low annotation setting, it is important to investigate the relative importance of the number of annotations and the diversity of annotations. For example, with a budget of 500 annotated samples, these 500 patches can be taken from a single image in the worst case or sampled equally from all training images in the best case. In Fig. 2 we compare the performance of our model trained with two budgets - 0.5K and 1K. Annotations are sampled uniformly from different subsets of the entire training set. It is observed that, at a given annotation budget, performance improves if we sample from a larger number of training images. This is especially evident in experiments on the STARE dataset, which has more patient samples than DRIVE. Another interesting observation is that models trained with a lower annotation budget sampled from a larger pool of training images tend to perform better than models trained with a higher annotation budget from a smaller pool of images. For example, on STARE, we achieve an AUC of 0.85 with 0.5K annotations sampled from 15 images, compared to 0.81 achieved by training on 1K annotations from 10 images. Similar trends can be found on the DRIVE dataset. These observations suggest that, for a given annotation budget, one should try to collect more images and perform fewer annotations per image.

V-E Comparison with State-of-the-art

In Tables IV and V we compare the performance of our U-Net GAN model with the vanilla U-Net model (implementation adapted from https://github.com/orobix/retina-unet). At full supervision with 60K labeled samples, the vanilla U-Net achieves an AUC of 0.97 on DRIVE and 0.96 on STARE, and thus the U-Net serves as a very strong baseline for supervised training. At very low numbers of annotated patches, our model consistently outperforms the U-Net across both datasets. We also gain distinctly over our previous semi-supervised framework [18]. We further compared against the two contemporary supervised benchmark models of [7] and [23] and achieved consistent gains at different levels of annotation on both datasets. The current work thus sets a new benchmark for such low annotation retinal vessel segmentation across two real life fundus datasets.

Genre            Method                 | 0.5K | 1K   | 3K   | 10K   (annotated patches)
Supervised       Dasgupta et al. [7]    | 0.85 | 0.87 | 0.89 | 0.92
Supervised       Liskowski et al. [23]  | 0.83 | 0.84 | 0.87 | 0.92
Supervised       U-Net                  | 0.89 | 0.90 | 0.92 | 0.95
Semi Supervised  Lahiri et al. [18]     | 0.82 | 0.84 | 0.85 | 0.93
Semi Supervised  Proposed (SP)          | 0.92 | 0.94 | 0.96 | 0.97
TABLE IV: Comparison of competing supervised and semi-supervised methods (AUC) on the DRIVE dataset.
Genre            Method                 | 0.5K | 1K   | 3K   | 10K   (annotated patches)
Supervised       Dasgupta et al. [7]    | 0.82 | 0.84 | 0.87 | 0.91
Supervised       Liskowski et al. [23]  | 0.84 | 0.86 | 0.89 | 0.93
Supervised       U-Net                  | 0.86 | 0.89 | 0.90 | 0.94
Semi Supervised  Lahiri et al. [18]     | 0.80 | 0.81 | 0.83 | 0.90
Semi Supervised  Proposed (SP)          | 0.90 | 0.92 | 0.94 | 0.96
TABLE V: Comparison of competing supervised and semi-supervised methods (AUC) on the STARE dataset.

VI Conclusion and Discussion

In this paper we extended our previous work [18] on semi-supervised segmentation of retinal vessels from fundus images. We showed how unlabeled training samples can be leveraged via an unsupervised adversarial loss function, leading to a boost in final segmentation performance. Our rigorous experiments recommend a series of best practices for training an encoder-decoder style architecture in a GAN framework. It was consistently seen across both datasets that, in the low annotation regime, GAN based semi-supervised learning performs better than state-of-the-art supervised models. Our work thus opens up new opportunities to leverage deep learning frameworks in domains where annotated data is scarce. We made the important observation that diversity of annotation is more important than the actual number of annotations. This observation is particularly promising since acquiring medical imaging data, with the help of paramedics, is still easier than annotating it with experts. Since we made no assumptions on the distribution of the underlying data, our findings should be seamlessly applicable to other medical imaging domains as well.

VII Acknowledgement

The authors would like to thank the repository owners of [2, 1] for open sourcing their code, on which we built. Avisek is funded by a Google PhD Fellowship in Machine Perception.

Fig. 3: Sample visualizations of segmented vessels on the DRIVE and STARE datasets at 0.5K and 1K patch annotation budgets. For each figure, we also show a zoomed-in section. Even with 0.5K samples, our method shows appreciable efficacy at segmenting finer vessels compared to the supervised U-Net model. The effect is more pronounced on the STARE dataset, which consists of data from a patient group with various ophthalmic disorders.

References

  • [1] DCGAN in TensorFlow. https://github.com/carpedm20/DCGAN-tensorflow. Accessed: 2018-08-30.
  • [2] Retina blood vessel segmentation with a convolution neural network (unet). https://github.com/orobix/retina-unet. Accessed: 2018-08-30.
  • [3] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. Vqa: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433, 2015.
  • [4] G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov. Trainable cosfire filters for vessel delineation with application to retinal images. Medical image analysis, 19(1):46–57, 2015.
  • [5] M. Buhrmester, T. Kwang, and S. D. Gosling. Amazon’s mechanical turk: A new source of inexpensive, yet high-quality, data? Perspectives on psychological science, 6(1):3–5, 2011.
  • [6] P. Costa, A. Galdran, M. I. Meyer, M. Niemeijer, M. Abràmoff, A. M. Mendonça, and A. Campilho. End-to-end adversarial retinal image synthesis. IEEE transactions on medical imaging, 37(3):781–791, 2018.
  • [7] A. Dasgupta and S. Singh. A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation. In Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on, pages 248–251. IEEE, 2017.
  • [8] Q. Dou, H. Chen, L. Yu, L. Zhao, J. Qin, D. Wang, V. C. Mok, L. Shi, and P.-A. Heng. Automatic detection of cerebral microbleeds from mr images via 3d convolutional neural networks. IEEE transactions on medical imaging, 35(5):1182–1195, 2016.
  • [9] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, and S. A. Barman. Blood vessel segmentation methodologies in retinal images–a survey. Computer methods and programs in biomedicine, 108(1):407–433, 2012.
  • [10] H. Fu, Y. Xu, S. Lin, D. W. K. Wong, and J. Liu. Deepvessel: Retinal vessel segmentation via deep learning and conditional random field. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 132–139. Springer, 2016.
  • [11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
  • [12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [13] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [14] S. Izadi, Z. Mirikharaji, J. Kawahara, and G. Hamarneh. Generative adversarial networks to segment skin lesions. In Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, pages 881–884. IEEE, 2018.
  • [15] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 26(9):4509–4522, 2017.
  • [16] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [18] A. Lahiri, K. Ayush, P. K. Biswas, and P. Mitra. Generative adversarial learning for reducing manual annotation in semantic segmentation on large scale miscroscopy images: Automated vessel segmentation in retinal fundus image as test case. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 42–48, 2017.
  • [19] A. Lahiri, A. G. Roy, D. Sheet, and P. K. Biswas. Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography. In Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the, pages 1340–1343. IEEE, 2016.
  • [20] Z. Li, Y. Wang, and J. Yu. Brain tumor segmentation using an adversarial network. In International MICCAI Brainlesion Workshop, pages 123–132. Springer, 2017.
  • [21] L. Lin, W. Yang, C. Li, J. Tang, and X. Cao. Inference with collaborative model for interactive tumor segmentation in medical image sequences. IEEE transactions on cybernetics, 46(12):2796–2809, 2016.
  • [22] P. Liskowski and K. Krawiec. Segmenting retinal blood vessels with deep neural networks. IEEE transactions on medical imaging, 35(11):2369–2380, 2016.
  • [23] P. Liskowski and K. Krawiec. Segmenting retinal blood vessels with deep neural networks. IEEE transactions on medical imaging, 35(11):2369–2380, 2016.
  • [24] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. van der Laak, B. van Ginneken, and C. I. Sánchez. A survey on deep learning in medical image analysis. Medical image analysis, 42:60–88, 2017.
  • [25] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015.
  • [26] K.-K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool. Deep retinal image understanding. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 140–148. Springer, 2016.
  • [27] P. Moeskops, M. A. Viergever, A. M. Mendrik, L. S. de Vries, M. J. Benders, and I. Išgum. Automatic segmentation of mr brain images with a convolutional neural network. IEEE transactions on medical imaging, 35(5):1252–1261, 2016.
  • [28] P. Naylor, M. Laé, F. Reyal, and T. Walter. Nuclei segmentation in histopathology images using deep neural networks. In Biomedical Imaging (ISBI 2017), 2017 IEEE 14th International Symposium on, pages 933–936. IEEE, 2017.
  • [29] D. Nie, R. Trullo, J. Lian, L. Wang, C. Petitjean, S. Ruan, Q. Wang, and D. Shen. Medical image synthesis with deep convolutional adversarial networks. IEEE Transactions on Biomedical Engineering, 2018.
  • [30] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520–1528, 2015.
  • [31] S. Pereira, A. Pinto, V. Alves, and C. A. Silva. Brain tumor segmentation using convolutional neural networks in mri images. IEEE transactions on medical imaging, 35(5):1240–1251, 2016.
  • [32] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
  • [33] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
  • [34] E. Ricci and R. Perfetti. Retinal blood vessel segmentation using line operators and support vector classification. IEEE transactions on medical imaging, 26(10):1357–1365, 2007.
  • [35] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
  • [36] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
  • [37] H. R. Roth, L. Lu, A. Seff, K. M. Cherry, J. Hoffman, S. Wang, J. Liu, E. Turkbey, and R. M. Summers. A new 2.5 d representation for lymph node detection using random sets of deep convolutional neural network observations. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 520–527. Springer, 2014.
  • [38] A. G. Roy and D. Sheet. Dasa: Domain adaptation in stacked autoencoders using systematic dropout. In Pattern Recognition (ACPR), 2015 3rd IAPR Asian Conference on, pages 735–739. IEEE, 2015.
  • [39] S. Roychowdhury, D. D. Koozekanani, and K. K. Parhi. Blood vessel segmentation of fundus images by major vessel extraction and subimage classification. IEEE journal of biomedical and health informatics, 19(3):1118–1128, 2015.
  • [40] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
  • [41] T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pages 901–909, 2016.
  • [42] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [43] J. V. Soares, J. J. Leandro, R. M. Cesar, H. F. Jelinek, and M. J. Cree. Retinal vessel segmentation using the 2-d gabor wavelet and supervised classification. IEEE Transactions on medical Imaging, 25(9):1214–1222, 2006.
  • [44] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
  • [45] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken. Ridge-based vessel segmentation in color images of the retina. IEEE transactions on medical imaging, 23(4):501–509, 2004.
  • [46] J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. Van Ginneken. Ridge-based vessel segmentation in color images of the retina. IEEE transactions on medical imaging, 23(4):501–509, 2004.
  • [47] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE transactions on medical imaging, 35(5):1299–1312, 2016.
  • [48] A. P. Twinanda, S. Shehata, D. Mutter, J. Marescaux, M. de Mathelin, and N. Padoy. Endonet: A deep architecture for recognition tasks on laparoscopic videos. IEEE transactions on medical imaging, 36(1):86–97, 2017.
  • [49] D. Ulyanov, A. Vedaldi, and V. S. Lempitsky. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In CVPR, volume 1, page 3, 2017.
  • [50] M. J. van Grinsven, B. van Ginneken, C. B. Hoyng, T. Theelen, and C. I. Sánchez. Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images. IEEE transactions on medical imaging, 35(5):1273–1284, 2016.
  • [51] F. M. Villalobos-Castaldi, E. M. Felipe-Riverón, and L. P. Sánchez-Fernández. A fast, efficient and automated method to extract vessels from fundus images. Journal of Visualization, 13(3):263–270, 2010.
  • [52] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
  • [53] J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Išgum. Generative adversarial networks for noise reduction in low-dose ct. IEEE transactions on medical imaging, 36(12):2536–2545, 2017.
  • [54] L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In Proceedings of the IEEE international conference on computer vision, pages 4507–4515, 2015.
  • [55] X. Yi and P. Babyn. Sharpness-aware low-dose ct denoising using conditional generative adversarial network. Journal of digital imaging, pages 1–15, 2018.
  • [56] Y. Zhao, L. Rada, K. Chen, S. P. Harding, and Y. Zheng. Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retinal images. IEEE transactions on medical imaging, 34(9):1797–1807, 2015.