Krishna Chaitanya


  • Semi-Supervised and Task-Driven Data Augmentation

    Supervised deep learning methods for segmentation require large amounts of labelled training data, without which they are prone to overfitting and generalize poorly to unseen images. In practice, obtaining a large number of annotations from clinical experts is expensive and time-consuming. One way to address the scarcity of annotated examples is data augmentation with random spatial and intensity transformations. Recently, it has been proposed to use generative models to synthesize realistic training examples, complementing the random augmentation. So far, these methods have yielded limited gains over random augmentation. However, there is potential to improve the approach by (i) explicitly modeling deformation fields (non-affine spatial transformations) and intensity transformations and (ii) leveraging unlabelled data during the generative process. With this motivation, we propose a novel task-driven data augmentation method in which, to synthesize new training examples, a generative network explicitly models and applies deformation fields and additive intensity masks to existing labelled data, capturing shape and intensity variations, respectively. Crucially, the generative model is optimized to be conducive to the task, in this case segmentation, and constrained to match the distribution of images observed in labelled and unlabelled samples. Furthermore, explicit modeling of deformation fields allows synthesizing segmentation masks and images in exact correspondence by simply applying the generated transformation to an input image and the corresponding annotation (a minimal sketch of this joint warping step is given below). Our experiments on cardiac magnetic resonance (MR) images showed that, for the task of segmentation in small training data scenarios, the proposed method substantially outperforms conventional augmentation techniques.

    02/11/2019 ∙ by Krishna Chaitanya, et al.

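    Illustrative only: a minimal sketch of the joint warping step described above, with smooth random fields standing in for the outputs of the trained generative network. The function `augment` and its parameters `alpha` and `sigma` are assumptions for illustration, not the paper's code.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def augment(image, label, alpha=10.0, sigma=8.0, rng=None):
        """Warp an image and its label map with the same smooth deformation."""
        rng = rng or np.random.default_rng()
        h, w = image.shape
        # Smooth random displacement fields; in the paper these come from
        # the task-driven generative network rather than from noise.
        dx = gaussian_filter(rng.standard_normal((h, w)), sigma) * alpha
        dy = gaussian_filter(rng.standard_normal((h, w)), sigma) * alpha
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        coords = np.stack([ys + dy, xs + dx])
        # Identical warp for image and label keeps them in exact
        # correspondence; order=0 (nearest neighbour) preserves the
        # discrete label values.
        warped_image = map_coordinates(image, coords, order=1, mode="reflect")
        warped_label = map_coordinates(label, coords, order=0, mode="reflect")
        # Additive intensity mask, applied to the image only.
        warped_image += gaussian_filter(rng.standard_normal((h, w)), sigma) * 0.1
        return warped_image, warped_label
    ```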

  • Learning to Segment Medical Images with Scribble-Supervision Alone

    Semantic segmentation of medical images is a crucial step for the quantification of healthy anatomy and diseases alike. The majority of current state-of-the-art segmentation algorithms are based on deep neural networks and rely on large datasets with full pixel-wise annotations. Producing such annotations can often only be done by medical professionals and requires large amounts of valuable time. Training a medical image segmentation network with weak annotations remains a relatively unexplored topic. In this work, we investigate training strategies to learn the parameters of a pixel-wise segmentation network from scribble annotations alone (a minimal sketch of the loss masking involved is given below). We evaluate the techniques on the public cardiac (ACDC) and prostate (NCI-ISBI) segmentation datasets. We find that networks trained on scribbles suffer from a remarkably small degradation in Dice of only 2.9% with respect to a network trained on full annotations.

    07/12/2018 ∙ by Yigit B. Can, et al.

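    One way to train on scribbles alone is to mask the loss so that unannotated pixels contribute nothing to the gradient. Below is a minimal sketch of that masking using PyTorch's `ignore_index`; the `IGNORE` convention and tensor shapes are assumptions for illustration, not the paper's implementation.

    ```python
    import torch
    import torch.nn.functional as F

    IGNORE = 255  # value marking pixels that received no scribble

    def scribble_loss(logits, scribbles):
        """logits: (N, C, H, W) network output; scribbles: (N, H, W) int64
        labels, with IGNORE everywhere left unannotated."""
        return F.cross_entropy(logits, scribbles, ignore_index=IGNORE)

    # Example: only the scribbled pixels drive the gradient.
    logits = torch.randn(2, 4, 64, 64, requires_grad=True)
    scribbles = torch.full((2, 64, 64), IGNORE, dtype=torch.long)
    scribbles[:, 30:34, :] = 1  # a horizontal scribble of class 1
    loss = scribble_loss(logits, scribbles)
    ```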

  • PHiSeg: Capturing Uncertainty in Medical Image Segmentation

    Segmentation of anatomical structures and pathologies is inherently ambiguous. For instance, structure borders may not be clearly visible, or different experts may have different annotation styles. The majority of current state-of-the-art methods do not account for such ambiguities but rather learn a single mapping from image to segmentation. In this work, we propose a novel method to model the conditional probability distribution of the segmentations given an input image. We derive a hierarchical probabilistic model in which separate latent spaces are responsible for modelling the segmentation at different resolutions. Inference in this model can be performed efficiently using the variational autoencoder framework (a simplified sampling sketch is given below). We show that our proposed method can be used to generate significantly more realistic and diverse segmentation samples than recent related work, both when trained with annotations from a single annotator and when trained with annotations from multiple annotators.

    06/07/2019 ∙ by Christian F. Baumgartner, et al.

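    A much-simplified sketch of drawing diverse segmentation samples via the reparameterisation trick. PHiSeg uses a hierarchy of latent spaces at several resolutions; this single-latent version only illustrates the sampling idea, and `prior_net` and `decoder` are assumed, already-trained modules rather than the paper's architecture.

    ```python
    import torch

    @torch.no_grad()
    def sample_segmentations(image, prior_net, decoder, n_samples=8):
        """Draw several plausible segmentations for one input image."""
        mu, logvar = prior_net(image)            # image-conditional latent prior
        std = torch.exp(0.5 * logvar)
        samples = []
        for _ in range(n_samples):
            z = mu + std * torch.randn_like(std)             # reparameterised draw
            samples.append(decoder(image, z).argmax(dim=1))  # one plausible mask
        return torch.stack(samples)              # (n_samples, N, H, W)
    ```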

  • A Lifelong Learning Approach to Brain MR Segmentation Across Scanners and Protocols

    Convolutional neural networks (CNNs) have shown promising results on several segmentation tasks in magnetic resonance (MR) images. However, the accuracy of CNNs may degrade severely when segmenting images acquired with different scanners and/or protocols than the training data, thus limiting their practical utility. We address this shortcoming in a lifelong multi-domain learning setting by treating images acquired with different scanners or protocols as samples from different, but related, domains. Our solution is a single CNN with shared convolutional filters and domain-specific batch normalization layers (sketched below), which can be tuned to new domains with only a few (≈ 4) labelled images. Importantly, this is achieved while retaining performance on older domains whose training data may no longer be available. We evaluate the method for brain structure segmentation in MR images. Results demonstrate that the proposed method largely closes the gap to the benchmark of training a dedicated CNN for each scanner.

    05/25/2018 ∙ by Neerav Karani, et al.

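    A minimal sketch of the shared-filters, domain-specific batch-normalisation idea: one set of convolutional weights serves all domains, while each domain keeps its own BatchNorm2d, so adding a new scanner means adding (and briefly tuning) only a new normalisation layer. Layer sizes and names are illustrative assumptions, not the paper's architecture.

    ```python
    import torch.nn as nn
    import torch.nn.functional as F

    class DomainAdaptiveBlock(nn.Module):
        """One convolution shared across domains, one BatchNorm2d per domain."""

        def __init__(self, channels, num_domains):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bns = nn.ModuleList(nn.BatchNorm2d(channels)
                                     for _ in range(num_domains))

        def forward(self, x, domain):
            # The convolutional filters are reused everywhere; only the
            # normalisation statistics and affine parameters differ per domain.
            return F.relu(self.bns[domain](self.conv(x)))
    ```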

  • The validity of RFID badges measuring face-to-face interactions

    Face-to-face interactions are important for a variety of individual behaviors and outcomes. In recent years, a number of human sensor technologies have been proposed to incorporate direct observations into behavioral studies of face-to-face interactions. One of the most promising emerging technologies is the active Radio Frequency Identification (RFID) badge. RFID badges are increasingly applied in behavioral studies because of their low cost, straightforward applicability, and moderate ethical concerns. However, despite the attention that RFID badges have recently received, there is a lack of systematic tests of how valid RFID badges are in measuring face-to-face interactions. With two studies we aim to fill this gap. Study 1 (N = 11) examines how well data assessed with RFID badges correspond with video data of the same interactions (construct validity) and how this fit can be improved using straightforward data-processing strategies. The analyses show that the RFID badges have a sensitivity of 50%, which improves considerably when gaps of less than 75 seconds are interpolated; the specificity is relatively less affected by this interpolation process (before interpolation 97%, after interpolation 94.7%). A minimal sketch of this interpolation step is given below. In Study 2 (N = 73), we show that self-report data on social interactions correspond highly with data gathered with the RFID badges (criterion validity).

    11/28/2018 ∙ by Timon Elmer, et al.

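    A minimal sketch of the gap-interpolation step and the resulting sensitivity/specificity computation, assuming one binary "interaction detected" value per second from the badges and a video-coded ground truth; function names and data layout are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def interpolate_gaps(detections, max_gap=75):
        """Fill gaps of fewer than `max_gap` seconds between detected seconds."""
        filled = detections.copy()
        detected = np.flatnonzero(detections)
        for a, b in zip(detected[:-1], detected[1:]):
            if 1 < b - a <= max_gap:   # a run of 1..max_gap-1 missed seconds
                filled[a:b] = 1
        return filled

    def sensitivity_specificity(pred, truth):
        """Agreement of badge data (pred) with video-coded ground truth."""
        tp = np.sum((pred == 1) & (truth == 1))
        tn = np.sum((pred == 0) & (truth == 0))
        fn = np.sum((pred == 0) & (truth == 1))
        fp = np.sum((pred == 1) & (truth == 0))
        return tp / (tp + fn), tn / (tn + fp)
    ```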