ASC-Net: Unsupervised Medical Anomaly Segmentation Using an Adversarial-based Selective Cutting Network

12/16/2021 · Raunak Dey, et al. · Shanghai Jiao Tong University

In this paper we consider the problem of unsupervised anomaly segmentation in medical images, which has attracted increasing attention in recent years due to the expense of pixel-level annotations from experts and the existence of a large amount of unannotated normal and abnormal image scans. We introduce a segmentation network that utilizes adversarial learning to partition an image into two cuts, with one of them falling into a reference distribution provided by the user. This Adversarial-based Selective Cutting network (ASC-Net) bridges the two domains of cluster-based deep segmentation and adversarial-based anomaly/novelty detection algorithms. Our ASC-Net learns from normal and abnormal medical scans to segment anomalies in medical scans without any masks for supervision. We evaluate this unsupervised anomaly segmentation model on three public datasets, i.e., BraTS 2019 for brain tumor segmentation, LiTS for liver lesion segmentation, and MS-SEG 2015 for brain lesion segmentation, and also on a private dataset for brain tumor segmentation. Compared to existing methods, our model demonstrates tremendous performance gains in unsupervised anomaly segmentation tasks. Although there is still room to further improve performance compared to supervised learning algorithms, the promising experimental results and interesting observations shed light on building an unsupervised learning algorithm for medical anomaly identification using user-defined knowledge.


1 Introduction

In the field of computer vision and medical image analysis, unsupervised image segmentation has been an active research topic for decades [17, 24, 30, 31, 37], due to its potential of being applied in many applications without requiring manually labelled data. Anomaly detection in images, which identifies out-of-distribution data samples or regions of interest, often calls for unsupervised algorithms, because of the complexity and large variation of anomalies and the challenge of obtaining a large amount of high-quality anomaly annotations.

Recently, advances in Generative Adversarial Networks (GANs) [18] have given rise to a class of anomaly detection algorithms, inspired by AnoGAN [35], that identify abnormal events, behaviors, or regions in images or videos [12, 16, 36, 27]. The AnoGAN approach learns a manifold of normal images by mapping the image space to a latent space using GANs. To detect the anomaly in a query image, AnoGAN needs an iterative search in the latent space to reconstruct the closest corresponding normal image, which is then subtracted from the query image to locate the anomaly. That is, the AnoGAN family, including f-AnoGAN [34] and other follow-up works [4, 5, 20, 40, 41], focuses on reconstructing the corresponding normal image for a query image instead of working on anomaly detection directly. As a result, the image reconstruction quality heavily affects the performance of anomaly detection or segmentation.

To center the focus on the anomaly without needing a faithful reconstruction, we propose an adversarial-based selective cutting network (ASC-Net), as shown in Figure 1. Our goal is to decompose an image into two selective cuts and thereby simplify the task of anomaly segmentation, according to a collection of normal images serving as a reference distribution. Typically, this reference distribution is defined by a set of images provided by users or experts who have vague knowledge and expectations of normal control cases. In this way, one cut of an image falls into the reference distribution, while image content outside of the reference distribution is grouped into the other cut. These two cuts allow us to reconstruct the original input image semantically, resulting in a simplified reconstruction where normal and abnormal regions can easily be separated by intensity thresholding.

In the architecture design shown in Figure 1, we generate the above two cuts simultaneously in the U-Net framework [32], which is extended with two upsampling branches, one of them connecting to a GAN discriminator network. This discriminator introduces the knowledge contained in the reference image distribution and drives the main network to decompose images into two softly disjoint regions. That is, the generation of our selective cuts is constrained by the reference image distribution. We keep the reconstruction after the two-cut generation simple, to preserve the semantic information in the two cuts and make thresholding-based anomaly segmentation possible. As a result, we obtain a joint estimate of the anomaly segmentation and a reduced corresponding normal image, thus bypassing the need for a perfect reconstruction. Apart from a collection of normal images forming the reference distribution, we use no labels at the image level and no annotations at the pixel/voxel level for the anomaly; hence, our method is an unsupervised solution for anomaly detection and segmentation in medical images.

Figure 1: Overview of our proposed adversarial-based selective cutting network (ASC-Net) for unsupervised anomaly segmentation in medical scans.

We evaluate our proposed unsupervised anomaly segmentation network on three public datasets, i.e., MS-SEG2015 [7], BraTS 2019 [1, 2, 26], and LiTS [6]. For the MS-SEG2015 dataset, an exhaustive study comparing multiple existing autoencoder-based, variational-autoencoder-based, and GAN-based models is reported in [3]. Compared to the best Dice scores reported in [3], we obtain significant performance gains, with an improvement of 23.24% in mean Dice score without any post-processing and 20.40% with post-processing (different from [3], we use a simple open-and-close operation as the post-processing). For the BraTS dataset, our experiments show that f-AnoGAN, the method that performs best after post-processing in [3], has difficulty reconstructing the normal images required for anomaly segmentation. By contrast, under a two-fold cross-validation setting we obtain a mean Dice score of 63.67% for BraTS brain tumor segmentation before post-processing and 68.01% after post-processing. Similarly, for LiTS liver lesion segmentation, we obtain 32.24% before post-processing and improve the mean Dice score to 50.23% using a simple open-and-close post-processing scheme. We also evaluate our method on a private dataset to further demonstrate its effectiveness and generalization in practice, achieving over a 90% mean Dice score after post-processing.

Overall, the contributions of our proposed method in this paper are summarized below:


  • Proposing an adversarial-based framework for unsupervised anomaly segmentation, which bypasses normal image reconstruction and works on anomaly segmentation directly. This framework presents a general strategy to generate two selective cuts while incorporating human knowledge via a reference image set.

  • To the best of our knowledge, ours was the first unsupervised segmentation algorithm applied to both the BraTS 2019 and LiTS public datasets when it was proposed. Also, our method outperforms the AnoGAN family and other popular methods presented in [3] on the publicly available MS-SEG2015 dataset.

This paper is an extension of our MICCAI paper published in 2021 [15]. To further demonstrate the effectiveness of our proposed method, we provide more experimental results, including comparisons to a new algorithm that became available online after the MICCAI conference and additional evaluations on a private dataset. In addition, compared to the conference version, this paper presents more detailed descriptions and explanations of motivations and design choices, new observations made while working on the private dataset, and thorough discussions of the merits and limitations of our method.

2 Related Work

In this paper, we perform unsupervised segmentation for anomalies in medical images via clustering. Therefore, we review the related work from these two perspectives, i.e., clustering and anomaly segmentation in an unsupervised manner.

2.1 Unsupervised Segmentation Based on Clustering

Before the prevalence of deep neural networks, the primary means of medical image segmentation involved combinations of classical clustering operations such as edge detection, watershed transformation, and k-means, as in [19, 29, 28]. Since deep learning became popular in 2012, it has opened a new avenue for developing more efficient and effective image segmentation methods. W-Net [39] is a recent representative unsupervised image segmentation approach, which introduces the classical graph clustering method [37] into deep neural networks to find k clusters in an image. Because W-Net requires computing the similarity score matrix over all pixels, it has a high computational cost for high-resolution images. Moreover, W-Net has difficulty generating consistent cut outputs due to the uncertainty in the network initialization.

2.2 Unsupervised Anomaly Detection and Segmentation

The primary school of thought in unsupervised anomaly detection and segmentation has revolved around training frameworks to learn the manifold of healthy samples, which helps filter out anomalies in a query image during testing through reconstruction and comparison. GAN-based and autoencoder (AE) based algorithms are the two most representative categories addressing the unsupervised anomaly detection and segmentation problem.

GAN-Based Methods. The majority of GAN-based applications [5, 20, 40, 41] are motivated by the pioneering AnoGAN paper [35], which was later improved with a faster version, f-AnoGAN [34]. The principal idea of all methods in the AnoGAN family involves training a neural network in an adversarial fashion to learn to generate perfect healthy images. In this way, any unhealthy or anomalous image cannot be reconstructed faithfully, and thus the residual between a query image and its regenerated counterpart at test time reveals the anomaly.

AE-Based Methods. Following the success of the AnoGAN idea and accompanied by advances in Variational Autoencoders (VAEs) [21, 22], the adversarial autoencoder in [25] replaces the KL-divergence loss in the VAE with an adversarial network to ensure the generation of meaningful samples from the prior distribution. A similar combination has been proposed in [23] to merge GANs and VAEs, which has been utilized for anomaly segmentation in brain MRI scans [4]. Different variations of VAEs have also been employed in [9, 10, 43] for brain tumor detection.

2.3 Comparison with Existing Methods

One main difference between existing methods and ours is the role that image reconstruction plays in anomaly detection. AnoGAN and the approaches it inspired [4, 5, 20, 41], as well as the VAE variants, focus on regenerating the corresponding normal image for a query image rather than working on anomaly detection directly. Their performance on anomaly detection highly depends on the image reconstruction quality, which makes these methods susceptible to GAN instability issues during reconstruction. Failing to reconstruct a corresponding normal image leads to inaccurate predictions for the anomaly detection and segmentation task. On the contrary, we aim to develop a solution that is constrained by the underlying normal images and generates a reduced reconstruction of the input image, which can be segmented by simple thresholding. Our method does generate a healthy component of an input image; however, its generation quality is not used as a termination criterion for our algorithm. Instead, a clear histogram separation with distinct peaks in the reduced reconstruction is the requirement for termination. This design choice frees our framework from the instability of the adversarial training scheme.

Another main difference is the way disjoint groups of an image are defined. Unsupervised segmentation and anomaly detection both aim to produce clusters that separate image pixels into disjoint groups. Among anomaly detection algorithms, the AnoGAN family [34, 35] separates a query image into normal and abnormal groups by generating high-quality normal images based on GANs, while graph-based methods like W-Net [39] could perform two cuts to obtain two clusters, i.e., normal and anomalous. Our approach, on the contrary, defines the disjoint groups at the semantic level of the input image, based on the complement laws in set theory. In this way, we decompose the input image into two sets according to the grouping of pixels in the sets, not simply based on the pixel intensity itself. As a result, we achieve better and more robust results compared to methods in the AnoGAN family and W-Net (we tried W-Net, but it cannot provide consistent outputs at each run).

A recent work [42] introduces a self-learning approach, which creates an artificial dataset to pretrain a segmentation network and tests on similar datasets. That is, for each segmentation task, this approach needs to create a similar artificial dataset with pseudo-annotations, which risks introducing label noise and segmentation artifacts. Unlike this method, our framework is general and does not produce artificial datasets for different tasks. For example, we are able to segment brain lesions and brain tumors using non-tumor slices collected from the BraTS dataset, as shown in the experiments.

3 Adversarial-based Selective Cutting Network (ASC-Net)

3.1 Network Framework

Our network aims to decompose an input image into two discontinuous sub-images, which is achieved by the main module M shown in Fig. 1. To guide the decomposition, we incorporate the user knowledge defined by a reference distribution R_d of normal images, using the discriminator D. These two modules are the core components of our proposed ASC-Net, and they are followed by a simple clustering step (the thresholding T) to obtain the segmentation mask of the anomaly.

Figure 2: Visualization of the “disjoincy” between the two cut outputs of ASC-Net, I_f and I_w (top and bottom). Left to right: the generated image, its histogram, and four columns showing the histogram-equalized images of the thresholded peaks, with the first peak as the first image, etc. The first peak of the top image is disjoint with the last peak of the bottom image, etc.

Main Module M. The main module aims to generate two selective cuts, which separate the normal and abnormal information in an input image. M follows an encoder-decoder architecture like the U-Net, including one encoder and two decoders. The encoder extracts features of an input image I_in, which could be an image located within or outside of the reference distribution R_d, a collection of normal images. One decoder, the second upsampling branch, is designed to generate a “fence” cut I_f that is constrained by an image fence formed by R_d; this branch tries to fool the discriminator D. The other decoder, the first upsampling branch, is designed to generate a “wild” cut I_w, which captures the leftover image content that is not included in I_f. As a result, this branch produces an image that complements the fence-cut output. Since the wild-cut output is complementary to the fence-cut output, image information that cannot be covered by the reference distribution, like the anomaly, is included in the wild-cut output. The complementary relation between the two cuts I_f and I_w is enforced by a positive Dice loss, i.e., a “disjoincy” loss, as discussed in the paragraph on loss functions. Figure 2 demonstrates the “disjoincy” of I_f and I_w, such as their complementary histogram distributions and their different thresholded images at different peaks.

Apart from the weak guidance of the “disjoincy” loss for the “wild” cut, we adopt a reconstruction branch to make sure our two selective cuts include enough information to construct a coarse version of the original input. The reconstructor consists of a convolution layer with the Sigmoid activation function, applied on the concatenation of the two cut outputs I_f and I_w to regenerate the input image. This reconstructor ensures that I_f does not drift far from the input image and that I_w is not an empty image if an anomaly or novelty exists. Figure 3 shows the histogram separation of the reconstructed images, compared to the original input images, which present complex histogram peaks and make it difficult to separate the brain tumor from the background and other tissues via a simple thresholding. The discontinuous histogram distribution of the reconstruction I_rec is inherited from the two generated sub-images I_f and I_w through a simple weighted combination. As a result, the segmentation task becomes relatively easy to perform on the reconstructed image I_rec.

Figure 3: Histogram comparison of two sample images. Left to right: the input image, its histogram, its reconstructed image using ASC-Net, and the histogram of the reconstructed image. The histograms of the input images vary greatly, while those of their reconstructions show peaks at similar ranges, which enables a thresholding-based pixel-level separation.

Discriminator D. The GAN discriminator D tries to distinguish the generated image I_f from a reference distribution defined by a set of images R_d, which are provided by the user or experts. R_d typically includes images collected from the same group, for instance, normal brain scans, which share similar structures and lie on a manifold. Introducing R_d allows us to incorporate our vague prior knowledge about a task into a deep neural network. Typically, it is non-trivial to formulate such prior knowledge explicitly; however, it can be represented implicitly by a selected image set. The discriminator D is the essential component that allows our ASC-Net to generate selective cuts according to the user’s input, without requiring other supervision or pixel-level annotations.

Thresholding T. The reconstruction branch consistently provides a reduced version of the original input image, where unsupervised segmentation is easy to achieve via clustering. To cluster the reconstructed image I_rec into two groups at the pixel level, we choose a thresholding approach with the threshold values obtained from the histogram of I_rec. We observed that for an anomaly that is brighter than the surrounding tissues, like a BraTS brain tumor, the intensity value at the rightmost peak of the histogram is a suitable threshold; in the opposite case, like darker LiTS liver lesions, the value at the leftmost peak serves as the threshold. We also observed that the histograms of the reconstructed images for different inputs share the same cut-off point for the left or right peaks, which allows using one threshold for an entire dataset.
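As an illustration, here is a minimal sketch of this threshold selection, assuming the reconstruction has been rescaled to [0, 255] and using scipy's generic peak finder; the paper reads the peak off the histogram directly, so this helper is illustrative rather than the authors' exact procedure.

```python
import numpy as np
from scipy.signal import find_peaks

def rightmost_peak_threshold(recon, bins=256):
    # Histogram of the reduced reconstruction; intensities assumed
    # rescaled to [0, 255].
    hist, edges = np.histogram(recon.ravel(), bins=bins, range=(0, 255))
    peaks, _ = find_peaks(hist)            # indices of local maxima
    if len(peaks) == 0:                    # degenerate histogram
        return float(edges[-2])
    return float(edges[peaks[-1]])         # rightmost peak

# Bright anomalies (e.g., BraTS tumors): keep pixels at/above the peak.
# mask = (recon >= rightmost_peak_threshold(recon)).astype(np.uint8)
```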

Loss Functions. The main module M includes three loss functions: (i) the image generation loss for I_f, L_gen; (ii) the “disjoincy” loss between I_f and I_w, L_dis; and (iii) the reconstruction loss, L_rec. In particular, M tries to generate an image I_f that fools the discriminator D by minimizing L_gen = (1/N) \sum_{i=1}^{N} |D(I_f^(i)) - 1|, where N is the number of samples in the training batch. M also tries to generate an image I_w that is complementary to I_f by minimizing the positive Dice score L_dis = 2 \sum (I_f * I_w) / (\sum I_f + \sum I_w). The last term is a mean-squared-error (MSE) loss between the input image I_in and the reconstructed image I_rec: L_rec = ||I_in - I_rec||_2^2. The discriminator D tries to reject the output I_f but accept the images from the reference distribution R_d, by minimizing L_D = (1/N) \sum_{i=1}^{N} |D(I_f^(i)) + 1| + (1/K) \sum_{j=1}^{K} |D(R_d^(j)) - 1|, where K is the number of images in R_d. Even though M and D are tied in an adversarial setup, we do not use the Earth Mover distance [33] in the loss function, since we would like to identify both positive and negative samples with equal precision; we use the mean absolute error (MAE) instead.
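These losses can be sketched in TensorFlow as follows, assuming the Tanh discriminator described in Section 3.2 labels reference images +1 and fence-cut outputs -1; the function and tensor names are illustrative.

```python
import tensorflow as tf

def generation_loss(d_on_fake):
    # M pushes D(I_f) toward the "real" label +1 (MAE, not EMD).
    return tf.reduce_mean(tf.abs(d_on_fake - 1.0))

def disjoincy_loss(i_f, i_w, eps=1e-7):
    # Positive Dice overlap between the two cuts; minimizing it
    # pushes I_f and I_w toward complementary (disjoint) content.
    inter = tf.reduce_sum(i_f * i_w)
    return 2.0 * inter / (tf.reduce_sum(i_f) + tf.reduce_sum(i_w) + eps)

def reconstruction_loss(i_in, i_rec):
    # MSE between the input and its reduced reconstruction.
    return tf.reduce_mean(tf.square(i_in - i_rec))

def discriminator_loss(d_on_real, d_on_fake):
    # D accepts reference images (+1) and rejects fence cuts (-1).
    return (tf.reduce_mean(tf.abs(d_on_real - 1.0))
            + tf.reduce_mean(tf.abs(d_on_fake + 1.0)))
```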

3.2 Architecture Details and Training Scheme

We use the same network architecture for all of our experiments, as shown in Fig. 1. The encoder consists of four blocks of two convolution layers with 3×3 filters, followed by a max pooling layer with a 2×2 filter, and batch normalization after every convolution layer. After every pooling layer we also introduce a dropout of 0.3. The numbers of feature maps in the convolution layers of the four blocks are 32, 64, 128, and 256. Following these blocks is a transition layer of two convolution layers with 512 feature maps, followed by batch normalization layers. The two decoders generating I_f and I_w are connected to the encoder and mirror its layers, with the pooling layers replaced by 2D transposed convolutional layers whose numbers of feature maps mirror those in the encoder blocks. Similar to a U-Net, we also introduce skip connections across corresponding levels of the encoder and decoders. The reconstructor is simply a Sigmoid layer applied to the concatenation of I_f and I_w, resulting in a simplified CompNet [13]. The discriminator D mimics the architecture of the encoder, except for the last layer, where a dense layer is used for classification. All the intermediate layers use the ReLU activation function and the final output layers use the Sigmoid activation. The only exception is the output of the discriminator D, which uses a Tanh activation function to separate I_f and images from R_d to the maximum extent.
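For illustration, below is a condensed Keras sketch of the generator under the stated hyper-parameters (3×3 convolutions, 2×2 pooling, dropout 0.3, feature maps 32-256 plus a 512-map transition). The input size, the 1×1 output convolutions, and the exact layer arrangement are our assumptions, and the discriminator (an encoder-shaped classifier ending in a Tanh dense layer) is omitted for brevity.

```python
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by batch normalization.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
    return x

def build_asc_net(input_shape=(240, 240, 1)):
    inp = keras.Input(shape=input_shape)
    skips, x = [], inp
    for f in [32, 64, 128, 256]:                     # encoder blocks
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Dropout(0.3)(x)
    x = conv_block(x, 512)                           # transition layer

    def decoder(z):                                  # mirrored decoder
        for f, s in zip([256, 128, 64, 32], reversed(skips)):
            z = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(z)
            z = layers.Concatenate()([z, s])         # U-Net skip link
            z = conv_block(z, f)
        return layers.Conv2D(1, 1, activation="sigmoid")(z)

    i_w, i_f = decoder(x), decoder(x)                # wild and fence cuts
    i_rec = layers.Conv2D(1, 1, activation="sigmoid")(
        layers.Concatenate()([i_f, i_w]))            # reconstructor
    return keras.Model(inp, [i_f, i_w, i_rec])
```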

We implement our architecture in Keras with a TensorFlow backend, using the Adam optimizer with a learning rate of 5e-5. We follow two distinct training stages:


  • In the first stage, we train D and M in cycles. We start by training D, with images from R_d given True labels and fence-cut outputs I_f given False labels; these training samples are shuffled randomly. Following D, we train M with I_in as input and the weights of D frozen, while preserving the connection between M and D. The objective of M is to morph the appearance of I_f toward R_d to fool D with the frozen weights. We call these two steps one cycle, and in each step there may be more than one epoch of training for D or M.

  • In the second stage, D and M continue to be trained alternately; however, the input images to D are changed, since the training purpose at this stage is to focus on the differences between I_in and R_d, while ignoring the noisy biases created by M in transforming I_in to I_f. To achieve this, we augment the reference distribution with its own images passed through M, i.e., M(R_d). We treat these as true images, and the union set R_d ∪ M(R_d) is used to update D. A minimal sketch of this two-stage loop is given after this list.
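In the sketch below, train_D, train_M, and fence_cut are hypothetical helpers standing in for the corresponding Keras training and inference calls: train_D fits the discriminator on (images, labels), train_M updates the main module with D's weights frozen, and fence_cut runs M and returns the fence-cut outputs I_f.

```python
import numpy as np

def run_training(R_d, I_train, cycles_stage1=2, cycles_stage2=1):
    # Stage 1: alternate D (real = R_d, fake = I_f) and M.
    for _ in range(cycles_stage1):
        fakes = fence_cut(I_train)
        images = np.concatenate([R_d, fakes])
        labels = np.concatenate([np.ones(len(R_d)),       # True
                                 np.zeros(len(fakes))])   # False
        order = np.random.permutation(len(images))        # shuffle
        train_D(images[order], labels[order])
        train_M(I_train)                 # D frozen inside train_M

    # Stage 2: treat M(R_d) as real too, so D focuses on the
    # differences between I_in and R_d rather than M's biases.
    for _ in range(cycles_stage2):
        reals = np.concatenate([R_d, fence_cut(R_d)])     # R_d U M(R_d)
        fakes = fence_cut(I_train)
        images = np.concatenate([reals, fakes])
        labels = np.concatenate([np.ones(len(reals)),
                                 np.zeros(len(fakes))])
        order = np.random.permutation(len(images))
        train_D(images[order], labels[order])
        train_M(I_train)
```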

Figure 4: Sample results on MS-SEG2015, BraTS 2019, LiTS, and our private dataset (top to bottom), obtained from the various branches of our ASC-Net. Left to right: the input image I_in; the fence-cut output I_f, which is contrast-enhanced to present the content contained in the brain region; the wild-cut output I_w; the reconstructed image I_rec; the ground-truth mask; our predicted segmentation mask; and the predicted region of interest. None of these include any post-processed images.

4 Applications

We evaluate our model on three unsupervised anomaly segmentation tasks: MS lesion segmentation, brain tumor segmentation, and liver lesion segmentation. We use the MS-SEG2015 [7] training set, BraTS [1, 2, 26], and LiTS [6] datasets in these tasks. Also, a private dataset is collected to further evaluate the effectiveness of our method.

4.1 Datasets and Experimental Settings

MS-SEG2015. The training set consists of 21 scans from 5 subjects, with each scan of dimensions 181×217×181. We resize the axial slices to 240×240 so that we can share the same network design as the rest of the experiments.

BraTS 2019. This dataset consists of 335 MRI brain scans in the training set, collected from 259 subjects with High-Grade Gliomas (HGG) and 76 subjects with Low-Grade Gliomas (LGG). The 3D dimensions of the images are 240×240×155.

LiTS. The training set of LiTS consists of 130 abdominal CT scans of patients with liver lesions, collected from multiple institutions. Each scan has a varying number of slices with in-plane dimensions of 512×512. We resize these CT slices to 240×240 to share the same network architecture with the other tasks.

Private Dataset. This in-house dataset, collected from Zhongnan Hospital in Wuhan, China, consists of non-skull-stripped brain MRI images of different modalities, including T2 FSE (Fast Spin Echo, short for T2) and T2 Flair (short for Flair) scans. It contains 55 normal control subjects for establishing the reference distribution, 26 subjects with brain tumors for training, and 41 subjects with brain tumors and manually-segmented masks for testing. The images are of varying resolutions, so we resample the 2D slices to 240×240.

For all experiments, the image intensity is normalized to [0, 1] over the 3D volume; however, we perform the 3D segmentation task slice by slice using axial slices. To balance the sample sizes of R_d and the set of training inputs, we randomly sample and duplicate images in the smaller set to make up the difference, as sketched below.
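A small sketch of this preprocessing, assuming per-volume min-max normalization and list-based slice sets:

```python
import numpy as np

def normalize_volume(vol):
    # Min-max normalize one 3D volume to [0, 1].
    vol = vol.astype(np.float32)
    return (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)

def axial_slices(vol):
    # (H, W, S) volume -> list of 2D axial slices.
    return [vol[:, :, k] for k in range(vol.shape[-1])]

def balance(set_a, set_b, rng=np.random.default_rng(0)):
    # Duplicate random members of the smaller set until sizes match.
    if len(set_a) < len(set_b):
        idx = rng.choice(len(set_a), size=len(set_b) - len(set_a))
        set_a = set_a + [set_a[i] for i in idx]
    elif len(set_b) < len(set_a):
        idx = rng.choice(len(set_b), size=len(set_a) - len(set_b))
        set_b = set_b + [set_b[i] for i in idx]
    return set_a, set_b
```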

Model | w/o post (%) | w post (%)
AE (dense) | 4.5 | 15.0±7.5
Context AE | 5.1 | 18.8±11.6
VAE | 5.1 | 20.0±12.4
Context VAE (original) | 6.0 | 26.7±11.2
Context VAE (gradient) | 5.9 | 12.7±8.8
GMVAE (dense) | 6.3 | 17.4±12.1
GMVAE (dense restoration) | 6.3 | 22.3±12.4
GMVAE (spatial) | 2.8 | 6.9±7.3
GMVAE (spatial restoration) | 2.7 | 11.8±11.0
f-AnoGAN | 8.9 | 27.8±14.0
Constrained AE | 9.7 | 20.9±10.0
Constrained AAE | 6.8 | 19.0±17.0
VAE (restoration) | 5.8 | 21.1±12.2
AnoVAEGAN | 6.6 | 20.0±13.3
Ours | 32.94±35.98 | 48.20±47.84
Table 1: Experimental comparison of anomaly segmentation on MS-SEG2015 between the methods evaluated in [3] and ours. The best results reported in [3] are those of Constrained AE (w/o post-processing) and f-AnoGAN (w post-processing).
Dice (%) | BraTS 2019 (public) | LiTS (public) | Flair, CV (private) | T2, CV (private) | T2, Full (private)
w/o post-processing | 63.67±16.24 | 32.24±20.74 | 79.89±49.32 | 88.57±46.28 | 85.79±47.63
w post-processing | 68.01±14.63 | 50.23±32.22 | — | 91.58±46.81 | 90.21±46.37
best baseline | 71.63±0.84 [42]† | 40.78±0.43 [42] | — | — | —
Table 2: Experimental comparison of anomaly segmentation on the remaining two public datasets and one private dataset. For post-processing the segmentation outputs of the private dataset, we do not use the open-and-close operation, because the predicted segmentation masks already consist of connected regions and such post-processing would make the results worse. Instead, we use the Flair modality to remove false positives, such as the CSF regions, which are easily mixed with tumors in the T2 modality based on our observations. †Their result was reported on BraTS 2018; see the detailed discussion in Section 4.2. CV is short for cross-validation, and “Full” indicates using the entire private dataset.

4.2 Experimental Results

MS Lesion Segmentation (MS-SEG2015). In this task, we randomly sample non-tumor, non-zero slices from the BraTS 2019 training set to form our reference distribution, similar in spirit to [3], which uses its own privately annotated healthy dataset. Meanwhile, the non-zero 2D slices of the MS-SEG2015 training set are used as inputs to the main module M. We train this network using three cycles in the first stage and one cycle in the second stage, and take the threshold at intensity 254 (the intensity range for computing the image histogram is [0, 255]), based on the rightmost peak of the image histogram.

We obtain a subject-wise mean Dice score of 32.94% without any post-processing. Using a simple post-processing with erosion and dilation filters, this number improves to a 48.20% mean Dice score. In comparison, a similar study conducted in [3], covering a multitude of algorithms including AnoVAEGAN [4] and f-AnoGAN, obtained a best mean Dice score of 27.8% after post-processing, achieved by f-AnoGAN. Before post-processing, the best method was the Constrained AutoEncoder [8] with a Dice score of 9.7%. An exhaustive list is presented in Table 1, and Figure 4 shows sample images of our results.
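The open-and-close post-processing can be sketched with scipy's morphology routines; the 3×3 structuring element below is an illustrative choice, since the paper does not state the filter size here.

```python
import numpy as np
from scipy import ndimage

def open_close(mask, size=3):
    # Opening (erosion then dilation) removes small false positives;
    # closing (dilation then erosion) fills small holes in the mask.
    structure = np.ones((size, size), dtype=bool)
    opened = ndimage.binary_opening(mask.astype(bool), structure)
    closed = ndimage.binary_closing(opened, structure)
    return closed.astype(np.uint8)
```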

Brain Tumor Segmentation (BraTS 2019). In this task, we perform patient-wise two-fold cross-validation on the BraTS 2019 training set. In each training fold, we use a 90/10 split after removing empty slices. The 2D slices without tumors from the 90% split are used to form our reference distribution R_d, while the 2D slices with tumors from the 90% split and all slices from the 10% split are used for training our model. As a result, the sample size of R_d amounts to 11,745 and 12,407 for folds one and two, respectively, while the size of the training input set amounts to 11,364 and 10,786, respectively. We train this network using two cycles in the first stage and one cycle in the second stage.

Figure 5: Query images (top) and their reconstructions (bottom) using f-AnoGAN [34].
Figure 6: Result comparison between T2 Flair (short for Flair) and T2 FSE (short for T2), Part I. Using the Flair scans has the potential of detecting additional anomalies, while using the T2 scans mixes tumors with other regions like CSF (the last three rows), which can be cleaned by simply using the Flair input image to post-process the T2 predictions (the second-to-last column). Left to right: the Flair input image, the reconstructed Flair input, the predicted segmentation mask on the Flair input, the T2 input image, the reconstructed T2 input, the predicted segmentation mask on the T2 input, the predicted segmentation mask after post-processing using the Flair input, and the ground-truth mask.
Figure 7: Result comparison between T2 Flair (short for Flair) and T2 FSE (short for T2), Part II. Using the Flair scans suffers from both under-segmentation (the first two rows) and over-segmentation (the last two rows), while using the T2 scans often suffers from under-segmentation, since the over-segmentation issue can mostly be handled by post-processing, similar to Fig. 6. Besides, using the Flair image usually obtains a better tumor segmentation with cleaner edges compared to the T2 sample. Left to right: the same column layout as in Fig. 6.
Figure 8: Result comparison between T2 Flair (short for Flair) and T2 FSE (short for T2), Part III. Using the Flair scans fails to detect some anomalies, while using the T2 scans consistently obtains the rough locations of the anomalies, although it still suffers from under-segmentation. We also see a cluster flip for the Flair case in row 3, where the tumor is part of the leftmost peak; analysis of these kinds of flips is left for future work. Left to right: the same column layout as in Fig. 6.

We obtain a subject-wise mean Dice score of 63.67% for the brain tumor segmentation. Using a simple post-processing scheme of erosion and dilation filters, we improve our mean Dice score to 68.01%. Figure 4 shows samples generated by our ASC-Net, and Table 2 shows our results before and after post-processing. We attempted to apply f-AnoGAN [34] by following its online instructions and failed to generate good reconstructions, as shown in Figure 5. This failure in the reconstruction brings to light the issue with regeneration-based methods and the complexity and stability of GAN-based image reconstruction.

A recent work [27] trains its algorithm using 1,112 healthy scans from the Human Connectome Project (HCP) young adult dataset [38] and tests on 50 random BraTS 2018 scans, obtaining a mean Dice score of 67.2% with a 15.5% standard deviation. Following our simple post-processing scheme, our algorithm performs better, increasing the mean Dice score by 0.81% and reducing the standard deviation by 0.97%, on two-fold cross-validation across 335 scans. Another work [42] tests on the BraTS 2018 training set, obtaining a mean Dice score of 71.63% with a standard deviation of 0.84%. While their method outperforms ours, it is worth mentioning that this self-supervised method is highly specialized to the particular task of tumor segmentation. The object to be segmented may be difficult to synthesize artificially or perfectly, creating a bottleneck in the pipeline. Furthermore, one assumption of a self-supervised learning algorithm is that the object to be learned is known beforehand. Thus, a model trained for brain tumors cannot readily be applied to other anomalies, e.g., brain lesions. Our method, on the other hand, has no such limitations and does not need pseudo-dataset generation for a new task. That is, our method is a general approach for anomaly detection and segmentation. Also, our method performs better than [42] on the liver lesion segmentation task after post-processing.

Liver Lesion Segmentation (LiTS). To generate the image data for this task, we remove the non-liver region using the liver mask generated by CompNet [13] and keep all non-zero images. This yields 11,926 2D slices without liver lesions, which form the reference distribution; the remaining 6,991 images are then used for training the model. We perform slice-by-slice two-fold cross-validation and train the network using two cycles in both the first and second stages. To extract the liver lesions, we first mask out the noise in the non-liver region of the reconstructed image, then invert the image and take a threshold value of 242, the rightmost peak of the inverted image's histogram.

We obtain a slice-wise mean Dice score of 32.24% for this liver lesion segmentation, which improves to 50.23% using a simple post-processing scheme of erosion and dilation filters. Sample results are shown in Fig. 4. Compared with [42], which obtains a mean Dice score of 40.78% with a standard deviation of 0.43%, we improve the mean Dice score by almost 10%, but with a much larger standard deviation. Unlike [42], where the network is pre-trained on an artificial tumor dataset and hence the pipeline is customized for tumor segmentation, our method does not need such information beforehand. We note that our standard deviation on the BraTS dataset is similar to that of [27]. This is because novelty/anomaly detection algorithms without a pre-defined task suffer from the co-morbidity issues discussed in Section 5.

Our method still has room to improve compared with supervised methods: a recent study [14] reports a cross-validation result of 67.3% under a supervised setting. Note that the annotations in the LiTS dataset are imperfect, with missing small lesions [11, 14]. Since we use these imperfect annotations to select images for the reference distribution, some slices with small lesions may be included and treated as normal examples. Also, the faint liver boundary incurs a fixed penalty per slice, which reduces the Dice score. This kind of segmentation noise could be better handled by a more sophisticated post-processing strategy.

Brain Tumor Segmentation (Private Dataset without Skull-Stripping). For this dataset we have access to good-quality reference distributions in different modalities; therefore, we further investigate the performance of our method using the T2 and Flair modalities. We perform two-fold cross-validation experiments on the 41 subjects with annotations, first using the Flair modality and then T2. In this setting, we do not use the images from the 26 subjects without tumor masks. The reference distribution for each experiment consists of image slices collected from the 55 normal control subjects.

The Flair results are obtained by training two cycles in the first stage and four cycles in the second stage to reach peak separation, with the threshold taken at intensity 170. For T2 we use two cycles in each stage, and the threshold is taken at 220. Both thresholds are chosen at the rightmost peaks of the histograms of the reconstructed images. We obtain a subject-wise mean Dice score of 79.89% on the Flair scans and 88.57% on the T2 scans. Despite the lower score compared to T2, the Flair modality offers the potential of identifying additional anomalies, which may not be limited to HGG or LGG, as shown in the third column of Fig. 6. However, since the focus of this experiment is to segment HGG and LGG only, the T2 modality outperforms Flair in terms of the Dice scores reported in Table 2 and the predicted masks shown in Fig. 6. Aside from that, on the Flair scans our method suffers from both under- and over-segmentation, as shown in Fig. 7, and struggles to segment tumors using one uniform threshold, as shown in Fig. 8. Typically, we use the rightmost peak as the threshold for brighter tumors; however, the peaks separating tumors in the cases of Fig. 8 occur as the leftmost peak. Such flips further lower the segmentation score, even though the algorithm is able to separate the anomaly as one of the two cuts.

Overall, our method performs consistently better at segmenting brain tumors on T2, as shown in Figs. 6-8. In the case of T2, the primary disadvantage is the inclusion of other regions, such as cerebrospinal fluid (CSF), eyeballs, etc., which appear dark in the Flair modality. To alleviate these false positives on T2 scans, we multiply the predicted masks with the Flair input images and then re-calibrate the output by thresholding at intensity 50 (roughly 0.2 in the range [0, 1]) to generate our final mask; a sketch is given below. This post-processing is our choice for the private dataset. We did not use the erosion/dilation operation because it is more effective for cases with discontinuous segmentation results, which our public-dataset predictions suffer from, as shown in Fig. 4, but not our private dataset. This new post-processing improves the performance to a patient-wise mean Dice score of 91.58% on T2 scans with two-fold cross-validation.
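A sketch of this Flair-based cleanup, assuming intensities in [0, 255]:

```python
import numpy as np

def flair_cleanup(t2_mask, flair_img, thresh=50):
    # Gate the T2-based mask by the Flair image, then keep only pixels
    # whose Flair intensity exceeds the threshold; this removes regions
    # that are dark in Flair, such as CSF, from the T2 prediction.
    gated = t2_mask.astype(np.float32) * flair_img.astype(np.float32)
    return (gated > thresh).astype(np.uint8)
```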

Since the two-fold cross-validation experiment highlights the strengths of the T2 modality with respect to segmenting HGG and LGG, we next perform a third experiment utilizing the entire dataset with T2 scans. In this task, the reference distribution is composed of 1,224 2D slices obtained from the 55 normal control subjects, and the training input passed to the main module is composed of 846 2D slices from the 26 subjects without annotations; these slices consist of a mixture of patients with and without tumors. The test set similarly consists of 1,151 2D slices, with and without tumors, from the 41 annotated subjects. We train our model using two cycles in the first stage and one cycle in the second stage. The threshold value of 220 is selected based on the rightmost peak of the image histogram, resulting in a subject-wise mean Dice score of 85.79%. As reported in Table 2, we achieve a much better result than on the similar BraTS task, since we have a good reference distribution curated from scans without anomalies present, as well as good segmentation masks for evaluation. Our post-processing step for the private dataset further improves the performance to a patient-wise mean Dice score of 90.21%.

Figure 9: Stability. The first image is the input and the second is the ground truth. The remaining images are reconstructions from various re-runs of the framework with varying training cycles and stages. All runs are able to isolate the anomaly in question.

5 Discussion and Future Work

In this paper we have presented a framework that performs a two-cut split in an unsupervised fashion, guided by a reference distribution using GANs. Unlike the methods in the AnoGAN family, our ASC-Net focuses on anomaly detection, with normal image reconstruction as a byproduct, and thus still produces competitive results where reconstruction-dependent methods such as f-AnoGAN fail. The current version of our ASC-Net solves the two-cut problem; handling more than two selective cuts is left for the future. A theoretical understanding of the proposed network is also needed, which we leave as future work.

Figure 10: The termination point of network training affects the reconstruction result. Left to right in each image: the input images, the images reconstructed via two cycles in the first stage and one in the second stage, and the images reconstructed after adding one more cycle in the second stage.
Figure 11: Demonstration of potentially better interpretation in predictions compared to the ground truth, with samples collected from MS-SEG (left) and BraTS (right). Left to right: the input image, the corresponding ground-truth mask, our reconstructed image, and our prediction, with possibly missing lesions highlighted in red ellipses.

Termination and stability. The termination point of this network training is periodic. The general guideline is that the peaks should be well separated, and we terminate our algorithm at a three- or four-peak separation. Continuing to train further may not always improve the segmentation, due to the accumulation of holes as shown in Fig. 10, even though the anomaly is visually captured in more intricate detail. Nevertheless, we encourage training longer, as it reduces false positives and provides a more detailed anomaly reconstruction, even though the Dice metric might not reflect this. In our experiments, we specify the number of cycles in each stage; however, due to the random nature of the algorithm and the lack of task-specific guidance, peak separation may occur much earlier, and training should then be stopped accordingly. The network reported in our BraTS 2019 experiments achieves a higher average Dice score than the same network trained longer, as shown in Fig. 10. Regarding stability, Figure 9 shows the anomaly estimated by different networks trained with different numbers of training cycles. We observe that while the appearance of the reconstruction changes, we still obtain the anomaly as a separate cut, since our framework does not depend on the quality of the reconstruction.

Figure 12: Thresholding makes a difference, especially for small lesions or tumors. Left to right: the input image sampled from the LiTS dataset, the corresponding ground-truth mask, our reconstructed image, the mask generated with a threshold at 242 (used for the entire dataset), and the mask generated with a threshold at 238 (selected based on the reconstructed image of this subject).

Limitations. The low Dice scores reported on the public datasets may stem from having to select non-tumor slices as our reference distribution, which does not account for other co-morbidities. This affects the performance of the framework: having no other guidance, it considers co-morbidities as detected anomalies as well. This statement is partially supported by our better brain tumor segmentation results on the private dataset, which has better healthy scans for forming the reference distribution. Moreover, although producing more false positives due to potential co-morbidities, our method offers the possibility of bringing other potential anomalies to the user's attention, or even of producing better anomaly masks than the ground truth. As shown in Figs. 6 and 11, compared to the ground-truth tumor masks, our method identifies more regions that present an appearance similar to the annotated tumor regions.

One possible improvement to our method is the automatic or subject-specific selection of the threshold used to obtain the final segmentation mask. In the current work, we choose one threshold for an entire test set, which is probably not optimal. For example, in the liver lesion segmentation experiment, taking a threshold of 238 based on a single sample's rightmost peak gives a better final mask for that particular sample than the threshold of 242 based on the entire dataset (see Fig. 12).

Regarding our post-processing scheme for the public datasets, we observe that it can remove some noise, like the faint liver boundary shown in Fig. 4; however, at the same time it can remove tiny lesions detected by the network, as shown in Fig. 12. Improving upon the raw network output without post-processing needs further investigation, which is left for future work.

Overall, our framework uncovers critical insights that may be missed in annotations across datasets curated from different institutions; it can also locate very small tumors accurately if an appropriate threshold is chosen. A better use of our method could be to assist in the initial discovery of anomalous or novel markers; following the initial discovery, human inspection or domain-knowledge-guided post-processing could provide better segmentation results for use in practice.

Acknowledgements

This work was supported by Shanghai Municipal Science and Technology Major Project 2021SHZDZX0102 and NSF 1755970.

References

  • [1] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. S. Kirby, J. B. Freymann, K. Farahani, and C. Davatzikos (2017) Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Scientific Data 4, pp. 170117.
  • [2] S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, R. T. Shinohara, C. Berger, S. M. Ha, M. Rozycki, et al. (2018) Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. arXiv preprint arXiv:1811.02629.
  • [3] C. Baur, S. Denner, B. Wiestler, N. Navab, and S. Albarqouni (2021) Autoencoders for unsupervised anomaly segmentation in brain MR images: a comparative study. Medical Image Analysis, pp. 101952.
  • [4] C. Baur, B. Wiestler, S. Albarqouni, and N. Navab (2018) Deep autoencoding models for unsupervised anomaly segmentation in brain MR images. In International MICCAI Brainlesion Workshop, pp. 161–169.
  • [5] A. Berg, J. Ahlberg, and M. Felsberg (2019) Unsupervised learning of anomaly detection from contaminated image data using simultaneous encoder training. arXiv preprint arXiv:1905.11034.
  • [6] P. Bilic, P. F. Christ, E. Vorontsov, G. Chlebus, et al. (2019) The Liver Tumor Segmentation benchmark (LiTS). arXiv preprint arXiv:1901.04056.
  • [7] A. Carass, S. Roy, A. Jog, J. L. Cuzzocreo, E. Magrath, A. Gherman, J. Button, J. Nguyen, F. Prados, C. H. Sudre, et al. (2017) Longitudinal multiple sclerosis lesion segmentation: resource and challenge. NeuroImage 148, pp. 77–102.
  • [8] X. Chen and E. Konukoglu (2018) Unsupervised detection of lesions in brain MRI using constrained adversarial auto-encoders. arXiv preprint arXiv:1806.04972.
  • [9] X. Chen, N. Pawlowski, M. Rajchl, B. Glocker, and E. Konukoglu (2018) Deep generative models in the real-world: an open challenge from medical imaging. arXiv preprint arXiv:1806.05452.
  • [10] X. Chen, S. You, K. C. Tezcan, and E. Konukoglu (2020) Unsupervised lesion detection via image restoration with a normative prior. Medical Image Analysis 64, pp. 101713.
  • [11] G. Chlebus, A. Schenk, J. H. Moltz, B. van Ginneken, H. K. Hahn, and H. Meine (2018) Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing. Scientific Reports 8 (1), pp. 1–7.
  • [12] A. Del Giorno, J. A. Bagnell, and M. Hebert (2016) A discriminative framework for anomaly detection in large videos. In European Conference on Computer Vision, pp. 334–349.
  • [13] R. Dey and Y. Hong (2018) CompNet: complementary segmentation network for brain MRI extraction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 628–636.
  • [14] R. Dey and Y. Hong (2020) Hybrid cascaded neural network for liver lesion segmentation. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pp. 1173–1177.
  • [15] R. Dey and Y. Hong (2021) ASC-Net: adversarial-based selective network for unsupervised anomaly segmentation. In MICCAI, pp. 236–247.
  • [16] S. M. Erfani, S. Rajasegarar, S. Karunasekera, and C. Leckie (2016) High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recognition 58, pp. 121–134.
  • [17] N. Giordana and W. Pieczynski (1997) Estimation of generalized multisensor hidden Markov chains and unsupervised image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (5), pp. 465–475.
  • [18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • [19] V. Grau, A. Mewes, M. Alcaniz, R. Kikinis, and S. K. Warfield (2004) Improved watershed transform for medical image segmentation using prior information. IEEE Transactions on Medical Imaging 23 (4), pp. 447–458.
  • [20] M. Kimura and T. Yanagihara (2018) Anomaly detection using GANs for visual inspection in noisy training data. In Asian Conference on Computer Vision, pp. 373–385.
  • [21] D. P. Kingma and M. Welling (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
  • [22] D. P. Kingma and M. Welling (2014) Stochastic gradient VB and the variational auto-encoder. In Second International Conference on Learning Representations, ICLR, Vol. 19, pp. 121.
  • [23] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther (2016) Autoencoding beyond pixels using a learned similarity metric. In Proceedings of The 33rd International Conference on Machine Learning, PMLR Vol. 48, pp. 1558–1566.
  • [24] T. Lee and M. S. Lewicki (2002) Unsupervised image classification, segmentation, and enhancement using ICA mixture models. IEEE Transactions on Image Processing 11 (3), pp. 270–279.
  • [25] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
  • [26] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest, et al. (2014) The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 34 (10), pp. 1993–2024.
  • [27] S. Naval Marimont and G. Tarroni (2021) Implicit field learning for unsupervised anomaly detection in medical images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 189–198.
  • [28] H. P. Ng, S. H. Ong, K. W. C. Foong, P. S. Goh, and W. L. Nowinski (2006) Medical image segmentation using k-means clustering and improved watershed algorithm. In 2006 IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 61–65.
  • [29] H. Ng, S. Ong, K. Foong, P. Goh, and W. Nowinski (2006) Medical image segmentation using k-means clustering and improved watershed algorithm. In 2006 IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 61–65.
  • [30] R. J. O'Callaghan and D. R. Bull (2004) Combined morphological-spectral unsupervised image segmentation. IEEE Transactions on Image Processing 14 (1), pp. 49–62.
  • [31] J. Puzicha, T. Hofmann, and J. M. Buhmann (1999) Histogram clustering for unsupervised image segmentation. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 602–608.
  • [32] O. Ronneberger, P. Fischer, and T. Brox (2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241.
  • [33] Y. Rubner, C. Tomasi, and L. J. Guibas (2000) The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision 40 (2), pp. 99–121.
  • [34] T. Schlegl, P. Seeböck, S. M. Waldstein, G. Langs, and U. Schmidt-Erfurth (2019) f-AnoGAN: fast unsupervised anomaly detection with generative adversarial networks. Medical Image Analysis 54, pp. 30–44.
  • [35] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs (2017) Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In International Conference on Information Processing in Medical Imaging, pp. 146–157.
  • [36] P. Seeböck, S. Waldstein, S. Klimscha, B. S. Gerendas, R. Donner, T. Schlegl, U. Schmidt-Erfurth, and G. Langs (2016) Identifying and categorizing anomalies in retinal imaging data. arXiv preprint arXiv:1612.00686.
  • [37] J. Shi and J. Malik (2000) Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (8), pp. 888–905.
  • [38] D. C. Van Essen, K. Ugurbil, E. Auerbach, D. Barch, T. E. Behrens, R. Bucholz, A. Chang, L. Chen, M. Corbetta, S. W. Curtiss, et al. (2012) The Human Connectome Project: a data acquisition perspective. NeuroImage 62 (4), pp. 2222–2231.
  • [39] X. Xia and B. Kulis (2017) W-Net: a deep model for fully unsupervised image segmentation. arXiv preprint arXiv:1711.08506.
  • [40] H. Zenati, C. S. Foo, B. Lecouat, G. Manek, and V. R. Chandrasekhar (2018) Efficient GAN-based anomaly detection. arXiv preprint arXiv:1802.06222.
  • [41] H. Zenati, M. Romain, C. Foo, B. Lecouat, and V. Chandrasekhar (2018) Adversarially learned anomaly detection. In 2018 IEEE International Conference on Data Mining (ICDM), pp. 727–736.
  • [42] X. Zhang, W. Xie, C. Huang, Y. Zhang, and Y. Wang (2021) Self-supervised tumor segmentation through layer decomposition. arXiv preprint arXiv:2109.03230.
  • [43] D. Zimmerer, S. A. Kohl, J. Petersen, F. Isensee, and K. H. Maier-Hein (2018) Context-encoding variational autoencoder for unsupervised anomaly detection. arXiv preprint arXiv:1812.05941.