3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes

11/23/2017 ∙ by Siqi Liu, et al. ∙ Siemens Healthineers

While deep convolutional neural networks (CNN) have been successfully applied for 2D image analysis, it is still challenging to apply them to 3D anisotropic volumes, especially when the within-slice resolution is much higher than the between-slice resolution and when the amount of 3D volumes is relatively small. On one hand, direct learning of CNN with 3D convolution kernels suffers from the lack of data and likely ends up with poor generalization; insufficient GPU memory limits the model size or representational power. On the other hand, applying 2D CNN with generalizable features to 2D slices ignores between-slice information. Coupling 2D network with LSTM to further handle the between-slice information is not optimal due to the difficulty in LSTM learning. To overcome the above challenges, we propose a 3D Anisotropic Hybrid Network (AH-Net) that transfers convolutional features learned from 2D images to 3D anisotropic volumes. Such a transfer inherits the desired strong generalization capability for within-slice information while naturally exploiting between-slice information for more effective modelling. The focal loss is further utilized for more effective end-to-end learning. We experiment with the proposed 3D AH-Net on two different medical image analysis tasks, namely lesion detection from a Digital Breast Tomosynthesis volume, and liver and liver tumor segmentation from a Computed Tomography volume and obtain the state-of-the-art results.



Code Repositories

AH-Net

The PyTorch implementation of the 3D Anisotropic Hybrid Network described in the paper "3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes".



1 Introduction

3D volumetric images (or volumes) are widely used for clinical diagnosis, surgical planning and biomedical research. The 3D context provided by such volumetric images is important for visualising and analysing the objects of interest. However, given the added dimension, 3D volumes are more time-consuming and sometimes harder for machines to interpret than 2D images. Many previous studies use convolutional neural networks (CNN) to extract representations of the structural patterns of interest in human or animal body tissues.

Due to special imaging settings, many imaging modalities come with anisotropic voxels, meaning that the three dimensions do not have equal resolutions. For example, in 3D volumes of Digital Breast Tomosynthesis (DBT), and sometimes Computed Tomography (CT), the in-plane (xy, or within-slice) resolution is more than ten times higher than the z (between-slice) resolution. Thus, the xy slices preserve much more information than the z dimension. In DBT images, only the spatial information within the xy plane can be guaranteed. However, the 3D context between slices, even with slight misalignment, still carries meaningful information for analysis.

Directly applying 3D CNN to such images remains a challenging task for the following reasons: (1) It may be hard for a small 3×3×3 kernel to learn useful features from anisotropic voxels because of the different information density along each dimension. (2) Theoretically, more features are needed in 3D networks than in 2D networks, yet the capacity of 3D networks is bounded by GPU memory, constraining both the width and the depth of the networks. (3) Unlike 2D computer vision tasks, which can nowadays build on backbone networks pre-trained with millions of 2D images [20], 3D tasks mostly have to train from scratch and hence suffer from the lack of large 3D datasets. In addition, high data variation makes 3D networks harder to train, and 3D CNNs trained on such small image datasets with relatively small 3D context are hard to generalize to unseen data.

Besides the traditional 3D networks built with 3×3×3 and 1×1×1 kernels, there are other methods for learning representations from anisotropic voxels. Some studies process 2D slices separately with 2D networks [14]. To make better use of the 3D context, more than one image slice can be used as the input of a 2D network [12, 22]. The 2D slices can also be viewed sequentially by combining a fully convolutional network (FCN) architecture with a convolutional LSTM, treating adjacent image slices as a time series to distil the 3D context from a sequence of abstracted 2D contexts [4]. A few studies also use anisotropic convolutional kernels to distribute more learning capability on the xy plane than on the z axis [2, 11, 21].

In this paper, we propose the 3D Anisotropic Hybrid Network (AH-Net) to learn informative features from images with anisotropic resolution. To obtain the 3D AH-Net, we first train a 2D fully convolutional ResNet [16], initialized with pre-trained weights, that takes multiple 2D image slices as input. The feature encoder of this 2D network is then transformed into a 3D network by extending the 2D kernels with one added dimension. We then add a feature decoder sub-network to extract the 3D context. The feature decoder consists of anisotropic convolutional blocks with 3×3×1 and 1×1×3 convolutions. Different anisotropic convolutional blocks are combined with dense connections [8]. Similar to U-Net [19], we use skip connections between the feature encoder and the decoder. A pyramid volumetric pooling module [23] is stacked at the end of the network, before the final output layer, to extract multiscale features.

Since AH-Net can make use of 2D networks pre-trained on large general image datasets such as ImageNet [20], it is easier to train and to generalize. The anisotropic convolutional blocks enable it to exploit the 3D context. With end-to-end inference as a 3D network, AH-Net also runs much faster than conventional multi-channel 2D networks in terms of the GPU time required to process each 3D volume.

2 Related Work

It is hard for conventional 3D neural networks with isotropic kernels to extract robust representations from 3D volumes with anisotropic resolution. The most intuitive approach is to re-sample the images to an isotropic resolution [15]. This works when the differences between the three dimensions are small and the spatial information between slices is accurate. When the z resolution is much lower than the xy resolution, however, the majority of voxels added by resampling are redundant, introducing unnecessary extra computational cost. It may also result in a loss of information if the xy plane is downsampled to match the z spacing.

Instead of using 3D networks, some studies handle the voxel anisotropy with 2D networks. DeepEM3D-Net [22] has only two 3D convolution layers, which integrate 3D information in the early stages, and performs 2D convolutions in the rest of the FCN. The input to DeepEM3D-Net is a stack of 2D image slices, and the 3D segmentation is obtained by concatenating the 2D output slices. HDenseNet [12] first applies a 2D network to all image slices; a 3D DenseUNet is then applied to the concatenated 3D output volume to obtain the final result. Different from our proposed network, HDenseNet does not share convolutions between the 2D and 3D networks. Also, we use anisotropic 3D convolutional blocks to replace the isotropic 3D convolutions.

A bi-directional convolutional LSTM (BDC-LSTM) and an FCN model can be combined to view slices as a time series [4]. BDC-LSTM is trained to exploit the 3D context by applying a series of 2D convolutions on the xy plane in a recurrent fashion while propagating contextual information in the z-direction. The FCN model extracts the initial 2D feature maps that are used as inputs to BDC-LSTM. The final output is obtained from the BDC-LSTM model with a softmax layer. Though the idea of fusing 2D features to maintain between-slice consistency is similar to our proposed method, we believe this can be achieved with stacked anisotropic convolution blocks, which are easier to train and to generalize than a convolutional LSTM.

Some studies use 3D convolutional kernels with anisotropic sizes to distribute more learning capability to the xy plane; large anisotropic convolution kernels are used in [2], for example. However, large convolution kernels bring higher computational cost. More recent studies [17, 21, 11] use small kernels to simulate large anisotropic kernels; the convolution modules in [11] start with an in-plane convolution, followed by 3D convolutions. Similar to our work, the isotropic 3×3×3 convolutions are replaced by 3×3×1 and 1×1×3 convolutions in [17, 21]. Several possible designs for combining such kernels are discussed in a recent paper [21] that focuses on video learning. Our network differs from the ones in [17, 21] in that we use the anisotropic 3D convolutions only in the feature decoder, while the encoder is locked with pre-trained weights transferred from a 2D network. This allows the proposed AH-Net to use any 2D fully convolutional network pre-trained on large-scale datasets to initialize the encoder network.

Figure 2: The network architecture for pre-training the 2D encoder network, the Multi-Channel Global Convolutional Neural Network (MC-GCN). ResNet50 is used as the back-bone network, initialized by pre-training on ImageNet images. The global convolutional network modules and refinement modules [16] are added to the encoder network to increase the receptive field during pre-training and to upscale the output response map to the original resolution. Conv k×k/s represents a convolution layer with kernel size k and stride s in each dimension. The upsampling module (Up) consists of a 1×1 Conv projection layer and a bi-linear upsampling layer.
Figure 3: The architecture of 3D AH-Net. The feature encoder with AH-ResNet blocks is transferred from the pre-trained 2D network with 1×1×1 and 3×3×1 convolutions. The features are then processed by the AH-Net decoders, which are built with 3×3×1 and 1×1×3 convolutional blocks. Feature summation is used instead of concatenation as in [3] to support more feature maps with less memory consumption. The pyramid pooling [23] is used for extracting multiscale feature responses. Batch normalization [9] and ReLU layers are hidden for brevity. The weights of the blocks with black borders are transferred from the 2D MC-GCN.

Figure 4: Transforming a 2D convolutional weight tensor of shape n×m×h×w to 3D, where n and m are the numbers of output features and input channels of a layer, respectively. The first-layer weight tensor is transformed by permuting its three input channels into a kernel-depth dimension. The other convolutional kernels are transformed by adding an extra dimension of size 1.

3 Anisotropic Hybrid Network

The AH-Net consists of a feature encoder and a feature decoder. The encoder, transformed from a 2D network, is designed for extracting the deep representations from 2D slices with high resolution. The decoder built with densely connected blocks of anisotropic convolutions is responsible for exploiting the 3D context and maintaining the between-slice consistency. The network training is performed in two stages: the encoder is learned; then the 3D decoder is added and fine-tuned with the encoder parameters locked. To perform end-to-end hard voxel mining, we use the Focal Loss (FL) originally designed for object detection [13].

3.1 Learning a multi-channel 2D feature encoder

We train a 2D multi-channel global convolutional network (MC-GCN) similar to the architecture proposed in [16] to extract the 2D within-slice features at different resolutions, as shown in Fig. 2. In this paper, we choose the ResNet50 model [7] as the back-bone network, initialized by pre-training with ImageNet images [20], although other pre-trained networks would work similarly. The network is then fine-tuned with 2D image slices extracted from the 3D volumes. The input to this network is three neighbouring slices (treated as RGB channels), so the entire architecture of the ResNet50 remains unchanged. The multi-channel 2D input enables the 2D network to fuse the between-slice context at an early stage. A decoder is added to accompany the encoder and upscale the response map to the original resolution. We choose a decoder architecture with global convolutional network (GCN) modules and refinement blocks [16]. The GCN module simulates a large k×k convolutional kernel by decomposing it into two 1-D kernels (1×k and k×1). Two branches containing the 1-D kernels permuted in different orders are merged by summation. The output of each GCN module contains the same number of feature maps as the final outputs. The large kernels simulated by GCNs ensure that the network has a large receptive field at each feature resolution. Each refinement block contains two 3×3 convolutions with a ReLU activation in the middle. The input of each refinement block is also added to its output to form a residual connection. At the end of each encoder resolution level, the features are fed into GCN modules with resolution-specific kernel sizes. The output features are fed into a refinement block and summed with the features upsampled from the lower resolution level. The summed features are fed into another refinement block and upsampled with a 1×1 convolution and a bi-linear upsampling layer. The final output has the same resolution as the image input. The decoder has only a small number of parameters and adds little computational cost. This light-weight decoder makes the encoder features easier to transfer to the 3D AH-Net, since the majority of the feature learning relies on the encoder network.
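The GCN decomposition described above can be sketched in PyTorch as follows. This is a minimal illustration, not the paper's exact implementation: the class name `GCNBlock` and the channel counts and kernel size in the example are hypothetical.

```python
import torch
import torch.nn as nn


class GCNBlock(nn.Module):
    """Simulate a large k x k kernel with two 1-D branches,
    (k x 1 -> 1 x k) and (1 x k -> k x 1), merged by summation."""

    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        p = k // 2  # "same" padding for odd k
        self.branch_a = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (k, 1), padding=(p, 0)),
            nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, p)),
        )
        self.branch_b = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, p)),
            nn.Conv2d(out_ch, out_ch, (k, 1), padding=(p, 0)),
        )

    def forward(self, x):
        # Sum the two permutation orders of the 1-D kernels.
        return self.branch_a(x) + self.branch_b(x)


# Example: a GCN block simulating a 7x7 kernel preserves spatial size.
m = GCNBlock(8, 2, 7)
y = m(torch.randn(1, 8, 32, 32))
assert y.shape == (1, 2, 32, 32)
```

The two 1-D branches cost O(k) parameters per channel pair instead of O(k²), which is what makes large simulated kernels affordable.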

3.2 Transferring the learned 2D net to 3D AH-Net

The architecture of the proposed 3D anisotropic hybrid network (AH-Net) is shown in Fig. 3. After the 2D MC-GCN network converges, we extract the parameters of its encoder and transfer them to the corresponding encoder layers of AH-Net. The decoder part of the 2D MC-GCN is discarded and instead we design a new decoder for the AH-Net that consists of multiple levels of densely connected blocks, followed by a pyramid volumetric pooling module. The parameters of the new decoder are randomly initialized. The input and output of AH-Net are now 3D patches, similar to other conventional 3D CNN. The transformation of convolution tensors from 2D to 3D is illustrated in Fig. 4, which aims to perform 2D convolutions on 3D volumes slice by slice in the encoder part of AH-Net.

3.2.1 Notations

A 2D convolutional weight tensor is denoted by T^(n×m×h×w), where n, m, h and w respectively represent the number of output channels, the number of input channels, and the height and width of the convolution kernel. Similarly, a 3D weight tensor is denoted by T^(n×m×h×w×d), where d is the kernel depth. We use P(T) to denote a dimension permutation of a tensor T, resulting in a new tensor with its dimensions reordered, and P*(T) to denote a permutation that additionally inserts an identity (size-1) dimension. We define a convolutional layer as Conv k_x×k_y×k_z /(s_x, s_y, s_z), where k_x, k_y and k_z are the kernel sizes and s_x, s_y and s_z are the stride sizes in each direction. Max pooling layers are denoted by MaxPool k_x×k_y×k_z /(s_x, s_y, s_z). The stride is omitted when a layer has a stride of 1 in all dimensions.

3.2.2 Input layer transform

The input layer of the 2D MC-GCN contains a 7×7 convolutional weight tensor T^(64×3×7×7) inherited from its ResNet50 back-bone network. The 2D convolutional tensor is transformed into 3D as

T^(64×1×7×7×3) = P*(T^(64×3×7×7))   (1)

in order to form a 3D convolution kernel that convolves three neighbouring slices: the three input channels are permuted into a kernel-depth dimension of size 3, and a singleton input-channel dimension is inserted. To keep the output consistent with the 2D network, we apply stride-2 convolutions only on the xy plane and stride 1 on the third dimension, resulting in the input layer Conv 7×7×3/(2,2,1). To downsample the z dimension, we use a MaxPool 1×1×2/(1,1,2) to fuse every pair of neighbouring slices. An additional MaxPool 3×3×1/(2,2,1) keeps the feature resolution consistent with the 2D network.
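A rough PyTorch sketch of this input-layer transform follows. The tensor names are hypothetical; PyTorch's Conv3d weight layout (out, in, D, H, W) is used, with z treated as the last spatial dimension.

```python
import torch

# 2D input-layer weight inherited from ResNet50: (out=64, in=3, h=7, w=7).
w2d = torch.randn(64, 3, 7, 7)

# Permute the 3 input channels into a kernel-depth dimension and insert
# a singleton input-channel dimension: (64, 1, 7, 7, 3).
w3d = w2d.permute(0, 2, 3, 1).unsqueeze(1)
assert w3d.shape == (64, 1, 7, 7, 3)

# The transformed weight drops into a 7x7x3 conv with stride (2, 2, 1),
# i.e. stride-2 only within the xy plane.
conv = torch.nn.Conv3d(1, 64, kernel_size=(7, 7, 3), stride=(2, 2, 1),
                       padding=(3, 3, 1), bias=False)
with torch.no_grad():
    conv.weight.copy_(w3d)
```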

3.2.3 ResNet block transform

All the 2D convolutional tensors T^(n×m×1×1) and T^(n×m×3×3) in the ResNet50 encoder are transformed as

T^(n×m×1×1×1) = P*(T^(n×m×1×1))   (2)

and

T^(n×m×3×3×1) = P*(T^(n×m×3×3)).   (3)

In this way, all the ResNet Conv blocks shown in Fig. 3 perform only 2D slice-wise convolutions on the 3D volume within the xy plane. The original downsampling between ResNet blocks is performed with a stride-2 1×1 convolution. In a 3D volume, however, a Conv 1×1×1/(2,2,2) skips a slice for every step along the z dimension. This would miss important information when the image has only a small number of slices along z, especially for detection tasks. We therefore use a Conv 1×1×1/(2,2,1) followed by a MaxPool 1×1×2/(1,1,2) to downsample the 3D feature maps between the ResNet blocks, as shown in the AH-Downsample block in Fig. 3. This MaxPooling simply takes the maximum response along the z direction between two neighbouring slices. Unlike previous studies that avoided downsampling along the z direction [11], we find it important for allowing the use of large and deep networks on 3D data with limited GPU memory.
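The block transform and the anisotropic downsampling above can be sketched as follows; shapes and channel counts are illustrative, and z is again the last spatial dimension.

```python
import torch
import torch.nn.functional as F

# 2D kernels from a ResNet block: 1x1 and 3x3.
w1x1 = torch.randn(256, 64, 1, 1)
w3x3 = torch.randn(64, 64, 3, 3)

# Appending a singleton depth dimension yields 1x1x1 and 3x3x1 3D kernels
# that convolve each xy slice independently (Eqs. (2) and (3)).
assert w1x1.unsqueeze(-1).shape == (256, 64, 1, 1, 1)
assert w3x3.unsqueeze(-1).shape == (64, 64, 3, 3, 1)

# AH-Downsample: stride-(2,2,1) 1x1x1 conv, then MaxPool 1x1x2/(1,1,2),
# so z is reduced by pooling rather than by slice-skipping strides.
x = torch.randn(1, 64, 32, 32, 8)
x = F.conv3d(x, torch.randn(64, 64, 1, 1, 1), stride=(2, 2, 1))
x = F.max_pool3d(x, kernel_size=(1, 1, 2), stride=(1, 1, 2))
assert x.shape == (1, 64, 16, 16, 4)
```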

3.3 Anisotropic hybrid decoder

Accompanying the transformed encoder, an anisotropic 3D decoder sub-network is added to exploit the 3D anisotropic image context. In the decoder, anisotropic convolutional blocks with Conv 1×1×1, Conv 3×3×1 and Conv 1×1×3 are used. The features are first passed into an xy bottleneck block, with a Conv 3×3×1 surrounded by two layers of Conv 1×1×1. The output is then forwarded to another bottleneck block with a Conv 1×1×3 in the middle and summed with itself before being forwarded to the next block. This anisotropic convolution block decomposes a 3D convolution into 2D and 1D convolutions. It receives the inputs from the previous layers using a 2D convolution first, preserving the detailed 2D features; the Conv 1×1×3 layers then fuse the between-slice features to keep the output consistent along the z dimension.
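A minimal sketch of such an anisotropic block follows, assuming equal channel counts throughout and omitting the batch normalization and ReLU layers (hidden in Fig. 3 as well). The class name is hypothetical.

```python
import torch
import torch.nn as nn


class AnisotropicBlock(nn.Module):
    """Decompose a 3D convolution into an xy bottleneck
    (1x1x1 -> 3x3x1 -> 1x1x1) followed by a z bottleneck
    (1x1x1 -> 1x1x3 -> 1x1x1) with a residual sum."""

    def __init__(self, ch):
        super().__init__()
        self.xy = nn.Sequential(
            nn.Conv3d(ch, ch, 1),
            nn.Conv3d(ch, ch, (3, 3, 1), padding=(1, 1, 0)),  # within-slice
            nn.Conv3d(ch, ch, 1),
        )
        self.z = nn.Sequential(
            nn.Conv3d(ch, ch, 1),
            nn.Conv3d(ch, ch, (1, 1, 3), padding=(0, 0, 1)),  # between-slice
            nn.Conv3d(ch, ch, 1),
        )

    def forward(self, x):
        x = self.xy(x)
        # Sum the z-branch output with its own input (residual connection).
        return x + self.z(x)
```

Compared with a dense 3×3×3 kernel, the 3×3×1 plus 1×1×3 pair spends most of its parameters within the high-resolution xy plane while still propagating context across slices.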

Three anisotropic convolutional blocks are connected as a densely connected neural network [8], using feature concatenation, for each resolution of encoded features. Similar to LinkNet [3], the features received from each resolution of the encoder are first projected with a Conv 1×1×1 to match the number of features of the higher encoder feature resolution. They are then upsampled using 3D tri-linear interpolation and summed with the encoder features from the higher resolution. The summed features are forwarded to the decoder blocks of the next resolution.

At the end of the decoder network, we add a pyramid volumetric pooling module [23] to obtain multi-scaled features. The output features of the last decoder block are first down-sampled by four MaxPooling layers with different kernel sizes to obtain a feature map pyramid. Conv 1×1×1 layers project each resolution in the feature pyramid to a single response channel. The response channels are then interpolated to the original size and concatenated with the features before downsampling. The final outputs are obtained by applying a Conv 1×1×1 projection layer on the concatenated features.

3.4 Training AH-Net using Focal Loss

Training AH-Net with the same learning rate on both the pre-trained encoder and the randomly initialized decoder would make the network difficult to optimize. To train the 3D AH-Net, all the transferred parameters are therefore locked at first and only the decoder parameters are fine-tuned. All the parameters can then be fine-tuned jointly afterwards.
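The two-stage schedule can be expressed with a small helper that toggles `requires_grad` on the transferred encoder; the function and its signature are illustrative, not from the paper's code.

```python
import torch.nn as nn


def two_stage_params(encoder: nn.Module, decoder: nn.Module, stage: int):
    """Stage 1: freeze the transferred encoder, train the decoder only.
    Stage 2: unfreeze everything for joint fine-tuning.
    Returns the list of trainable parameters for the optimizer."""
    for p in encoder.parameters():
        p.requires_grad = (stage == 2)
    for p in decoder.parameters():
        p.requires_grad = True
    return [p for m in (encoder, decoder)
            for p in m.parameters() if p.requires_grad]
```

The returned list would be handed to the optimizer, so stage 1 never spends gradient computation on the locked encoder.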

The training of 3D fully convolutional networks tends to converge quickly on the easy voxels and slowly on the hard voxels, which are sometimes the objects of interest in medical images. For example, FCNs learn background voxels with uniform distributions quickly, while for small-scale patterns, such as lesions and object boundaries, the numeric errors tend to be small in the averaged losses. This makes the training insensitive to the subtle differences between the network outputs and the ground-truth maps. We use the Focal Loss (FL), derived from the Focal Loss for object detection [13], to perform hard-voxel mining with the AH-Net. We introduce FL with respect to the L2 loss used in our first DBT image experiment; the cross-entropy form of FL used in the second CT image experiment can be found in [13]. Assuming the per-voxel L2 loss l is used for supervised learning of a regression map, the focal form scales it as

FL(l) = (l / l_max)^γ · l,   (4)

where l_max is the maximum numeric value expected for the L2 loss. The focusing parameter γ down-weights the easy voxels; a large γ value makes the training focus more on the large numeric errors generated on the hard voxels. We replace the original L2 loss with FL after a few epochs, when the L2 loss barely decreases. The training loss can keep descending for more epochs under FL, with the output details progressively enhanced.
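Under this reading of Eq. (4), a focal-weighted L2 loss might look like the following sketch; the function name, the clamp, and the `l_max` and `gamma` defaults are illustrative assumptions rather than the paper's exact formulation.

```python
import torch


def focal_l2_loss(pred, target, l_max=1.0, gamma=2.0):
    """Per-voxel squared error scaled by (l / l_max)**gamma, so that easy
    voxels (small errors) are down-weighted; mean over all voxels.
    The weight is clamped to <= 1 in case an error exceeds l_max."""
    l = (pred - target) ** 2
    w = (l / l_max).clamp(max=1.0) ** gamma
    return (w * l).mean()
```

Because the weight is at most 1, the focal variant never exceeds the plain mean L2 loss, and voxels with errors far below `l_max` contribute almost nothing.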

4 Experimental Results

To demonstrate the efficacy and efficiency of the proposed 3D AH-Net, we conduct two experiments, namely lesion detection from Digital Breast Tomosynthesis (DBT) volumes and liver and liver tumor segmentation from Computed Tomography (CT) volumes. We use ADAM [10] with its standard hyper-parameters to optimise all the compared networks. We use a small initial learning rate to fine-tune the 2D Multi-Channel GCN; the learning rate is then increased to fine-tune the AH-Net after the 2D network is transferred, as we find that 3D networks need a larger learning rate to converge within a reasonable amount of time. All the networks are implemented in PyTorch (http://pytorch.org).

4.1 Breast lesion detection from DBT

We use an in-house database containing 2809 3D DBT volumes acquired from 12 different sites globally. DBT is an advanced form of mammography that uses low-dose X-rays to image the breast. Different from 2D mammography, which superimposes 3D information into one 2D image, DBT creates 3D pictures of the breast tissue, allowing radiologists to read these pictures and detect breast cancer more easily, especially in dense breast tissue. The xy plane of DBT images has a high spatial resolution that is much finer than the spacing along the z-dimension. The structures in the z-dimension are not only compressed during the imaging process, but the 3D volumetric information also has large variations due to imaging artefacts.

We have experienced radiologists annotate and validate the lesions in the DBT volumes, each of which may contain zero to several lesions. Each lesion is approximately annotated with a 3D bounding box. To train the proposed networks as lesion detection networks, we generate 3D multivariate Gaussian heatmaps of the same sizes as the original images:

h(x) = Σ_i exp( −½ (x − μ_i)^T Σ_i^{−1} (x − μ_i) ),   (5)

where x is a 3D coordinate; μ_i is the center coordinate of the i-th lesion 3D bounding box; and Σ_i is the covariance matrix of the i-th Gaussian, determined by the height, width and depth of the 3D bounding box. Please note that we do not directly predict the bounding box coordinates, as general object detection methods such as Faster R-CNN [18] do, because it is sometimes challenging to define the exact boundary of a breast lesion. Also, voxel-wise confidence maps of lesion presence can be more helpful for clinical decision support than bounding boxes.
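The heatmap generation of Eq. (5) can be sketched with NumPy as below; the function name and arguments are illustrative.

```python
import numpy as np


def lesion_heatmap(shape, centers, covs):
    """Sum of unnormalized 3D Gaussians, one per annotated bounding box:
    h(x) = sum_i exp(-0.5 * (x - mu_i)^T Sigma_i^-1 (x - mu_i))."""
    # All voxel coordinates as an (N, 3) array.
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), -1
    ).reshape(-1, 3).astype(float)
    heat = np.zeros(grid.shape[0])
    for mu, cov in zip(centers, covs):
        d = grid - np.asarray(mu, dtype=float)
        # Mahalanobis distance term for every voxel at once.
        heat += np.exp(-0.5 * np.einsum("ni,ij,nj->n", d, np.linalg.inv(cov), d))
    return heat.reshape(shape)
```

Each Gaussian peaks at 1 at its box center, so the regression target stays in a bounded range regardless of how many lesions a volume contains.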

        #Volumes  #Positives  #Lesions
Train   2678      1111        1375
Test    131       58          72
Table 1: The numbers of volumes (#Volumes), lesion-positive volumes (#Positives) and lesions (#Lesions) in the evaluated DBT dataset.

We randomly split the database into training and testing sets as described in Table 1. A volume or a 3D patch is considered positive if at least one lesion was annotated by the radiologist. We ensure that images from the same patient appear only in the training set or only in the testing set. For training, we extract 3D patches, of which 70% are sampled as positives with at least one lesion included, to balance the voxels within and outside breast lesions. The patches are sampled online, asynchronously with the network training, to form the mini-batches.

Along with the proposed networks, we also train 2D and 3D U-Nets with identical architectures and parameters [19, 5] as two baseline comparisons. The 2D U-Net is likewise trained with three input channels, and the 3D U-Net with the same patch sampling strategy as the AH-Net. All the networks are trained until convergence, and then the L2 loss is replaced with the Focal Loss described in Section 3.4 for hard-voxel mining. The numbers of convolutional layers and parameters are shown in Table 2. Using 2D networks, such as the MC-GCN and the 2D U-Net, to process 3D volumes involves repeatedly feeding duplicated image slices, so they can be slower than 3D networks when processing 3D volumes. We measure the GPU inference time of the four networks by forwarding a 3D DBT volume 1000 times on an NVIDIA GTX 1080Ti GPU; the time spent on operations such as volume slicing is not included. The mean GPU inference times (ms) are shown in Table 3. The GPU inference of AH-Net is 43 times faster than that of MC-GCN, even though AH-Net has more parameters. The speed gain comes mostly from avoiding the repetitive convolutions on the same slices required by multi-channel 2D networks.

Network     #Conv Layers  #Parameters
2D-UNet     15            28,254,528
3D-UNet     15            5,298,768
*ResNet50   53            23,507,904
GCN         94            23,576,758
AH-Net      123           27,085,500
Table 2: The number of convolutional layers (#Conv Layers) and model float parameters (#Parameters) in 2D-UNet, 3D-UNet, ResNet50, GCN and AH-Net. ResNet50 is shown as a reference, to be compared with GCN, which adds a simple decoder.
Network     2D U-Net  3D U-Net  MC-GCN  3D AH-Net
Time (ms)   699.3     2.3       775.2   17.7
Table 3: The GPU inference time (ms) of different networks on a DBT volume, computed by averaging 1000 inferences on an NVIDIA GTX 1080Ti.
Figure 5: Visual comparisons of the network responses on 2 different DBT volumes from the 2D GCN and the 3D AH-Net with the encoder weights transferred from it. Each volume is visualized with the maximum intensity projections of the xy plane (top-left), the xz plane (bottom) and the yz plane (right). The ground truth lesion centres are shown on the left. With the additional AH-Net decoders, 3D AH-Net effectively detects the missing lesion in the first volume (upper row) and removes the false positives in the second volume (lower row).
Figure 6: The Free Response Operating Characteristic (FROC) curves regarding the lesion detection performance.
          FP=0.01  FP=0.05  FP=0.10  FP=0.15  FP=0.20  FP=0.25
2D U-Net  0.4238   0.4767   0.5181   0.5723   0.6166   0.6506
3D U-Net  0.2448   0.3877   0.4381   0.5592   0.5738   0.5733
GCN       0.3385   0.6727   0.6727   0.6909   0.7018   0.7272
AH-Net    0.4931   0.6000   0.7272   0.7454   0.7818   0.7818
Table 4: The quantitative metrics of the compared networks on the DBT dataset: the true positive rate (TPR) sampled at six different allowed numbers of false positives (FP) per volume.

Non-maximal suppression is performed on the network output map to obtain the lesion locations. The network responses at the local maximal voxels are considered the confidence scores of the cancerous findings. Fig. 5 shows visual comparisons of the network outputs.
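A simple way to implement this non-maximal suppression on a 3D response map is a max-pooling comparison: a voxel survives if it equals the maximum of its neighbourhood. The window size and threshold below are illustrative, not the paper's values.

```python
import torch
import torch.nn.functional as F


def heatmap_peaks(heat, k=3, thresh=0.1):
    """Keep voxels that equal the local maximum within a k^3 window and
    exceed a confidence threshold. Returns (coords, scores); plateaus of
    equal values would yield one coordinate per tied voxel."""
    h = heat[None, None]  # (1, 1, D, H, W)
    pooled = F.max_pool3d(h, k, stride=1, padding=k // 2)
    mask = (h == pooled) & (h > thresh)
    coords = mask[0, 0].nonzero()
    return coords, heat[mask[0, 0]]
```

Thresholding the surviving peak scores then trades off the true positive rate against the number of false positives per volume, which is exactly what the FROC analysis below sweeps.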

By altering a threshold on the response values, we can control the balance between the false positive rate and the true positive rate (TPR). A lesion detected by the network is considered a true positive finding if the maximal point resides in a 3D bounding box annotated by the radiologist; similarly, if a bounding box contains a maximal point, we consider it detected by the network. The remaining maximal points are considered false positive findings. We evaluate the lesion detection performance by plotting Free Response Operating Characteristic (FROC) curves, which measure the TPR against the number of false positives (#FP) allowed per volume. TPR represents the percentage of lesions successfully detected by the network. As shown in Fig. 6, the proposed AH-Net outperforms both the 2D and 3D U-Net by large margins. Compared to the 2D network (Multi-Channel GCN), the 3D AH-Net generates a higher TPR for the majority of thresholds, except the region around 0.05 false positives per volume. Notably, AH-Net obtains nearly 50% TPR even when only 0.01 false positive findings are allowed per volume. Interestingly, the performance of the 3D U-Net is slightly worse than that of the 2D U-Net, even though the DBT volumes are three-dimensional. This might be caused by the anisotropic resolution of DBT images and the limited number of parameters constrained by GPU memory. The FROC numbers are summarised in Table 4.

Figure 7: The example liver lesion segmentation results from 3D AH-Net. The segmented contours of liver (blue) and liver lesion (pink) are overlaid on 3 slices viewed from different orientations (Axial, Coronal and Sagittal). The segmentations are rendered in 3D on the right.

4.2 Liver and liver tumor segmentation from CT

The second evaluation dataset was obtained from the liver lesion segmentation challenge in MICCAI 2017 (lits-challenge.com), which contains 131 training and 70 testing 3D contrast-enhanced abdominal CT scans. Liver cancer is one of the most common cancers worldwide; it is estimated that 28,920 people will die of liver cancer and 40,710 new cases will be diagnosed in 2017 [1]. Automatic segmentation of the liver and its lesions is challenging due to the heterogeneous and diffusive appearance of both, and the number, shape and location of the lesions vary a lot among different volumes. The data and ground-truth masks were provided by various clinical sites around the world; the ground-truth masks contain both liver and lesion labels. Most CT scans have anisotropic resolution: the between-slice spacing ranges from 0.45 mm to 6.0 mm, while the within-slice spacing varies from 0.55 mm to 1.0 mm. All scans cover the abdominal region but may extend to the head and feet. Diseases other than liver lesions may also be present in these data, further increasing the task's difficulty.

In preprocessing, the abdominal regions are truncated from the CT scans using a liver center biomarker detected by a reinforcement-learning based algorithm [6]. While this step lets the network concentrate on the target region, its accuracy is not critical, as we choose a relatively large crop region that usually ranges from the middle of the lungs to the top of the pelvis. The image intensity is truncated to the range [-125, 225] HU based on the intensity distribution of the liver and lesions in the training data. Due to the limited amount of training data, we apply random rotation (in the xy plane), random scaling (in all directions), and random mirroring (in the xy plane) to reduce overfitting.

We first train the MC-GCN with a pre-trained ResNet50 as the back-bone network, taking stacked 2D slices with three channels as input. After convergence, the weights of the encoder part of the MC-GCN are transferred to the corresponding layers of a 3D AH-Net, which is then fine-tuned on 3D patches; the weights of the other layers are randomly initialized. In the training of both networks, the cross-entropy loss is used at the beginning until convergence, and is then replaced by the Focal Loss for hard-voxel mining [13].

The performance of AH-Net is listed in Table 5, together with six other top-ranked submissions retrieved from the LITS challenge leaderboard. These submissions employ various types of neural network architectures: 2D, 3D, 2D-3D hybrid, and model fusion. Two evaluation metrics are adopted: (1) Dice Global (DG), the Dice score computed with all volumes combined into one; (2) Dice per Case (DPC), the average of the Dice scores of the individual cases. The Dice score between two binary masks A and B is defined as D(A, B) = 2|A ∩ B| / (|A| + |B|). Our results achieve state-of-the-art performance on three of the four metrics, namely the lesion Dice global score and both the Dice global and Dice per case scores of the liver, which demonstrates the effectiveness of AH-Net for segmenting 3D images with diverse anisotropic resolutions. Some example results are shown in Fig. 7.
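For reference, the two challenge metrics can be computed as follows (a minimal NumPy sketch; the function names are ours):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice score 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def dice_per_case(preds, targets):
    """Dice per Case (DPC): the average of the per-volume Dice scores."""
    return float(np.mean([dice(p, t) for p, t in zip(preds, targets)]))

def dice_global(preds, targets):
    """Dice Global (DG): one Dice score over all volumes combined."""
    return dice(np.concatenate([p.ravel() for p in preds]),
                np.concatenate([t.ravel() for t in targets]))
```

Note that DG and DPC generally differ: DG weights each volume by its voxel count, while DPC weights every case equally, so small lesions pull DPC down more strongly.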

Method            Lesion DG   Lesion DPC   Liver DG   Liver DPC
leHealth            0.794       0.702       0.964      0.961
H-DenseNet [12]     0.829       0.686       0.965      0.961
hans.meine          0.796       0.676       0.963      0.960
medical             0.783       0.661       0.951      0.951
deepX               0.820       0.657       0.967      0.963
superAI             0.814       0.674       -          -
GCN                 0.788       0.593       0.963      0.951
3D AH-Net           0.834       0.634       0.970      0.963
Table 5: The liver lesion segmentation (LITS) challenge results with the Dice global (DG) and Dice per case (DPC) metrics. The compared results were obtained from the LITS challenge leaderboard (lits-challenge.com/#results).

5 Conclusion

In this paper, we propose the 3D Anisotropic Hybrid Network (3D AH-Net), which is capable of transferring the convolutional features of 2D images to 3D volumes with anisotropic resolution. By evaluating the proposed method on both a large-scale in-house DBT dataset and a highly competitive open challenge dataset of CT segmentation, we show that our network obtains state-of-the-art results. AH-Net generalizes better than traditional 3D networks such as the 3D U-Net [5], owing to the features transferred from a 2D network and the anisotropic convolution blocks. The GPU inference of AH-Net is also much faster than piling up the results of a 2D network slice by slice. Though AH-Net is designed for anisotropic volumes, we believe it could also be applied to volumes with near-isotropic resolution, such as CT and MRI.

Disclaimer: This feature is based on research, and is not commercially available. Due to regulatory reasons, its future availability cannot be guaranteed.

References

  • [1] American Cancer Society. Cancer Facts and Figures 2017. American Cancer Society., 2017.
  • [2] T. Brosch, L. Y. W. Tang, Y. Yoo, D. K. B. Li, A. Traboulsee, and R. Tam. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation. IEEE Transactions on Medical Imaging, 35(5):1229–1239, 5 2016.
  • [3] A. Chaurasia and E. Culurciello. LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation. 6 2017.
  • [4] J. Chen, L. Yang, Y. Zhang, M. Alber, and D. Z. Chen.

    Combining Fully Convolutional and Recurrent Neural Networks for 3D Biomedical Image Segmentation, 2016.

  • [5] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. 6 2016.
  • [6] F. C. Ghesu, B. Georgescu, S. Grbic, A. K. Maier, J. Hornegger, and D. Comaniciu. Robust Multi-scale Anatomical Landmark Detection in Incomplete 3D-CT Data. pages 194–202. Springer, Cham, 9 2017.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [8] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely Connected Convolutional Networks. 8 2016.
  • [9] S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2 2015.
  • [10] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. 12 2014.
  • [11] K. Lee, J. Zung, P. Li, V. Jain, and H. S. Seung. Superhuman Accuracy on the SNEMI3D Connectomics Challenge. 5 2017.
  • [12] X. Li, H. Chen, X. Qi, Q. Dou, C.-W. Fu, and P. A. Heng. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Liver Tumor Segmentation from CT Volumes. 9 2017.
  • [13] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal Loss for Dense Object Detection. 8 2017.
  • [14] F. Liu, Z. Zhou, H. Jang, A. Samsonov, G. Zhao, and R. Kijowski. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging. Magnetic Resonance in Medicine, 7 2017.
  • [15] P. Moeskops, M. A. Viergever, A. M. Mendrik, L. S. de Vries, M. J. N. L. Benders, and I. Isgum. Automatic Segmentation of MR Brain Images With a Convolutional Neural Network. IEEE Transactions on Medical Imaging, 35(5):1252–1261, 5 2016.
  • [16] C. Peng, X. Zhang, G. Yu, G. Luo, and J. Sun. Large Kernel Matters – Improve Semantic Segmentation by Global Convolutional Network. 3 2017.
  • [17] Z. Qiu, T. Yao, and T. Mei. Learning Spatio-Temporal Representation With Pseudo-3D Residual Networks, 2017.
  • [18] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015.
  • [19] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. 5 2015.
  • [20] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3):211–252, 12 2015.
  • [21] G. Wang, W. Li, S. Ourselin, and T. Vercauteren. Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks. 9 2017.
  • [22] T. Zeng, B. Wu, and S. Ji. DeepEM3D: approaching human-level performance on 3D anisotropic EM image segmentation. Bioinformatics, 33(16):2555–2562, 3 2017.
  • [23] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid Scene Parsing Network. 12 2016.

Appendix A Visual Cases of the DBT dataset

We selected some example slices from the DBT dataset to demonstrate the advantage of the proposed AH-Net for breast cancer screening. From Fig. 8 to Fig. 12, we show slices from five test DBT volumes in which both the MC-GCN and the proposed 3D AH-Net successfully detected the suspected breast lesions. The original DBT slice is shown on the left with the lesion annotated by our radiologist. Please note that the original annotation is a 3D box. The figures in the middle and on the right are the response maps from the MC-GCN and the 3D AH-Net overlaid on the original image, respectively. The detection locations obtained with non-maximal suppression are displayed with cross markers. As shown in the images, the proposed network can detect breast lesions varying in size and appearance. The confidence of the 3D AH-Net is usually higher than that of the MC-GCN. From Fig. 13 to Fig. 17, we show five volumes in which the MC-GCN failed to detect the lesions since they were not distinguishable from other breast tissues using within-slice information alone. In contrast, the 3D AH-Net was able to detect the lesions in such volumes using the 3D context between slices. As shown in Fig. 18 to Fig. 22, there are also volumes with lesions that both networks failed to detect. Such lesions normally reside in dense breast tissue, and the boundary between these lesions and the normal breast tissue usually has low contrast. The networks sometimes also confuse them with other roundish structures in the breast, such as lymph nodes or skin moles.
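The detection step described above can be sketched as a simple non-maximal suppression over a dense response map (an illustrative NumPy/SciPy sketch; the window size and threshold are placeholder values, not the ones used in our experiments):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(response, threshold=0.5, window=5):
    """Non-maximal suppression on a dense 3D response map: keep voxels
    that are the maximum within a local window and exceed `threshold`.

    Returns an (N, 3) array of (z, y, x) detection coordinates."""
    # a voxel survives only if it equals the maximum of its neighbourhood
    local_max = maximum_filter(response, size=window) == response
    return np.argwhere(local_max & (response > threshold))
```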

Figure 8: Example DBT slice 1 with a lesion that can be detected by both MC-GCN and 3D AH-Net. Though the lesion is blended into the dense breast tissue, our network is able to detect it from the spiculations around the lesion boundary.
Figure 9: Example DBT slice 2 with a lesion that can be detected by both MC-GCN and 3D AH-Net. The lesion is small and can also be identified with the architectural distortion in the surrounding tissues.
Figure 10: Example DBT slice 3 with a lesion that can be detected by both MC-GCN and 3D AH-Net. The lesion is blended in the dense breast tissues.
Figure 11: Example DBT slice 4 with a lesion that can be detected by both MC-GCN and 3D AH-Net. The lesion has clear boundaries and spiculations.
Figure 12: Example DBT slice 5 with a lesion that can be detected by both MC-GCN and 3D AH-Net. The small lesion causes architectural distortion in the surrounding tissues.
Figure 13: Example DBT slice 6 with a lesion that can only be detected by 3D AH-Net. The lesion is highly blended within the dense breast tissues which makes it challenging for both the radiologists and the networks to detect through a single slice. In contrast, the lesion can be detected by considering the consistency of the structure across a few neighbouring slices.
Figure 14: Example DBT slice 7 with a lesion that can only be detected by 3D AH-Net. The lesion is highly blended within the dense breast tissues which makes it challenging for both the radiologists and the networks to detect through a single slice. In contrast, the lesion can be detected by considering the consistency of the structure across a few neighbouring slices.
Figure 15: Example DBT slice 8 with a lesion that can only be detected by 3D AH-Net. The lesion is small and hard to be distinguished from other breast tissues. The lesion can be detected by considering the consistency of the structure across a few neighbouring slices.
Figure 16: Example DBT slice 9 with a lesion that can only be detected by 3D AH-Net. The lesion is highly blended within the dense breast tissues which makes it challenging for both the radiologists and the networks to detect with only a 2D view of the structure. The lesion can be detected by considering the consistency of the structure across a few neighbouring slices.
Figure 17: Example DBT slice 10 with a lesion that can only be detected by 3D AH-Net. The lesion is highly blended within the dense breast tissues, which makes it challenging for both the radiologists and the networks to detect through a single 2D slice. The lesion can be detected by considering the consistency of the structure across a few neighbouring slices.
Figure 18: Example DBT slice 11 with a lesion that neither network is able to detect. The contrast between lesion and the normal tissue is too low.
Figure 19: Example DBT slice 12 with a lesion that neither network is able to detect. The contrast between lesion and the normal tissue is too low.
Figure 20: Example DBT slice 13 with a lesion that neither network is able to detect. The contrast between lesion and the normal tissue is too low.
Figure 21: Example DBT slice 14 with a lesion that neither network is able to detect. Although the lesion has a roundish shape, it is hard for the network to distinguish it from lymph nodes or skin moles.
Figure 22: Example DBT slice 15 with a lesion that neither network is able to detect. It is hard for the network to distinguish the lesion from the lymph nodes or skin moles.

Appendix B Liver Tumor Segmentation Challenge

We show 9 example sagittal slices from the LITS challenge test set in Fig. 23 to demonstrate the variation of both the livers and the liver lesions. The images are cropped to a region centered on the liver. The sizes and shapes of the livers vary considerably between individuals, and the variation of the liver lesions in size and intensity is even greater. The lesions are highly sparse in the abdominal CT images; thus it is challenging for the networks to segment small lesions. Please note that we do not have the ground truth of the test volumes.

Three example volumes are selected from the test image set to demonstrate the effectiveness of our proposed network in Fig. 24, Fig. 25 and Fig. 26. Although we do not have the ground-truth label maps for the test images, the liver boundaries and the presence of lesions can be visually inspected. Liver lesions normally appear as dark regions within the liver. Without sufficient 3D context, the MC-GCN tends to generate false-positive regions at structure boundaries, especially under low image contrast. From the sagittal and coronal views, it can be seen that the MC-GCN could not generate the correct boundaries close to the top or the bottom of the lesion. By considering the consistency between slices, the 3D AH-Net can segment the structures correctly in 3D, although the feature extraction network is transferred from a 2D network. The jagged boundaries in the sagittal and coronal views are due to the low resolution in the z direction.

Figure 23: Example sagittal-view slices from the LITS challenge test volumes, overlaid with the segmentation boundaries obtained with 3D AH-Net. Both the livers and the lesions vary in size, morphology, and intensity.
(a) Axial Image Slice
(b) Axial Segmentation with MC-GCN
(c) Axial Segmentation with 3D AH-Net
(d) Sagittal Image Slice
(e) Sagittal Segmentation with MC-GCN
(f) Sagittal Segmentation with 3D AH-Net
(g) Coronal Image Slice
(h) Coronal Segmentation with MC-GCN
(i) Coronal Segmentation with 3D AH-Net
Figure 24: Multi-view slices from the example test CT volume 1 of the LITS challenge.
(a) Axial Image Slice
(b) Axial Segmentation with MC-GCN
(c) Axial Segmentation with 3D AH-Net
(d) Sagittal Image Slice
(e) Sagittal Segmentation with MC-GCN
(f) Sagittal Segmentation with 3D AH-Net
(g) Coronal Image Slice
(h) Coronal Segmentation with MC-GCN
(i) Coronal Segmentation with 3D AH-Net
Figure 25: Multi-view slices from the example test CT volume 2 of the LITS challenge.
(a) Axial Image Slice
(b) Axial Segmentation with MC-GCN
(c) Axial Segmentation with 3D AH-Net
(d) Sagittal Image Slice
(e) Sagittal Segmentation with MC-GCN
(f) Sagittal Segmentation with 3D AH-Net
(g) Coronal Image Slice
(h) Coronal Segmentation with MC-GCN
(i) Coronal Segmentation with 3D AH-Net
Figure 26: Multi-view slices from the example test CT volume 3 of the LITS challenge.