1 Introduction
Long Short-Term Memory (LSTM) networks [lstm97and95, Gers:99c] are recurrent neural networks (RNNs) initially designed for sequence processing. They achieved state-of-the-art results on challenging tasks such as handwriting recognition [Graves:09tpami], large vocabulary speech recognition [sak2014, sak2014large] and machine translation [sutskever2014]. Their architecture contains gates to store and read out information from linear units called error carousels that retain information over long time intervals, which is hard for traditional RNNs.

Multi-Dimensional LSTM networks (MD-LSTM [DBLP:conf/icann/GravesFS07]) connect hidden LSTM units in a grid-like fashion (for example, in two dimensions this yields 4 directions: up, down, left and right). Two-dimensional MD-LSTM is applicable to image segmentation [DBLP:conf/icann/GravesFS07, graves:2009nips, byeon15], where each pixel is assigned to a class such as background or foreground. Each LSTM unit sees a pixel and receives input from neighboring LSTM units, thus recursively gathering information about all other pixels in the image.
There are many biomedical 3D volumetric data sources, such as computed tomography (CT), magnetic resonance (MR), and electron microscopy (EM). Most previous approaches process each 2D slice separately, using image segmentation algorithms such as snakes [Kass88, Wang15] and Convolutional Neural Networks [ciresan2012nips]. 3D-LSTM, however, can process the full context of each pixel in such a volume through 8 sweeps over all pixels by 8 different LSTMs, each sweep in the general direction of one of the 8 directed volume diagonals.

Due to the sequential nature of RNNs, however, MD-LSTM parallelization was difficult, especially for volumetric data. The novel Pyramidal Multi-Dimensional LSTM (PyraMiD-LSTM) networks introduced in this paper use a rather different topology and update strategy. They are easier to parallelize, need fewer computations overall, and scale well on GPU architectures.
PyraMiD-LSTM is applied to two challenging tasks involving segmentation of biological volumetric images. Competitive results are achieved on EM-ISBI12 [cardona10]; best known results are achieved on MR-BrainS13 [MICCAI15].
2 Method
We will first describe standard one-dimensional LSTM [Gers:99c] and MD-LSTM. Then we introduce topology changes to construct the PyraMiD-LSTM, which is formally described and discussed.
The original LSTM unit consists of an input gate (i), a forget gate (f) (although the forget-gate output is inverted and actually 'remembers' when it is on, and forgets when it is off, the traditional nomenclature is kept), an output gate (o), and a memory cell (c), which control what should be remembered or forgotten over potentially long periods of time. All gates and activations are real-valued vectors: x_t, i_t, f_t, c̃_t, c_t, o_t, h_t ∈ ℝ^N, where N is the length of the input. The gates and activations at discrete time t (t = 1, 2, …) are computed as follows:

i_t = σ(x_t ⋅ θ_{xi} + h_{t−1} ⋅ θ_{hi} + θ_{i,bias}),   (1)
f_t = σ(x_t ⋅ θ_{xf} + h_{t−1} ⋅ θ_{hf} + θ_{f,bias}),   (2)
c̃_t = tanh(x_t ⋅ θ_{xc̃} + h_{t−1} ⋅ θ_{hc̃} + θ_{c̃,bias}),   (3)
c_t = c̃_t ⊙ i_t + c_{t−1} ⊙ f_t,   (4)
o_t = σ(x_t ⋅ θ_{xo} + h_{t−1} ⋅ θ_{ho} + θ_{o,bias}),   (5)
h_t = o_t ⊙ tanh(c_t),   (6)

where (⋅) is a matrix multiplication, (⊙) an element-wise multiplication, and θ denotes the weights. c̃_t is the input to the 'cell' c_t, which is gated by the input gate, and h_t is the output. The non-linear functions σ and tanh are applied element-wise, where σ(x) = 1/(1 + e^{−x}). Equations (1, 2) determine gate activations, Equation (3) cell inputs, Equation (4) the new cell states (here 'memories' are stored or forgotten), Equation (5) output gate activations which appear in Equation (6), the final output.
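To make the update concrete, here is a minimal NumPy sketch of one LSTM step implementing Equations (1)-(6). The weight names (`xi`, `hi`, `bi`, ...) are our own illustrative labels, not notation from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, theta):
    """One LSTM step following Equations (1)-(6)."""
    i = sigmoid(x_t @ theta['xi'] + h_prev @ theta['hi'] + theta['bi'])       # (1) input gate
    f = sigmoid(x_t @ theta['xf'] + h_prev @ theta['hf'] + theta['bf'])       # (2) forget gate
    c_tilde = np.tanh(x_t @ theta['xc'] + h_prev @ theta['hc'] + theta['bc']) # (3) cell input
    c = c_tilde * i + c_prev * f                                              # (4) new cell state
    o = sigmoid(x_t @ theta['xo'] + h_prev @ theta['ho'] + theta['bo'])       # (5) output gate
    h = o * np.tanh(c)                                                        # (6) output
    return h, c

# usage sketch with random weights
rng = np.random.default_rng(0)
N = 4  # number of units
theta = {k: rng.normal(scale=0.1, size=(N, N)) for k in
         ('xi', 'hi', 'xf', 'hf', 'xc', 'hc', 'xo', 'ho')}
theta.update({k: np.zeros(N) for k in ('bi', 'bf', 'bc', 'bo')})
h, c = lstm_step(rng.normal(size=N), np.zeros(N), np.zeros(N), theta)
```

Note that the output h is bounded by the tanh squashing in Equation (6), while the cell state c is not, which is what lets the error carousel retain information over long intervals.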
2.1 Pyramidal Connection Topology
In MD-LSTMs, connections are aligned with the grid axes. In 2D, these directions are up, down, left and right. A 2D-LSTM adds the pixel-wise outputs of 4 LSTMs, one scanning the image pixel by pixel from north-west to south-east, one from north-east to south-west, one from south-west to north-east, and one from south-east to north-west.
If the connections are rotated by 45°, all inputs to all units come from either left, right, up, or down (left in the case of Figure 1-b). This greatly facilitates parallelization, since all the elements of a whole grid row can be computed independently, which does not work for MD-LSTM simplexes, whose sizes vary. However, this introduces context gaps as in Figure 1-b. By adding an extra input, these gaps are filled as in Figure 1-c.
A similar connection strategy has been previously used to speed up non-Euclidean distance computations on surfaces [weber2008parallel]. There are, however, important differences:

- We can exploit efficient GPU-based CUDA convolution operations, but in a way unlike what is done in CNNs, as will be explained below.
- As a result of these operations, input filters that are bigger than the necessary filters arise naturally, creating overlapping contexts. Such redundancy turns out to be beneficial and is used in our experiments.
- We apply several layers of complex processing with multi-channeled outputs and several state variables for each pixel, instead of having a single value per pixel as in distance computations.
- Our application is focused on volumetric data.
One of the striking differences between PyraMiD-LSTM and MD-LSTM is the shape of the scanned contexts. Each LSTM of an MD-LSTM scans rectangle-like contexts in 2D or cuboids in 3D. Each LSTM of a PyraMiD-LSTM scans triangles in 2D and pyramids in 3D (see Figure 2). An MD-LSTM needs 8 LSTMs to scan a volume, while a PyraMiD-LSTM needs only 6, since it takes 8 cubes or 6 pyramids to fill a volume. Given dimension d, the number of LSTMs grows as 2^d for an MD-LSTM (exponentially) and 2d for a PyraMiD-LSTM (linearly).
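The scaling claim can be checked with a trivial helper (the function name is ours):

```python
def lstms_needed(d, pyramidal):
    """Number of directional LSTM sweeps needed to cover a d-dimensional
    volume: 2**d diagonal sweeps for MD-LSTM, 2*d pyramidal sweeps for
    PyraMiD-LSTM."""
    return 2 * d if pyramidal else 2 ** d

assert lstms_needed(2, False) == 4  # 2D MD-LSTM: 4 scan directions
assert lstms_needed(3, False) == 8  # 3D MD-LSTM: 8 cubes fill a volume
assert lstms_needed(3, True) == 6   # 3D PyraMiD-LSTM: 6 pyramids fill a volume
```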
2.2 PyraMiD-LSTM
Here we explain the PyraMiD-LSTM network architecture for 3D volumes (see Figure 3). It consists of six LSTMs with RNN-tailored convolutions (C-LSTM), one for each direction, to create the full context of each pixel. Note that each of these C-LSTMs is an entire LSTM RNN, processing the entire volume in one direction. The directions D are defined over the three axes (x, y, z): D = {(·,·,+), (·,·,−), (·,+,·), (·,−,·), (+,·,·), (−,·,·)}.
Each C-LSTM performs computations in a plane moving in the defined direction. The symbol (·) in a dimension signifies that the plane is parallel to this axis, and a + or − implies that the computation is moving along the positive or negative direction of that axis, respectively. The input is x ∈ ℝ^{W×H×D×C}, where W is the width, H the height, D the depth, and C the number of channels of the input, or hidden units in the case of second and higher layers. Similarly, we define the volumes i^d, f^d, c̃^d, c^d, o^d, h^d ∈ ℝ^{W×H×D×O}, where d ∈ D is a direction and O is the number of hidden units (per pixel). Since each direction needs a separate volume, we denote volumes with (·)^d.
To keep the equations readable, we omit the time index below: h^d_t becomes h^d, and h^d_{t−1} becomes h^d_{prev}. The time index t is bound to the axis along which computations are performed. For instance, for direction d = (·,·,+), h^d refers to the plane of the volume orthogonal to the z axis, i.e. h^d_{x,y,z=t} for x = 1..W, y = 1..H, and t = z. For a negative direction d = (·,·,−), the plane is the same but moves in the opposite direction: t = D − z + 1. Furthermore, h^d_{prev} refers to the previous plane, in this case h^d_{x,y,z=t−1}. A special case is the first plane in each direction, which does not have a previous plane, hence we omit the corresponding computation. The following functions are defined for all planes and all directions:
C-LSTM:

i^d = σ(x ⊛ θ^d_{xi} + h^d_{prev} ⊛ θ^d_{hi} + θ^d_{i,bias}),   (7)
f^d = σ(x ⊛ θ^d_{xf} + h^d_{prev} ⊛ θ^d_{hf} + θ^d_{f,bias}),   (8)
c̃^d = tanh(x ⊛ θ^d_{xc̃} + h^d_{prev} ⊛ θ^d_{hc̃} + θ^d_{c̃,bias}),   (9)
c^d = c̃^d ⊙ i^d + c^d_{prev} ⊙ f^d,   (10)
o^d = σ(x ⊛ θ^d_{xo} + h^d_{prev} ⊛ θ^d_{ho} + θ^d_{o,bias}),   (11)
h^d = o^d ⊙ tanh(c^d),   (12)

where (⊛) is an RNN-tailored convolution (in 3D volumes, convolutions are performed in 2D; in general, an n-D volume requires (n−1)-D convolutions; all convolutions have stride 1, and their filter sizes should be at least 3 in each dimension to create the full context), and h^d is the output of the layer. Note that in C-LSTMs the convolution operations propagate information 'sideways' over the axis of the data. This is very different from CNNs, where the convolution operations propagate information upwards to the next layer. All biases are the same for all LSTM units (i.e., no positional biases are used). The outputs for all directions are added, combining all C-LSTMs into one PyraMiD-LSTM:

h = Σ_{d∈D} h^d.   (13)
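As a sketch of how one such C-LSTM sweep might look, the following NumPy code processes a volume plane by plane along +z for a single input channel and one hidden unit per pixel, with a naive convolution for clarity. This is our illustration, not the paper's GPU implementation, and the weight names are ours.

```python
import numpy as np

def conv2d_same(img, k):
    """Naive stride-1 'same' 2D convolution with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for s in range(img.shape[1]):
            out[r, s] = np.sum(p[r:r + kh, s:s + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def clstm_sweep(x, w):
    """One C-LSTM sweeping a volume x of shape (H, W, Z) along +z,
    following Equations (7)-(12). All pixels of one plane are independent
    given the previous plane, which is what enables parallelization;
    here we loop for clarity."""
    H, W, Z = x.shape
    h = np.zeros((H, W, Z))
    h_prev = np.zeros((H, W))  # previous output plane
    c_prev = np.zeros((H, W))  # previous cell-state plane
    for t in range(Z):
        i = sigmoid(conv2d_same(x[:, :, t], w['xi']) + conv2d_same(h_prev, w['hi']))   # (7)
        f = sigmoid(conv2d_same(x[:, :, t], w['xf']) + conv2d_same(h_prev, w['hf']))   # (8)
        ct = np.tanh(conv2d_same(x[:, :, t], w['xc']) + conv2d_same(h_prev, w['hc']))  # (9)
        c = ct * i + c_prev * f                                                        # (10)
        o = sigmoid(conv2d_same(x[:, :, t], w['xo']) + conv2d_same(h_prev, w['ho']))   # (11)
        h[:, :, t] = o * np.tanh(c)                                                    # (12)
        h_prev, c_prev = h[:, :, t], c
    return h

rng = np.random.default_rng(0)
w = {k: rng.normal(scale=0.1, size=(3, 3)) for k in
     ('xi', 'hi', 'xf', 'hf', 'xc', 'hc', 'xo', 'ho')}
out = clstm_sweep(rng.normal(size=(8, 8, 4)), w)
```

A full PyraMiD-LSTM would run six such sweeps (one per direction in D) and sum their outputs as in Equation (13).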
Fully-Connected Layer: Each PyraMiD-LSTM layer is connected to a fully-connected layer, and the output is squashed by the hyperbolic tangent (tanh) function. At the end, a softmax function is applied after the last fully-connected layer.
3 Experiments
We evaluate our approach on two 3D biomedical image segmentation datasets: electron microscopy (EM) and MR Brain images.
EM dataset

The EM dataset [cardona10] is provided by the ISBI 2012 workshop on Segmentation of Neuronal Structures in EM Stacks [isbi12]. Two stacks each consist of 30 slices of 512×512 pixels, obtained from a 2×2×1.5 μm microcube imaged at a resolution of 4×4×50 nm/pixel. One stack is used for training, the other for testing. Target data consists of binary labels (membrane and non-membrane).

MR Brain dataset
The MR Brain images are provided by the ISBI 2015 workshop on Neonatal and Adult MR Brain Image Segmentation (ISBI NEATBrainS15) [MICCAI15]. The dataset consists of twenty fully annotated high-field (3T) multi-sequence scans: a 3D T1-weighted scan (T1), a T1-weighted inversion recovery scan (IR), and a fluid-attenuated inversion recovery scan (FLAIR). The dataset is divided into a training set with five volumes and a test set with fifteen volumes. All scans are bias-corrected and aligned. Each volume includes 48 slices of 240×240 pixels (3.0 mm slice thickness). The slices are manually segmented into nine labels: cortical gray matter, basal ganglia, white matter, white matter lesions, cerebrospinal fluid in the extracerebral space, ventricles, cerebellum, brainstem, and background. Following the ISBI NEATBrainS15 workshop procedure, all labels are grouped into four classes plus background: 1) cortical gray matter and basal ganglia (GM), 2) white matter and white matter lesions (WM), 3) cerebrospinal fluid and ventricles (CSF), and 4) cerebellum and brainstem. Class 4) is ignored for the final evaluation, as required.
Sub-volumes and Augmentation

The full dataset requires more than the 12 GB of memory provided by our GPU, hence we train and test on sub-volumes. We randomly pick a position in the full data and extract a smaller cube (see the details in Bootstrapping). This cube is possibly rotated at a random angle about some axis and can be flipped along any axis. For EM images, we rotate about the z-axis and flip sub-volumes with 50% chance along the x, y, and z axes. For MR brain images, rotation is disabled; only flipping along the x direction is considered, since brains are (mostly) symmetric in this direction.
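The flipping part of this augmentation can be sketched as follows (function name and interface are ours):

```python
import numpy as np

def augment_flip(vol, rng, axes=(0, 1, 2)):
    """Flip a sub-volume along each listed axis with 50% probability.
    For MR brain volumes only a single axis (the x direction) would be
    passed, matching the scheme described above."""
    for ax in axes:
        if rng.random() < 0.5:
            vol = np.flip(vol, axis=ax)
    return vol

rng = np.random.default_rng(1)
v = np.arange(24.0).reshape(2, 3, 4)
a = augment_flip(v, rng)
```

Flipping only permutes voxels, so the augmented cube always contains exactly the same values as the original.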
During test time, rotations and flipping are disabled and the results of all sub-volumes are stitched together using a Gaussian kernel, yielding the final result.
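A possible form of this stitching (our sketch; the paper does not specify the exact kernel parameters) weights each sub-volume prediction by a Gaussian window and normalizes by the accumulated weight:

```python
import numpy as np

def gaussian_window(shape, sigma_frac=0.5):
    """Separable Gaussian window that down-weights sub-volume borders.
    sigma_frac is an illustrative parameter."""
    axes = [np.exp(-0.5 * ((np.arange(n) - (n - 1) / 2) / (sigma_frac * n / 2 + 1e-12)) ** 2)
            for n in shape]
    return axes[0][:, None, None] * axes[1][None, :, None] * axes[2][None, None, :]

def stitch(preds, corners, full_shape):
    """Blend overlapping sub-volume predictions into one volume."""
    acc = np.zeros(full_shape)
    norm = np.zeros(full_shape)
    for p, (x, y, z) in zip(preds, corners):
        w = gaussian_window(p.shape)
        sl = (slice(x, x + p.shape[0]),
              slice(y, y + p.shape[1]),
              slice(z, z + p.shape[2]))
        acc[sl] += w * p  # weighted prediction
        norm[sl] += w     # accumulated weight for normalization
    return acc / np.maximum(norm, 1e-12)

# two overlapping constant sub-volumes reconstruct a constant region
preds = [np.ones((4, 4, 4)), np.ones((4, 4, 4))]
res = stitch(preds, [(0, 0, 0), (2, 0, 0)], (6, 4, 4))
```

The Gaussian weighting suppresses border artifacts, since predictions near a sub-volume edge have seen less context and are trusted less in overlap regions.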
Preprocessing

We normalize each input slice to zero mean and unit variance, since the imaging methods sometimes yield large variability in contrast and brightness. We do not apply the complex preprocessing common in biomedical image segmentation [Wang15].

We apply simple preprocessing on the three data types of the MR Brain dataset, since they contain large brightness changes under the same label (even within one slice; see Figure 5). From all slices we subtract the Gaussian-smoothed images, then Contrast-Limited Adaptive Histogram Equalization (CLAHE) [Pizer87] is applied with a contrast limit of 2.0 to enhance the local contrast. An example of the images after preprocessing is shown in Figure 5. The original and preprocessed images are all used, except the original IR images (Figure 4(b)), which have high variability.
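A minimal sketch of the per-slice normalization and the Gaussian-subtraction step (sigma is an illustrative value, since the paper's exact filter size is not reproduced here; the subsequent CLAHE step is omitted):

```python
import numpy as np

def normalize_slice(s):
    """Normalize one slice to zero mean and unit variance."""
    return (s - s.mean()) / (s.std() + 1e-8)

def subtract_gaussian(s, sigma=5.0):
    """Subtract a Gaussian-smoothed copy of the slice to suppress slow
    brightness drifts. sigma is an assumed value, not the paper's."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    p = np.pad(s, r, mode='reflect')
    # separable Gaussian blur: filter rows, then columns
    blur = np.apply_along_axis(lambda v: np.convolve(v, k, mode='valid'), 1, p)
    blur = np.apply_along_axis(lambda v: np.convolve(v, k, mode='valid'), 0, blur)
    return s - blur

slice_img = np.outer(np.linspace(0, 1, 32), np.ones(32))
flat = subtract_gaussian(normalize_slice(slice_img))
```

On a constant slice the subtraction returns (near) zero everywhere, confirming that only local structure survives this high-pass step.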
Training

We apply RMSprop [tieleman2012lecture, dauphin2015rmsprop] with momentum. We define g to be the gradient ∂E/∂θ, where θ are the weights. The following equations hold for every epoch:

E = (y* − y)²,   (14)
m = α ⋅ m + (1 − α) ⋅ g²,   (15)
ĝ = g / √(m + ε),   (16)
v = β ⋅ v + ĝ,   (17)
θ = θ − λ ⋅ v,   (18)

where y* is the target, y is the output from the networks, E is the squared loss, m a running average of the variance of the gradient, g² the element-wise squared gradient, ĝ the normalized gradient, v the smoothed gradient, and θ the weights. This results in normalized gradients of similar size for all weights, such that even weights with small gradients get updated. This also helps to deal with vanishing gradients [Hochreiter:91].

We use a decaying learning rate: λ = λ_min + λ_0 ⋅ 2^{−epoch/100}, which starts at λ_0 and halves every 100 epochs, asymptotically approaching λ_min. The other hyper-parameters used are ε, α, and β.
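One optimization step of Equations (15)-(18) can be sketched as below; the hyper-parameter values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def rmsprop_momentum_step(theta, g, state, alpha=0.95, beta=0.9,
                          lam=1e-3, eps=1e-8):
    """RMSprop with momentum: one parameter update. alpha, beta, lam, eps
    are assumed values."""
    state['m'] = alpha * state['m'] + (1 - alpha) * g * g  # (15) running avg of g^2
    g_hat = g / np.sqrt(state['m'] + eps)                  # (16) normalized gradient
    state['v'] = beta * state['v'] + g_hat                 # (17) smoothed gradient
    return theta - lam * state['v']                        # (18) weight update

# usage: minimize E(theta) = theta^2 from theta = 5
theta = np.array([5.0])
state = {'m': np.zeros(1), 'v': np.zeros(1)}
for _ in range(400):
    g = 2.0 * theta  # gradient of the squared loss
    theta = rmsprop_momentum_step(theta, g, state)
```

Because the gradient is divided by the running root-mean-square in (16), the step size is roughly uniform across parameters, which is the normalization property the text describes.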
Bootstrapping

To speed up training, we run three learning procedures with increasing sub-volume sizes: first 3000 epochs on small sub-volumes, then 2000 epochs on larger ones. Finally, we train 1000 epochs on the largest sub-volumes, whose size differs between the EM dataset and the MR Brain dataset. After each stage, the learning rate is reset.
Experimental Setup
All experiments are performed on a desktop computer with an NVIDIA GTX TITAN X 12GB GPU. For GPU implementation, the NVIDIA CUDA Deep Neural Network library (cuDNN) [cudnn14] is used. On the MR brain dataset, training took around three days, and testing per volume took around 2 minutes.
We use exactly the same hyper-parameters and architecture for both datasets. Our networks contain three PyraMiD-LSTM layers. The first PyraMiD-LSTM layer has 16 hidden units, followed by a fully-connected layer with 25 hidden units. In the next PyraMiD-LSTM layer, 32 hidden units are connected to a fully-connected layer with 45 hidden units. In the last PyraMiD-LSTM layer, 64 hidden units are connected to the fully-connected output layer, whose size equals the number of classes.
All PyraMiD-LSTM layers use the same convolutional filter size. The total number of weights is 10,751,549, and all weights are initialized from a uniform distribution.

3.1 Neuronal Membrane Segmentation
Evaluation metrics

Three error metrics evaluate the following factors:

- Rand error [Rand1971]: 1 − F-score of the Rand index, which measures similarity between two segmentations on the foreground.
- Warping error [Bollmann2010]: topological disagreements (object splits and mergers).
- Pixel error: 1 − F-score of pixel similarity.
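For instance, the pixel error can be computed directly (our sketch; the official evaluation is performed by the ISBI server):

```python
import numpy as np

def f_score(pred, truth):
    """F-score of a binary segmentation (foreground = 1)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return 2.0 * tp / (2.0 * tp + fp + fn)

def pixel_error(pred, truth):
    """Pixel error: 1 - F-score of pixel similarity."""
    return 1.0 - f_score(pred, truth)

pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 1, 0])
err = pixel_error(pred, truth)  # tp=1, fp=1, fn=1 -> F-score 0.5, error 0.5
```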
Results

Table 1: Performance comparison on EM images.

Group                   | Rand Err. | Warping Err. (×10⁻³) | Pixel Err.
------------------------|-----------|----------------------|-----------
Human                   | 0.002     | 0.0053               | 0.001
Simple Thresholding     | 0.450     | 17.14                | 0.225
IDSIA [ciresan2012nips] | 0.050     | 0.420                | 0.061
DIVE                    | 0.048     | 0.374                | 0.058
PyraMiD-LSTM            | 0.047     | 0.462                | 0.062
IDSIA-SCI               | 0.0189    | 0.617                | 0.103
DIVE-SCI                | 0.0178    | 0.307                | 0.058
Membrane segmentation is evaluated through an online system provided by the ISBI 2012 organizers. Comparisons to other methods are reported in Table 1. The teams IDSIA and DIVE provide membrane probability maps for each pixel, like our method. These maps are adapted by the post-processing technique of the team SCI [Liu14], which directly optimizes the Rand error (DIVE-SCI (top-1) and IDSIA-SCI (top-2)); this is most important in this particular segmentation task. Without post-processing, PyraMiD-LSTM networks outperform other methods in Rand error, and are competitive in warping and pixel errors. Of course, performance could be further improved by applying post-processing techniques. Figure 4 shows an example segmentation result.
3.2 MR Brain Segmentation
Evaluation metrics

The results are compared based on the following three measures:

- The DICE overlap (DC) [Dice1945]: spatial overlap between the segmented volume and ground truth.
- The modified Hausdorff distance (MD) [Huttenlocher1993]: 95th-percentile Hausdorff distance between the segmented volume and ground truth.
- The absolute volume difference (AVD) [Babalola2009]: the absolute difference between segmented and ground-truth volume, normalized over the whole volume.
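The DC and AVD measures are straightforward to compute. The sketch below normalizes AVD by the ground-truth volume, one common convention; this is our illustration, not the official evaluation code.

```python
import numpy as np

def dice(a, b):
    """DICE overlap: 2|A ∩ B| / (|A| + |B|) for two binary volumes."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))

def avd(seg, truth):
    """Absolute volume difference as a percentage of the reference volume."""
    vs, vt = int(np.sum(seg > 0)), int(np.sum(truth > 0))
    return 100.0 * abs(vs - vt) / vt

t = np.zeros((4, 4, 4), dtype=int); t[:2] = 1                   # 32 voxels
s = np.zeros((4, 4, 4), dtype=int); s[:2] = 1; s[2, 0, 0] = 1   # 33 voxels
```

With one extra voxel in a 32-voxel reference, AVD is 100/32 ≈ 3.1%, while DC stays close to 1, showing the two measures capture different aspects of segmentation quality.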
Results

Table 2: Performance comparison on the MR Brain dataset.

Structure       |        GM         |        WM         |        CSF        |
Metric          | DC    | MD  | AVD | DC    | MD  | AVD | DC    | MD  | AVD | Rank
                | (%)   | (mm)| (%) | (%)   | (mm)| (%) | (%)   | (mm)| (%) |
BIGR2           | 84.65 | 1.88| 6.14| 88.42 | 2.36| 6.02| 78.31 | 3.19| 22.8| 6
KSOM GHMF       | 84.12 | 1.92| 5.44| 87.96 | 2.49| 6.59| 82.10 | 2.71| 12.8| 5
MNAB2           | 84.50 | 1.69| 7.10| 88.04 | 2.12| 7.73| 82.30 | 2.27| 8.73| 4
ISI-Neonatology | 85.77 | 1.62| 6.62| 88.66 | 2.06| 6.96| 81.08 | 2.66| 9.77| 3
UNC-IDEA        | 84.36 | 1.62| 7.04| 88.69 | 2.06| 6.46| 82.81 | 2.35| 10.5| 2
PyraMiD-LSTM    | 84.82 | 1.69| 6.77| 88.33 | 2.07| 7.05| 83.72 | 2.14| 7.10| 1
MR brain image segmentation results are evaluated by the ISBI NEATBrainS15 organizers [MICCAI15], who provide an extensive comparison to other approaches at http://mrbrains13.isi.uu.nl/results.php. Table 2 compares our results to those of the top five teams. The organizers compute nine measures in total and rank all teams for each of them separately. These ranks are then summed per team, determining the final ranking (ties are broken using the standard deviation). PyraMiD-LSTM leads the final ranking with a new state-of-the-art result and outperforms other methods for CSF in all metrics.
We also tried regularization through dropout [srivastava14]. Following earlier work [dropout14], the dropout operator is applied only to non-recurrent connections (50% dropout on fully-connected layers and/or 20% on the input layer). However, this did not improve performance, and made the system slower.
4 Conclusion
Since 2011, GPU-trained max-pooling CNNs have dominated classification contests [ciresan:2011ijcnn, Krizhevsky:2012, zeiler2013] and segmentation contests [ciresan2012nips]. MD-LSTM, however, may pose a serious challenge to such CNNs, at least for segmentation tasks. Unlike CNNs, MD-LSTM has an elegant recursive way of taking each pixel's entire spatio-temporal context into account, in both images and videos. Previous MD-LSTM implementations, however, could not exploit the parallelism of modern GPU hardware. This has changed through the work presented here. Although our novel, highly parallel PyraMiD-LSTM has already achieved state-of-the-art segmentation results on challenging benchmarks, we feel we have only scratched the surface of what will become possible with PyraMiD-LSTM and other MD-RNNs.

5 Acknowledgements

We would like to thank Klaus Greff and Alessandro Giusti for their valuable discussions, and Jan Koutnik and Dan Ciresan for their useful feedback. We also thank the ISBI NEATBrainS15 organizers [MICCAI15] and the ISBI 2012 organizers, in particular Adriënne Mendrik and Ignacio Arganda-Carreras. Lastly, we thank NVIDIA for generously providing us with hardware to perform our research. This research was funded by the NASCENCE EU project (EU/FP7-ICT-317662).