The brain is the central processing unit of the body: it maintains order and controls the actions of all the organs. The brain also communicates between its own subdivisions (active regions) through neuronal connections (termed neuronal fibers), which consist of dendrites and axons. Dendrites serve as the receivers of a neuron, while axons are responsible for transmitting signals to other neurons. A brain typically consists of several billion such neurons, constantly receiving and transmitting signals among themselves and forming a very complex network that commands our day-to-day activities. Brain tissue is broadly divided into two classes, namely grey and white matter, as shown in Fig. 1(a). The white matter mainly consists of axons that connect various parts of the grey matter, while the grey matter contains cell bodies, dendrites and axons, as shown in Fig. 1(b). The white matter can be further classified into eight clusters: Arcuate, Cingulum, Corticospinal, Forceps Major, Fornix, Inferior Occipitofrontal Fasciculus, Superior Longitudinal Fasciculus and Uncinate. These are bundles of fibers, called neural tracts, that form the pathways between different hemispheres and brain regions, as shown in Fig. 1(f).
Brain fiber tractography data can be extracted from 3T magnetic resonance imaging (MRI) data using diffusion tensor imaging (DTI), a non-invasive MRI technique. In this technique, diffusion is modeled as the spread of water molecules, whose extent depends on the diffusive properties of the medium, as shown in Fig. 1(c). Brain white matter consists of bundles of myelinated axons, and crucially, diffusion is more hindered across such bundles than along them. Hence, by measuring diffusion along many directions and observing that it is faster in one direction than in others, one can deduce the direction of the fiber bundles [Guise et al.(2016)Guise, Fernandes, Nóbrega, Pathak, Schneider, and Fangueiro]. This process, known as tractography, helps in visualizing the fiber structure in three dimensions using DTI, as shown in Fig. 1(e). One such example is shown in Fig. 2, where on the top-right each fiber is coloured randomly for better visualization. The eight major fiber clusters are shown at the bottom-right, while the grey matter fibers are shown at the bottom-left. Any fiber can be represented as a set of points in 3D space (typically 20 to 100 points per fiber), and tractography produces thousands of fiber trajectories per subject (around 250K). The obtained data can thus be seen as a 3D point cloud that, by itself, does not convey anything useful: to extract useful information, the fibers must be organized into anatomically meaningful structures. DTI may improve preservation of eloquent regions during surgery by providing direct connectivity information between functional regions of the brain, and it has progressively been incorporated into strategic planning for resection of complex brain lesions [Fernandez-Miranda et al.(2012)Fernandez-Miranda, Pathak, Engh, Jarbo, Verstynen, Yeh, Wang, Mintz, Boada, Schneider, et al.].
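In code, this representation is straightforward: each fiber is an ordered array of 3D coordinates, and a whole-brain tractogram is a collection of such variable-length arrays. A minimal sketch, with synthetic random-walk fibers standing in for real tractography output:

```python
import numpy as np

# A single fiber: an ordered sequence of N points in 3D (N typically 20-100).
rng = np.random.default_rng(0)
fiber = np.cumsum(rng.normal(size=(60, 3)), axis=0)  # a smooth-ish 3D polyline

# A tractogram: a list of variable-length fibers (real subjects have ~250K).
tractogram = [np.cumsum(rng.normal(size=(n, 3)), axis=0)
              for n in rng.integers(20, 101, size=5)]

print(fiber.shape)                      # (60, 3)
print([f.shape[0] for f in tractogram])  # five lengths between 20 and 100
```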
1.1 Problem Statement
Designing a brain tractography segmentation network (BrainSegNet):
Given tractography data of a human brain, we have designed a moderately deep recurrent neural network that can automatically segment brain fibers into tracts of “similar”, anatomically meaningful fibers. Medically, it is believed that at a coarser level there are two classes, grey and white matter (Macro level), and that within the white matter there are eight clusters/tracts (Micro level), viz. Arcuate, Cingulum, Corticospinal, Forceps Major, Fornix, Inferior Occipitofrontal Fasciculus, Superior Longitudinal Fasciculus and Uncinate.
Classification of these brain fibers provides a better understanding of brain structure and can be very helpful in planning brain tumor surgeries and other medical procedures involving the alteration or cutting of brain fibers. It also enables a much more detailed and systematic analysis of brains affected by complex diseases such as Parkinson's and Alzheimer's, about which much remains unknown. Manual segmentation of brain fibers is a very tedious task, as the available data is enormous and a great amount of expertise is required. Therefore, automated classification of these fibers into their respective anatomically meaningful clusters is a medically important and challenging problem.
The BrainSegNet model proposed in this paper uses labeled DTI data to learn the features required to classify individual brain fibers. It has been observed that fibers from different tracts follow different, often non-linear, routes from one part of the brain to another, while fibers from the same class usually connect “similar” parts of the brain and follow similar routes. One important feature of this work is that we first extract a fixed fraction of points having the highest curvature and then use a network consisting of multiple LSTM layers, with one bidirectional LSTM layer at the beginning so as to capture information in both the forward and the backward propagation directions along the fiber. A two-level hierarchical classification has been performed: one at the “Micro” level, in which we classify the eight primary classes of white matter (WM), and another at the “Macro” level, in which binary classification is performed between white matter (WM) and grey matter (GM) tracts. Any test fiber is assigned to one of the classes by the BrainSegNet model learned over these high-curvature points.
An entirely new model (BrainSegNet) has been formulated using LSTMs and bidirectional LSTMs, in accordance with the observation that the curvature points characterizing the routes are the distinctive factors. The trained model, when tested, surpassed the previous state-of-the-art results.
1.4 Previous Work
Although the problem of brain fiber classification is quite recent, several unsupervised techniques have already been suggested [Catani et al.(2002)Catani, Howard, Pajevic, and Jones], [Maddah et al.(2005)Maddah, Mewes, Haker, Grimson, and Warfield], along with a few supervised approaches. Most of the unsupervised approaches manually select a Region of Interest (ROI) and then group the fibers passing through these ROIs. In [O’Donnell and Westin(2007)], the authors used spectral clustering to generate a white matter atlas automatically: the similarity between fibers is calculated using the Hausdorff distance, and the clustering is performed in an embedded space formed from the eigenvectors of the distance matrix. In [Wang et al.(2011)Wang, Grimson, and Westin], a hierarchical Dirichlet process has been used to determine the number of clusters. Another, supervised, approach presented in [Patel et al.(2016)Patel, Parmar, Bhavsar, and Nigam] selects a few points of maximum curvature and feeds them to a clustering algorithm that takes both the positions and the curvature values into account while classifying.
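For reference, the symmetric Hausdorff distance between two fibers, treated as point sets, can be sketched in a few lines of NumPy (practical pipelines may use optimized implementations instead):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two fibers,
    each an (N, 3) array of 3D points."""
    # Pairwise Euclidean distances between every point of a and every point of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Directed distances: farthest any point of one set is from the other set.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

f1 = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
f2 = np.array([[0., 1., 0.], [1., 1., 0.], [2., 1., 0.]])
print(hausdorff(f1, f2))  # 1.0 (parallel fibers offset by one unit)
```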
2 Proposed Model : BrainSegNet
In this section we discuss the various components of our model: the overall structure, the data format, the data pre-processing (data pruning) and the detailed architecture of the proposed network, consisting of LSTM and bidirectional LSTM layers. An overview of our proposed model is shown in Fig. 2.
2.1 Tractography Data and its Format
The full brain data of three patients, together with the respective labels, has been obtained from the University of Pittsburgh as part of a contest. The ground truth had been manually annotated by expert neurologists and surgeons using their own interactive tools. The data has been provided in the .trk (track) file format. A track file is a single binary file, in which the first 1000 bytes contain header information and the rest constitute the fiber information. Each patient's data consists of around 250K fibers. The average number of sample points per fiber across the different classes varies between 20 and 100, highlighting that our approach is insensitive to fiber length. Each fiber is labelled with an integer denoting one of 9 classes: one label is used for grey matter and the remaining eight for the different white matter classes/tracts. The number of fibers per class in the training set for all 3 patients is depicted in Table 1.
[Table 1 — columns: Patient ID, Gray Matter, White Matter]
2.2 Curvature based Fiber Data Pruning using Partial Derivatives
Fiber pruning has been done in two steps: a) extracting meaningful data points (high curvature points), and b) conversion into a fixed-length input using masking. The labelled data of the 3 brains is available to us in the .trk (track) file format [Ref.]; a track file is a single binary file, with the first 1000 bytes as the header and the rest as the body.
Fiber Pruning : The extraction of meaningful data points has been done under the assumption that similar fibers follow similar paths from one brain region to another, and hence have almost “similar” curvature points. Only the sample points of high curvature are selected, by projecting the fibers onto each of the coordinate planes and accumulating their gradients in both directions: with respect to the sample points just before and after the current point, as well as with respect to the points four steps ahead of and behind it. This scheme is chosen in order to handle multi-scale curvature. Finally, the gradients are sorted and the points with the lowest values are pruned.
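A rough sketch of this multi-scale pruning step is given below. It operates directly on the 3D points rather than on plane projections, and the second-difference curvature score and the keep fraction are illustrative assumptions rather than the paper's exact procedure; only the step offsets (1 and 4) follow the text:

```python
import numpy as np

def prune_fiber(points, keep_frac=0.5):
    """Keep the fraction of sample points with the highest multi-scale
    curvature score. `keep_frac` is illustrative; the paper fixes a fraction."""
    n = len(points)
    score = np.zeros(n)
    for step in (1, 4):  # neighbours one step and four steps away
        prev = points[np.clip(np.arange(n) - step, 0, n - 1)]
        nxt = points[np.clip(np.arange(n) + step, 0, n - 1)]
        # Second-difference magnitude approximates curvature at this scale.
        score += np.linalg.norm(prev - 2 * points + nxt, axis=1)
    keep = max(1, int(round(keep_frac * n)))
    idx = np.sort(np.argsort(score)[-keep:])  # top scores, original order kept
    return points[idx]

fiber = np.cumsum(np.random.default_rng(1).normal(size=(40, 3)), axis=0)
pruned = prune_fiber(fiber, keep_frac=0.5)
print(pruned.shape)  # (20, 3)
```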
Fixed Length Conversion : The brain fibers are represented by variable-length sequences of points in three-dimensional space (3D vectors). Since our network requires fixed-length input, the length of the sequences was restricted to a fixed number of points: longer sequences have been truncated, while shorter ones were padded with zeros. This strategy led to poor results, as the fiber structure was severely affected. Later, a masking layer was used instead, to preserve the structure of the fibers: masking skips any time-step in which all features are equal to the mask value.
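A minimal sketch of the fixed-length conversion; the maximum length and the mask value of 0 are assumptions for illustration:

```python
import numpy as np

MASK_VALUE = 0.0  # value the masking layer skips; assumed, not from the paper
MAX_LEN = 30

def to_fixed_length(fiber, max_len=MAX_LEN, mask_value=MASK_VALUE):
    """Truncate longer fibers, pad shorter ones with the mask value."""
    out = np.full((max_len, 3), mask_value, dtype=float)
    n = min(len(fiber), max_len)
    out[:n] = fiber[:n]
    return out

short = np.ones((10, 3))
long_ = np.ones((50, 3))
batch = np.stack([to_fixed_length(short), to_fixed_length(long_)])
# Time-steps where every feature equals the mask value are skipped downstream.
mask = ~np.all(batch == MASK_VALUE, axis=-1)
print(batch.shape, mask.sum(axis=1))  # (2, 30, 3) [10 30]
```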
Justification : To evaluate the pre-processing, we have tested our model on both the pruned and the original fibers, and have observed that with longer inputs the training time increases while the accuracy does not improve.
2.3 Model Architecture
Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on very complex learning tasks. We have used a deep stacked Long Short Term Memory (LSTM) network, as shown in Fig. 3.
2.3.1 RNN (Recurrent Neural Networks)
RNNs [Sutskever(2013)] are built to utilize past information to predict the future, using a loop in the hidden layers that passes information to their respective successors. However, RNNs with long dependencies failed to train well due to the vanishing gradient problem; LSTM networks were later proposed in order to learn both long- and short-term dependencies.
2.3.2 Long Short Term Memory Networks (LSTM)
LSTMs [Hochreiter and Schmidhuber(1997)] are special RNNs capable of learning long-term dependencies. At each time step, information is passed to the successor cell, which learns which past information is most relevant and discards the rest. The core concept behind LSTMs is that they can manipulate the inputs to the cells from their predecessors by using the three gates discussed below:
Forget Gate : Acts as a filter for the information flow from previous cells.
$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$ (1)
Here, $f_t$ learns what to forget from $C_{t-1}$, the previous cell state; $h_{t-1}$ is the previous hidden output, $x_t$ is the current input, $b_f$ is a bias and $W_f$ is a weight matrix learned during training.
Input Gate : It selects what to store in the current cell.
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$ (2)
$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$ (3)
$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$ (4)
Here, $W_i$ and $W_C$ are weight matrices, $\tilde{C}_t$ is the vector of new candidate values, $b_i$ and $b_C$ are biases and $C_t$ is the new cell state.
Output Gate : Computes the output of the current state.
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$ (5)
$h_t = o_t * \tanh(C_t)$
Here, $W_o$ is a weight matrix, $o_t$ learns the significant part of the cell state, $h_t$ is the output of the current hidden layer and $b_o$ is the output bias.
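Putting the three gates together, a single LSTM step can be written directly in NumPy. The hidden size, input size and the packing of all four gates into one weight matrix are illustrative choices, not details from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step over the gate equations. W maps the concatenated
    [h_{t-1}, x_t] to all four gates at once; sizes are illustrative."""
    z = np.concatenate([h_prev, x_t]) @ W + b
    H = len(h_prev)
    f_t = sigmoid(z[:H])                 # forget gate
    i_t = sigmoid(z[H:2 * H])            # input gate
    c_tilde = np.tanh(z[2 * H:3 * H])    # candidate values
    o_t = sigmoid(z[3 * H:])             # output gate
    c_t = f_t * c_prev + i_t * c_tilde   # new cell state
    h_t = o_t * np.tanh(c_t)             # hidden output
    return h_t, c_t

rng = np.random.default_rng(0)
H, D = 8, 3                              # hidden size, 3D fiber points
W = rng.normal(scale=0.1, size=(H + D, 4 * H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for x in rng.normal(size=(20, D)):       # run over one 20-point fiber
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (8,)
```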
2.3.3 Justification for the selected model
We have observed that RNNs need to be deep enough to capture subtle differences in the structure of fibers from different classes. The initial layer is a bidirectional LSTM that captures a structural understanding of each fiber point and its curvature characteristics. It is followed by LSTM layers with a decreasing number of memory units, as shown in Fig. 3. Finally, a fully connected layer with a sigmoid activation function has been used to make the predictions.
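The forward pass of such an architecture can be sketched as follows. This is an untrained, plain-NumPy illustration: the layer widths, the number of stacked layers, the class count and the use of only the final time step for the read-out are assumptions, not the paper's exact configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_layer(xs, H, rng):
    """Run a randomly initialised LSTM over a sequence xs of shape (T, D)
    and return the per-step hidden states (T, H)."""
    D = xs.shape[1]
    W = rng.normal(scale=0.1, size=(H + D, 4 * H))
    h, c = np.zeros(H), np.zeros(H)
    out = []
    for x in xs:
        z = np.concatenate([h, x]) @ W
        f, i, g, o = (sigmoid(z[:H]), sigmoid(z[H:2 * H]),
                      np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:]))
        c = f * c + i * g
        h = o * np.tanh(c)
        out.append(h)
    return np.array(out)

def brainsegnet_forward(fiber, rng, widths=(32, 16, 8), n_classes=8):
    """One bidirectional LSTM, then stacked LSTMs with shrinking widths,
    then a dense sigmoid read-out (widths illustrative)."""
    fwd = lstm_layer(fiber, widths[0], rng)
    bwd = lstm_layer(fiber[::-1], widths[0], rng)[::-1]
    h = np.concatenate([fwd, bwd], axis=1)      # bidirectional features
    for w in widths[1:]:                        # stacked LSTM layers
        h = lstm_layer(h, w, rng)
    Wd = rng.normal(scale=0.1, size=(widths[-1], n_classes))
    return sigmoid(h[-1] @ Wd)                  # prediction from last step

rng = np.random.default_rng(0)
fiber = np.cumsum(rng.normal(size=(30, 3)), axis=0)
scores = brainsegnet_forward(fiber, rng)
print(scores.shape)  # (8,)
```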
Deep stacked LSTMs often give better accuracy than shallower models. However, simply stacking more LSTM layers works only up to a certain number of layers, beyond which the network becomes too slow and difficult to train due to the exploding and vanishing gradient problems [Hochreiter and Schmidhuber(1997)]; we have observed that stacked LSTMs train well only up to a moderate depth and very poorly beyond it. The training set was further split for training and validation, so as to validate the training after each epoch.
Network Hyper-Parameters : The hyper-parameters used in the model are as follows:
Number of epochs : 15
Batch Size : 64
Activation Function : Sigmoid
Optimizer Function : Adam
Fraction of Data used for training : 0.4
Loss functions : Binary cross-entropy and categorical cross-entropy for the macro and micro levels respectively
Validation set of size 0.2 of training set size has been used.
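The two loss functions listed above have their standard definitions, which can be sketched as follows (the `eps` clipping constant is an implementation detail, not from the paper):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Macro level (grey vs. white): y_true in {0, 1}, y_pred in (0, 1)."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

def categorical_cross_entropy(y_true, y_pred, eps=1e-7):
    """Micro level (8 white matter tracts): rows of y_true are one-hot."""
    p = np.clip(y_pred, eps, 1.0)
    return float(np.mean(-np.sum(y_true * np.log(p), axis=1)))

y_true = np.array([1.0, 0.0])
y_pred = np.array([0.9, 0.2])
print(round(binary_cross_entropy(y_true, y_pred), 4))  # 0.1643
```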
3 Experimental Analysis
In this section we describe the methods applied to validate our proposed model. The major challenge that we have faced is obtaining more “labeled tractography” data; we obtained the data of three brains from a contest organized by the University of Pittsburgh. Hence, validation has been done over a dataset of three patients' brains, each containing fibers represented as variable-length sequences and labeled with one of the 9 classes. We have formulated this problem as a two-level hierarchical classification problem, in which experiments are carried out first at a) the Macro level, in which the fibers undergo binary classification into two classes, viz. grey and white matter (here data imbalance is a big challenge), and second at b) the Micro level, in which the fibers are classified into one of the 8 sub-classes of white matter (here intra-class variation is a big challenge).
3.1 Testing Methodology
The testing of our trained BrainSegNet model has been done using three protocols defined below :
Intra Brain Testing : BrainSegNet has been trained and tested over the same patient's data (one of the three patients).
Inter Brain Testing : BrainSegNet has been trained on one of the three patients' data and tested on a fraction of the data from the other two patients. We report results only for training over patient 2 (i.e., Brain 2) and testing over the remaining Brain 1 and Brain 3 data.
Merged Brain Testing : BrainSegNet has been trained over the merged data of all three patients, such that half of the data points from each brain, after shuffling, have been used for training, while the remaining data has been shuffled and used for testing.
All of the above testing strategies are performed for both micro and macro classification as described. We have used two performance measures, (a) accuracy and (b) recall, defined in Eqs. (6) and (7). Since the number of grey fibers is much larger than that of white fibers, as shown in Table 1, we have computed recall only for the white fibers, as the grey fibers would otherwise skew the results.
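Assuming accuracy and recall follow their standard definitions in Eqs. (6) and (7), they can be computed as:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of fibers assigned their correct class label."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def recall(y_true, y_pred, positive):
    """Fraction of fibers of the given class that are retrieved; reported
    per white matter class to counter the grey matter imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mask = y_true == positive
    return float(np.mean(y_pred[mask] == positive))

y_true = [0, 0, 0, 1, 1, 2]   # 0 = grey matter, 1-2 = white matter tracts
y_pred = [0, 0, 1, 1, 0, 2]
print(round(accuracy(y_true, y_pred), 4))  # 0.6667
print(recall(y_true, y_pred, 1))           # 0.5
```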
[Table 2: accuracy and recall of ANN [Patel et al.(2016)Patel, Parmar, Bhavsar, and Nigam] vs. BrainSegNet under three settings — Intra: trained and tested over the same brain; Inter: trained on B2 and tested on B1 and B3; Merged: trained and tested over merged brain data.]
3.2 Experimental Results
The accuracy and recall values for each testing strategy are depicted in Table 2. We have achieved state-of-the-art results under almost all of the testing strategies adopted in this paper. When moving from the intra to the inter testing strategy the accuracy falls; this is because the model has been trained on an entirely different brain, and brains may differ in size or shape, and hence slightly in the paths taken from one part to another. One can also observe that the merged training and testing strategy gives less accurate results than intra testing, as it yields a more generalized model that must accommodate fibers from any of the brains. From Table 2, one can infer that the proposed BrainSegNet achieves high accuracy at both the macro and the micro level. Over such a huge and diverse dataset, with small training samples, we have achieved quite high performance.
3.3 Comparative Analysis
The recall values in Table 2 signify the superiority of the proposed BrainSegNet in classifying white fibers (which are fewer in number due to severe data imbalance). The proposed BrainSegNet gives far better results than the model proposed in [Patel et al.(2016)Patel, Parmar, Bhavsar, and Nigam] in terms of recall, with better accuracies in most of the experiments. In [Patel et al.(2016)Patel, Parmar, Bhavsar, and Nigam], inter brain analysis has not been performed at all, although it is very challenging and important to report. Even with such a large data imbalance and intra-class variation, the proposed network has achieved state-of-the-art performance and significantly outperforms the existing work [Patel et al.(2016)Patel, Parmar, Bhavsar, and Nigam].
In this paper we have proposed a novel stacked bidirectional LSTM based segmentation network (BrainSegNet) for the classification of human brain fiber tractography data. We perform a two-level hierarchical classification: a) white vs. grey matter (Macro) and b) white matter clusters (Micro). Our experimental evaluations show that our model achieves state-of-the-art results. The classification at both macro and micro levels can eliminate the need for manually segmenting brain fiber tracts, which is presently a big issue. We are in the process of obtaining more labeled data, so that we can train an even deeper and more generalizable network.
- [Catani et al.(2002)Catani, Howard, Pajevic, and Jones] Marco Catani, Robert J Howard, Sinisa Pajevic, and Derek K Jones. Virtual in vivo interactive dissection of white matter fasciculi in the human brain. Neuroimage, 17(1):77–94, 2002.
- [Fernandez-Miranda et al.(2012)Fernandez-Miranda, Pathak, Engh, Jarbo, Verstynen, Yeh, Wang, Mintz, Boada, Schneider, et al.] Juan C Fernandez-Miranda, Sudhir Pathak, Johnathan Engh, Kevin Jarbo, Timothy Verstynen, Fang-Cheng Yeh, Yibao Wang, Arlan Mintz, Fernando Boada, Walter Schneider, et al. High-definition fiber tractography of the human brain: neuroanatomical validation and neurosurgical applications. Neurosurgery, 71(2):430–453, 2012.
- [Guise et al.(2016)Guise, Fernandes, Nóbrega, Pathak, Schneider, and Fangueiro] Catarina Guise, Margarida M Fernandes, João M Nóbrega, Sudhir Pathak, Walter Schneider, and Raul Fangueiro. Hollow polypropylene yarns as a biomimetic brain phantom for the validation of high-definition fiber tractography imaging. ACS Applied Materials & Interfaces, 8(44):29960–29967, 2016.
- [Hochreiter and Schmidhuber(1997)] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx.doi.org/10.1162/neco.1997.9.8.1735.
- [Maddah et al.(2005)Maddah, Mewes, Haker, Grimson, and Warfield] Mahnaz Maddah, Andrea Mewes, Steven Haker, W Grimson, and Simon Warfield. Automated atlas-based clustering of white matter fiber tracts from dtmri. Medical image computing and computer-assisted intervention–MICCAI 2005, pages 188–195, 2005.
- [O’Donnell and Westin(2007)] Lauren J O’Donnell and Carl-Fredrik Westin. Automatic tractography segmentation using a high-dimensional white matter atlas. IEEE transactions on medical imaging, 26(11):1562–1575, 2007.
- [Patel et al.(2016)Patel, Parmar, Bhavsar, and Nigam] Vedang Patel, Anand Parmar, Arnav Bhavsar, and Aditya Nigam. Automated brain tractography segmentation using curvature points. Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP ’16, pages 18:1–18:6, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4753-2. doi: 10.1145/3009977.3010013. URL http://doi.acm.org/10.1145/3009977.3010013.
- [Sutskever(2013)] Ilya Sutskever. Training recurrent neural networks. PhD thesis, University of Toronto, 2013.
- [Wang et al.(2011)Wang, Grimson, and Westin] Xiaogang Wang, W Eric L Grimson, and Carl-Fredrik Westin. Tractography segmentation using a hierarchical dirichlet processes mixture model. NeuroImage, 54(1):290–302, 2011.