DeepTract: A Probabilistic Deep Learning Framework for White Matter Fiber Tractography

by Itay Benou, et al.

We present DeepTract, a deep-learning framework for white matter fiber orientation estimation and streamline tractography. We take a data-driven approach to fiber reconstruction from raw diffusion MRI, without assuming a specific diffusion model. We use a recurrent neural network to map sequences of diffusion-weighted imaging (DWI) values into probabilistic fiber orientation distributions. Based on these estimates, our model can perform both deterministic and probabilistic tractography on unseen DWI datasets. We quantitatively evaluate our method using the Tractometer tool, demonstrating performance comparable to state-of-the-art classical and DL-based methods. We further present qualitative results of bundle-specific probabilistic tractography produced by our method.






Code Repositories

Official implementation of the paper "DeepTract: A Probabilistic Deep Learning Framework for White Matter Fiber Tractography" (by Benou and Riklin-Raviv). Any use of this code requires a citation of the paper.

1 Introduction

Diffusion MRI (dMRI) and tractography are useful tools in the study of white matter (WM). Thanks to the ability of dMRI-based tractography to visualize complex neural tracts, it has become a key component in a variety of applications such as brain connectivity studies [8], analysis of WM tracts for investigation of neurological disorders [5, 10], and even surgical planning [6].

The standard tractography pipeline consists of a diffusion modeling stage, in which local fiber orientations are reconstructed from diffusion-weighted images (DWI), followed by a tracking stage in which these orientations are pieced together into WM streamlines. At the heart of the modeling stage lies the inverse problem of finding the configuration of local fiber orientations that gave rise to the measured DWI signal. Given that a single brain voxel can contain tens of thousands of differently oriented fibers, accurate reconstruction of fiber orientations is a very challenging task.

Over the last two decades, various tractography algorithms have been presented. While some of these methods are deterministic, i.e. provide a single streamline orientation in each voxel [3, 20], others perform probabilistic tracking [7, 13] or take a global tractography approach [1, 15]. Nevertheless, these methods are model-based, in the sense that they rely on a specific mathematical model for mapping the raw dMRI signal into fiber orientation estimates. Among others, these models include the diffusion tensor model [4], Q-ball imaging [12] and spherical deconvolution [27]. Despite the remarkable progress that has been made over the last decade, current methods are not without limitations [14, 17]. Each such model makes different assumptions regarding WM tissue properties and the dMRI signal, which may vary substantially depending on the subject and the data acquisition process [23]. Furthermore, some models impose specific requirements on data quality and the acquisition protocol. For example, higher-order models usually require a larger number of gradient directions and are more computationally expensive. Therefore, from the user's point of view, choosing a suitable model is not trivial. Even after a specific model is chosen, the user still has to manually tune various tracking-related parameters that may vary between models and require a high level of expertise.

Recently there have been efforts to address these issues using data-driven approaches and machine learning (ML) techniques. These approaches aim to learn the mapping between DWI signals and fiber orientations directly from dMRI and tractography datasets, instead of using a-priori modeling. By not assuming a specific diffusion model, the resulting algorithms can be less dependent on data acquisition schemes and require less user intervention.


Neher et al. [24] pioneered this line of work in 2015, presenting a supervised ML tractography algorithm based on random forest (RF) classification. The RF classifier was trained to predict a local fiber orientation from a discrete set of possible directions, based on the surrounding dMRI values. More recently, Poulin et al. [25] suggested harnessing the representational power of deep learning (DL) for fiber tractography, examining fully-connected (FC) and recurrent neural network (RNN) architectures. In contrast to [24], streamline tractography was treated as a regression problem by training a neural network to predict a continuous tracking direction based on sequences of dMRI values. A similar regression-based approach was presented in [28], using a multi-layer perceptron (MLP) network. We note that all of these methods perform deterministic tractography, outputting a single streamline direction in each step. Moreover, they do not attempt to model the underlying fiber orientation distribution function (fODF), but rather directly provide a single tracking direction. Other DL-based works, on the other hand, have focused strictly on fODF estimation. In [19] a deep convolutional neural network (CNN) was used for estimating discrete fODFs from raw dMRI scans. A variation of this idea was presented in [18], using a CNN to predict the spherical harmonics (SH) coefficients from which continuous fODFs are reconstructed. These works, however, do not perform fiber tractography.

In this work we present a DL-based tractography framework addressing both fiber orientation estimation and streamline tracking from raw dMRI scans. To exploit the sequential nature of tractography data, we treat the problem as a sequential classification task by training an RNN model to predict local fiber orientations (i.e., classes) along tractography streamlines. Unlike other DL-based tractography algorithms, our model does not assume a single deterministic fiber orientation in each tracking step. Instead, it provides a probabilistic estimation of local fiber orientations in the form of discrete probability density functions. In addition to deterministic streamline tracking, this probabilistic approach enables our method to produce probabilistic tractograms by sampling from the estimated distributions. We note that these distributions differ from standard fODFs, since they are conditioned on the DWI "history" along a specific streamline path. We therefore refer to them as conditional fiber orientation distribution functions (CfODFs).

We quantitatively evaluate the proposed method using the Tractometer tool [11], demonstrating comparable results to state-of-the-art classical and DL-based tractography algorithms. We further present qualitative results of high-quality probabilistic tractograms generated by our method.

2 Methods

In the following sections we describe the details of the proposed DeepTract framework: (1) the input model, (2) how the network learns to predict fiber orientations from dMRI data, (3) how new tractograms are generated from unseen data, and (4) the implementation details of the neural network's architecture.

2.1 Input Model

The training data consists of two separate sets: a DWI set D and its corresponding whole-brain tractography containing N streamlines. Each streamline S is represented by a sequence of equi-distant 3D coordinates, i.e., S = {p_1, p_2, ..., p_L}.

Pre-Processing: To handle datasets acquired with different gradient schemes, we first resample the DWI set onto k pre-defined gradient directions evenly distributed on the unit hemisphere, using spherical harmonics (we use k = 100). Each DWI volume is then centered according to its mean and normalized by the (non-diffusion-weighted) b0 volume.
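As a minimal sketch of the centering-and-normalization step above (assuming the DWI volumes have already been resampled to k gradient directions; the function name and the epsilon guard against division by zero are ours, not from the paper):

```python
import numpy as np

def normalize_dwi(dwi, b0, eps=1e-8):
    """Center each diffusion-weighted volume by its mean and scale by the
    non-diffusion-weighted (b0) volume, as described in Sec. 2.1.

    dwi : (X, Y, Z, k) array of resampled DWI volumes
    b0  : (X, Y, Z) non-diffusion-weighted volume
    """
    centered = dwi - dwi.mean(axis=(0, 1, 2), keepdims=True)
    return centered / (b0[..., None] + eps)

# Toy volume: a 4x4x4 grid with k = 6 gradient directions.
rng = np.random.default_rng(0)
dwi = rng.random((4, 4, 4, 6))
b0 = np.ones((4, 4, 4))
out = normalize_dwi(dwi, b0)
```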

Sequential Input Model: In order for our model to be invariant to spatial transformations, we feed it with sequences of DWI values instead of directly using the 3D coordinates of the streamlines, as was suggested in [25]. Formally, given a DWI dataset D and a streamline segment {p_1, ..., p_L}, the input to our model is the series of DWI values measured along this segment, i.e. {d(p_1), ..., d(p_L)}. Notice that every input entry d(p_i) is a vector containing k DWI values. Therefore, a single streamline segment of length L corresponds to a 2D input tensor of size L x k.
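The mapping from streamline coordinates to an L x k input tensor can be sketched as follows (nearest-voxel lookup is our assumption; the paper does not specify the interpolation scheme):

```python
import numpy as np

def streamline_to_sequence(dwi, streamline):
    """Map a streamline (sequence of 3D voxel coordinates) to the series of
    DWI measurement vectors along it, yielding an (L, k) input tensor."""
    idx = np.rint(streamline).astype(int)        # (L, 3) nearest voxel indices
    return dwi[idx[:, 0], idx[:, 1], idx[:, 2]]  # (L, k) DWI values

rng = np.random.default_rng(0)
dwi = rng.random((8, 8, 8, 100))                 # k = 100 resampled directions
streamline = np.array([[1.0, 1.2, 1.1],
                       [1.4, 1.9, 1.5],
                       [2.1, 2.6, 2.0]])
seq = streamline_to_sequence(dwi, streamline)    # shape (3, 100)
```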

2.2 Fiber Orientation Estimation

A straightforward approach to ML-based tractography is a regression framework, in which a continuous direction is estimated at each tracking step. This approach, however, is limited to a single deterministic orientation output. Aiming to facilitate both deterministic and probabilistic tractography, we require our model to provide a probabilistic estimation of local fiber orientations prior to tracking.

We therefore address the problem as a discrete classification task. For this purpose, a discrete representation of an fODF is obtained by sampling the unit sphere at M evenly-distributed points {v_1, ..., v_M}, each representing a possible fiber orientation. Each orientation is treated as a separate "class", in addition to an "end-of-fiber" (EoF) class which is used for labeling fiber termination points. Given an input sequence, our network predicts a vector of M + 1 class probabilities at each point of the input streamline. This vector represents the probability distribution of local fiber orientations, such that all probabilities sum to one. We note that this formulation poses a tradeoff between higher angular resolution, achieved by increasing the number of classes, and the complexity of the classification problem. In this work we use M = 724, which provides an angular quantization error of 3.5 degrees.
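One way to realize M near-evenly distributed orientation classes, and to check the resulting angular quantization error empirically, is a golden-spiral (Fibonacci) lattice. This is a sketch under our own assumptions; the paper does not specify its sphere-sampling scheme:

```python
import numpy as np

def fibonacci_sphere(m):
    """m roughly evenly distributed unit vectors (golden-spiral lattice)."""
    i = np.arange(m)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / m
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

classes = fibonacci_sphere(724)                   # M = 724 orientation classes

# Quantization error: angle from a random direction to its nearest class.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
errors = np.degrees(np.arccos(np.clip((dirs @ classes.T).max(axis=1), -1, 1)))
mean_err = errors.mean()                          # a few degrees, as in Sec. 2.2
```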

Conditional fODFs: Standard fODFs represent the total orientation distribution function at a voxel location p, independent of other voxels, i.e. fODF(p) = P(v | p). However, since our model is sequence-based, it in fact estimates the local fODF given the entire sequence of DWI values along a specific input streamline. Formally, this is the conditional fiber orientation distribution function (CfODF) at location p_t, given a streamline path gamma = {p_1, ..., p_t}, i.e. CfODF(p_t) = P(v | d(p_1), ..., d(p_t)).

We note, however, that there is a straightforward relation between the CfODF and the total fODF. Consider a voxel containing two distinct fiber orientations, as illustrated in Fig. 1. Using the total probability theorem, the total fODF is given by the mean CfODF computed over all streamlines passing through the voxel, i.e.,

fODF(p) = sum_j P(gamma_j) * CfODF(p | gamma_j),

where P(gamma_j) refers to the probability of reaching the point p via the path gamma_j.

Figure 1: The relationship between CfODF and the total fODF. Left: Illustration of two crossing fibers on a 2D grid. Right: The individual CfODFs of the two fibers (top and middle) and the corresponding total fODF at the green voxel (bottom).
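The total-probability relation between CfODFs and the total fODF can be checked numerically on a toy version of the two-fiber voxel of Fig. 1 (all numbers below are illustrative, not from the paper):

```python
import numpy as np

# Two streamline paths cross the same voxel; each CfODF is a discrete
# distribution over M = 6 toy orientation classes.
cfodf_path1 = np.array([0.70, 0.15, 0.05, 0.05, 0.03, 0.02])
cfodf_path2 = np.array([0.02, 0.03, 0.05, 0.05, 0.15, 0.70])
p_path = np.array([0.5, 0.5])   # probability of reaching the voxel via each path

# Total probability theorem: fODF(p) = sum_j P(path_j) * CfODF(p | path_j)
fodf = p_path[0] * cfodf_path1 + p_path[1] * cfodf_path2
```

The mixture remains a valid distribution, and (by the symmetry of this example) the two dominant fiber orientations receive equal total mass.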

2.3 Streamline Tractography

During training, the predicted CfODFs are compared to the "true" orientations derived from the input streamline itself. For a specific location p_t, the true direction is simply v_t = (p_{t+1} - p_t) / ||p_{t+1} - p_t||. The discrete class label is then chosen as the orientation v_i which is closest to v_t, and represented as a "1-hot" vector. Once the model is fully trained, streamline tractography can be performed on unseen dMRI scans. Streamlines are generated in an iterative process, as illustrated in Fig. 3(b). Provided with a seed point p_0, the corresponding DWI values are measured and fed into the network. The network in turn outputs an estimated CfODF(p_0). Deterministic tracking is performed by stepping in the most likely fiber orientation, i.e. v* = argmax_i CfODF(p_t). In addition, probabilistic tracking can be performed by sampling a random direction from the CfODF. Either way, the streamline is propagated iteratively, as in standard tractography algorithms, according to p_{t+1} = p_t + alpha * v*, where alpha is the step size. The process is repeated until the EoF class is predicted.
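The iterative tracking loop above can be sketched as follows. The trained RNN is stood in for by a callable `predict_cfodf`; the class directions, the toy model, and all parameter names are our illustrative assumptions:

```python
import numpy as np

def propagate(seed, predict_cfodf, step_size=0.5, max_steps=200,
              probabilistic=False, rng=None):
    """Iterative streamline propagation (Sec. 2.3). `predict_cfodf` maps the
    streamline so far to a distribution over the M orientation classes plus
    a final end-of-fiber (EoF) class."""
    rng = rng or np.random.default_rng()
    points = [np.asarray(seed, float)]
    for _ in range(max_steps):
        cfodf = predict_cfodf(points)            # (M + 1,) probabilities
        if probabilistic:
            k = rng.choice(len(cfodf), p=cfodf)  # sample from the CfODF
        else:
            k = int(np.argmax(cfodf))            # most likely orientation
        if k == len(cfodf) - 1:                  # EoF class predicted: stop
            break
        points.append(points[-1] + step_size * CLASS_DIRS[k])
    return np.array(points)

# Toy setup: 4 in-plane orientation classes + EoF; the "model" always
# prefers +x and emits EoF after 10 steps.
CLASS_DIRS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float)

def fake_model(points):
    if len(points) > 10:
        return np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    return np.array([0.7, 0.1, 0.1, 0.1, 0.0])

track = propagate([0.0, 0.0, 0.0], fake_model, step_size=0.5)
```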

2.4 Network Architecture

We implement our model using a recurrent neural network (RNN), specifically a gated recurrent unit (GRU) [9]. RNNs have been shown to be successful in generating complex sequences in various domains, simply by predicting one data point at a time [16]. In particular, their ability to model long-range sequential dependencies and utilize them for making predictions is well-suited to the task of streamline tractography. The full network architecture is illustrated in Fig. 3. The proposed model consists of five stacked GRU layers, each containing 1000 neurons. Rectified linear unit (ReLU) activations are placed after each layer, except the last one, which is followed by a fully-connected (FC) layer. The FC layer outputs a vector of class scores, which is then normalized using a softmax operation to obtain the CfODF. The predicted CfODFs are compared to the true class labels using a cross-entropy loss function. The total loss is the mean loss over the entire input sequence.
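For concreteness, here is a single small GRU cell with an FC + softmax head, written out from the standard GRU equations [9]. This is only a sketch of one recurrent step: the actual model stacks five GRU layers of 1000 units each, and the dimensions and initialization below are our toy choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step. W, U, b hold the update/reset/candidate parameters
    stacked as three row blocks of size n each."""
    n = h.shape[0]
    z = sigmoid(W[:n] @ x + U[:n] @ h + b[:n])             # update gate
    r = sigmoid(W[n:2*n] @ x + U[n:2*n] @ h + b[n:2*n])    # reset gate
    h_tilde = np.tanh(W[2*n:] @ x + U[2*n:] @ (r * h) + b[2*n:])
    return (1 - z) * h + z * h_tilde                       # new hidden state

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

rng = np.random.default_rng(0)
k, n, m = 100, 32, 725        # input dim (k), hidden units, M + 1 classes
W = rng.normal(0, 0.1, (3 * n, k))
U = rng.normal(0, 0.1, (3 * n, n))
b = np.zeros(3 * n)
W_fc = rng.normal(0, 0.1, (m, n))

h = np.zeros(n)
for x in rng.normal(size=(12, k)):   # a length-12 DWI input sequence
    h = gru_step(x, h, W, U, b)
    cfodf = softmax(W_fc @ h)        # per-step CfODF prediction
```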

2.4.1 Label Smoothing:

In [26] it was shown that using a single (sparse) label in classification tasks with a cross-entropy loss may impair the model's ability to properly generalize. Specifically, it drives the model to be "over-confident" in its predictions, thus encouraging overfitting. This problem is even more severe in our case. In standard classification tasks, where there is no clear measure of proximity between classes, one type of classification mistake is usually "equally bad" as any other (i.e., mistaking a "dog" for a "chair" is no worse than mistaking it for a "tree"). In our case, however, the classes are spatially structured, and there is a clear angular measure of how large a classification error is with respect to the true direction. Therefore, using "1-hot" encoded labels with a cross-entropy loss assigns identical penalties to every classification mistake, regardless of how large the angular error is.

[26] suggests solving this problem by replacing the sparse "1-hot" label with a weighted mixture of itself and the a-priori class probability function, thus "smoothing" the label by transferring probability mass to classes with high a-priori probabilities. Here we suggest a different smoothing scheme, which takes advantage of the spatial relationship between classes. Instead of relying on a-priori class probabilities, we distribute probability mass to classes which are spatially adjacent to the ground-truth class, as illustrated in Fig. 2. This is performed by convolving the ground-truth label, represented as a delta function at the true direction v_gt, with a Gaussian kernel on the unit sphere, i.e.,

y(v) = (1/Z) * exp(-theta(v, v_gt)^2 / (2 * sigma^2)),

where exp(-theta^2 / (2 * sigma^2)) is the Gaussian kernel, theta(v, v_gt) is the angle between a direction v and the ground-truth direction v_gt, Z is a partition function used for normalization, and sigma is the kernel width. The resulting distribution is then sampled at the same M discrete orientations, yielding a discrete class label distribution. Note that since the smooth label decreases exponentially, the cross-entropy loss will now force predictions to concentrate probability mass inside a sigma-sized window around the true direction. As a result, predictions outside this window are assigned larger penalties.
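The Gaussian-on-the-sphere smoothing described above can be sketched as follows (the kernel width and the tiny class set are illustrative; the paper does not report the sigma it used):

```python
import numpy as np

def smooth_label(true_dir, class_dirs, sigma_deg):
    """Label smoothing (Sec. 2.4.1): spread probability mass to classes that
    are angularly close to the ground-truth direction, then normalize."""
    cos_t = np.clip(class_dirs @ true_dir, -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_t))            # angle to ground truth
    y = np.exp(-theta**2 / (2.0 * sigma_deg**2))    # Gaussian kernel
    return y / y.sum()                              # partition function Z

# Toy class set: 6 axis-aligned orientation classes, true direction = +x.
class_dirs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
label = smooth_label(np.array([1.0, 0.0, 0.0]), class_dirs, sigma_deg=30.0)
```

The peak stays at the true class, while the 90-degree neighbors receive more mass than the opposite (180-degree) class, reflecting the angular structure of the error.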

Figure 2: Label smoothing. The original sparse label (left) is smoothed using a Gaussian kernel on the unit sphere, transferring probability mass to neighboring classes (right).

2.4.2 Entropy-Based Tracking Termination:

During the generative process of RNN models, accumulated errors may cause predictions to stray off the training data manifold [16], resulting in completely erroneous outputs. To alleviate this problem, we employ an entropy-based tracking termination criterion. When the network strays off the training data manifold, "unfamiliar" input DWI values increase the uncertainty of the model's predictions, resulting in more isotropic CfODF estimations. Therefore, we terminate the tracking process of a streamline whenever the entropy of the predicted CfODF exceeds a pre-defined threshold. Since the CfODF's entropy tends to be larger at the beginning of the generative process, we use an exponentially decreasing threshold H_th(t) = c + a * exp(-t / b), where t is the step index along the sequence, and a, b and c are parameters to be set.
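The entropy criterion can be sketched in a few lines. The mapping of the paper's three reported parameter values onto a, b and c, and the threshold at which the toy distributions terminate, are our assumptions:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a discrete CfODF."""
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

def threshold(t, a=3.0, b=10.0, c=4.5):
    """Exponentially decreasing entropy threshold H_th(t) = c + a*exp(-t/b)."""
    return c + a * np.exp(-t / b)

m = 725
peaked = np.zeros(m); peaked[0] = 1.0   # confident (anisotropic) prediction
uniform = np.full(m, 1.0 / m)           # maximally uncertain (isotropic) one

# A near-uniform CfODF late in the sequence exceeds the decayed threshold,
# so tracking would be terminated at that step.
terminate = entropy(uniform) > threshold(50)
```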

(a) Training Process (b) Tracking Process
Figure 3: Architecture of the proposed model. (a) Training: Sequences of DWI values are fed into the network to produce CfODF estimations, which are compared to the true streamline direction. (b) Tracking: Once the model is trained, a new streamline is generated iteratively starting from a given seed point.

3 Experiments and Results

We test the performance of the proposed method using the following experiments: 1) Quantitative evaluation based on the ISMRM tractography challenge phantom dataset [21]. 2) Qualitative (visual) demonstration of bundle-specific probabilistic tractography performed by our model.

Training Environment: The network architecture described in Section 2.4 was used in all experiments. DWI scans were first denoised [22] and corrected for eddy currents and head motion. A ground-truth whole-brain tractography (200K streamlines) was created using Q-ball reconstruction [2], followed by probabilistic streamline tracking using the MITK diffusion tool. The resulting streamlines were randomly divided into training and validation sets using a 90%-10% split. Data augmentation was performed by reversing the orientation of all streamlines in the training set, thus doubling the number of training examples. Training was performed using the Adam optimizer with a batch size of 40 streamlines. Dropout with a deletion probability of 0.3 was used to avoid overfitting, as well as gradient clipping to avoid exploding gradients.

Tracking parameters: After the model was trained, streamline tractography was performed using a fixed step size of 0.5 (in voxels). Seeding was performed using 100K randomly placed seed points (no more than one seed per voxel). Tracking was terminated online for high-curvature steps (larger than 60 degrees), and output streamlines shorter than 20mm or longer than 200mm were discarded. For the entropy-based stopping criterion (see Section 2.4.2), we set the three threshold parameters to 3, 10 and 4.5.
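The curvature and length constraints above can be sketched as a post-hoc streamline filter (the paper applies the curvature check online during tracking; coordinates below are assumed to be in mm):

```python
import numpy as np

def keep_streamline(points, max_angle_deg=60.0, min_len_mm=20.0, max_len_mm=200.0):
    """Accept a streamline only if its total length lies in
    [min_len_mm, max_len_mm] and no step turns by more than max_angle_deg."""
    points = np.asarray(points, float)
    segs = np.diff(points, axis=0)
    norms = np.linalg.norm(segs, axis=1)
    if not (min_len_mm <= norms.sum() <= max_len_mm):
        return False
    if len(segs) > 1:
        u = segs / norms[:, None]                     # unit step directions
        cos_a = np.clip((u[:-1] * u[1:]).sum(axis=1), -1.0, 1.0)
        if np.degrees(np.arccos(cos_a)).max() > max_angle_deg:
            return False
    return True

# 30 mm straight line (kept), a 4.5 mm fragment (too short),
# and a 50 mm streamline with a 90-degree kink (too curved).
straight = np.stack([np.linspace(0, 30, 61), np.zeros(61), np.zeros(61)], axis=1)
short = straight[:10]
kinked = np.array([[0, 0, 0], [25, 0, 0], [25, 25, 0]], float)
```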

3.1 Tractometer Analysis

In this experiment we evaluate our method using Tractometer, a publicly available online tool for the assessment of whole-brain tractography, which was used for testing the submissions to the ISMRM 2015 tractography challenge. This enables us to compare our results to the original challenge submissions, as well as to other methods that were previously evaluated using Tractometer. In addition to training our model on the MITK tractography output, we report the results achieved when training the model on the ground-truth ISMRM tractography. This is done to show the upper limit of our method's performance when overfitted to the test set.

Evaluation metrics: Tractometer compares a given whole brain tractogram to 25 gold standard streamline bundles, using the following metrics:

  • Valid bundles (VB): The number of valid bundles identified in the tractogram under test, out of the 25 ground truth bundles (higher is better).

  • Invalid bundles (IB): The number of bundles that were not matched to a known ground truth bundle (lower is better).

  • Valid connections (VC): The percentage of individual streamlines that were identified as a part of a known ground truth bundle (higher is better).

  • Invalid connections (IC): The percentage of individual streamlines that were identified as a part of an invalid bundle (lower is better).

  • No connections (NC): The percentage of individual streamlines that were not assigned to VC or IC (lower is better).

  • Bundle overlap (OL): The proportion of voxels in a ground truth bundle that are traversed by at least one valid streamline associated with the same bundle (higher is better). This measure is averaged over all 25 bundles.

  • Bundle overreach (OR): The number of voxels outside the volume of a ground truth bundle that are traversed by at least one valid streamline associated with that bundle, divided by the total number of voxels within the ground truth bundle (lower is better). This measure is averaged over all 25 bundles.

  • The standard F1 score (higher is better).

Results: The whole-brain tractography output of our method is shown in Fig. 4, alongside the MITK tractography used for supervision and the gold standard tractography of the ISMRM challenge. Quantitative evaluation of our method based on the Tractometer metrics is summarized in Table 1. The performance of MITK's tractography (the supervisor) is also presented, as well as the average ISMRM challenge performance and two other DL-based methods [25, 28]. The results show that when trained on the ground-truth streamlines, our model achieves the highest scores in most metrics, though it is clearly overfitted to the test set. Nevertheless, even when trained on the MITK tractography output, our model generalizes properly, performing better than the average ISMRM submission in most metrics. While only moderately outperformed by its MITK supervisor in terms of valid connections, our model achieves the best OR and IB rates of all examined methods. This is most likely due to the entropy-based termination criterion described above, which prevents generated streamlines from straying off coherent bundle structures.

                              Connections (%)        Bundles      Avg. coverage (%)
Model                         VC    IC    NC         VB    IB     OL    OR     F1
ISMRM mean results            53.6  19.7  25.2       21.4  281    31.0  23.0   44.2
Poulin et al. [25]            41.6  45.6  12.8       23    130    64.4  35.4   64.5
Wegmayr et al. [28]           72    -     -          23    57     16.0  28.0   -
MITK (supervisor)             59.1  27.8  13.1       24    69     47.2  31.2   52.5
Proposed (GT supervision)     70.6  19.5  9.9        25    56     69.3  22.7   70.1
Proposed (MITK supervision)   40.5  32.6  22.9       23    51     34.4  17.3   44.2
Table 1: Quantitative evaluation results using Tractometer.
Figure 4: Whole-brain tractography results: visual comparison of the proposed method, the MITK tractography, and the ground truth.

3.2 Probabilistic Tracking

We further performed probabilistic tracking on the phantom dMRI dataset of the ISMRM challenge, using the model trained on the MITK tractography. This was done by seeding from bundle-specific endpoints, using the endpoint masks available on the ISMRM challenge website. Streamlines were generated in a probabilistic manner by sampling from the CfODFs, and the process was repeated several times to create a probabilistic map counting the number of "visits" to each voxel. Results for the frontopontine tract (FPT) and the uncinate fasciculus (UF) are shown in Fig. 5, alongside the ground-truth bundles. Visual inspection shows that the resulting bundles are in line with the ground-truth tractograms. Moreover, notice that higher probability is assigned along the core of the bundles, decreasing towards remote regions near the cortex. This result is also in line with other probabilistic tractography algorithms.

Figure 5: Probabilistic tracking results of the proposed method for specific bundles (frontopontine tract and uncinate fasciculus), shown alongside the ground-truth bundles.

4 Summary and Discussion

We presented the first deep-learning framework capable of performing both deterministic and probabilistic streamline tractography directly from raw dMRI data. We showed that by combining the sequential processing of a recurrent model with a discrete classification framework, our model provides reliable probabilistic fiber orientation estimations, i.e. CfODFs. In a quantitative evaluation, the proposed method demonstrated performance comparable to state-of-the-art classical and DL-based tractography algorithms. While these results demonstrate the potential of DL-based approaches for tractography, we believe that additional high-quality data is required to further progress this area of research. With larger publicly available sets of accurate dMRI and tractography data, the training and testing procedures of ML- and DL-based methods can be greatly improved. In future work, as more data becomes available, we plan to test our model's ability to generalize to unseen bundle structures, as well as to perform accurate tracking on in-vivo clinical scans.


  • [1] Iman Aganj, Christophe Lenglet, Neda Jahanshad, Essa Yacoub, Noam Harel, Paul M Thompson, and Guillermo Sapiro. A hough transform global probabilistic approach to multiple-subject diffusion mri tractography. Medical image analysis, 15(4):414–425, 2011.
  • [2] Iman Aganj, Christophe Lenglet, and Guillermo Sapiro. Odf reconstruction in q-ball imaging with solid angle consideration. In Biomedical Imaging: From Nano to Macro, 2009. ISBI’09. IEEE International Symposium on, pages 1398–1401. IEEE, 2009.
  • [3] Peter J Basser. Fiber-tractography via diffusion tensor mri (dt-mri). In Proceedings of the 6th Annual Meeting ISMRM, Sydney, Australia, volume 1226, 1998.
  • [4] Peter J Basser, James Mattiello, and Denis LeBihan. Estimation of the effective self-diffusion tensor from the nmr spin echo. Journal of Magnetic Resonance, Series B, 103(3):247–254, 1994.
  • [5] Itay Benou, Ronel Veksler, Alon Friedman, and Tammy Riklin Raviv. Fiber-flux diffusion density for white matter tracts analysis: Application to mild anomalies localization. In Computational Diffusion MRI: MICCAI Workshop, Québec, Canada, September 2017, page 191. Springer, 2018.
  • [6] Jeffrey Berman. Diffusion mr tractography as a tool for surgical planning. Magnetic resonance imaging clinics of North America, 17(2):205–214, 2009.
  • [7] Jeffrey I Berman, SungWon Chung, Pratik Mukherjee, Christopher P Hess, Eric T Han, and Roland G Henry. Probabilistic streamline q-ball tractography using the residual bootstrap. Neuroimage, 39(1):215–222, 2008.
  • [8] Ed Bullmore and Olaf Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3):186, 2009.
  • [9] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
  • [10] Olga Ciccarelli, Marco Catani, Heidi Johansen-Berg, Chris Clark, and Alan Thompson. Diffusion-based tractography in neurological disorders: concepts, applications, and future developments. The Lancet Neurology, 7(8):715–727, 2008.
  • [11] Marc-Alexandre Côté, Gabriel Girard, Arnaud Boré, Eleftherios Garyfallidis, Jean-Christophe Houde, and Maxime Descoteaux. Tractometer: towards validation of tractography pipelines. Medical image analysis, 17(7):844–857, 2013.
  • [12] Maxime Descoteaux, Elaine Angelino, Shaun Fitzgibbons, and Rachid Deriche. Regularized, fast, and robust analytical q-ball imaging. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 58(3):497–510, 2007.
  • [13] Maxime Descoteaux, Rachid Deriche, Thomas R Knosche, and Alfred Anwander. Deterministic and probabilistic tractography based on complex fibre orientation distributions. IEEE transactions on medical imaging, 28(2):269–286, 2009.
  • [14] Shawna Farquharson, J-Donald Tournier, Fernando Calamante, Gavin Fabinyi, Michal Schneider-Kolsky, Graeme D Jackson, and Alan Connelly. White matter fiber tractography: why we need to move beyond dti. Journal of neurosurgery, 118(6):1367–1377, 2013.
  • [15] Pierre Fillard, Cyril Poupon, and Jean-François Mangin. A novel global tractography algorithm based on an adaptive spin glass model. In International conference on medical image computing and computer-assisted intervention, pages 927–934. Springer, 2009.
  • [16] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
  • [17] Ben Jeurissen, Maxime Descoteaux, Susumu Mori, and Alexander Leemans. Diffusion mri fiber tractography of the brain. NMR in Biomedicine, page e3785, 2017.
  • [18] Simon Koppers, Matthias Friedrichs, and Dorit Merhof. Reconstruction of diffusion anisotropies using 3d deep convolutional neural networks in diffusion imaging. In Modeling, Analysis, and Visualization of Anisotropy, pages 393–404. Springer, 2017.
  • [19] Simon Koppers and Dorit Merhof. Direct estimation of fiber orientations using deep learning in diffusion imaging. In International Workshop on Machine Learning in Medical Imaging, pages 53–60. Springer, 2016.
  • [20] Mariana Lazar, David M Weinstein, Jay S Tsuruda, Khader M Hasan, Konstantinos Arfanakis, M Elizabeth Meyerand, Benham Badie, Howard A Rowley, Victor Haughton, Aaron Field, et al. White matter tractography using diffusion tensor deflection. Human brain mapping, 18(4):306–321, 2003.
  • [21] Klaus Maier-Hein, Peter Neher, Jean-Christophe Houde, Marc-Alexandre Cote, Eleftherios Garyfallidis, Jidan Zhong, Maxime Chamberland, Fang-Cheng Yeh, Ying Chia Lin, Qing Ji, et al. Tractography-based connectomes are dominated by false-positive connections. biorxiv, page 084137, 2016.
  • [22] José V Manjón, Pierrick Coupé, Luis Concha, Antonio Buades, D Louis Collins, and Montserrat Robles. Diffusion weighted image denoising using overcomplete local pca. PloS one, 8(9):e73021, 2013.
  • [23] Peter F Neher, Marc-Alexandre Cote, Jean-Christophe Houde, Maxime Descoteaux, and Klaus H Maier-Hein. Fiber tractography using machine learning. Neuroimage, 158:417–429, 2017.
  • [24] Peter F Neher, Michael Götz, Tobias Norajitra, Christian Weber, and Klaus H Maier-Hein. A machine learning based approach to fiber tractography using classifier voting. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 45–52. Springer, 2015.
  • [25] Philippe Poulin, Marc-Alexandre Cote, Jean-Christophe Houde, Laurent Petit, Peter F Neher, Klaus H Maier-Hein, Hugo Larochelle, and Maxime Descoteaux. Learn to track: Deep learning for tractography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 540–547. Springer, 2017.
  • [26] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
  • [27] J-Donald Tournier, Fernando Calamante, and Alan Connelly. Robust determination of the fibre orientation distribution in diffusion mri: non-negativity constrained super-resolved spherical deconvolution. Neuroimage, 35(4):1459–1472, 2007.
  • [28] Viktor Wegmayr, Giacomo Giuliari, Stefan Holdener, and Joachim Buhmann. Data-driven fiber tractography with neural networks. In Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, pages 1030–1033. IEEE, 2018.