The Brain Tumor Sequence Registration Challenge: Establishing Correspondence between Pre-Operative and Follow-up MRI scans of diffuse glioma patients

Registration of longitudinal brain Magnetic Resonance Imaging (MRI) scans containing pathologies is challenging due to tissue appearance changes, and remains an unsolved problem. This paper describes the first Brain Tumor Sequence Registration (BraTS-Reg) challenge, which focuses on estimating correspondences between pre-operative and follow-up scans of the same patient diagnosed with a brain diffuse glioma. The BraTS-Reg challenge intends to establish a public benchmark environment for deformable registration algorithms. The associated dataset comprises de-identified multi-institutional multi-parametric MRI (mpMRI) data, curated for each scan's size and resolution according to a common anatomical template. Clinical experts have generated extensive annotations of landmark points within the scans, descriptive of distinct anatomical locations across the temporal domain. The training data, along with these ground-truth annotations, will be released to participants to design and develop their registration algorithms, whereas the annotations for the validation and the testing data will be withheld by the organizers and used to evaluate the containerized algorithms of the participants. Each submitted algorithm will be quantitatively evaluated using several metrics, such as the Median Absolute Error (MAE), robustness, and the Jacobian determinant of the displacement field.




I Introduction

Registration is a fundamental problem in medical image analysis [1, 2] that aims at finding spatial correspondences between two images and aligning them, e.g., for atlas-based segmentation. Particularly for patients with brain tumors, an accurate image registration between the pre-operative and the follow-up scans can help analyze the characteristics of the tissue resulting in tumor recurrence [3] and contribute to a better mechanistic understanding. Diffuse glioma, and specifically Isocitrate Dehydrogenase (IDH)-wildtype glioblastoma as per the World Health Organization classification of tumors of the central nervous system (CNS WHO grade 4), is the most common and aggressive malignant adult brain tumor, heavily and heterogeneously infiltrating the surrounding brain tissue. Finding imaging signatures in the pre-operative setting that can predict tumor infiltration and subsequent tumor recurrence is very important in the treatment and management of brain diffuse glioma patients [4], as it could influence treatment decisions at baseline patient evaluations [5].

Fig. 1: Visual example of a pre-operative baseline and its corresponding follow-up MRI scan. The contrast-enhanced T1-weighted (T1-CE) and the T2 Fluid Attenuated Inversion Recovery (FLAIR) scans of the baseline clearly show the tumor and the edema, respectively. Similarly, the T1-CE scan of the follow-up shows the resection cavity and the tumor recurrence, whereas the edema is visible in the FLAIR scan (figure taken from [6]).

The registration between pre-operative and follow-up Magnetic Resonance Imaging (MRI) scans of brain diffuse glioma patients is important yet challenging due to i) large deformations in the brain tissue, caused by the tumor’s mass effect, ii) missing correspondences between the apparent tumor in the pre-operative baseline scan and the resection cavity in the follow-up scans, and iii) inconsistent intensity profiles between the acquired scans, as MRI acquisition does not rely on standardized units, unlike, for example, Computed Tomography (CT), which is based on Hounsfield units. Fig. 1 shows a baseline pre-operative and a follow-up scan of the same brain glioblastoma (IDH-wildtype, CNS WHO grade 4) patient, with the resection cavity, tumor, and edema regions marked. The peritumoral tissue labeled as edema in the pre-operative scan is known to also include infiltrated tumor cells that may transform into a recurring tumor in the follow-up scan. Thus, corresponding brain tissue regions can have highly heterogeneous intensity profiles across the same brain. This raises the need for the development of accurate deformable registration algorithms to establish spatial correspondences between the pre-operative and follow-up MRI brain scans. Such registration would allow follow-up information to be mapped onto baseline scans and hence elucidate imaging signatures that can be used for detection in future unseen cases [7]. These algorithms must account for the large deformations present in the pre-operative scan, due to the tumor’s mass effect, as well as in the follow-up scan after the tumor resection and the related relaxation of these previously mass-effect-induced deformations.

Although this is a very appealing problem to solve, and an accurate solution could contribute to improving computational studies focusing on brain tumor analyses [4, 8, 9], there is only limited literature on this problem [6, 5, 10, 3, 11], with concerns about reproducibility on multi-institutional data. Towards this end, this paper describes the design of the Brain Tumor Sequence Registration (BraTS-Reg) challenge, a computational competition that utilizes ample retrospective multi-institutional mpMRI data from patient populations of distinct demographics, to establish a public benchmark environment for deformable registration algorithms.

The rest of the document provides a complete summary of all details related to our challenge, including but not limited to the data, evaluation approach, result submission format, and timeline for the first instance of the BraTS-Reg challenge, currently running in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2022 scientific conference (28-31 March 2022). Finally, for better interpretability and reproducibility of our challenge design and for improving its quality control, we have also taken into consideration all fields included in the structured challenge submission tool used for the Medical Image Computing and Computer Assisted Intervention (MICCAI) challenge proposals since 2018.

II Materials & Methods

II-A Multi-institutional Data Sources

The data used in the BraTS-Reg challenge comprise curated and pre-processed retrospective multi-institutional data, from routine clinical practice, of 250 diffuse glioma patients, from our affiliated institutions as well as from existing publicly available data collections hosted at The Cancer Imaging Archive (TCIA) [12]. Specifically, the public data collections we consider are the 1) TCGA-GBM [13], 2) TCGA-LGG [14], 3) IvyGAP [15, 16], and 4) CPTAC-GBM [17, 18]. For the private institutional datasets, the protocol for releasing the data was approved by the institutional review board of the contributing institutions. The complete dataset comprises 250 pairs of pre-operative baseline and follow-up brain MRI scans (each pair being of the same patient) of patients diagnosed with and treated for an adult-type diffuse glioma (WHO CNS grades 3-4). The exact multi-parametric MRI (mpMRI) sequences of each time-point are i) native T1-weighted (T1), ii) contrast-enhanced T1 (T1-CE), iii) T2-weighted (T2), and iv) T2 Fluid Attenuated Inversion Recovery (FLAIR).

Following close coordination with the clinical experts of the organizing committee (H.A., M.B., B.W., J.S., E.C., J.R., S.A., M.M.), the time-window between the two paired scans of each patient was selected such that 1) the scans of the two time-points include sufficient apparent tissue deformations, and 2) confounding effects of surgically induced contrast enhancement [19, 20] are avoided. Therefore, after thorough visual assessment, the identified follow-up scans had to be acquired at least 27 days after the initial surgery. A few baseline scans might show some signs of prior instrumentation. The time-window between all pairs of baseline and follow-up mpMRI scans included in the challenge dataset ranges from 27 days to 37 months.

II-B Pre-processing

The complete multi-institutional dataset consists of scans acquired under routine clinical conditions, and hence reflects very heterogeneous acquisition equipment and protocols that affect the image properties. To ensure consistency and keep the challenge focused on the registration problem, all included scans are pre-processed and provided to the participants in the Neuroimaging Informatics Technology Initiative (NIfTI) file format [21]. In summary, the pre-processing steps consist of: i) rigid registration of all mpMRI scans to the same anatomical template (i.e., the SRI24 atlas [22]), ii) interpolation to the same isotropic resolution, and iii) brain extraction (also known as skull-stripping) [23], using a variety of tools [24, 25].

Specifically, all mpMRI scans were first re-oriented into the same Left-Posterior-Superior (LPS) coordinate system, and then rigidly co-registered to the same anatomical space (i.e., the SRI24 atlas [22]). All rigid registrations were performed using “Greedy” [26], a CPU-based C++ implementation of the greedy diffeomorphic registration algorithm [27]. Greedy shares multiple concepts and implementation strategies with the SyN tool in the ANTs package, but focuses on computational efficiency. Greedy is integrated into the ITK-SNAP segmentation software [28, 29], as well as the Cancer Imaging Phenomics Toolkit (CaPTk) [24, 30, 31, 32]. After this rigid registration step, we performed brain extraction using the Brain Mask Generator (BrainMaGe) [33, 23], which is based on a deep learning segmentation architecture (namely, U-Net [34]) and uses a novel training strategy that introduces the brain’s shape as a prior, hence allowing it to be agnostic to the input MRI sequence. BrainMaGe was used to remove non-cerebral tissues such as the skull, scalp, and dura from all scans considered in this challenge.

II-C Landmark Annotation Protocol

Fig. 2: An example of a corresponding pair of landmark points in the baseline (left) and follow-up (right) scans, superimposed on the T1-CE scan for visualization.

The clinical experts of our team (H.A., M.B., B.W., J.S., E.C., J.R., S.A., M.M.) were provided with the segmentation of intra-tumor parts (i.e., the tumor core, which is the potentially resectable region [35]) and specific instructions of a common annotation protocol. Specifically, for each pre-operative scan, an expert places landmarks near the tumor (within 30mm) and landmarks far from the tumor (beyond 30mm). Subsequently, the expert identifies the corresponding landmarks in the post-operative scan. The landmarks are defined on anatomically distinct locations, such as blood vessel bifurcations, the anatomical shape of the cortex, and anatomical landmarks of the midline of the brain. Fig. 2 shows a sample landmark point marked in the baseline scan, along with the location of its corresponding landmark in the follow-up scan. The total number of landmarks varies between 6 and 50 per case. The annotators are given the flexibility to use their preferred tool (including MIPAV, CaPTk, and ITK-SNAP) for making the annotations, and we then convert all of them into a common .csv format.

In order to assess the inter-rater variability and offer a gold-standard comparison to the participants, a second expert has also defined landmarks on 20 randomly selected subjects from our testing cohort. The complete testing cohort will be double-reviewed by the time of the testing phase. This second expert considers the landmarks of the first expert in the pre-operative scans and then defines the corresponding landmarks only on the follow-up scans, providing the information essential to calculate the inter-rater variability.

II-D Data Partitioning in Distinct Independent Cohorts

Following the paradigm of algorithmic evaluation in the machine learning domain, the complete dataset is divided into training (70%), validation (10%), and testing/ranking (20%) cohorts, with the number of scan pairs in each of these sets being 140, 20, and 40, respectively. Cases are randomly and proportionally distributed across these cohorts, while accounting for 1) varying deformation levels, 2) the time-window between baseline and follow-up scans, and 3) the number of landmarks. Another subset of 50 cases is reserved to evaluate performance on out-of-sample test data; it will be used to check the generalizability of the algorithms and will not be considered for ranking the participants.
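A minimal sketch of the 70/10/20 split, using a plain random shuffle and deliberately ignoring the stratification by deformation level, time-window, and landmark count described above:

```python
import random

def partition(case_ids, seed=0):
    """Split case IDs into 70% training, 10% validation, and 20% testing cohorts."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)       # deterministic shuffle for reproducibility
    n_train = int(0.7 * len(ids))
    n_val = int(0.1 * len(ids))
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

# 200 ranked cases (the 250 total minus the 50 out-of-sample cases)
train, val, test = partition(range(200))
print(len(train), len(val), len(test))  # 140 20 40
```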

II-D1 Training Data

Each case of the training cohort consists of a baseline and a follow-up scan. The challenge participants are provided with the landmark point locations on both the baseline and the follow-up scans. The participants are expected to use these data for building and evaluating their image registration methods.

II-D2 Validation Data

The validation data will be provided to the participants as pairs of baseline and follow-up scans, with landmarks provided only for the follow-up scan. Participants are requested to submit:

  • coordinates of the warped landmark locations: the coordinates need to be submitted in a comma-separated value (.csv) file, with each row corresponding to a different landmark

  • the displacement field used to warp the landmarks: this needs to be submitted as an Insight Toolkit (ITK) displacement field image

  • the determinant of the Jacobian of the displacement field: this needs to be submitted as a NIfTI image with the same geometry as the input scans.
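The first deliverable above can be produced with Python's standard csv module; note that the row layout here is only an assumed illustration, not the challenge's official schema:

```python
import csv

# Hypothetical warped landmark coordinates (x, y, z). The official column layout
# is not specified here, so this one-row-per-landmark structure is an assumption.
warped = [(74.2, 101.5, 66.0), (40.0, 88.3, 52.7), (112.9, 60.1, 71.4)]

with open("warped_landmarks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for i, (x, y, z) in enumerate(warped, start=1):
        writer.writerow([i, x, y, z])  # one row per landmark
```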

We intend to use the University of Pennsylvania’s Image Processing Portal (IPP) for the evaluation of the validation data, as it has already been used in the past as the evaluation platform of the Brain Tumor Segmentation (BraTS) challenge during 2017-2021 [36, 37, 35, 38, 39, 40].

II-D3 Test Data

The test data are not intended to be publicly released at any point. The participants are asked to containerize their algorithms (e.g., as Docker/Singularity containers) and submit these containers to the organizers for evaluation and ranking. The challenge organizers will run all the methods on the test data using their own existing computational infrastructure. This will enable confirmation of reproducibility and maximize the use of these algorithms towards offering publicly available solutions for this problem. Specific instructions for creating Docker/Singularity containers with a uniform API for this challenge will be provided, similarly to what was previously done for other challenges.

III Quantitative Performance Evaluation

The evaluation of the registration between the two scans will be based on landmarks manually seeded (ground truth) in both the pre-operative and the follow-up scans by expert clinical neuroradiologists. The performance of the various registration methods will be quantitatively evaluated, on the basis of the landmarks provided by the challenge organizers, in terms of the Median Absolute Error (MAE), the robustness, and the smoothness of the displacement field.

III-A Median Absolute Error (MAE)

For each pair of baseline ($B$) and follow-up ($F$) scans, we have determined the coordinates of corresponding landmarks $x_i^B$ and $x_i^F$, where $i \in L$ and $L$ is the set of landmarks identified in both $B$ and $F$. Given $x_i^F$, the participants have to estimate the coordinates $\hat{x}_i^B$ of the landmark points in the scan $B$ corresponding to the provided coordinates $x_i^F$ in the scan $F$. The Median Absolute Error between the submitted coordinates $\hat{x}_i^B$ and the manually defined (ground truth) coordinates $x_i^B$ will then be determined as:

$$\mathrm{MAE} = \underset{i \in L}{\mathrm{median}}\left( \lVert \hat{x}_i^B - x_i^B \rVert \right)$$
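As a sketch of this metric (assuming the per-landmark error is the Euclidean distance between the submitted and ground-truth coordinates):

```python
import numpy as np

def median_absolute_error(pred, gt):
    """Median over landmarks of the Euclidean distance between the submitted
    coordinates and the ground-truth coordinates."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.median(np.linalg.norm(pred - gt, axis=1)))

# Toy example: three landmarks with per-landmark errors of 1, 2, and 3 mm.
gt   = [[10.0, 10.0, 10.0], [20.0, 20.0, 20.0], [30.0, 30.0, 30.0]]
pred = [[11.0, 10.0, 10.0], [20.0, 22.0, 20.0], [30.0, 30.0, 33.0]]
print(median_absolute_error(pred, gt))  # 2.0
```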
III-B Robustness

All the participating algorithms will also be evaluated according to the metric of robustness $R$. Similar to a success-rate measure, we will use $R$ as a relative value describing how many landmarks from $L$ improved their error after image registration, compared to the initial error (prior to any registration). Let $K \subseteq L$ be the set of successfully registered landmarks, i.e., those for which the registration error decreases. We then define the robustness for a pair of scans $(B, F)$ as the relative number of successfully registered landmarks,

$$R(B, F) = \frac{|K|}{|L|}$$

Then the average robustness over all scan pairs in a particular cohort $C$ is:

$$\bar{R} = \frac{1}{|C|} \sum_{(B, F) \in C} R(B, F)$$

$\bar{R}$ is therefore the relative value (in the range of $0$ to $1$) of how many landmarks across all scan pairs have an improved error after registration, compared to the initial error. When $\bar{R}$ is equal to $1$, the distances of all the landmarks between the target and warped images are reduced after registration, whereas $\bar{R} = 0$ means that none of the distances are reduced.
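A minimal sketch of this metric, assuming per-landmark errors before and after registration are available for each scan pair:

```python
import numpy as np

def robustness(errors_before, errors_after):
    """Robustness for one scan pair: the fraction |K| / |L| of landmarks whose
    registration error decreased relative to the initial (pre-registration) error."""
    before = np.asarray(errors_before, dtype=float)
    after = np.asarray(errors_after, dtype=float)
    return float(np.mean(after < before))

def average_robustness(per_pair_values):
    """Average robustness over all scan pairs in a cohort."""
    return float(np.mean(per_pair_values))

r1 = robustness([5.0, 4.0, 6.0], [2.0, 4.5, 3.0])  # 2 of 3 landmarks improved
r2 = robustness([3.0, 8.0], [1.0, 2.0])            # both landmarks improved
print(round(average_robustness([r1, r2]), 4))      # (2/3 + 1) / 2 = 0.8333
```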

III-C Smoothness of the displacement field

We will also evaluate the regularity of the displacement field. To do this, we will calculate the determinant of the Jacobian of the displacement field and evaluate the number and percentage of voxels with a non-positive Jacobian determinant for each method. These voxels correspond to locations where the deformation is not diffeomorphic. Additionally, to evaluate the smoothness of the displacement field, we will calculate the minimum and maximum of the Jacobian determinant (we will use a high percentile of the Jacobian determinant instead of the maximum, since the maximum is prone to noise). These quantities will be computed within different regions of interest (e.g., within the tumor core, within 30mm of the tumor core boundary, beyond 30mm of the tumor core boundary, etc.). This metric will not be used to determine the ranking of the participating algorithms during the challenge, but in the event of a tie between teams, the one with the smoother displacement field will be given the higher rank.
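The folding check described above can be sketched with finite differences; this is a simplification that assumes unit voxel spacing and ignores image orientation:

```python
import numpy as np

def jacobian_determinant(disp):
    """Per-voxel determinant of the Jacobian of the deformation (identity +
    displacement) for a displacement field of shape (X, Y, Z, 3)."""
    J = np.empty(disp.shape[:3] + (3, 3))
    for c in range(3):                     # displacement component u_c
        grads = np.gradient(disp[..., c])  # [du_c/dx, du_c/dy, du_c/dz]
        for a in range(3):
            J[..., c, a] = grads[a]
    J += np.eye(3)                         # deformation = identity + displacement
    return np.linalg.det(J)

# Zero displacement: the determinant is 1 everywhere, i.e., no folded voxels.
det = jacobian_determinant(np.zeros((4, 5, 6, 3)))
print(int((det <= 0).sum()))  # 0
```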

III-D Longitudinal consistency

We have identified multiple follow-up scans for a subset of our subjects. For these, we plan to track landmarks across all appropriate follow-up scans and use them to perform additional meta-analysis after the conclusion of the challenge. Specifically, we will use the additional scans to evaluate the consistency of the MAE and robustness metrics over time. This longitudinal consistency will not be incorporated in the ranking of the participating algorithms during the challenge.

III-E Baseline Performance

The baseline performance for registration between the given pre-operative baseline and follow-up scans will be provided using an affine registration. This decision is driven by the comparative analysis of various methods shown in [10], where based on the mean landmark error, every deformable registration method performs better than affine registration, but worse than human experts. So, only participating methods that outperform affine registration results would be considered for evaluation in the BraTS-Reg challenge.

IV Participation Policies & Awards

IV-A Participant Registration

Prospective participants are asked to first register online for this challenge through its website.

IV-B Release of Participating Methods

Participants are asked to compete in this challenge only with fully-automatic algorithms. User interaction (e.g., semi-automatic methods) will not be allowed, as the participants will need to submit their method in a containerized form (e.g., Singularity) to be evaluated by the organizers on the hidden testing data. This way we will ensure that these algorithms can also be used in the future by anyone following the specific instructions on how to make use of the participants’ containers. Notably, challenge participants will be asked to provide their written permission to make their containers publicly available through the challenge’s website.

IV-C Awards

Participants from the immediate groups of the organizers may participate in the challenge but will not be eligible for any awards. Also, participants who will not provide their approval on publicly releasing their containerized algorithm will not be eligible for awards.

IV-D Use of External Data

Participants are allowed to use additional public and/or private data to complement the provided dataset, but only for scientific publication purposes, and this must be explicitly mentioned in their submitted manuscripts. Importantly, participants who decide to proceed with such an extension of the challenge dataset must also report results using only the challenge dataset and discuss potential differences in their results in their manuscript. This is due to our intention to identify whether additional data could contribute to an improved solution, while providing a fair comparison among the participating methods.


  • [1] A. Sotiras, C. Davatzikos, and N. Paragios, “Deformable medical image registration: A survey,” IEEE transactions on medical imaging, vol. 32, no. 7, pp. 1153–1190, 2013.
  • [2] Y. Ou, H. Akbari, M. Bilello, X. Da, and C. Davatzikos, “Comparative evaluation of registration algorithms in different brain databases with varying difficulty: results and insights,” IEEE transactions on medical imaging, vol. 33, no. 10, pp. 2039–2065, 2014.
  • [3] X. Han, Z. Shen, Z. Xu, S. Bakas, H. Akbari, M. Bilello, C. Davatzikos, and M. Niethammer, “A deep network for joint registration and reconstruction of images with pathologies,” in International Workshop on Machine Learning in Medical Imaging.   Springer, 2020, pp. 342–352.
  • [4] H. Akbari, L. Macyszyn, X. Da, M. Bilello, R. L. Wolf, M. Martinez-Lage, G. Biros, M. Alonso-Basanta, D. M. O’Rourke, and C. Davatzikos, “Imaging surrogates of infiltration obtained via multiparametric imaging pattern analysis predict subsequent location of recurrence of glioblastoma,” Neurosurgery, vol. 78, no. 4, pp. 572–580, 2016.
  • [5] D. Kwon, K. Zeng, M. Bilello, and C. Davatzikos, “Estimating patient specific templates for pre-operative and follow-up brain tumor registration,” in International Conference on Medical Image Computing and Computer-Assisted Intervention.   Springer, 2015, pp. 222–229.
  • [6] D. Kwon, M. Niethammer, H. Akbari, M. Bilello, C. Davatzikos, and K. M. Pohl, “Portr: Pre-operative and post-recurrence brain tumor registration,” IEEE transactions on medical imaging, vol. 33, no. 3, pp. 651–667, 2013.
  • [7] H. Akbari, S. Rathore, S. Bakas, M. P. Nasrallah, G. Shukla, E. Mamourian, M. Rozycki, S. J. Bagley, J. D. Rudie, A. E. Flanders et al., “Histopathology-validated machine learning radiographic biomarker for noninvasive discrimination between true progression and pseudo-progression in glioblastoma,” Cancer, vol. 126, no. 11, pp. 2625–2636, 2020.
  • [8] S. Bakas, K. Zeng, A. Sotiras, S. Rathore, H. Akbari, B. Gaonkar, M. Rozycki, S. Pati, and C. Davatzikos, “Glistrboost: combining multimodal mri segmentation, registration, and biophysical tumor growth modeling with gradient boosting machines for glioma segmentation,” in BrainLes 2015.   Springer, 2015, pp. 144–155.
  • [9] S. Bakas, G. Shukla, H. Akbari, G. Erus, A. Sotiras, S. Rathore, C. Sako, S. Min Ha, M. Rozycki, R. T. Shinohara, M. Bilello, and C. Davatzikos, “Overall survival prediction in glioblastoma patients using structural magnetic resonance imaging (mri): advanced radiomic features may compensate for lack of advanced mri modalities,” Journal of medical imaging (Bellingham, Wash.), vol. 7, no. 3, p. 031505, May 2020.
  • [10] X. Han, S. Bakas, R. Kwitt, S. Aylward, H. Akbari, M. Bilello, C. Davatzikos, and M. Niethammer, “Patient-specific registration of pre-operative and post-recurrence brain tumor mri scans,” in International MICCAI Brainlesion Workshop.   Springer, 2018, pp. 105–114.
  • [11] D. Waldmannstetter, F. Navarro, B. Wiestler, J. S. Kirschke, A. Sekuboyina, E. Molero, and B. H. Menze, “Reinforced redetection of landmark in pre-and post-operative brain scan using anatomical guidance for image alignment,” in International Workshop on Biomedical Image Registration.   Springer, 2020, pp. 81–90.
  • [12] K. Clark, B. Vendt, K. Smith, J. Freymann, J. Kirby, P. Koppel, S. Moore, S. Phillips, D. Maffitt, M. Pringle et al., “The cancer imaging archive (tcia): maintaining and operating a public information repository,” Journal of digital imaging, vol. 26, no. 6, pp. 1045–1057, 2013.
  • [13] L. Scarpace, L. Mikkelsen, T. Cha, S. Rao, S. Tekchandani, S. Gutman, and D. Pierce, “Radiology data from the cancer genome atlas glioblastoma multiforme [tcga-gbm] collection,” The Cancer Imaging Archive, vol. 11, no. 4, p. 1, 2016.
  • [14] N. Pedano, A. E. Flanders, L. Scarpace, T. Mikkelsen, J. M. Eschbacher, B. Hermes, and Q. Ostrom, “Radiology data from the cancer genome atlas low grade glioma [tcga-lgg] collection,” Cancer Imaging Arch, vol. 2, 2016.
  • [15] R. B. Puchalski, N. Shah, J. Miller, R. Dalley, S. R. Nomura, J.-G. Yoon, K. A. Smith, M. Lankerovich, D. Bertagnolli, K. Bickley et al., “An anatomic transcriptional atlas of human glioblastoma,” Science, vol. 360, no. 6389, pp. 660–663, 2018.
  • [16] N. Shah, X. Feng, M. Lankerovich, R. B. Puchalski, and B. Keogh, “Data from ivy gap,” The Cancer Imaging Archive, vol. 10, p. K9, 2016.
  • [17] TCIA, “Radiology data from the clinical proteomic tumor analysis consortium glioblastoma multiforme [cptac-gbm] collection,” The Cancer Imaging Archive, vol. 10, p. K9, 2016.
  • [18] L.-B. Wang, A. Karpova, M. A. Gritsenko, J. E. Kyle, S. Cao, Y. Li, D. Rykunov, A. Colaprico, J. H. Rothstein, R. Hong et al., “Proteogenomic and metabolomic characterization of human glioblastoma,” Cancer cell, vol. 39, no. 4, pp. 509–528, 2021.
  • [19] F. K. Albert, M. Forsting, K. Sartor, H.-P. Adams, and S. Kunze, “Early postoperative magnetic resonance imaging after resection of malignant glioma: objective evaluation of residual tumor and its influence on regrowth and prognosis,” Neurosurgery, vol. 34, no. 1, pp. 45–61, 1994.
  • [20] P. Y. Wen, D. R. Macdonald, D. A. Reardon, T. F. Cloughesy, A. G. Sorensen, E. Galanis, J. DeGroot, W. Wick, M. R. Gilbert, A. B. Lassman et al., “Updated response assessment criteria for high-grade gliomas: response assessment in neuro-oncology working group,” Journal of clinical oncology, vol. 28, no. 11, pp. 1963–1972, 2010.
  • [21] R. Cox, J. Ashburner, H. Breman, K. Fissell, C. Haselgrove, C. Holmes, J. Lancaster, D. Rex, S. Smith, J. Woodward et al., “A (sort of) new image data format standard: Nifti-1: We 150,” Neuroimage, vol. 22, 2004.
  • [22] T. Rohlfing, N. M. Zahr, E. V. Sullivan, and A. Pfefferbaum, “The sri24 multichannel atlas of normal adult human brain structure,” Human brain mapping, vol. 31, no. 5, pp. 798–819, 2010.
  • [23] S. Thakur, J. Doshi, S. Pati, S. Rathore, C. Sako, M. Bilello, S. M. Ha, G. Shukla, A. Flanders, A. Kotrotsou et al., “Brain extraction on mri scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training,” NeuroImage, vol. 220, p. 117081, 2020.
  • [24] C. Davatzikos, S. Rathore, S. Bakas, S. Pati, M. Bergman, R. Kalarot, P. Sridharan, A. Gastounioti, N. Jahani, E. Cohen et al., “Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome,” Journal of medical imaging, vol. 5, no. 1, p. 011018, 2018.
  • [25] F. Kofler, C. Berger, D. Waldmannstetter, J. Lipkova, I. Ezhov, G. Tetteh, J. Kirschke, C. Zimmer, B. Wiestler, and B. H. Menze, “Brats toolkit: translating brats brain tumor segmentation algorithms into clinical and scientific practice,” Frontiers in neuroscience, vol. 14, p. 125, 2020.
  • [26] P. A. Yushkevich, J. Pluta, H. Wang, L. E. Wisse, S. Das, and D. Wolk, “Fast automatic segmentation of hippocampal subfields and medial temporal lobe subregions in 3 tesla and 7 tesla t2-weighted mri,” Alzheimer’s & Dementia, vol. 7, no. 12, pp. P126–P127, 2016.
  • [27] S. Joshi, B. Davis, M. Jomier, and G. Gerig, “Unbiased diffeomorphic atlas construction for computational anatomy,” NeuroImage, vol. 23, pp. S151–S160, 2004.
  • [28] P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig, “User-guided 3d active contour segmentation of anatomical structures: significantly improved efficiency and reliability,” Neuroimage, vol. 31, no. 3, pp. 1116–1128, 2006.
  • [29] P. A. Yushkevich, A. Pashchinskiy, I. Oguz, S. Mohan, J. E. Schmitt, J. M. Stein, D. Zukić, J. Vicory, M. McCormick, N. Yushkevich et al., “User-guided segmentation of multi-modality medical imaging datasets with itk-snap,” Neuroinformatics, vol. 17, no. 1, pp. 83–102, 2019.
  • [30] A. Fathi Kazerooni, H. Akbari, G. Shukla, C. Badve, J. D. Rudie, C. Sako, S. Rathore, S. Bakas, S. Pati, A. Singh et al., “Cancer imaging phenomics via captk: multi-institutional prediction of progression-free survival and pattern of recurrence in glioblastoma,” JCO clinical cancer informatics, vol. 4, pp. 234–244, 2020.
  • [31] S. Rathore, S. Bakas, S. Pati, H. Akbari, R. Kalarot, P. Sridharan, M. Rozycki, M. Bergman, B. Tunc, R. Verma et al., “Brain cancer imaging phenomics toolkit (brain-captk): an interactive platform for quantitative analysis of glioblastoma,” in International MICCAI Brainlesion Workshop.   Springer, 2017, pp. 133–145.
  • [32] S. Pati, A. Singh, S. Rathore, A. Gastounioti, M. Bergman, P. Ngo, S. M. Ha, D. Bounias, J. Minock, G. Murphy et al., “The cancer imaging phenomics toolkit (captk): Technical overview,” in International MICCAI Brainlesion Workshop.   Springer, 2019, pp. 380–394.
  • [33] S. P. Thakur, J. Doshi, S. Pati, S. M. Ha, C. Sako, S. Talbar, U. Kulkarni, C. Davatzikos, G. Erus, and S. Bakas, “Skull-stripping of glioblastoma mri scans using 3d deep learning,” in International MICCAI Brainlesion Workshop.   Springer, 2019, pp. 57–68.
  • [34] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention.   Springer, 2015, pp. 234–241.
  • [35] S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, R. T. Shinohara, C. Berger, S. M. Ha, M. Rozycki et al., “Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge,” arXiv preprint arXiv:1811.02629, 2018.
  • [36] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest et al., “The multimodal brain tumor image segmentation benchmark (brats),” IEEE transactions on medical imaging, vol. 34, no. 10, pp. 1993–2024, 2014.
  • [37] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. S. Kirby, J. B. Freymann, K. Farahani, and C. Davatzikos, “Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features,” Scientific data, vol. 4, no. 1, pp. 1–13, 2017.
  • [38] U. Baid, S. Ghodasara, S. Mohan, M. Bilello, E. Calabrese, E. Colak, K. Farahani, J. Kalpathy-Cramer, F. C. Kitamura, S. Pati et al., “The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification,” arXiv preprint arXiv:2107.02314, 2021.
  • [39] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, J. Freymann, K. Farahani, and C. Davatzikos, “Segmentation labels and radiomic features for the pre-operative scans of the tcga-gbm collection. the cancer imaging archive,” The cancer imaging archive, vol. 286, 2017.
  • [40] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, J. Freymann, K. Farahani, and C. Davatzikos, “Segmentation labels and radiomic features for the pre-operative scans of the tcga-lgg collection,” The cancer imaging archive, vol. 286, 2017.