IntrA: 3D Intracranial Aneurysm Dataset for Deep Learning

03/02/2020, by Xi Yang, et al.

Medicine is an important application area for deep learning models. Research in this field is a combination of medical expertise and data science knowledge. In this paper, instead of 2D medical images, we introduce IntrA, an open-access 3D intracranial aneurysm dataset that enables the application of point-based and mesh-based classification and segmentation models. Our dataset can be used to diagnose intracranial aneurysms and to extract the aneurysm neck for clipping operations in medicine, and it also supports other areas of deep learning, such as normal estimation and surface reconstruction. We provide a large-scale benchmark of classification and part segmentation by testing state-of-the-art networks, discuss the performance of each method, and demonstrate the challenges of our dataset. The published dataset can be accessed here: https://github.com/intra3d2019/IntrA.


1 Introduction

Intracranial aneurysm is a life-threatening disease, and its surgical treatments are complicated. Timely diagnosis and preoperative examination are necessary to formulate treatment strategies and surgical approaches. Currently, the primary treatment is clipping the neck of an aneurysm to prevent it from rupturing, as shown in Figure 2. Decisions about the position and posture of the clip still depend heavily on "clinical judgment" drawn from physicians' experience. In surgery support systems for intracranial aneurysms that simulate real-life neurosurgery and teach neurosurgical residents [1], the accuracy of aneurysm segmentation is the most crucial part because it is used to extract the neck of an aneurysm, that is, the boundary line of the aneurysm.

Based on 3D surface models, the diagnosis of an aneurysm can be much more accurate than with 2D images. The edge of the aneurysm is much clearer to doctors, and the complicated and time-consuming annotation of stacks of 2D images is avoided. There are many reports of automatic diagnosis and segmentation of aneurysms based on medical images, including intracranial aneurysm (IA) and abdominal aortic aneurysm (AAA) [23, 29, 37]; however, few reports have been published based on 3D models. This is not only because data collection in medicine is inefficient, subjective, and hard to share, but also because such work requires joint knowledge of computer science and medical science.

Figure 2: The treatment of intracranial aneurysm by clipping.

Objects with arbitrary shapes are ubiquitous, and a non-Euclidean manifold reveals more critical information than Euclidean geometry, such as the complex topologies of brain tissues in neuroscience [4]. However, studying 2D magnetic resonance angiography (MRA) images confines the selection to neural networks based on pixels and voxels, which omits the information carried by manifolds. Therefore, we propose an open-access 3D intracranial aneurysm dataset to address the above issues and to promote the application of deep learning models in medical science. Point- and mesh-based models exhibit excellent generalization abilities for 3D deep learning tasks in our experiments.

Our main contributions are:

  1. We propose an open dataset that consists of 3D aneurysm segments with segmentation annotations, automatically generated blood vessel segments, and complete models of the scanned blood vessels of the brain. All annotated aneurysm segments are processed as manifold meshes.

  2. We develop tools to generate 3D blood vessel segments from complete models and to annotate a 3D aneurysm model interactively. The data processing pipeline is also introduced.

  3. We evaluate the performance of various state-of-the-art 3D deep learning methods on our dataset to provide benchmarks for the classification (diagnosis) and segmentation of intracranial aneurysms. Furthermore, we analyze the characteristics of each method based on the results obtained.

2 Related Work

We review free, open-access online datasets, both medical and non-medical, as well as 3D deep learning algorithms.

2.1 Datasets

Medical datasets. Although data collection is challenging, data sharing is common in medical research: large-scale samples are required to surmount the complexity and heterogeneity of many diseases, which is unattainable for a single research institute. Several medical datasets have been published online to support collaboration on treatment solutions, for example, the integrated dataset MedPix [31], the bone X-ray dataset MURA (musculoskeletal radiographs) [35], the brain neuroimaging dataset Open Access Series of Imaging Studies (OASIS) [13], and the Medical Segmentation Decathlon [38]. Moreover, Harvard GSP is an MRI dataset for addressing complex questions about the relationship between the brain and behavior [5], and the SCR database was constructed for the automatic segmentation of anatomical structures in chest radiographs [41].

Data collection is also critical for single categories of disease. The Lung Image Database Consortium image collection (LIDC-IDRI) consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions [28]. The Indian Diabetic Retinopathy Image Dataset (IDRiD) aims to evaluate algorithms for the automated detection and grading of diabetic retinopathy and diabetic macular edema using retinal fundus images [19]. EyePACS is a retinal image database drawn from diverse populations with various degrees of diabetic retinopathy [11]. Additionally, the Autism Brain Imaging Data Exchange (ABIDE) targets autism spectrum disorder (ASD) [2]. To date, almost all of these datasets consist of 2D medical images.

Figure 3: The three types of data in our dataset: complete models (103), generated segments (1,909), and annotated segments (116). The automatically generated blood vessel segments may or may not contain aneurysms, and an aneurysm is often only partially included. Both aneurysm segments and healthy vessel segments have complex shapes with different numbers of branches. For example, in the annotated segments, the aneurysm is huge in the second segment of the first row but tiny in the first segment of the third row; the second segment of the third row even has two aneurysms.

Non-medical 3D datasets. In recent years, 3D model datasets have been introduced in computer vision and computer graphics research alongside the development of deep learning algorithms, for instance, the CAD model datasets ModelNet [47], ShapeNet [6], the COSEG dataset [44], and the ABC dataset [22], and the 3D printing model datasets Thingi10K [50] and the human model dataset [30]. Various 3D deep learning tasks are widely carried out on these datasets.

2.2 Methods

A 3D model has four kinds of representations: projected views, voxels, point clouds, and meshes. Methods based on projected views or voxels can be implemented conveniently with structures similar to 2D convolutional neural networks (CNNs). Point clouds and meshes represent a 3D shape more accurately; however, they require new convolution structures.

Projected View. Su et al. proposed a multi-view CNN to recognize 3D shapes [40]. Kalogerakis et al. combined image-based fully convolutional networks (FCNs) and surface-based conditional random fields (CRFs) to yield coherent segmentation of 3D shapes [20].

Voxel. Çiçek et al. introduced 3D U-Net for volumetric segmentation, which learns from sparsely annotated volumetric images [7]. Wang et al. presented O-CNN, an octree-based convolutional neural network, for 3D shape analysis [42]. Graham et al. designed new sparse convolutional operations to process spatially sparse 3D data, called submanifold sparse convolutional networks (SSCNs) [14]. Wang and Lu proposed VoxSegNet to extract discriminative features encoding detailed information under limited resolution [45]. Le and Duan proposed PointGrid, a 3D convolutional network that integrates point and grid representations [25].

Points. Qi et al. proposed PointNet, making it possible to feed 3D points directly into a neural network [33]; they then introduced the hierarchical network PointNet++ to learn local features [34]. Based on these pioneering works, many new convolution operations have been proposed. Wu et al. treated convolution kernels as nonlinear functions of the local coordinates of 3D points, composed of weight and density functions, named PointConv [46]. Li et al. presented PointCNN, which can leverage spatially local correlation in data represented densely in grids for feature learning [27]. Xu et al. designed the filter as a product of a simple step function that captures local geodesic information and a Taylor polynomial, named SpiderCNN [48]. Moreover, SO-Net models the spatial distribution of a point cloud by building a Self-Organizing Map (SOM) [26]. Su et al. presented SPLATNet for processing point clouds, which operates directly on a collection of points represented as a sparse set of samples in a high-dimensional lattice [39]. Zhao et al. proposed 3D point-capsule networks [49]. Wang et al. proposed the dynamic graph CNN (DGCNN) [43].

Mesh. Maron et al. applied a convolution operator to sphere-type shapes using a global seamless parameterization to a planar flat-torus [30]. Hanocka et al. utilized the unique properties of the triangular mesh for direct analysis of 3D shapes in MeshCNN [15]. Feng et al. regarded polygon faces as the unit of analysis and split their features into spatial and structural features in MeshNet [12].

3 Our Dataset

3.1 Data

Our dataset includes complete models with aneurysms, generated vessel segments, and annotated aneurysm segments, as shown in Figure 3. The 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients. We do not publish the raw 2D MRA images for reasons of medical ethics. Blood vessel segments are generated automatically from the complete models, including both healthy vessel segments and aneurysm segments for diagnosis; an aneurysm model can be divided into segments that can be used to verify automatic diagnosis. Aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the needs of preoperative examination. The details are described in the next section. Furthermore, a geodesic distance matrix is computed and included for each annotated 3D segment, because geodesic distance describes the shape of vessels more accurately than Euclidean distance. The matrix is saved as an n × n matrix for a model with n points, shortening training computation time.

Our data have several characteristics common to medical data: 1) Small but diverse. The amount of data is not large compared to other released CAD model datasets; however, it includes diverse shapes and scales of intracranial aneurysms as well as different numbers of vessel branches. 2) Unbalanced. The numbers of points in the aneurysm and healthy vessel parts are imbalanced, depending on the shape of the aneurysm, and the numbers of 3D aneurysm segments and healthy vessel segments are unequal because aneurysms occupy only a small portion of the brain vasculature.

Challenge. Our dataset was collected by experts rather than laypeople. Intact 3D models had to be restored manually from the reconstructed data, as shown in Figure 4. Besides, the annotation of aneurysm necks requires years of clinical experience in complex situations. Also, the reconstructed 3D models are not manifold, so we clean the surface meshes to create an ideal dataset for algorithm research.

Figure 4: An intracranial aneurysm is restored from incomplete scanned data by neurosurgeons.

Statistics and analysis. The statistics of our dataset are shown in Figure 5. We count the number of points in each segment, which to some extent expresses differences in shape, since the points are mostly uniformly distributed on the surface; the point counts of the 1,909 generated segments fall within an approximate range determined by the geodesic distance threshold used for extraction. Our dataset includes all types of intracranial aneurysms in medicine: bifurcation type, trunk type, blister type, and combined type. The shapes of the aneurysms are diverse in both geometry and topology; six selected aneurysm segments are shown on the right of Figure 3. Besides, we express the size of an aneurysm as the ratio between the diagonal distance of the whole segment and that of the aneurysm part, rather than the real size of the parent vessel or the aneurysm.
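A minimal sketch of this size measure, assuming an (n, 3) array of point coordinates and a boolean mask marking the aneurysm part (the names and the exact reading of "diagonal distance" as a bounding-box diagonal are our assumptions):

```python
import numpy as np

def bbox_diagonal(points: np.ndarray) -> float:
    """Length of the diagonal of the axis-aligned bounding box."""
    return float(np.linalg.norm(points.max(axis=0) - points.min(axis=0)))

def aneurysm_size_ratio(points: np.ndarray, aneurysm_mask: np.ndarray) -> float:
    """Ratio of the aneurysm part's diagonal to the whole segment's diagonal
    (hypothetical reading of the paper's 'diagonal distance' measure)."""
    return bbox_diagonal(points[aneurysm_mask]) / bbox_diagonal(points)

# Usage: points is an (n, 3) float array; aneurysm_mask is an (n,) bool array.
```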

Figure 5: Statistics of annotated models.

3.2 Tools

We developed annotation tools and segment generation tools to assist in constructing our dataset.

Annotation. Users draw an intended boundary by clicking several points; the connection between two consecutive points is determined by the shortest path on the mesh. After users create a closed boundary line, they annotate the aneurysm part by selecting a point inside it. The enclosed area is computed automatically by propagation from that point to the boundary line along the surface mesh, as sketched below. With support for multiple boundary lines, the annotation tool can also be used to separate both the aneurysm part and the aneurysm segment manually, as shown in Figure 6.
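A minimal sketch of the propagation step, assuming a vertex adjacency list derived from the mesh and the user-drawn boundary given as a vertex set (the published tool's internals may differ):

```python
from collections import deque

def grow_region(adjacency, boundary, seed):
    """Flood-fill over mesh vertices from `seed`, never expanding past
    vertices on the closed boundary line.

    adjacency: dict mapping vertex id -> iterable of neighbor vertex ids
    boundary:  set of vertex ids on the user-drawn boundary line
    seed:      vertex id clicked inside the aneurysm
    """
    region = {seed}
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        for n in adjacency[v]:
            if n not in region and n not in boundary:
                region.add(n)
                queue.append(n)
    return region | boundary  # boundary vertices close off the annotated part
```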

Figure 6: The top figure shows the UI of our annotation tool. The points the user clicked are shown in blue and determine the boundary of the aneurysm; a point (yellow) inside the region is then selected to annotate the aneurysm part (green). The two bottom figures demonstrate the results with multiple boundary lines.

Vessel segment generation. Vessel segments are generated by randomly picking points on the complete models and selecting the neighboring area whose geodesic distance along the vessel is smaller than a threshold; see the sketch below. We also manually select points to increase the number of segments containing an aneurysm. To construct an ideal dataset, the few segments that are ambiguous or include only trivial components are removed using our visualization tool.
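A sketch of this extraction under the assumption that geodesic distance along the vessel is approximated by Dijkstra shortest paths over the mesh edge graph:

```python
import heapq
import numpy as np

def geodesic_ball(vertices, adjacency, seed, radius):
    """Return the vertex ids whose graph-geodesic distance from `seed`
    is below `radius` (Dijkstra over mesh edges).

    vertices:  (n, 3) array of vertex positions
    adjacency: dict mapping vertex id -> iterable of neighbor vertex ids
    """
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    inside = set()
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, np.inf):
            continue  # stale heap entry
        inside.add(v)
        for n in adjacency[v]:
            nd = d + float(np.linalg.norm(vertices[v] - vertices[n]))
            if nd < radius and nd < dist.get(n, np.inf):
                dist[n] = nd
                heapq.heappush(heap, (nd, n))
    return inside
```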

3.3 Processing pipeline

3D reconstruction and restoration. Our data are acquired by time-of-flight magnetic resonance angiography (TOF-MRA) of the human brain. Using the single-threshold method [17], each complete 3D model is reconstructed from the sliced 2D images. The aneurysm segments are separated and restored interactively by two neurosurgeons using the multi-threshold method [21], then processed by Gaussian smoothing. This image processing was conducted in the life-sciences software Amira 2019 (Thermo Fisher Scientific, MA, USA) and took about 50 workdays in total.

Generation and annotation. Using our generation and annotation tools, blood vessel segments were obtained and classified, and the segmentation annotation of the aneurysm segments was completed by a neurosurgeon in 8 hours.

Data cleaning and re-meshing. The reconstructed 3D models are noisy and not manifold. Huang et al. [18] described an algorithm to generate a manifold surface for 3D models; however, this method does not remove isolated components and significantly changes the shape of the model. Therefore, we use the filter tools in MeshLab to remove duplicate faces and vertices and to separate pieces in the data manually, which ensures that the models have no non-manifold edges. MeshLab also generates the normal vector at each point. The geodesic matrix is computed by solving the heat equation on the surface using the fast approximate geodesic distance method of [8].
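One way to reproduce such a matrix is with the potpourri3d library, which implements the heat method of [8]; a sketch (file names hypothetical):

```python
import numpy as np
import potpourri3d as pp3d  # implements the heat method of Crane et al.

V, F = pp3d.read_mesh("segment.obj")          # (n, 3) vertices, (m, 3) faces
solver = pp3d.MeshHeatMethodDistanceSolver(V, F)

# Build the n x n geodesic distance matrix, one source vertex at a time.
# O(n) heat-method solves; fine for segments with a few thousand points.
n = V.shape[0]
geo = np.empty((n, n), dtype=np.float32)
for i in range(n):
    geo[i] = solver.compute_distance(i)

np.save("segment_geodesic.npy", geo)          # cached to shorten training
```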

3.4 Supported Studies

Diagnosis (classification). The diagnosis of an aneurysm can be considered a classification problem between aneurysm and healthy vessel segments: from a patient's 3D brain model, vessel segments are generated by our tools, and the diagnosis is completed by classifying which segments contain aneurysms.

Part segmentation. Our annotated 3D models provide a precise boundary for each aneurysm to support segmentation research. The data are easy to convert to any 3D representation used by the various deep learning algorithms.

Rule-based algorithms. Algorithms based on hand-crafted rules have also been proposed for both intracranial and abdominal aortic aneurysms [10, 24, 36]. The accuracy and generalization of such designed rules should be verified on large datasets.

Others. Because we provide normal vectors and geodesic distances in our dataset, other research on 3D models is also supported, e.g., normal estimation [3], geodesic estimation [16], and 3D surface reconstruction [9, 32].

4 Benchmark

We selected state-of-the-art methods as the benchmarks for classification and segmentation on our dataset. We implemented dataset interfaces to the authors' original implementations and kept the same hyper-parameters and loss functions as in the original papers. A detailed explanation of the implementation of each method is given in the supplementary material. We tested these methods by 5-fold cross-validation: the shuffled data was divided into 5 subsamples, identical for each method, with 4 subsamples used for training and 1 for testing. The experiments were carried out on PCs with GeForce RTX 2080 Ti (×2) and GeForce GTX 1080 Ti (×1) GPUs. The net training time of all methods was over 92 hours.
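A minimal sketch of this evaluation protocol, using scikit-learn's KFold so that the same shuffled split is reused for every method (the random seed is an assumption; the paper does not state one):

```python
import numpy as np
from sklearn.model_selection import KFold

models = np.arange(2025)                    # indices of all segments
kf = KFold(n_splits=5, shuffle=True, random_state=0)  # seed is an assumption

folds = list(kf.split(models))              # fixed once, reused for each method
for fold_id, (train_idx, test_idx) in enumerate(folds):
    # train on 4 subsamples, test on the remaining 1
    print(f"fold {fold_id}: {len(train_idx)} train / {len(test_idx)} test")
```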

4.1 Classification

Six methods were selected for the classification benchmark: PointNet [33], PointNet++ (PN++) [34], PointCNN [27], SpiderCNN [48], the self-organizing network (SO-Net) [26], and the dynamic graph CNN (DGCNN) [43]. We combined the generated blood vessel segments and the manually segmented aneurysm segments, 2,025 models in total, as the dataset for testing the classification accuracy and F1-score of each method. The experimental results are shown in Table 1.

Network     Input     V. (%)   A. (%)   F1-Score
PN++        512       –        –        –
PN++        1024      –        –        –
PN++        2048      –        –        –
SpiderCNN   512       –        –        –
SpiderCNN   1024      –        –        –
SpiderCNN   2048      –        –        –
SO-Net      512       –        –        –
SO-Net      1024      –        –        –
SO-Net      2048      –        –        –
PointCNN    512       –        –        –
PointCNN    1024      –        –        –
PointCNN    2048      –        –        –
DGCNN       512/10    –        –        –
DGCNN       1024/20   –        –        –
DGCNN       2048/40   –        –        –
PointNet    512       –        –        –
PointNet    1024      –        –        –
PointNet    2048      –        –        –

Table 1: Classification results of each method. The second column shows the number of input points; for DGCNNN, the value after the slash is the additional input it requires. The accuracies on healthy vessel segments (V.) and aneurysm segments (A.) and the F1-score are the mean values over the folds.
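For reference, the per-class accuracies and the F1-score in Table 1 can be computed as below; a sketch assuming binary labels with 1 marking aneurysm segments and the F1-score taken for the aneurysm class (a convention the paper does not spell out):

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """V./A. accuracy and F1-score for binary labels (0 = vessel, 1 = aneurysm)."""
    vessel_acc = np.mean(y_pred[y_true == 0] == 0)
    aneurysm_acc = np.mean(y_pred[y_true == 1] == 1)  # recall of the aneurysm class
    tp = np.sum((y_pred == 1) & (y_true == 1))
    precision = tp / max(np.sum(y_pred == 1), 1)
    recall = tp / max(np.sum(y_true == 1), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return vessel_acc, aneurysm_acc, f1
```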

4.2 Segmentation

We selected 11 networks to provide segmentation benchmarks: PointGrid [25]; two kinds of submanifold sparse convolutional networks (SSCNs), with fully-connected (SSCN-F) and UNet-like (SSCN-U) structures [14]; PointNet [33]; two kinds of PointNet++ [34], one taking normals as input (PN++) and one taking normals and geodesic distances (PN++g); PointConv [46]; PointCNN [27]; SpiderCNN [48]; MeshCNN [15]; and SO-Net [26]. The 116 annotated aneurysm segments were used to evaluate these methods. The test on each subsample was repeated 3 times, and the final results are the mean values of the best results. We assessed each method using two indexes: the Jaccard Index (JI), also known as Intersection over Union (IoU), and the Sørensen-Dice Coefficient (DSC). The segmentation results are shown in Figure 7 and Table 2.
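Both indexes reduce to the usual set formulas on per-point masks, JI = |A∩B| / |A∪B| and DSC = 2|A∩B| / (|A| + |B|); a minimal sketch:

```python
import numpy as np

def jaccard_and_dice(pred: np.ndarray, gt: np.ndarray):
    """JI (IoU) and DSC for boolean per-point masks of one class."""
    inter = np.sum(pred & gt)
    union = np.sum(pred | gt)
    ji = inter / max(union, 1)
    dsc = 2 * inter / max(pred.sum() + gt.sum(), 1)
    return ji, dsc
```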

Figure 7: IoU results on the aneurysm part for each fold. The methods are compared at their best performance.

5 Discussion

5.1 Classification

PN++ with 1024 sampled points has the highest accuracy on aneurysms, and PointCNN with 2048 sampled points has the highest accuracy on healthy vessels and the highest F1-score. The accuracy and F1-score of almost all methods tend to increase as more sampled points are provided; SpiderCNN, however, attained its highest aneurysm accuracy and F1-score at 1024 sampled points. The majority of misclassified 3D models contained small or incomplete aneurysms that are hard to distinguish from healthy blood vessels.

5.2 Segmentation

Methods based on points. The segmentation methods based on point clouds obtained good results, on par with their results on ShapeNet [6]. SO-Net showed excellent performance on the IoU and DSC of aneurysms, while PointConv had the best results on the parent blood vessels. PN++ had the third-best performance and the fastest training speed (5 s per epoch, converging at approximately epoch 115 on a GTX 1080 Ti). Meanwhile, PointCNN had the slowest training speed (24 s per epoch, converging at approximately epoch 500 on a GTX 1080 Ti) and moderate segmentation accuracy. SpiderCNN did not match its performance on ShapeNet, and its CI 95 was unusually large. Besides the methods mentioned in Section 4.2, we also tried the 3D point-capsule network [49], but it classified every point as healthy blood vessel, which shows its limited generalization across datasets.

Resolution of voxels. Methods based on voxels achieved relatively low IoU and DSC on each fold. The performance of SSCN grew as the resolution was increased from 24 to 40 (resolution 24 is the setting offered by the authors in their code). However, its average IoU fluctuated by about 8% across folds, which is large compared to the other methods (about 2%). The PointGrid paper recommends a particular combination of its grid resolution and per-cell point count; however, we noticed that a different combination achieved the highest scores on our data (Table 2).

Commonly poorly segmented 3D models. Most models were segmented excellently, as in the top two rows of Figure 8. However, accuracy dropped when the aneurysm occupied a small proportion of the segment, as in the third and fourth rows, whereas the segmentation of aneurysms with a large size ratio was satisfactory. The fifth row shows a special segment with two aneurysms: although most methods failed to segment it, PointConv and PN++ with geodesic information maintained good performance.

Geodesic information. Compared to other CAD model datasets, the complex shapes of blood vessels pose a different challenge for part segmentation. Point-based methods usually use Euclidean distance to estimate the relevance between points, which is not ideal for our dataset: for example, PN++ misclassified aneurysm points close to the blood vessels even with normal information, as shown in the last row of Figure 8. By using geodesic distance instead, PN++ learned a more exact spatial structure; the contrast between the two metrics is sketched below. PointConv also segmented this case well, which can be attributed to the network learning the parameters of its spatial filters. In addition, MeshCNN segmented every aneurysm decently even though its overall performance was not the best, owing to its convolution on meshes, which provides information on manifolds.
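To illustrate the point, the sketch below contrasts the k nearest neighbors of a vertex under the two metrics, using a precomputed geodesic matrix such as the ones shipped with our annotated segments (variable names are hypothetical):

```python
import numpy as np

def knn_euclidean(points, i, k):
    """Indices of the k nearest neighbors of point i in Euclidean space."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.argsort(d)[1:k + 1]  # skip index 0, which is the point itself

def knn_geodesic(geo_matrix, i, k):
    """Indices of the k nearest neighbors of point i along the surface."""
    return np.argsort(geo_matrix[i])[1:k + 1]

# On a thin vessel, Euclidean neighbors of an aneurysm point may jump onto a
# nearby but unconnected branch, while geodesic neighbors stay on the aneurysm.
```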

Figure 8: Results comparison of three networks.
Type    Network     Input    IoU (%)         CI 95 (%)       DSC (%)         CI 95 (%)
                             V.      A.      V.      A.      V.      A.      V.      A.
Point   SO-Net      512      –       –       –       –       –       –       –       –
Point   SO-Net      1024     –       –       –       –       –       –       –       –
Point   SO-Net      2048     –       –       –       –       –       –       –       –
Point   PointConv   512      –       –       –       –       –       –       –       –
Point   PointConv   1024     –       –       –       –       –       –       –       –
Point   PointConv   2048     –       –       –       –       –       –       –       –
Point   PN++g       512      –       –       –       –       –       –       –       –
Point   PN++g       1024     –       –       –       –       –       –       –       –
Point   PN++g       2048     –       –       –       –       –       –       –       –
Point   PN++        512      –       –       –       –       –       –       –       –
Point   PN++        1024     –       –       –       –       –       –       –       –
Point   PN++        2048     –       –       –       –       –       –       –       –
Point   PointCNN    512      –       –       –       –       –       –       –       –
Point   PointCNN    1024     –       –       –       –       –       –       –       –
Point   PointCNN    2048     –       –       –       –       –       –       –       –
Mesh    MeshCNN     750      –       –       –       –       –       –       –       –
Mesh    MeshCNN     1500     –       –       –       –       –       –       –       –
Mesh    MeshCNN     2250     –       –       –       –       –       –       –       –
Point   SpiderCNN   512      –       –       –       –       –       –       –       –
Point   SpiderCNN   1024     –       –       –       –       –       –       –       –
Point   SpiderCNN   2048     –       –       –       –       –       –       –       –
Voxel   SSCN-F      24       –       –       –       –       –       –       –       –
Voxel   SSCN-F      32       –       –       –       –       –       –       –       –
Voxel   SSCN-F      40       –       –       –       –       –       –       –       –
Voxel   SSCN-U      24       –       –       –       –       –       –       –       –
Voxel   SSCN-U      32       –       –       –       –       –       –       –       –
Voxel   SSCN-U      40       –       –       –       –       –       –       –       –
Voxel   PointGrid   16/2     –       –       –       –       –       –       –       –
Voxel   PointGrid   16/4     –       –       –       –       –       –       –       –
Voxel   PointGrid   32/2     –       –       –       –       –       –       –       –
Point   PointNet    512      –       –       –       –       –       –       –       –
Point   PointNet    1024     –       –       –       –       –       –       –       –
Point   PointNet    2048     –       –       –       –       –       –       –       –

Table 2: Segmentation results of each network. The Input column shows the number of input points, edges, or the voxel resolution for point-, mesh-, and voxel-based methods, respectively; for PointGrid, the value after the slash refers to the additional parameter defined in its paper. The healthy vessel part and the aneurysm part are denoted V. and A., respectively, and CI 95 indicates the 95% confidence interval of the IoU or DSC. The results are the mean values over the folds.

6 Conclusion and Further Work

In this paper, we introduced a 3D dataset of intracranial aneurysms, annotated by experts, for geometric deep learning networks. The developed tools and the data processing pipeline are also released. Furthermore, we evaluated and analyzed state-of-the-art methods for 3D object classification and part segmentation on our dataset. The existing methods are likely to be less effective on complex objects, even though they perform well on the segmentation of common ones. It is possible to further improve the performance and generalization of networks when geodesic or connectivity information on 3D surfaces is accessible. Our dataset can thus be instructive for the development of new geometric deep learning architectures for medical data.

In future work, we will keep adding processed real data to our dataset. Besides, we will verify the feasibility of synthetic data for data augmentation, which could significantly improve the efficiency of data collection. We hope more deep learning networks will be applied in medical practice.

References

  • [1] A. Alaraj, C. J. Luciano, D. P. Bailey, A. Elsenousi, B. Z. Roitberg, A. Bernardo, P. P. Banerjee, and F. T. Charbel (2015) Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback. Operative Neurosurgery 11 (1), pp. 52–58. Cited by: §1.
  • [2] Autism Brain Imaging Data Exchange (ABIDE). Note: http://fcon_1000.projects.nitrc.org/indi/abide/ Cited by: §2.1.
  • [3] A. Boulch and R. Marlet (2016) Deep learning for robust normal estimation in unstructured point clouds. In Computer Graphics Forum, Vol. 35, pp. 281–290. Cited by: §3.4.
  • [4] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst (2017) Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine 34 (4), pp. 18–42. Cited by: §1.
  • [5] Harvard GSP (Brain Genomics Superstruct Project) MRI dataset. Cited by: §2.1.
  • [6] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu (2015) ShapeNet: An Information-Rich 3D Model Repository. Technical report Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago. Cited by: §2.1, §5.2.
  • [7] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger (2016) 3D u-net: learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pp. 424–432. Cited by: §2.2.
  • [8] K. Crane, C. Weischedel, and M. Wardetzky (2013) Geodesics in heat: a new approach to computing distance based on heat flow. ACM Transactions on Graphics (TOG) 32 (5), pp. 152. Cited by: §3.3.
  • [9] A. Dai, C. Ruizhongtai Qi, and M. Nießner (2017) Shape completion using 3D-encoder-predictor CNNs and shape synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5868–5877. Cited by: §3.4.
  • [10] S. P. Dakua, J. Abinahed, and A. Al-Ansari (2018) A pca-based approach for brain aneurysm segmentation. Multidimensional Systems and Signal Processing 29 (1), pp. 257–277. Cited by: §3.4.
  • [11] EyePACS. Note: http://www.eyepacs.com/data-analysis Cited by: §2.1.
  • [12] Y. Feng, Y. Feng, H. You, X. Zhao, and Y. Gao (2019) MeshNet: mesh neural network for 3D shape representation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 8279–8286. Cited by: §2.2.
  • [13] A. F. Fotenos, A. Snyder, L. Girton, J. Morris, and R. Buckner (2005) Normative estimates of cross-sectional and longitudinal brain volume decline in aging and ad. Neurology 64 (6), pp. 1032–1039. Cited by: §2.1.
  • [14] B. Graham, M. Engelcke, and L. van der Maaten (2018) 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9224–9232. Cited by: §2.2, §4.2.
  • [15] R. Hanocka, A. Hertz, N. Fish, R. Giryes, S. Fleishman, and D. Cohen-Or (2019) MeshCNN: a network with an edge. ACM Transactions on Graphics (TOG) 38 (4), pp. 90. Cited by: §2.2, §4.2.
  • [16] T. He, H. Huang, L. Yi, Y. Zhou, C. Wu, J. Wang, and S. Soatto (2019) GeoNet: deep geodesic networks for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6888–6897. Cited by: §3.4.
  • [17] R. M. Hoogeveen, C. J. Bakker, and M. A. Viergever (1998) Limits to the accuracy of vessel diameter measurement in mr angiography. Journal of Magnetic Resonance Imaging 8 (6), pp. 1228–1235. Cited by: §3.3.
  • [18] J. Huang, H. Su, and L. Guibas (2018) Robust watertight manifold surface generation method for shapenet models. arXiv preprint arXiv:1802.01698. Cited by: §3.3.
  • [19] Indian Diabetic Retinopathy Image Dataset (IDRiD). Note: https://idrid.grand-challenge.org/ Cited by: §2.1.
  • [20] E. Kalogerakis, M. Averkiou, S. Maji, and S. Chaudhuri (2017) 3D shape segmentation with projective convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3779–3788. Cited by: §2.2.
  • [21] T. Kin, H. Nakatomi, M. Shojima, M. Tanaka, K. Ino, H. Mori, A. Kunimatsu, H. Oyama, and N. Saito (2012) A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images. Journal of neurosurgery 117 (1), pp. 78–88. Cited by: §3.3.
  • [22] S. Koch, A. Matveev, Z. Jiang, F. Williams, A. Artemov, E. Burnaev, M. Alexa, D. Zorin, and D. Panozzo (2019-06) ABC: a big cad model dataset for geometric deep learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.1.
  • [23] F. Lareyre, C. Adam, M. Carrier, C. Dommerc, C. Mialhe, and J. Raffort (2019) A fully automated pipeline for mining abdominal aortic aneurysm using image segmentation. Scientific reports 9 (1), pp. 1–14. Cited by: §1.
  • [24] M. W. Law and A. C. Chung (2007) Vessel and intracranial aneurysm segmentation using multi-range filters and local variances. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 866–874. Cited by: §3.4.
  • [25] T. Le and Y. Duan (2018) Pointgrid: a deep network for 3d shape understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9204–9214. Cited by: §2.2, §4.2.
  • [26] J. Li, B. M. Chen, and G. Hee Lee (2018) So-net: self-organizing network for point cloud analysis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9397–9406. Cited by: §2.2, §4.1, §4.2.
  • [27] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen (2018) Pointcnn: convolution on x-transformed points. In Advances in Neural Information Processing Systems, pp. 820–830. Cited by: §2.2, §4.1, §4.2.
  • [28] LIDC-IDRI. Note: https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI Cited by: §2.1.
  • [29] K. López-Linares, I. García, A. García-Familiar, I. Macía, and M. A. G. Ballester (2019) 3D convolutional neural network for abdominal aortic aneurysm segmentation. arXiv preprint arXiv:1903.00879. Cited by: §1.
  • [30] H. Maron, M. Galun, N. Aigerman, M. Trope, N. Dym, E. Yumer, V. G. Kim, and Y. Lipman (2017) Convolutional neural networks on surfaces via seamless toric covers.. ACM Trans. Graph. 36 (4), pp. 71–1. Cited by: §2.1, §2.2.
  • [31] MedPix. Note: https://medpix.nlm.nih.gov/home Cited by: §2.1.
  • [32] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove (2019) DeepSDF: learning continuous signed distance functions for shape representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 165–174. Cited by: §3.4.
  • [33] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660. Cited by: §2.2, §4.1, §4.2.
  • [34] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017) Pointnet++: deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pp. 5099–5108. Cited by: §2.2, §4.1, §4.2.
  • [35] P. Rajpurkar, J. Irvin, A. Bagul, D. Ding, T. Duan, H. Mehta, B. Yang, K. Zhu, D. Laird, R. L. Ball, et al. (2017) Mura: large dataset for abnormality detection in musculoskeletal radiographs. arXiv preprint arXiv:1712.06957. Cited by: §2.1.
  • [36] S. Salahat, A. Soliman, T. McGloughlin, N. Werghi, and A. El-Baz (2017) Segmentation of abdominal aortic aneurysm (aaa) based on topology prior model. In Annual Conference on Medical Image Understanding and Analysis, pp. 219–228. Cited by: §3.4.
  • [37] T. Sichtermann, A. Faron, R. Sijben, N. Teichert, J. Freiherr, and M. Wiesmann (2019) Deep learning–based detection of intracranial aneurysms in 3d tof-mra. American Journal of Neuroradiology 40 (1), pp. 25–32. Cited by: §1.
  • [38] A. L. Simpson, M. Antonelli, S. Bakas, M. Bilello, K. Farahani, B. van Ginneken, A. Kopp-Schneider, B. A. Landman, G. J. S. Litjens, B. H. Menze, O. Ronneberger, R. M. Summers, P. Bilic, P. F. Christ, R. K. G. Do, M. Gollub, J. Golia-Pernicka, S. Heckers, W. R. Jarnagin, M. McHugo, S. Napel, E. Vorontsov, L. Maier-Hein, and M. J. Cardoso (2019) A large annotated medical image dataset for the development and evaluation of segmentation algorithms. CoRR abs/1902.09063. External Links: Link, 1902.09063 Cited by: §2.1.
  • [39] H. Su, V. Jampani, D. Sun, S. Maji, E. Kalogerakis, M. Yang, and J. Kautz (2018) Splatnet: sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2530–2539. Cited by: §2.2.
  • [40] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller (2015) Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE international conference on computer vision, pp. 945–953. Cited by: §2.2.
  • [41] B. van Ginneken, M.B. Stegmann, and M. Loog (2006) Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database. Medical Image Analysis 10 (1), pp. 19–40. Cited by: §2.1.
  • [42] P. Wang, Y. Liu, Y. Guo, C. Sun, and X. Tong (2017) O-cnn: octree-based convolutional neural networks for 3d shape analysis. ACM Transactions on Graphics (TOG) 36 (4), pp. 72. Cited by: §2.2.
  • [43] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon (2019) Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG) 38 (5), pp. 146. Cited by: §2.2, §4.1.
  • [44] Y. Wang, S. Asafi, O. Van Kaick, H. Zhang, D. Cohen-Or, and B. Chen (2012) Active co-analysis of a set of shapes. ACM Transactions on Graphics (TOG) 31 (6), pp. 165. Cited by: §2.1.
  • [45] Z. Wang and F. Lu (2019) VoxSegNet: volumetric cnns for semantic part segmentation of 3d shapes. IEEE transactions on visualization and computer graphics. Cited by: §2.2.
  • [46] W. Wu, Z. Qi, and L. Fuxin (2019) Pointconv: deep convolutional networks on 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9621–9630. Cited by: §2.2, §4.2.
  • [47] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao (2015) 3d shapenets: a deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912–1920. Cited by: §2.1.
  • [48] Y. Xu, T. Fan, M. Xu, L. Zeng, and Y. Qiao (2018) Spidercnn: deep learning on point sets with parameterized convolutional filters. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 87–102. Cited by: §2.2, §4.1, §4.2.
  • [49] Y. Zhao, T. Birdal, H. Deng, and F. Tombari (2019) 3D point capsule networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1009–1018. Cited by: §2.2, §5.2.
  • [50] Q. Zhou and A. Jacobson (2016) Thingi10k: a dataset of 10,000 3d-printing models. arXiv preprint arXiv:1605.04797. Cited by: §2.1.