1 Introduction
In this work, we are interested in 3D video perception. A 3D video is a temporal sequence of 3D scans, such as a video from a depth camera, a sequence of LIDAR scans, or multiple MRI scans of the same object or body part (Fig. 1). As LIDAR scanners and depth cameras become more affordable and widely used for robotics applications, 3D videos have become readily available sources of input for robotics systems and AR/VR applications.
However, there are many technical challenges in using 3D videos for high-level perception tasks. First, 3D data requires heterogeneous representations, and processing those either alienates users or makes it difficult to integrate into larger systems. Second, the performance of 3D convolutional neural networks is worse than or on par with that of 2D convolutional neural networks. Third, there are a limited number of open-source libraries for fast, large-scale 3D data processing.
To resolve most, if not all, of the challenges in high-dimensional perception, we adopt a sparse tensor [8, 9] for our problem and propose the generalized sparse convolution. The generalized sparse convolution encompasses all discrete convolutions as its subclasses and is crucial for high-dimensional perception. We implement the generalized sparse convolution and all standard neural network functions in Sec. 4 and open-source the library.
We adopt the sparse representation for several reasons. Currently, there are various concurrent works for 3D perception: dense 3D convolutions [5], PointNet variants [22, 23], continuous convolutions [11, 15], surface convolutions [20, 29], and octree convolutions [24]. Out of these representations, we chose a sparse tensor due to its expressiveness and generalizability to high-dimensional spaces. It also allows a homogeneous data representation within traditional neural network libraries, since most of them support sparse tensors.
Second, the sparse convolution closely resembles the standard convolution (Sec. 3) which is proven to be successful in 2D perception as well as 3D reconstruction [4], feature learning [33], and semantic segmentation [30].
Third, the sparse convolution is efficient and fast. It only computes outputs for predefined coordinates and saves them into a compact sparse tensor (Sec. 3). It saves both memory and computation, especially for 3D scans or high-dimensional data where most of the space is empty.
Thus, we adopt the sparse representation for our problem and create the first large-scale 3D/4D networks, or Minkowski networks.³ (³At the time of submission, our proposed method was the first very deep 3D convolutional neural network with more than 20 layers. We named the networks after the space-time continuum, Minkowski space, in physics.)
However, even with the efficient representation, merely scaling the 3D convolution to high-dimensional spaces results in significant computational overhead and memory consumption due to the curse of dimensionality. A 2D convolution with kernel size 5 requires 5² = 25 weights, which increases exponentially to 125 in a 3D cube and 625 in a 4D tesseract (Fig. 2). This exponential increase, however, does not necessarily lead to better performance and slows down the network significantly. To overcome this challenge, we propose custom kernels with non-(hyper)cubic shapes using the generalized sparse convolution.

Finally, the predictions from the 4D spatio-temporal generalized sparse convnets are not necessarily consistent throughout space and time. To enforce consistency, we propose high-dimensional conditional random fields defined in a 7D trilateral space (space-time-color) with a stationary pairwise consistency function. We use variational inference to convert the conditional random field to differentiable recurrent layers, which can be implemented as a 7D generalized sparse convnet, and train both the 4D and 7D networks end-to-end.
Experimentally, we use various 3D benchmarks that cover both indoor [5, 2] and outdoor spaces [27, 25]. First, we show that networks with only generalized sparse 3D convolutions can outperform 2D or hybrid deep-learning algorithms by a large margin.⁴ (⁴We achieved 67.9% mIoU on the ScanNet benchmark, outperforming all algorithms, including the best peer-reviewed work [6], by 19% mIoU at the time of submission.) Also, we create 4D datasets from Synthia [27] and Varcity [25] and report ablation studies of the temporal components. Experimentally, we show that generalized sparse convnets with the hybrid kernel outperform sparse convnets with tesseract kernels. Also, the 4D generalized sparse convnets are more robust to noise and in some cases more efficient than their 3D counterparts.

2 Related Work
4D spatio-temporal perception fundamentally requires 3D perception, as a slice of 4D along the temporal dimension is a 3D scan. However, as there are no previous works on 4D perception using neural networks, we primarily cover 3D perception, specifically 3D segmentation using neural networks. We categorize all previous works in 3D as either (a) 3D-convolutional neural networks or (b) neural networks without 3D convolutions. Finally, we cover early 4D perception methods. Although 2D videos are spatio-temporal data, we will not cover them in this paper, as 3D perception requires radically different data processing, implementation, and architectures.
3D-convolutional neural networks. The first branch of 3D-convolutional neural networks uses a rectangular grid and a dense representation [30, 5], where the empty space is represented either as 0 or as the signed distance function. This straightforward representation is intuitive and is supported by all major public neural network libraries. However, as most of the space in 3D scans is empty, it suffers from high memory consumption and slow computation. To resolve this, OctNet [24] proposed to use the octree structure to represent 3D space and convolutions on it.
The second branch is sparse 3D-convolutional neural networks [28, 9]. There are two quantization methods used for high dimensions: a rectangular grid and a permutohedral lattice [1]. [28] used a permutohedral lattice whereas [9] used a rectangular grid for 3D classification and semantic segmentation.
The last branch is 3D pseudo-continuous convolutional neural networks [11, 15]. Unlike the previous works, they define convolutions using continuous kernels in a continuous space. However, finding neighbors in a continuous space is expensive, as it requires a KD-tree search rather than a hash table, and such methods are susceptible to an uneven distribution of point clouds.
Neural networks without 3D convolutions. Recently, we saw a tremendous increase in neural networks without 3D convolutions for 3D perception. Since 3D scans consist of thin observable surfaces, [20, 29] proposed to use 2D convolutions on the surface for semantic segmentation.
Another direction is PointNet-based methods [22, 23]. PointNets use a set of input coordinates as features for a multi-layer perceptron. However, this approach processes a limited number of points, and thus a sliding window for cropping out a section from an input was used for large spaces, making the receptive field size rather limited.
[14] tried to resolve such shortcomings with a recurrent network on top of multiple PointNets, and [15] proposed a variant of 3D continuous convolution for the lower layers of a PointNet and got a significant performance boost.

4D perception. The first 4D perception algorithm [18] proposed a dynamic deformable balloon model for 4D cardiac image analysis. Later, [16] used 4D Markov random fields for cardiac segmentation. Recently, a "Spatio-Temporal CNN" [34] combined a 3D-UNet with a 1D-AutoEncoder for temporal data and applied the model to auto-encoding brain fMRI images, but it is not a 4D-convolutional neural network.
In this paper, we propose the first high-dimensional convolutional neural networks for 4D spatio-temporal data, or 3D videos, and the 7D space-time-chroma space. Compared with other approaches that combine temporal data with a recurrent neural network or a shallow model (CRF), our networks use a homogeneous representation and convolutions consistently throughout the networks. Instead of using an RNN, we use convolution for the temporal axis, since it is proven to be more effective in sequence modeling [3].

3 Sparse Tensor and Convolution
In traditional speech, text, or image data, features are extracted densely. Thus, the most common representations of these data are vectors, matrices, and tensors. However, for 3-dimensional scans or even higher-dimensional spaces, such dense representations are inefficient due to the sparsity. Instead, we can save only the non-empty part of the space as its coordinates and the associated features. This representation is an N-dimensional extension of a sparse matrix; thus it is known as a sparse tensor. There are many ways to save such sparse tensors compactly [31], but we follow the COO format, as it is efficient for neighborhood queries (Sec. 3.1). Unlike traditional sparse tensors, we augment the sparse tensor coordinates with the batch indices to distinguish points that occupy the same coordinate in different batches [9]. Concisely, we can represent a set of 4D coordinates as $\mathcal{C} = \{(x_i, y_i, z_i, t_i)\}_i$ or as a matrix $C$, and a set of associated features as $\mathcal{F} = \{\mathbf{f}_i\}_i$ or as a matrix $F$. Then, a sparse tensor can be written as

$$C = \begin{bmatrix} x_1 & y_1 & z_1 & t_1 & b_1 \\ \vdots & & & & \vdots \\ x_N & y_N & z_N & t_N & b_N \end{bmatrix}, \quad F = \begin{bmatrix} \mathbf{f}_1^T \\ \vdots \\ \mathbf{f}_N^T \end{bmatrix} \qquad (1)$$

where $b_i$ and $\mathbf{f}_i$ are the batch index and the feature associated with the $i$-th coordinate. In Sec. 6, we augment the 4D space with the 3D chromatic space and create a 7D sparse tensor for trilateral filtering.
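As a concrete illustration of this representation, the following minimal NumPy sketch builds the batch-augmented coordinate matrix and the feature matrix; the function name and layout are illustrative assumptions, not the released library's API.

```python
import numpy as np

# Minimal sketch of the batched COO sparse-tensor representation in Sec. 3,
# assuming integer voxel coordinates; illustrative, not the library API.
def make_sparse_tensor(coords_per_batch, feats_per_batch):
    """Stack per-batch (x, y, z, t) coordinates into one coordinate matrix C
    augmented with a batch-index column, and the features into a matrix F."""
    C, F = [], []
    for b, (coords, feats) in enumerate(zip(coords_per_batch, feats_per_batch)):
        batch_col = np.full((len(coords), 1), b, dtype=np.int32)
        C.append(np.hstack([np.asarray(coords, dtype=np.int32), batch_col]))
        F.append(np.asarray(feats, dtype=np.float32))
    return np.vstack(C), np.vstack(F)

# Two batches occupying the same (x, y, z, t) coordinate stay distinct
# because the batch index is part of the coordinate.
C, F = make_sparse_tensor(
    [[(0, 0, 0, 0)], [(0, 0, 0, 0)]],
    [[[1.0]], [[2.0]]],
)
```

The batch-index column is what lets a single coordinate hash table serve an entire mini-batch.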
3.1 Generalized Sparse Convolution
In this section, we generalize the sparse convolution [8, 9] to generic input and output coordinates and to arbitrary kernel shapes. The generalized sparse convolution encompasses not only all sparse convolutions but also the conventional dense convolutions. Let $\mathbf{x}^{\text{in}}_{\mathbf{u}} \in \mathbb{R}^{N^{\text{in}}}$ be an $N^{\text{in}}$-dimensional input feature vector in a $D$-dimensional space at $\mathbf{u} \in \mathbb{R}^D$ (a D-dimensional coordinate), and let the convolution kernel weights be $\mathbf{W} \in \mathbb{R}^{K^D \times N^{\text{out}} \times N^{\text{in}}}$. We break down the weights into spatial weights with $K^D$ matrices of size $N^{\text{out}} \times N^{\text{in}}$ as $W_{\mathbf{i}}$ for $|\{\mathbf{i}\}_{\mathbf{i}}| = K^D$. Then, the conventional dense convolution in D dimensions is

$$\mathbf{x}^{\text{out}}_{\mathbf{u}} = \sum_{\mathbf{i} \in \mathcal{V}^D(K)} W_{\mathbf{i}} \, \mathbf{x}^{\text{in}}_{\mathbf{u}+\mathbf{i}} \quad \text{for } \mathbf{u} \in \mathbb{Z}^D, \qquad (2)$$

where $\mathcal{V}^D(K)$ is the list of offsets in the D-dimensional hypercube centered at the origin, e.g. $\mathcal{V}^1(3) = \{-1, 0, 1\}$. The generalized sparse convolution in Eq. 3 relaxes Eq. 2:

$$\mathbf{x}^{\text{out}}_{\mathbf{u}} = \sum_{\mathbf{i} \in \mathcal{N}^D(\mathbf{u}, \mathcal{C}^{\text{in}})} W_{\mathbf{i}} \, \mathbf{x}^{\text{in}}_{\mathbf{u}+\mathbf{i}} \quad \text{for } \mathbf{u} \in \mathcal{C}^{\text{out}}, \qquad (3)$$

where $\mathcal{N}^D$ is a set of offsets that define the shape of a kernel and $\mathcal{N}^D(\mathbf{u}, \mathcal{C}^{\text{in}}) = \{\mathbf{i} \mid \mathbf{i} \in \mathcal{N}^D,\ \mathbf{u} + \mathbf{i} \in \mathcal{C}^{\text{in}}\}$ is the set of offsets from the current center $\mathbf{u}$ that exist in $\mathcal{C}^{\text{in}}$. $\mathcal{C}^{\text{in}}$ and $\mathcal{C}^{\text{out}}$ are predefined input and output coordinates of sparse tensors. First, note that the input coordinates and output coordinates are not necessarily the same. Second, we define the shape of the convolution kernel arbitrarily with $\mathcal{N}^D$. This generalization encompasses many special cases such as the dilated convolution and typical hypercubic kernels. Another interesting special case is the "sparse submanifold convolution" [9], obtained when we set $\mathcal{C}^{\text{out}} = \mathcal{C}^{\text{in}}$ and $\mathcal{N}^D = \mathcal{V}^D(K)$. If we set $\mathcal{C}^{\text{in}} = \mathcal{C}^{\text{out}} = \mathbb{Z}^D$ and $\mathcal{N}^D = \mathcal{V}^D(K)$, the generalized sparse convolution becomes the conventional dense convolution (Eq. 2). If we define $\mathcal{C}^{\text{in}}$ and $\mathcal{C}^{\text{out}}$ as multiples of a natural number and $\mathcal{N}^D = \mathcal{V}^D(K)$, we have a strided dense convolution.
4 Minkowski Engine
In this section, we propose an open-source auto-differentiation library for sparse tensors and the generalized sparse convolution (Sec. 3). As it is an extensive library with many functions, we only cover the essential forward-pass functions.
4.1 Sparse Tensor Quantization
The first step in a sparse convolutional neural network is data processing to generate a sparse tensor, which converts an input into unique coordinates, associated features, and, optionally, labels when training for semantic segmentation. In Alg. 1, we list the GPU function for this process. When a dense label is given, it is important that we ignore voxels with more than one unique label. This can be done by marking these voxels with IGNORE_LABEL. First, we convert all coordinates into hash keys and find all unique hash-key-label pairs to remove collisions. Note that SortByKey, UniqueByKey, and ReduceByKey are all standard Thrust library functions [19]. The reduction function takes label-key pairs and returns the IGNORE_LABEL, since at least two label-key pairs with the same key means there is a label collision. A CPU version works similarly, except that all reduction and sorting are processed serially.
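The quantization step above can be sketched on the CPU with NumPy standing in for the Thrust primitives; the sentinel value `IGNORE_LABEL = 255` and the function name are assumptions for illustration.

```python
import numpy as np

IGNORE_LABEL = 255  # assumed sentinel value for label collisions

# CPU sketch of the sparse-tensor quantization in Sec. 4.1: points that
# quantize to the same voxel are merged, and voxels whose points carry more
# than one distinct label are marked IGNORE_LABEL. This mirrors the
# reduce-by-key logic of Alg. 1 with NumPy instead of Thrust.
def quantize(coords, labels, voxel_size):
    voxels = np.floor(np.asarray(coords) / voxel_size).astype(np.int64)
    uniq, inverse = np.unique(voxels, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)  # guard against NumPy version differences
    labels = np.asarray(labels)
    out_labels = np.empty(len(uniq), dtype=np.int64)
    for k in range(len(uniq)):
        vals = np.unique(labels[inverse == k])
        out_labels[k] = vals[0] if len(vals) == 1 else IGNORE_LABEL
    return uniq, out_labels

voxels, vlabels = quantize([[0.01, 0.02], [0.03, 0.01], [1.2, 0.0]],
                           [1, 2, 3], voxel_size=0.05)
# the first two points collide in one voxel with labels {1, 2} -> IGNORE_LABEL
```

A GPU version replaces the Python loop with sort-by-key and reduce-by-key over hashed coordinates.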
4.2 Generalized Sparse Convolution
The next step in the pipeline is generating the output coordinates given the input coordinates (Eq. 3). When used in conventional neural networks, this process requires only a convolution stride size, the input coordinates, and the stride size of the input sparse tensor (the minimum distance between coordinates). The algorithm is presented in the supplementary material. We create these output coordinates dynamically, allowing arbitrary output coordinates for the generalized sparse convolution.
Next, to convolve an input with a kernel, we need a mapping to identify which inputs affect which outputs. This mapping is not required in conventional dense convolutions, as it can be inferred easily. However, for sparse convolution, where coordinates are scattered arbitrarily, we need to specify the mapping. We call this mapping the kernel maps and define them as pairs of lists of input indices and output indices, $(I_{\mathbf{i}}, O_{\mathbf{i}})$ for each kernel offset $\mathbf{i} \in \mathcal{N}^D$. Finally, given the input and output coordinates, the kernel map, and the kernel weights $W_{\mathbf{i}}$, we can compute the generalized sparse convolution by iterating through each offset (Alg. 2), where $I_{\mathbf{i}}[k]$ and $O_{\mathbf{i}}[k]$ indicate the $k$-th elements of the index lists $I_{\mathbf{i}}$ and $O_{\mathbf{i}}$ respectively, and the corresponding entries of the feature matrices are the $k$-th input and output feature vectors. The transposed generalized sparse convolution (deconvolution) works similarly, except that the roles of the input and output coordinates are reversed.
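The kernel-map formulation can be sketched as a gather-multiply-scatter loop over offsets; this toy NumPy version is for exposition only, not the CUDA implementation.

```python
import numpy as np

# Sketch of the kernel-map approach in Sec. 4.2: for each kernel offset i,
# collect (input index, output index) pairs, then apply the
# gather -> matrix-multiply -> scatter pattern of Alg. 2. Illustrative only.
def build_kernel_maps(in_coords, out_coords, offsets):
    in_index = {tuple(c): k for k, c in enumerate(in_coords)}
    maps = {}
    for i in offsets:
        I, O = [], []
        for o_idx, u in enumerate(out_coords):
            v = tuple(a + b for a, b in zip(u, i))
            if v in in_index:          # only offsets with u + i in C^in
                I.append(in_index[v])
                O.append(o_idx)
        maps[i] = (I, O)
    return maps

def sparse_conv(in_feats, out_len, maps, weights):
    n_out = weights[next(iter(weights))].shape[0]
    out = np.zeros((out_len, n_out))
    for i, (I, O) in maps.items():
        if I:  # gather rows, multiply by W_i, scatter into outputs
            # (each output index appears at most once per offset)
            out[O] += in_feats[I] @ weights[i].T
    return out

# 1D example: inputs at coordinates 0 and 2, one output at coordinate 1,
# cross kernel {-1, 0, 1} with identity weights sums the neighborhood.
in_coords = [(0,), (2,)]
maps = build_kernel_maps(in_coords, [(1,)], [(-1,), (0,), (1,)])
out = sparse_conv(np.array([[1.0], [3.0]]), 1, maps,
                  {(-1,): np.eye(1), (0,): np.eye(1), (1,): np.eye(1)})
```

Grouping work per offset is what turns the scattered computation into a few dense matrix multiplications.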
4.3 Max Pooling
Unlike with dense tensors, on sparse tensors the number of input features varies per output. Thus, max/average pooling requires a non-trivial implementation. Let $I$ and $O$ be the vectors that concatenate all $I_{\mathbf{i}}$ and $O_{\mathbf{i}}$ for $\mathbf{i} \in \mathcal{N}^D$ respectively. We first find the number of inputs per output coordinate and the indices of those inputs. Alg. 3 reduces the input features that map to the same output coordinate. Sequence(n) generates a sequence of integers from 0 to n − 1, and the reduction function returns the minimum value given two key-value pairs. MaxPoolKernel is a custom CUDA kernel that reduces all features at a specified channel using an index vector that contains the first index of $I$ mapping to each output, together with the corresponding output indices.
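A minimal sketch of the per-output reduction behind max pooling, assuming the concatenated kernel-map index lists I and O are given; illustrative only, not the CUDA kernel.

```python
import numpy as np

# Sketch of sparse max pooling (Sec. 4.3): a variable number of inputs maps
# to each output coordinate, so we reduce channel-wise per output index.
def sparse_max_pool(in_feats, I, O, n_out):
    out = np.full((n_out, in_feats.shape[1]), -np.inf)
    for i_idx, o_idx in zip(I, O):  # channel-wise max over mapped inputs
        out[o_idx] = np.maximum(out[o_idx], in_feats[i_idx])
    return out

# three inputs, two outputs: output 0 pools inputs {0, 1}, output 1 pools {2}
feats = np.array([[1.0], [5.0], [2.0]])
pooled = sparse_max_pool(feats, I=[0, 1, 2], O=[0, 0, 1], n_out=2)
```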
4.4 Global / Average Pooling, Sum Pooling
An average pooling layer and a global pooling layer compute the average of the input features for each output coordinate (average pooling) or for one output coordinate (global pooling). This can be implemented in multiple ways. We use a sparse matrix multiplication, since it can be optimized on hardware or with a fast sparse BLAS library. In particular, we use the cuSparse library for sparse matrix-matrix (cusparse_csrmm) and matrix-vector (cusparse_csrmv) multiplication to implement these layers. Similar to the max pooling algorithm, the kernel map defines the input-to-output mapping. For global pooling, we create the kernel map that maps all inputs to the origin and use the same Alg. 4. The transposed pooling (unpooling) works similarly.

On the last line of Alg. 4, we divide the pooled features by the number of inputs mapped to each output. However, this process could remove density information. Thus, we propose a variation that does not divide by the number of inputs and name it sum pooling.
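The pooling-as-sparse-matmul idea can be sketched with SciPy's CSR matrices standing in for cuSparse; the helper name and the `divide` flag (which switches between average and sum pooling) are assumptions for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Sketch of average/sum pooling as a sparse matrix product (Sec. 4.4): a
# pooling matrix P with P[o, i] = 1 for each kernel-map pair is applied to
# the feature matrix, then normalized by the per-output input count.
def sparse_avg_pool(feats, I, O, n_out, divide=True):
    P = csr_matrix((np.ones(len(I)), (O, I)), shape=(n_out, len(feats)))
    pooled = P @ feats
    if divide:                       # skip the division for "sum pooling"
        counts = np.asarray(P.sum(axis=1))
        pooled = pooled / np.maximum(counts, 1)
    return pooled

# output 0 averages inputs 0 and 1; output 1 receives only input 2
feats = np.array([[2.0], [4.0], [6.0]])
avg = sparse_avg_pool(feats, I=[0, 1, 2], O=[0, 0, 1], n_out=2)
```

Skipping the division keeps the per-voxel point count in the features, which is the density information sum pooling preserves.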
4.5 Nonspatial Functions
For functions that do not require spatial information (coordinates), such as ReLU, we can apply the functions directly to the features. Also, for batch normalization, as each row of $F$ represents a feature, we can use the 1D batch normalization function directly on $F$.

5 Minkowski Convolutional Neural Networks
In this section, we introduce 4-dimensional spatio-temporal convolutional neural networks for spatio-temporal perception. We treat the time dimension as an extra spatial dimension and create networks with 4-dimensional convolutions. However, there are unique problems arising from high-dimensional convolutions. First, the computational cost and the number of parameters in the networks increase exponentially as we increase the dimension; we experimentally show that these increases do not necessarily lead to better performance. Second, the networks have no incentive to make predictions consistent throughout space and time with the conventional cross-entropy loss alone.

To resolve the first problem, we make use of a special property of the generalized sparse convolution and propose non-conventional kernel shapes that not only save memory and computation but also perform better. Second, to enforce spatio-temporal consistency, we propose a high-dimensional conditional random field (in the 7D space-time-color space) that filters network predictions. We use variational inference to train both the base network and the conditional random field end-to-end.
5.1 Tesseract Kernel and Hybrid Kernel
The surface area of 3D data increases linearly with time and quadratically with the spatial resolution. However, when we use a conventional 4D hypercube, or tesseract (Fig. 2), as a convolution kernel, the exponential increase in the number of parameters leads to over-parametrization and overfitting, as well as high computational cost and memory consumption. Instead, we propose a hybrid kernel (non-hypercubic, non-permutohedral) to save computation. We use the arbitrary kernel offsets of the generalized sparse convolution to implement the hybrid kernel.

The hybrid kernel is a combination of a cross-shaped kernel and a conventional cubic kernel (Fig. 3). For the spatial dimensions, we use a cubic kernel to capture the spatial geometry accurately. For the temporal dimension, we use the cross-shaped kernel to connect the same point in space across time. We experimentally show that the hybrid kernel outperforms the tesseract kernel in both speed and accuracy.
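The hybrid kernel offsets can be enumerated directly; this sketch assumes 3 spatial dimensions plus 1 temporal dimension and illustrative kernel sizes.

```python
from itertools import product

# Sketch of the hybrid kernel offsets of Sec. 5.1: a full K^3 cube over the
# spatial dimensions at t = 0, plus a temporal cross through the spatial
# center. Sizes and the function name are illustrative assumptions.
def hybrid_kernel_offsets(spatial_size=3, temporal_size=3):
    r_s, r_t = spatial_size // 2, temporal_size // 2
    spatial = product(range(-r_s, r_s + 1), repeat=3)
    offsets = {(x, y, z, 0) for (x, y, z) in spatial}          # cubic in space
    offsets |= {(0, 0, 0, t) for t in range(-r_t, r_t + 1)}    # cross in time
    return sorted(offsets)

offsets = hybrid_kernel_offsets(3, 3)
# 27 spatial offsets + 2 extra temporal offsets = 29 weights, versus
# 3^4 = 81 for a tesseract kernel of the same size
```

Feeding such an offset list to the generalized sparse convolution is all that is needed to trade the tesseract for the hybrid kernel.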
5.2 Residual Minkowski Networks
The generalized sparse convolution allows us to define strides and kernel shapes arbitrarily. Thus, we can create a high-dimensional network with only generalized sparse convolutions, making the implementation easier and more generic. In addition, it allows us to adopt recent architectural innovations in 2D directly in high-dimensional networks. To demonstrate this, we create a high-dimensional version of a residual network in Fig. 4. For the first layer, instead of a 2D convolution, we use a generalized sparse convolution. For the rest of the network, we follow the original architecture.

For the U-shaped variants, we add multiple strided sparse convolutions and strided sparse transpose convolutions, with skip connections connecting the layers with the same stride size (Fig. 5), on top of the base residual network. We use multiple variations of the same architecture for the semantic segmentation experiments.
6 Trilateral StationaryCRF
For semantic segmentation, the cross-entropy loss is applied to each pixel or voxel. However, the loss does not enforce consistency, as it does not have pairwise terms. To make such consistency more explicit, we propose a high-dimensional conditional random field (CRF), similar to the one used in image semantic segmentation [35]. In image segmentation, the bilateral space, which consists of 2D space and 3D color, is used for the CRF. For 3D videos, we use the trilateral space, which consists of 3D space, 1D time, and 3D chromatic space. The color space creates a "spatial" gap between points with different colors that are spatially adjacent (e.g., on a boundary). Thus, it prevents information from "leaking out" to different regions. Unlike conventional CRFs with Gaussian edge potentials and dense connections [13, 35], we do not restrict the compatibility function to be a Gaussian. Instead, we relax the constraint and only apply the stationarity condition.
To find the global optimum of the distribution, we use variational inference and convert a series of fixed-point update equations to a recurrent neural network, similar to [35]. We use the generalized sparse convolution in 7D space to implement the recurrence and jointly train, end-to-end, both the base network that generates the unary potentials and the CRF.
6.1 Definition
Let a CRF node in the 7D (space-time-chroma) space be $x_i$, the unary potential be $\phi_u(x_i)$, and the pairwise potential be $\phi_p(x_i, x_j)$, where $x_j$ is a neighbor of $x_i$, $j \in \mathcal{N}(i)$. The conditional random field is defined as

$$P(X) = \frac{1}{Z} \exp \sum_{i} \Big( \phi_u(x_i) + \sum_{j \in \mathcal{N}(i)} \phi_p(x_i, x_j) \Big),$$

where $Z$ is the partition function; $X$ is the set of all nodes; and $\phi_p$ must satisfy the stationarity condition $\phi_p(\mathbf{u}, \mathbf{v}) = \phi_p(\mathbf{u} + \tau, \mathbf{v} + \tau)$ for any translation $\tau$. Note that we use the camera extrinsics to define the spatial coordinates of a node in the world coordinate system. This allows stationary points to have the same coordinates throughout time.
6.2 Variational Inference
The optimization of $P(X)$ is intractable. Instead, we use variational inference to minimize the divergence between the optimal distribution $P$ and an approximated distribution $Q$. Specifically, we use the mean-field approximation, $Q = \prod_i Q_i(x_i)$, as a closed-form solution exists. By Theorem 11.9 in [12], $Q$ is a local maximum if and only if

$$Q_i(x_i) = \frac{1}{Z_i} \exp \, \mathbb{E}_{X_{-i} \sim Q_{-i}} \Big[ \phi_u(x_i) + \sum_{j \in \mathcal{N}(i)} \phi_p(x_i, x_j) \Big],$$

where $X_{-i}$ and $Q_{-i}$ indicate all nodes or variables except for the $i$-th one. The final fixed-point equation is Eq. 4; the derivation is in the supplementary material.

$$Q^{n+1}_i(x_i) = \frac{1}{Z_i} \exp \Big( \phi_u(x_i) + \sum_{j \in \mathcal{N}(i)} \sum_{x_j} \phi_p(x_i, x_j) \, Q^n_j(x_j) \Big) \qquad (4)$$
6.3 Learning with 7D Sparse Convolution
Interestingly, the weighted sum over neighbors in Eq. 4 is equivalent to a generalized sparse convolution in the 7D space, since $\phi_p$ is stationary and each edge between a node and its neighbor can be encoded with a kernel offset. Thus, we convert the fixed-point update equation Eq. 4 into the algorithm in Alg. 5.
Finally, we use the logit predictions of a 4D Minkowski network as the unary potentials $\phi_u$ and train both $\phi_u$ and $\phi_p$ end-to-end, using one 4D and one 7D Minkowski network, with Eq. 5.
(5) 
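A toy sketch of the mean-field recurrence described above, with the pairwise message written as a stationary weighted sum over neighbors and a softmax playing the role of the normalizer; the 2-node example and the compatibility matrix are invented for illustration and are not the trained TS-CRF.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Sketch of the mean-field fixed-point iteration of Sec. 6.3. In the real
# model the message passing is the generalized sparse convolution over 7D
# neighbors; here it is an explicit loop over a toy graph.
def meanfield(unary, neighbors, compat, n_iters=10):
    """unary: (N, L) logits; neighbors: list of neighbor lists;
    compat: (L, L) stationary pairwise matrix (assumed, for illustration)."""
    q = softmax(unary)
    for _ in range(n_iters):
        msg = np.zeros_like(q)
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:               # convolution-style weighted sum
                msg[i] += q[j] @ compat.T
        q = softmax(unary + msg)         # normalize, as in Eq. 4
    return q

unary = np.array([[2.0, 0.0], [0.1, 0.0]])      # node 1 is weakly labeled
compat = np.array([[1.0, -1.0], [-1.0, 1.0]])   # rewards agreeing neighbors
q = meanfield(unary, [[1], [0]], compat)
# the confident node pulls its ambiguous neighbor toward label 0
```

Because every step is differentiable, unrolling a few iterations yields the recurrent layers that are trained end-to-end with the base network.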
7 Experiments
To validate the proposed high-dimensional networks, we first use multiple standard 3D benchmarks for 3D semantic segmentation. This allows us to gauge the performance of the high-dimensional networks against other state-of-the-art methods with the same architecture. Next, we create multiple 4D datasets from 3D datasets that have temporal sequences and analyze each of the proposed components in ablation studies.
7.1 Implementation
7.2 Training and Evaluation
We use Momentum SGD with the Poly scheduler to train the networks from an initial learning rate, and apply data augmentation including random scaling, rotation around the gravity axis, spatial translation, spatial elastic distortion, and chromatic translation and jitter.

For evaluation, we use the standard mean Intersection over Union (mIoU) and mean Accuracy (mAcc) metrics, following previous works. To convert voxel-level predictions to point-level predictions, we simply propagate predictions from the nearest voxel center.
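The voxel-to-point propagation can be sketched as a lookup from each point's quantized coordinate; the function name and data layout are illustrative assumptions.

```python
import numpy as np

# Sketch of the voxel-to-point propagation in Sec. 7.2: every point inherits
# the prediction of the voxel it falls in (the nearest voxel center under
# the quantization used at training time). Illustrative only.
def propagate_predictions(points, voxel_coords, voxel_preds, voxel_size):
    lut = {tuple(v): p for v, p in zip(voxel_coords, voxel_preds)}
    keys = np.floor(np.asarray(points) / voxel_size).astype(np.int64)
    return np.array([lut[tuple(k)] for k in keys])

# two voxels with class predictions 7 and 9; points map back to them
preds = propagate_predictions([[0.02, 0.01], [0.07, 0.03]],
                              [(0, 0), (1, 0)], [7, 9], voxel_size=0.05)
```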
7.3 Datasets
ScanNet. The ScanNet [5] 3D segmentation benchmark consists of 3D reconstructions of real rooms. It contains 1.5k rooms, some of which are repeated rooms captured with different sensors. We feed an entire room to a MinkowskiNet fully convolutionally, without cropping.
Stanford 3D Indoor Spaces (S3DIS). The dataset [2] contains 3D scans of six floors of three different buildings. We use the Fold #1 split, following many previous works. We use 5cm and 2cm voxels for the experiments.
RueMonge 2014 (Varcity). The RueMonge 2014 dataset [25] provides semantic labels for a multi-view 3D reconstruction of the Rue Monge. To create a 4D dataset, we crop the 3D reconstruction on-the-fly to generate a temporal sequence. We use the official split for all experiments.
Synthia 4D. We use the Synthia dataset [27] to create 3D video sequences. We use 6 sequences of driving scenarios in 9 different weather conditions. Each sequence consists of 4 stereo RGB-D images taken from the top of a car. We back-project the depth images to 3D space to create 3D videos. We visualize a part of a sequence in Fig. 1.

We use sequences 1–4, except for the sunset, spring, and fog conditions, for the train split; sequence 5 in foggy weather for validation; and sequence 6 in sunset and spring for the test. In total, the train/val/test sets contain 20k/815/1886 3D scenes respectively.

Since the dataset is purely synthetic, we add various noise to the input point clouds to simulate noisy observations. We use elastic distortion, Gaussian noise, and chromatic shift in the color for the noisy 4D Synthia experiments.
7.4 Results and Analysis
Table 1: 3D semantic segmentation results on the ScanNet benchmark (mIoU, %).

Method  mIoU
ScanNet [5]  30.6 
SSCUNet [10]  30.8 
PointNet++ [23]  33.9 
ScanNetFTSDF  38.3 
SPLATNet [28]  39.3 
TangentConv [29]  43.8
SurfaceConv [20]  44.2 
3DMV [6]  48.4 
3DMVFTSDF  50.1 
PointNet++SW  52.3 
MinkowskiNet42 (5cm)  67.9 
MinkowskiNet42 (2cm)  72.1 
SparseConvNet [10]  72.5 
Table 2: Segmentation results on the 4D Synthia dataset.

Method  mIoU  mAcc
3D MinkNet20  76.24  89.31 
3D MinkNet20 + TA  77.03  89.20 
4D Tesseract MinkNet20  75.34  89.27 
4D MinkNet20  77.46  88.013 
4D MinkNet20 + TS-CRF  78.30  90.23
4D MinkNet32 + TS-CRF  78.67  90.51
Table 3: Segmentation results on the noisy Synthia 4D dataset (per-class IoU, %).

IoU  Building  Road  Sidewalk  Fence  Vegetation  Pole  Car  Traffic Sign  Pedestrian  Lanemarking  Traffic Light  mIoU
3D MinkNet42  87.954  97.511  78.346  84.307  96.225  94.785  87.370  42.705  66.666  52.665  55.353  76.717 
3D MinkNet42 + TA  87.796  97.068  78.500  83.938  96.290  94.764  85.248  43.723  62.048  50.319  54.825  75.865 
4D Tesseract MinkNet42  89.957  96.917  81.755  82.841  96.556  96.042  91.196  52.149  51.824  70.388  57.960  78.871 
4D MinkNet42  88.890  97.720  85.206  84.855  97.325  96.147  92.209  61.794  61.647  55.673  56.735  79.836 
Table 4: Segmentation results on the Stanford 3D Indoor Spaces (S3DIS) dataset.

Method  mIoU  mAcc
PointNet [22]  41.09  48.98 
SparseUNet [9]  41.72  64.62 
SegCloud [30]  48.92  57.35 
TangentConv [29]  52.8  60.7 
3D RNN [32]  53.4  71.3 
PointCNN [15]  57.26  63.86 
SuperpointGraph [14]  58.04  66.5 
MinkowskiNet20  62.60  69.62 
MinkowskiNet32  65.35  71.71 
Table 5: Segmentation results on the RueMonge 2014 (Varcity) dataset.

Method  mIoU
MVCRF [26]  42.3 
Gradde et al. [7]  54.4 
RF+3D CRF [17]  56.4 
OctNet [24]  59.2
SPLATNet (3D) [28]  65.4 
3D MinkNet20  66.46 
4D MinkNet20  66.56 
4D MinkNet20 + TS-CRF  66.59
ScanNet & Stanford 3D Indoor. The ScanNet and the Stanford Indoor datasets are among the largest non-synthetic datasets, which makes them ideal test beds for 3D segmentation. We achieved +19% mIoU on ScanNet and +7% on Stanford compared with the best published works by the CVPR deadline. This is due to the depth of the networks and the fine resolution of the space. We trained the same network for 60k iterations with 2cm voxels and achieved 72.1% mIoU on ScanNet after the deadline. For all evaluations, we feed an entire room to a network and process it fully convolutionally.
4D analysis. The RueMonge dataset is a small dataset that covers one section of a street, so we were able to achieve the best result with the smallest network (Tab. 5). However, the results quickly saturate. On the other hand, the Synthia 4D dataset has an order of magnitude more 3D scans than any other dataset, so it is more suitable for the ablation study.
We use the Synthia dataset with and without noise for the 3D and 4D analysis; the results are presented in Tab. 2 and Tab. 3. We use various 3D and 4D networks with and without TS-CRF. In particular, when we simulate noise in the sensory inputs on the 4D Synthia dataset, we observe that the 4D networks are more robust to noise. Note that the number of parameters added to the 4D network compared with the 3D network is less than 6.4%, plus a small overhead for the TS-CRF. Thus, with a small increase in computation, we get a more robust algorithm with higher accuracy. In addition, when we process a temporal sequence using the 4D networks, we can even get a small speed gain, as we process data in batch mode. In Tab. 6, we vary the voxel size and the sequence length and measure the runtime of the 3D and 4D networks, as well as the 4D networks with TS-CRF. Note that for large voxel sizes, we tend to get a small speed gain in the 4D networks compared with the 3D networks.
Table 6: Runtime (in seconds) of the 3D and 4D networks and the 4D networks with TS-CRF for varying voxel sizes and video lengths.

Voxel Size  0.6m  0.45m  0.3m
Video Length (s)  3D  4D  4D-CRF  3D  4D  4D-CRF  3D  4D  4D-CRF
3  0.18  0.14  0.17  0.25  0.22  0.27  0.43  0.49  0.59 
5  0.31  0.23  0.27  0.41  0.39  0.47  0.71  0.94  1.13 
7  0.43  0.31  0.38  0.58  0.61  0.74  0.99  1.59  2.02 
8 Conclusion
In this paper, we propose the generalized sparse convolution and an auto-differentiation library for sparse tensors and the generalized sparse convolution. Using these, we create 4D convolutional neural networks for spatio-temporal perception. Experimentally, we show that 3D convolutional neural networks alone can outperform 2D networks, and that 4D perception can be more robust to noise.
9 Acknowledgements
Toyota Research Institute ("TRI") provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. We acknowledge the support of the System X Fellowship and the sponsoring companies: NEC Corporation, Nvidia, Samsung, and Tencent. We also acknowledge the academic hardware donation from Nvidia.
References
 [1] Andrew Adams, Jongmin Baek, and Myers Abraham Davis. Fast high-dimensional filtering using the permutohedral lattice. In Computer Graphics Forum, volume 29, pages 753–762. Wiley Online Library, 2010.

 [2] Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3D semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
 [3] Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.
 [4] Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.
 [5] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017.
 [6] Angela Dai and Matthias Nießner. 3DMV: Joint 3D-multi-view prediction for 3D semantic scene segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
 [7] Raghudeep Gadde, Varun Jampani, Renaud Marlet, and Peter Gehler. Efficient 2D and 3D facade segmentation using auto-context. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
 [8] Benjamin Graham. Spatially-sparse convolutional neural networks. arXiv preprint arXiv:1409.6070, 2014.
 [9] Ben Graham. Sparse 3d convolutional neural networks. British Machine Vision Conference, 2015.
 [10] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3D semantic segmentation with submanifold sparse convolutional networks. CVPR, 2018.
 [11] P. Hermosilla, T. Ritschel, P.-P. Vazquez, A. Vinacua, and T. Ropinski. Monte Carlo convolution for learning on non-uniformly sampled point clouds. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2018), 2018.

 [12] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning. The MIT Press, 2009.
 [13] Philipp Krähenbühl and Vladlen Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Advances in Neural Information Processing Systems 24, 2011.
 [14] Loic Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. arXiv preprint arXiv:1711.09869, 2017.
 [15] Yangyan Li, Rui Bu, Mingchao Sun, and Baoquan Chen. Pointcnn. arXiv preprint arXiv:1801.07791, 2018.
 [16] Maria Lorenzo-Valdés, Gerardo I Sanchez-Ortiz, Andrew G Elkington, Raad H Mohiaddin, and Daniel Rueckert. Segmentation of 4D cardiac MR images using a probabilistic atlas and the EM algorithm. Medical Image Analysis, 8(3):255–265, 2004.
 [17] Andelo Martinovic, Jan Knopp, Hayko Riemenschneider, and Luc Van Gool. 3d all the way: Semantic segmentation of urban scenes from start to end in 3d. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
 [18] Tim McInerney and Demetri Terzopoulos. A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4d image analysis. Computerized Medical Imaging and Graphics, 19(1):69–83, 1995.
 [19] Nvidia. Thrust: Parallel algorithm library.
 [20] Hao Pan, Shilin Liu, Yang Liu, and Xin Tong. Convolutional neural networks on 3d surfaces using parallel frames. arXiv preprint arXiv:1808.04952, 2018.
 [21] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
 [22] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. arXiv preprint arXiv:1612.00593, 2016.
 [23] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, 2017.
 [24] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

 [25] Hayko Riemenschneider, András Bódis-Szomorú, Julien Weissenberg, and Luc Van Gool. Learning where to classify in multi-view semantic segmentation. In European Conference on Computer Vision. Springer, 2014.
 [26] Hayko Riemenschneider, András Bódis-Szomorú, Julien Weissenberg, and Luc Van Gool. Learning where to classify in multi-view semantic segmentation. In European Conference on Computer Vision, 2014.
 [27] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M. Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
 [28] Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Vangelis Kalogerakis, MingHsuan Yang, and Jan Kautz. Splatnet: Sparse lattice networks for point cloud processing. arXiv preprint arXiv:1802.08275, 2018.
 [29] Maxim Tatarchenko*, Jaesik Park*, Vladlen Koltun, and QianYi Zhou. Tangent convolutions for dense prediction in 3D. CVPR, 2018.
 [30] Lyne P Tchapmi, Christopher B Choy, Iro Armeni, JunYoung Gwak, and Silvio Savarese. Segcloud: Semantic segmentation of 3d point clouds. International Conference on 3D Vision (3DV), 2017.
 [31] Parker Allen Tew. An investigation of sparse tensor formats for tensor libraries. PhD thesis, Massachusetts Institute of Technology, 2016.
 [32] Xiaoqing Ye, Jiamao Li, Hexiao Huang, Liang Du, and Xiaolin Zhang. 3d recurrent neural networks with context fusion for point cloud semantic segmentation. In The European Conference on Computer Vision (ECCV), September 2018.
 [33] A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser. 3dmatch: Learning the matching of local 3d geometry in range scans. In CVPR, 2017.
 [34] Yu Zhao, Xiang Li, Wei Zhang, Shijie Zhao, Milad Makkie, Mo Zhang, Quanzheng Li, and Tianming Liu. Modeling 4d fmri data via spatiotemporal convolutional neural networks (stcnn). arXiv preprint arXiv:1805.12564, 2018.
 [35] Shuai Zheng, Sadeep Jayasumana, Bernardino RomeraParedes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip H. S. Torr. Conditional random fields as recurrent neural networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), 2015.