3D voxel grids are an attractive representation for 3D structure learning because they can represent shapes with arbitrary topology and are well suited to convolutional neural network architectures. However, these advantages come at the cost of cubic storage and computation complexity, which significantly limits the efficiency and accuracy of deep structure learning models.
Recently, implicit functions have been drawing research attention as a promising 3D representation to resolve this issue. By representing a 3D shape as a function, discriminative neural networks can be trained to learn the mapping from a 3D location to a label, which either indicates the inside or outside of the shape [45, 4, 37] or a signed distance to the surface [53, 41]. Consequently, shape reconstruction requires sampling the function in 3D, and during training the sampled 3D locations must lie near the surface. Recent approaches based on implicit functions have shown superiority over point clouds in terms of geometry details, and advantages over meshes in their ability to represent arbitrary topologies. Although learning implicit functions with discriminative models is very memory efficient, these approaches require sampling dense 3D locations in a highly irregular manner during training, so the sampling strategy also affects the accuracy of shape reconstruction at test time.
To resolve this issue, we propose a method for 3D shape structure learning that leverages the advantages of continuous-function shape representations without requiring sampling in 3D. Rather than regarding a voxel grid as a set of individual 3D voxels, which suffers from cubic complexity in learning, we represent voxel grids as functions over a 2D domain that map 2D locations to 1D voxel tubes. This voxel tubelization regards a voxel grid as a set of tubes along any one of the three dimensions, for example Z, and indexes each tube by its 2D location on the plane spanned by the other two dimensions, i.e., X and Y. In addition, each tube is represented as a sequence of occupancy segments, where each segment consists of successive occupied voxels given by two 1D locations indicating the start and end points. Given a shape feature as a condition, this voxel tubelization enables us to propose a Seq2Seq model with attention as a discriminative model to predict each tube from its 2D location. Specifically, we leverage an RNN encoder to encode the 2D coordinates of a tube together with a shape condition, and an RNN decoder to sequentially predict the start and end locations of each occupancy segment in the tube. Because our approach essentially maps a coordinate sequence to another coordinate sequence, we call our method SeqXY2SeqZ. Given the 2D coordinates of a tube, SeqXY2SeqZ produces the 1D coordinates of the occupancy segments along the third dimension. Not only can SeqXY2SeqZ be evaluated at test time with a number of RNN steps that is only quadratic in the grid resolution, but it is also memory efficient enough to learn high resolution shape representations. Experimental results show that SeqXY2SeqZ outperforms the state-of-the-art methods. In summary, our contributions are as follows:
We propose a novel shape representation based on 2D functions that map 2D locations to sequences of 1D voxel tubes, avoiding the cubic complexity of voxel grids. Our representation enables 3D structure learning of voxel grids in a tube-by-tube manner via discriminative neural networks.
We propose SeqXY2SeqZ, an RNN-based Seq2Seq model with attention, to implement the mapping from 2D locations to 1D sequences. Given a 2D coordinate and a shape condition, SeqXY2SeqZ sequentially predicts occupancy segments in a 1D tube. It requires a number of RNN steps that grows only quadratically with resolution, and achieves high resolutions due to its memory efficiency.
SeqXY2SeqZ demonstrates the feasibility of generating 3D voxel grids using discriminative neural networks in a more efficient way, and achieves state-of-the-art results in shape reconstruction.
2 Related work
Deep learning models have made significant progress in 3D shape understanding tasks [14, 13, 15, 16, 19, 20, 11, 22, 17, 12, 36, 35, 21, 56, 25, 24]. Recent 3D structure learning methods are also mainly based on deep learning models, working on various 3D representations including voxel grids, point clouds, triangle meshes, and implicit functions.
Voxel-based models. Because of their regularity, many previous studies learned 3D structures from voxel grids with 3D supervision [6, 44] or with 2D supervision through differentiable renderers [59, 50, 49, 8, 58, 9]. Due to the cubic complexity of voxel grids, these generative models are limited to relatively low resolutions. Recent studies [6, 57, 60] employed shallow 3D convolutional networks to reconstruct voxel grids at higher resolutions, but the computational cost is still very large. To remedy this issue, some methods employed a multi-resolution strategy [23, 47]. However, these methods are complicated to implement and additionally require multiple passes over the input. Another alternative represents 3D shapes using multiple depth images. However, it is hard to maintain consistency across the generated depth images during inference.
Different from these generative neural networks, we provide a novel perspective to benefit from the regularity of voxel grids but avoid their cubic complexity by leveraging discriminative neural networks in shape generation.
Point cloud-based models. As pioneers, PointNet and PointNet++ enabled the learning of 3D structure from point clouds. Later, different variations were proposed to improve the learning of 3D structures from 3D point clouds [7, 35, 21] or from 2D images with various differentiable renderers [26, 28, 39, 54, 29]. Although point clouds are a compact and memory efficient 3D representation, they cannot express geometric details without additional non-trivial post-processing steps to generate meshes.
Mesh-based models. Meshes are also an attractive 3D representation in deep learning [51, 10, 55, 27, 30, 31, 33, 3]. Supervised methods employed 3D meshes as supervision to train networks by minimizing the location error of vertices with geometry constraints [51, 10, 55], while unsupervised methods relied on differentiable renderers to reconstruct meshes from multiple views [27, 30, 31, 33, 3]. However, these methods cannot generate arbitrary vertex topology but inherit the connectivity of the template mesh.
Implicit function-based models. Recently, implicit functions have become a promising 3D representation in deep learning models [45, 53, 37, 4, 40, 41, 38]. By representing a 3D shape as a 3D function, these methods employ discriminative neural networks to learn a mapping from a 3D location to an indicator labelling the inside or outside of the shape [45, 4, 37] or a signed distance to the surface [53, 41]. However, these methods require sampling points near 3D surfaces during training. To learn implicit functions without 3D supervision, Liu et al. introduced a novel ray-based field probing technique to mine supervision from 2D images; similarly, a concurrent work employed a network to map world coordinates to a feature representation of local scene properties. Although learning 3D implicit functions with discriminative models in a point-by-point manner is very memory efficient, it requires sampling dense and irregular 3D locations during training, so the sampling strategy also affects the accuracy of shape reconstruction at test time.
Although our method is also a discriminative network for 3D structure learning, it can benefit from the regularity of voxel grids by learning a 2D function. It is memory efficient and avoids the dense and irregular sampling during training.
The core idea of SeqXY2SeqZ is to represent shapes as 2D functions that map each 2D location to a sequence of 1D occupancy segments. More specifically, we interpret each 3D shape as a set of 1D tubes, where each tube is indexed by its 2D coordinate. Each tube consists of a sequence of occupancy segments, and we represent each segment by its 1D start and end locations. To generate a shape, SeqXY2SeqZ learns a 2D function that, given a tube's coordinate and a shape condition, predicts the start and end locations of each occupancy segment in the tube.
Fig. 1 illustrates how SeqXY2SeqZ generates a tube along the Z axis from its 2D coordinates on the X-Y plane. Specifically, we input the two components of the 2D coordinate sequentially into an encoder, and a decoder sequentially predicts the start and end locations of two occupancy segments along the Z axis. In the figure, there is one occupancy segment containing only one voxel, and a second, longer segment; the decoder therefore sequentially predicts the start and end locations of both segments to reconstruct the tube. In addition, the decoder outputs a binary flag to indicate whether there is any occupancy segment in this tube at all. The encoder also takes a shape condition, obtained from an image or a learned feature, as input to provide information about the shape to reconstruct.
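For concreteness, the decoder's output for one tube can be turned back into voxels as follows. This is a minimal sketch under our own naming (the function and its arguments are not from the paper), with inclusive segment indices as in Fig. 1:

```python
def segments_to_tube(flag, segments, resolution):
    """Rebuild one 1D voxel tube from the decoder's predictions:
    a global occupancy flag and inclusive (start, end) index pairs."""
    tube = [0] * resolution
    if not flag:          # decoder says the tube is entirely empty
        return tube
    for start, end in segments:
        for z in range(start, end + 1):
            tube[z] = 1   # fill every voxel inside the segment
    return tube
```

For example, the tube in Fig. 1 with a single-voxel segment and a longer segment would be recovered by passing both predicted (start, end) pairs together with a true occupancy flag.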
4 Voxel Tubelization
To train the SeqXY2SeqZ model, we first need to convert each 3D voxel grid into a tubelized representation consisting of sets of 1D voxel tubes over a 2D plane. For a 3D shape represented by a voxel grid, voxel tubelization re-organizes the voxels into a set of tubes along one of the three axes. Each tube can then be indexed by its 2D coordinate on the plane spanned by the other two dimensions. We further represent each tube using a run-length encoding of its occupancy segments. An occupancy segment is a set of consecutive voxels that are occupied by the shape, which we encode as a pair of discrete 1D start and end indices, to be predicted with a discriminative approach. In our experimental section we show that this representation is effective irrespective of the axis that is leveraged for the tubelization. Our approach takes advantage of the following properties of voxel tubelization and run-length encoding of occupancy segments:
First, run-length encoding of occupancy segments significantly reduces the memory complexity of 3D grids, since only two indices are needed to encode each segment, irrespective of its length.
Second, our approach allows us to represent shapes as 2D functions that map 2D locations to sequences of 1D occupancy segments, which we implement using discriminative neural networks. This is similar to shape representations based on 3D implicit functions implemented by discriminative networks, but our approach requires only a quadratic number of RNN evaluation steps during shape reconstruction.
Third, networks that predict voxel occupancy using a scalar probability require an occupancy probability threshold as a hyperparameter, which can have a large influence on the reconstruction accuracy. In contrast, we predict start and end locations of occupancy segments and do not require such a parameter.
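To make the tubelization and run-length encoding concrete, the following is a minimal sketch; the function name and the nested-list grid format are our assumptions, not the paper's implementation:

```python
def tubelize(voxels):
    """Run-length encode a voxel grid (nested lists of 0/1 values,
    indexed as voxels[x][y][z]) into tubes along the Z axis.
    Returns {(x, y): [(start, end), ...]} with inclusive indices."""
    tubes = {}
    for x, plane in enumerate(voxels):
        for y, tube in enumerate(plane):
            segments, start = [], None
            for z, v in enumerate(tube):
                if v and start is None:
                    start = z                        # a segment begins
                elif not v and start is not None:
                    segments.append((start, z - 1))  # the segment ends
                    start = None
            if start is not None:                    # segment reaches the border
                segments.append((start, len(tube) - 1))
            tubes[(x, y)] = segments
    return tubes
```

For instance, a tube with occupancy pattern [0, 1, 1, 0, 1] is encoded as the two segments (1, 2) and (4, 4), illustrating why each segment costs only two indices regardless of its length.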
SeqXY2SeqZ aims to learn to generate each tube from its coordinate and a shape condition. We use an RNN encoder to encode the coordinate and the shape condition, while an RNN decoder produces the start and end locations of the occupancy segments in .
RNN encoder. We condition the RNN encoder on a global shape feature that represents the unique 3D structure of each object. In 3D shape reconstruction from a single image, for example, the condition could be a feature vector extracted from the image to guide the 3D shape reconstruction. In a 3D-shape-to-3D-shape translation application, it could be a feature vector learned jointly with the other parameters of the networks, such as shape memories or codes.
As shown in Fig. 2(a), the RNN encoder aggregates the shape condition and a 2D coordinate into a hidden state, which is subsequently leveraged by the RNN decoder to generate the corresponding tube. Rather than directly employing a location as a discrete integer, we represent it by a location embedding, which makes locations meaningful in feature space. To this end, we maintain a location embedding matrix along each axis. Each matrix holds the embeddings of all locations along its axis as rows, so that we can obtain the embedding of a specific location by a lookup. In the case of tubelization along the Z axis demonstrated in Fig. 1, the RNN encoder employs the location embeddings along the X and Y axes.
We employ Gated Recurrent Units (GRU) as the RNN cells in SeqXY2SeqZ. At each step, a hidden state is produced, and the hidden state at the last step is leveraged by the RNN decoder to predict the tube for the reconstruction of a shape under the given condition.
Location embedding. Although we could employ three different location embedding matrices to hold embeddings for locations along the X, Y, and Z axes separately, we can also share them. For example, we can employ the same location embedding matrix for the two axes spanning the plane used for indexing the 1D tubes, as in the case shown in Fig. 1. In our experiments, we show that we can even employ only one location embedding matrix for all three axes. Shareable location embeddings significantly increase the memory efficiency of SeqXY2SeqZ.
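A location embedding lookup amounts to selecting a row of a learned matrix. The sketch below illustrates this with randomly initialized rows standing in for learned ones; the names, dimensions, and initialization range are our assumptions:

```python
import random

def make_embedding_matrix(num_locations, dim, seed=0):
    """One row per discrete location along an axis; rows would be
    learned during training, here they are just randomly initialized."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
            for _ in range(num_locations)]

def embed(matrix, location):
    """Look up the embedding of a discrete 1D location."""
    return matrix[location]

# Sharing one matrix across the two axes that index the tubes,
# as in the case shown in Fig. 1:
E = make_embedding_matrix(num_locations=32, dim=8)
x_embedding = embed(E, 3)  # embedding for x = 3
y_embedding = embed(E, 7)  # the same matrix is reused for y = 7
```

Sharing the matrix across axes is what keeps the number of embedding parameters linear in the resolution.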
RNN decoder. With the hidden state from the RNN encoder, the RNN decoder generates a tube for the shape indicated by the condition by sequentially predicting the start and end locations of each occupancy segment. To handle tubes with no occupancy segments, we include an additional global occupancy indicator that the decoder predicts first; it indicates whether the current tube contains any occupancy segments at all.
We concatenate the global occupancy indicator and the start and end locations of all occupancy segments into a single target sequence. Note that the start and end points are discrete voxel locations, which we interpret as class labels. At each step, the RNN decoder selects a discrete label to determine either a start or an end location. Therefore, we leverage the following cross-entropy classification loss to push the decoder to predict the correct label sequence as accurately as possible on the training set,
where the loss sums the negative log probability of correctly predicting each element of the target sequence given the preceding elements and the hidden state from the encoder. Finally, our objective function is given as
where the parameters of the RNN encoder and decoder are optimized, the shape condition is fixed or trainable depending on the application, and the location embedding matrices may be shared.
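Under assumed notation (the symbols are ours: $t_j$ denotes the $j$-th element of the concatenated target sequence $t$, $t_{<j}$ the preceding elements, and $h$ the encoder's final hidden state), the per-tube cross-entropy loss described above can be sketched as:

```latex
\mathcal{L}(t \mid h) = -\sum_{j=1}^{|t|} \log p\left(t_j \mid t_{<j}, h\right)
```

The overall objective then sums this loss over all tubes and all training shapes.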
Training proceeds in a step-by-step manner as shown in Fig. 2(b). At each step, the next element of the target sequence is predicted through a softmax layer. This element is either the true or false value of the global occupancy indicator, or a start or end location of an occupancy segment within the range of the grid resolution. In addition, for each predicted element we look up its location embedding from the location embedding matrix of the coordinate axis corresponding to the tube direction; this embedding is then used in the prediction at the next step. For the tubelization along the Z axis demonstrated in Fig. 1, the lookup uses the matrix along the Z axis, where each row represents the embedding of a location, and two additional rows encode the true and false values of the global occupancy indicator.
Attention. Finally, we leverage a state-of-the-art attention mechanism to increase the accuracy of the predicted locations. For each prediction of the decoder, we employ a context vector that summarizes how well each step of the encoder matches the current prediction.
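The context vector is a weighted sum of the encoder's hidden states, with weights obtained by a softmax over alignment scores. A minimal sketch (the scores stand in for the learned alignment model, which we do not reproduce here):

```python
import math

def attention_context(encoder_states, scores):
    """Combine encoder hidden states into a context vector using
    softmax-normalized alignment scores (one score per encoder step)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(encoder_states[0])
    # Weighted sum of the hidden states, dimension by dimension
    return [sum(w * state[d] for w, state in zip(weights, encoder_states))
            for d in range(dim)]
```

With equal scores the context vector is simply the mean of the encoder states; a learned alignment model would sharpen the weights toward the most relevant encoder step.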
6 Experiments and Analysis
We employ tubelization along the Y axis in all our experiments and learn only two location embedding matrices: we share one matrix across the X and Z axes, which provide the 2D coordinates of tubes, and use a separate matrix along the Y axis. The location embeddings and the hidden states of the RNNs have the same dimensionality, and the RNN encoder is bidirectional.
We train SeqXY2SeqZ using the Adam optimizer with a fixed batch size and learning rate in all experiments. The maximum numbers of steps in the encoder and decoder are 4 and 30, respectively. We employ volumetric IoU to evaluate the accuracy of the reconstructed shapes; all reported IoU values are scaled by a constant factor.
6.1 Representation ability
Dataset. For fair comparison, we leverage five widely used categories from ShapeNetCore in this subsection, including airplane, car, chair, rifle, and table, and keep the same train and test split as prior work. The ground truth shapes are voxelized at a fixed resolution.
Auto-encoding. We evaluate the representation ability of SeqXY2SeqZ in an auto-encoding task, leveraging a learnable shape condition to represent each shape. Specifically, shape features are learned together with the other parameters of the RNN during training. During testing, we keep updating the shape features while fixing the parameters of the RNN, including the location embedding matrices, similar to shape memories or codes. Note that the shape features are vectors of the same dimensionality as the location embeddings.
In this task, we compare SeqXY2SeqZ with results from the implicit decoder (IM) and the occupancy network (OccNet). We show the comparison in Table 1, where, following IM, the mean IoU over the first 100 shapes in the test set of each category is reported; OccNet only reported its results on the training set of the chair class at a resolution of 256.
As shown by "Our(512-512)" in Table 1, our results with 512-dimensional location embeddings and 512-dimensional hidden states are the best among all compared methods for all shape categories. If we increase the learning capacity of SeqXY2SeqZ by using location embeddings and hidden states with higher dimensions, such as 2048 and 1024 as shown by "Our(2048-1024)", we achieve an even higher IoU on the challenging chair class.
In Fig. 3, we visualize reconstructed shapes from the test set of each category using our best results in Table 1. The high fidelity of the reconstructed shapes demonstrates that SeqXY2SeqZ is capable of learning very complex 3D structures, such as those of chairs and tables.
Tubelization direction. We can tubelize a voxel grid along any one of the X, Y, or Z axes, and the direction must be kept consistent between training and testing. Although the tubelization direction may lead to different ways of learning 3D structure, SeqXY2SeqZ does not exhibit any bias toward a particular direction. We demonstrate this by training SeqXY2SeqZ using voxel grids tubelized along the X, Y, and Z axes, respectively. Table 2 shows that we achieve comparable results along the three tubelization directions on the airplane class. Visual comparisons are shown in Fig. 4(a).
High resolutions. Thanks to the 2D functions and the shareable location embedding matrices, SeqXY2SeqZ is memory efficient enough to reconstruct shapes at high resolutions. We show auto-encoded airplanes at different resolutions in Fig. 4(b). The high fidelity of these shapes demonstrates our capability of high-resolution reconstruction.
6.2 Single Image 3D Reconstruction
Dataset. We employ the released dataset that contains 3D shapes from 13 categories of ShapeNetCore. We also use the same train and test split, where each shape is represented as a voxel grid accompanied by 24 rendered images. While many 3D reconstruction techniques (including ours, see Table 1 and Fig. 4) support higher resolutions, we follow previous works [48, 44, 32, 34] and use the ground truth voxel grids of the benchmark to provide a comparison to a broad range of competing approaches.
Single image reconstruction. We leverage a CNN encoder to extract a 512-dimensional feature from a rendered image as the shape condition in this experiment. We compare with state-of-the-art supervised and unsupervised methods in Table 3. Among these methods, "DISN-V" is a network formed by a DISN encoder and a 3D CNN decoder, "DISN-C" is DISN working with estimated camera poses, which are required in its reconstruction, and "PTN-R" is the retrieval result from PTN. Besides the voxel-based methods, including R2N2, PTN, and Matryoshka, all the other methods represent 3D shapes as triangle meshes, where IM, OccNet, and DISN are based on learning 3D implicit functions. For fair comparison, all the results listed here are taken from the literature rather than reproduced by us. For example, the results of NMR, SoftRas, and DIB-R are all from DIB-R.
Table 3 demonstrates the performance of our method: in terms of mean IoU, we improve over the best 3D implicit function based method (DISN) and over the best unsupervised method (DIB-R). We achieve the best IoU in 7 out of 13 categories among all supervised methods, and in 8 out of 13 categories among all unsupervised methods. Matryoshka comes closest to our performance, but it employs non-standard augmentation on training images, which we omit. Fig. 6 shows a visual comparison, where the shapes are reconstructed from the same input images using the trained network parameters released by the different methods. Although we trained our method at a moderate resolution, the high accuracy enables us to reveal complex geometry that other methods cannot handle, making our results comparable to the meshes reconstructed by other methods. Fig. 5 shows additional airplanes and tables reconstructed by our method.
6.3 Ablation Studies and Analysis
Ablation studies. We highlight some elements of our method with ablation studies on single image reconstruction for the chair class in Table 4. We compare our result with variants without attention ("NoAtt"), with LSTM RNN cells ("LSTM"), and with a unidirectional RNN encoder ("SingleDir"). We find that GRU performs better than LSTM, and that both the attention mechanism and the bidirectional RNN encoder contribute to the performance.
Shareable location embedding matrix. Memory efficiency is one advantage of SeqXY2SeqZ. We achieve this not only by avoiding the direct involvement of 3D voxel grids, but also by sharing the location embedding matrices. The experiments above have shown the effectiveness of sharing one location embedding matrix between the X and Z axes that define the plane indexing the tubes. In this experiment, we go one step further and employ only one location embedding matrix for all three axes. We again tubelize the voxel grids along the Y axis and train SeqXY2SeqZ on the chair class for single image 3D reconstruction. In Table 4, "ShareableXYZ" still achieves a result comparable to "Our(GRU)".
Location embedding visualization. We visualize the location embeddings learned in the auto-encoding experiment of Table 1 in Fig. 7 (a), where each class leverages two sets of location embeddings: one shared by the X and Z axes, and one along the Y axis. We visualize each set of location embeddings as a cosine distance matrix whose elements are the pairwise cosine distances between any two location embeddings. The structure of a shape category manifests in the distinctive patterns of its cosine distance matrix, which demonstrates the effectiveness of the learned location embeddings. In each similarity matrix, blue means two location embeddings are more similar while yellow means they are more different; the similarity indicates whether the two corresponding locations have similar occupancy surroundings. For a class containing shapes with similar structures, like cars, the patterns are more obvious, while for a class with large structure variations, like chairs, they are less obvious. In addition, we visualize the location embeddings learned in the single image reconstruction experiment of Table 3 in Fig. 7 (b), where we also observe different patterns of the cosine distance matrix for different shape categories. Note that the distance matrices in Fig. 7 (a) and Fig. 7 (b) have different dimensions but are shown at the same size.
Attention visualization. We further visualize the attention learned in the auto-encoding experiment of Table 1. At each 2D coordinate, an attention vector over all encoder steps is learned at each decoder step. For each decoder step, we leverage entropy to visualize the attention at all 2D coordinates as an attention image (coordinates with no output at this decoder step receive a constant value), and we normalize the whole attention image by the maximal entropy. We show attention images for the first five decoder steps of each shape in Fig. 8 (a). In each image, higher entropy (lighter color) indicates that the decoder step attends more equally to all encoder steps to generate a more complex structure, such as a chair, while lower entropy (darker color) indicates that the decoder step focuses on a specific encoder step to generate a relatively simple structure, such as a car. Similarly, we visualize the attention learned in the single image reconstruction experiment of Table 3 in Fig. 8 (b), where the chair can be reconstructed with only one occupancy segment at all 2D coordinates, which makes the attention much simpler than for the chair in Fig. 8 (a). Note that the attention images in Fig. 8 (a) and Fig. 8 (b) have different dimensions but are shown at the same size.
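The entropy used in these visualizations can be computed per decoder step directly from the attention weights. A sketch under our naming (the small epsilon guards against log of zero):

```python
import math

def attention_entropy(weights, eps=1e-12):
    """Entropy of one attention distribution over encoder steps.
    High entropy: attention spread over many steps;
    near zero: attention concentrated on a single step."""
    return -sum(w * math.log(w + eps) for w in weights if w > 0)
```

A uniform distribution over n encoder steps yields the maximal entropy log(n), which is what the attention images are normalized by.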
Occupancy segment visualization. With the auto-encoded shapes, we demonstrate the efficiency of our voxel tubelization by visualizing the number of predicted occupancy segments at each 2D coordinate in Fig. 9. For a simple car in Fig. 9 (a), a single occupancy segment per 2D coordinate suffices to represent the geometry. Although the table in Fig. 9 (b) is more complex, no more than three occupancy segments are needed at almost all 2D coordinates. In Fig. 9 (c) and (d), we visualize reconstructed chairs with different numbers of occupancy segments. Even for complex chair structures, almost the whole shape can be reconstructed using only two occupancy segments.
Interpolation. We visualize the shape condition space learned in the auto-encoding experiment of Table 1 by interpolating between two shapes. We interpolate between two learned shape conditions and reconstruct shapes from the interpolated features, as shown in Fig. 10. The transition shows how one shape is gradually transformed into another by manipulating occupancy segments.
Memory and computation time. We compare memory and computation time requirements with methods based on learning 3D implicit functions, including OccNet and DISN, in Table 5. To reconstruct a 3D shape from a single image at test time, OccNet must obtain occupancy values for a large number of sampled points with additional steps of subdivision, while DISN must obtain SDF values for dense samples; both involve higher complexity than our RNN steps. Since DISN cannot run on a single GPU as OccNet and SeqXY2SeqZ do, we report a fair comparison in terms of CPU run time and RAM usage for reconstructing one shape from a single image. Benefiting from learning 2D functions that predict sparse representations of 1D voxel tubes, SeqXY2SeqZ achieves both the lowest time and memory requirements by a large margin.
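The asymmetry in query counts can be made concrete: implicit-function methods evaluate the network once per sampled 3D point, which grows cubically with resolution, while SeqXY2SeqZ runs one encoder-decoder pass per 2D tube, which grows quadratically. A back-of-the-envelope sketch (the counts are illustrative, not the exact numbers reported by the compared papers):

```python
def query_counts(resolution):
    """Rough number of network evaluations to reconstruct one shape:
    dense 3D sampling vs. one Seq2Seq pass per 2D tube location."""
    implicit_queries = resolution ** 3  # one query per voxel-sized sample
    tube_sequences = resolution ** 2    # one pass per (x, y) tube index
    return implicit_queries, tube_sequences

# At a resolution of 128, dense sampling needs 128x more queries
# than there are tubes:
dense, tubes = query_counts(128)
```

This gap is the source of the run-time advantage in Table 5, before even accounting for OccNet's extra subdivision steps.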
7 Conclusion
We propose SeqXY2SeqZ to learn the structure of 3D shapes using a discriminative neural network that benefits from the regularity inherent in voxel grids during both training and testing while avoiding their cubic complexity, achieving high memory efficiency. SeqXY2SeqZ resolves the dense and irregular sampling that 3D implicit function-based methods require during structure learning and inference, which leads to higher inference times compared to our approach. This is achieved by encoding voxel grids with our 1D voxel tubelization, which effectively represents a voxel grid as a mapping from discrete 2D coordinates to sequences of discrete 1D locations. This mapping further enables SeqXY2SeqZ to effectively learn 3D structures as 2D functions. We demonstrate that SeqXY2SeqZ outperforms the state-of-the-art methods on widely used benchmarks.
-  (2014) Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. Cited by: §5.
-  (2015) ShapeNet: an information-rich 3D model repository. CoRR abs/1512.03012. Cited by: §6.1, §6.2.
-  (2019) Learning to predict 3D objects with an interpolation-based differentiable renderer. CoRR abs/1908.01210. Cited by: §2, §6.2, Table 3.
Learning implicit fields for generative shape modeling.
IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §1, §2, §6.1, §6.1, §6.2, Table 1, Table 3.
-  (2014) On the properties of neural machine translation: encoder-decoder approaches. In SSST@EMNLP, pp. 103–111. Cited by: §5.
-  (2016) 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In Proceedings of European Conference on Computer Vision, pp. 628–644. Cited by: §2, §6.2, §6.2, Table 3.
-  (2017) A point set generation network for 3D object reconstruction from a single image. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2463–2471. Cited by: §2.
-  (2017) 3D shape induction from 2D views of multiple objects. In International Conference on 3D Vision, pp. 402–411. Cited by: §2.
-  (2019) Shape reconstruction using differentiable projections and deep priors. In International Conference on Computer Vision, Cited by: §2.
-  (2018) A papier-mâché approach to learning 3D surface generation. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2, Table 3.
-  (2019) Parts4Feature: learning 3D global features from generally semantic parts in multiple views. In IJCAI, Cited by: §2.
-  (2019) Unsupervised learning of 3D local features from raw voxels based on a novel permutation voxelization strategy. IEEE Transactions on Cybernetics 49 (2), pp. 481–494. Cited by: §2.
Mesh convolutional restricted boltzmann machines for unsupervised learning of features with structure preservation on 3D meshes. IEEE Transactions on Neural Network and Learning Systems 28 (10), pp. 2268 – 2281. Cited by: §2.
-  (2016) Unsupervised 3D local feature learning by circle convolutional restricted boltzmann machine. IEEE Transactions on Image Processing 25 (11), pp. 5331–5344. Cited by: §2.
-  (2017) BoSCC: bag of spatial context correlations for spatially enhanced 3D shape representation. IEEE Transactions on Image Processing 26 (8), pp. 3707–3720. Cited by: §2.
-  (2018) Deep Spatiality: unsupervised learning of spatially-enhanced global and local 3D features by deep neural network with coupled softmax. IEEE Transactions on Image Processing 27 (6), pp. 3049–3063. Cited by: §2.
-  (2019) 3D2SeqViews: aggregating sequential views for 3D global feature learning by cnn with hierarchical attention aggregation. IEEE Transactions on Image Processing 28 (8), pp. 3986–3999. Cited by: §2.
-  (2019) View Inter-Prediction GAN: unsupervised representation learning for 3D shapes by learning global shape memories to support local view predictions. In AAAI, pp. 8376–8384. Cited by: §5, §6.1.
-  (2019) SeqViews2SeqLabels: learning 3D global features via aggregating sequential views by rnn with attention. IEEE Transactions on Image Processing 28 (2), pp. 685–672. Cited by: §2.
-  (2019) Y2Seq2Seq: cross-modal representation learning for 3D shape and text by joint reconstruction and prediction of view and word sequences. In AAAI, pp. 126–133. Cited by: §2.
-  (2019) Multi-angle point cloud-VAE: unsupervised feature learning for 3D point clouds from multiple angles by joint self-reconstruction and half-to-half prediction. In ICCV, Cited by: §2, §2.
-  (2019) 3DViewGraph: learning global features for 3D shapes from a graph of unordered views with attention. In IJCAI, Cited by: §2.
-  (2017) Hierarchical surface prediction for 3D object reconstruction. In International Conference on 3D Vision, pp. 412–420. Cited by: §2.
-  (2019) Render4Completion: synthesizing multi-view depth maps for 3D shape completion. arXiv abs/1904.08366. Cited by: §2.
-  (2020) 3D shape completion with multi-view consistent inference. In AAAI, Cited by: §2.
-  (2018) Unsupervised learning of shape and pose with differentiable point clouds. In Advances in Neural Information Processing Systems, pp. 2807–2817. Cited by: §2.
-  (2018) Neural 3D mesh renderer. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3907–3916. Cited by: §2, §6.2, Table 3.
-  (2019) CAPNet: continuous approximation projection for 3D point cloud reconstruction using 2D supervision. In AAAI, Cited by: §2.
-  (2018) Learning efficient point cloud generation for dense 3D object reconstruction. In AAAI Conference on Artificial Intelligence, Cited by: §2.
-  (2018) Paparazzi: surface editing by way of multi-view image processing. ACM Transactions on Graphics. Cited by: §2.
-  (2019) Beyond pixel norm-balls: parametric adversaries using an analytically differentiable renderer. In International Conference on Learning Representations, Cited by: §2.
-  (2019) Soft rasterizer: differentiable rendering for unsupervised single-view mesh reconstruction. CoRR abs/1901.05567. Cited by: §6.2, §6.2, Table 3.
-  (2019) Soft rasterizer: a differentiable renderer for image-based 3D reasoning. The IEEE International Conference on Computer Vision. Cited by: §2, §6.2.
-  (2019) Learning to infer implicit surfaces without 3D supervision. In Advances in Neural Information Processing Systems, Cited by: §2, §6.2, Table 3.
-  (2019) Point2Sequence: learning the shape representation of 3D point clouds with an attention-based sequence to sequence network. In AAAI, pp. 8778–8785. Cited by: §2, §2.
-  (2019) L2G auto-encoder: understanding point clouds by local-to-global reconstruction with hierarchical self-attention. In ACMMM, Cited by: §2.
-  (2019) Occupancy networks: learning 3D reconstruction in function space. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2, §6.1, §6.2, §6.3, Table 1, Table 3, Table 5.
-  (2019) Deep level sets: implicit surface representations for 3D shape inference. CoRR abs/1901.06802. Cited by: §2.
-  (2019) DIFFER: moving beyond 3D reconstruction with differentiable feature rendering. In CVPR Workshops, Cited by: §2.
-  (2019) Texture fields: learning texture representations in function space. Cited by: §2.
-  (2019) DeepSDF: learning continuous signed distance functions for shape representation. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §1, §2, §5, §6.1.
-  (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2.
-  (2017) PointNet++: deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pp. 5105–5114. Cited by: §2.
-  (2018) Matryoshka networks: predicting 3D geometry via nested shape layers. In CVPR, pp. 1936–1944. Cited by: §2, §6.2, §6.2, §6.2, Table 3.
-  (2019) PIFu: pixel-aligned implicit function for high-resolution clothed human digitization. IEEE International Conference on Computer Vision. Cited by: §1, §2.
-  (2019) Scene representation networks: continuous 3D-structure-aware neural scene representations. In Advances in Neural Information Processing Systems, Cited by: §2.
-  (2017) Octree generating networks: efficient convolutional architectures for high-resolution 3D outputs. In IEEE International Conference on Computer Vision, pp. 2107–2115. Cited by: §2.
-  (2019) What do single-view 3D reconstruction networks learn? In The IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §6.2.
-  (2018) Multi-view consistency as supervisory signal for learning shape and pose prediction. In Computer Vision and Pattern Recognition, Cited by: §2.
-  (2017) Multi-view supervision for single-view reconstruction via differentiable ray consistency. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 209–217. Cited by: §2.
-  (2018) Pixel2Mesh: generating 3D mesh models from single RGB images. In European Conference on Computer Vision, pp. 55–71. Cited by: §2, Table 3.
-  (2019) 3DN: 3D deformation network. In CVPR, Cited by: Table 3.
-  (2019) DISN: deep implicit surface network for high-quality single-view 3D reconstruction. In NeurIPS, Cited by: §1, §2, §6.2, §6.3, Table 3, Table 5.
-  (2019) Differentiable surface splatting for point-based geometry processing. ACM Transactions on Graphics 38 (6). Cited by: §2.
-  (2019) Pixel2Mesh++: multi-view 3D mesh generation via deformation. In IEEE International Conference on Computer Vision, Cited by: §2.
-  (2020) Point cloud completion by skip-attention network with hierarchical folding. In The IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2.
-  (2017) MarrNet: 3D shape reconstruction via 2.5D sketches. In Advances in Neural Information Processing Systems, pp. 540–550. Cited by: §2.
-  (2016) Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Advances in Neural Information Processing Systems, pp. 82–90. Cited by: §2.
-  (2016) Perspective transformer nets: learning single-view 3D object reconstruction without 3D supervision. In Advances in Neural Information Processing Systems, pp. 1696–1704. Cited by: §2, §6.2, Table 3.
-  (2018) Learning to reconstruct shapes from unseen classes. In Advances in Neural Information Processing Systems, pp. 2257–2268. Cited by: §2.