1 Introduction
Representing 3D objects efficiently is a prerequisite for a multitude of contemporary computer vision and machine learning applications, including robotic manipulation kehoe2015survey and autonomous driving yang2018pixor . 3D registration devices currently used to create those representations, such as LIDARs and depth cameras, sample object surfaces and output a set of 3D points called a point cloud.
Processing point clouds poses several challenges. First of all, the size of the point cloud can vary between objects, and processing variable-size inputs is cumbersome for contemporary neural networks used in practical applications. Although one can subsample or upsample point clouds, doing so requires additional processing steps, continuous signed distance functions park2019deepsdf , or even separate models yifan2019patch ; yu2018pu . Other solutions to this problem rely on discretizing 3D space into regular 3D voxel grids wu20153d ; wu2016learning , collections of images su2015multi , or occupancy grids ji20123d ; maturana2015voxnet . These approaches, however, increase the memory footprint of object representations and lead to quantization losses. Secondly, processing point clouds with neural networks is challenging due to the lack of ordering within sets of 3D points. More precisely, permuting the points in the cloud can lead to inconsistent outputs. DeepSets zaheer2017deep and PointNet qi2017pointnet ; qi2017pointnet++ address this problem by including permutation-invariant layers in neural network architectures. Nonetheless, the same modifications cannot be used when the task requires a model to produce outputs of various sizes, e.g. in point cloud reconstruction tasks.
More recent methods that create representations of 3D objects from variable-size, unordered point clouds rely on generative neural networks that treat a point cloud as a sample from a 3D probability distribution yang2019pointflow ; stypulkowski2019conditional ; spurek2020hypernetwork . PointFlow yang2019pointflow returns a probability distribution over the points of a 3D object, instead of an exact set of points. Its main limitation, however, is a computationally expensive training process caused by conditioning the Continuous Normalizing Flow (CNF) module grathwohl2018ffjord of the network on the autoencoder latent space. As a consequence, PointFlow models require a significant number of parameters, which results in a high memory footprint and a long training procedure. To reduce this burden and simplify the model, HyperCloud spurek2020hypernetwork uses a hypernetwork, instead of a CNF module as in PointFlow, to return the weights of a fully-connected target network that maps a uniform distribution on a 3D ball to a 3D point cloud. Although the simplicity of this approach makes HyperCloud more efficient, the quantitative results obtained by the model are inferior to those of PointFlow, mostly because conventional fully-connected neural networks are not capable of modeling complex 3D point cloud structures. Even though a more sophisticated CNF used as a target network could address this shortcoming, the formulation of HyperCloud does not allow sampling from a prior with non-compact support, which the CNF requires to work.
In this paper, we take a fundamentally different approach to representing 3D objects and, inspired by mesh triangulation methods used in computer graphics edelsbrunner_2000 , we model objects as families of surfaces. More specifically, we consider a point cloud as a sample from a distribution on object surfaces with additive noise introduced by a registration device, such as LIDAR. To model this distribution, we propose a new Spherical LogNormal function which mimics the topology of 3D objects and provides non-compact support. This, in turn, enables effective utilization of a CNF model as part of a hypernetwork, instead of a fully-connected neural network as done in HyperCloud spurek2020hypernetwork .
The resulting generative model we introduce in this work, dubbed HyperFlow (the code is available at https://github.com/maciejzieba/HyperFlow), produces state-of-the-art generative results both for point cloud and mesh representations. Because we rely on a hypernetwork instead of conditioning a CNF on the autoencoder latent space, our model uses far fewer parameters for the CNF function. As a result, we reduce the training time and corresponding memory footprint of the model by over an order of magnitude with respect to the competing PointFlow.
Our contributions can be summarized as follows:

We introduce a new HyperFlow generative network that models 3D objects as families of surfaces and allows building state-of-the-art point cloud representations that can be transformed into 3D meshes by leveraging the generative properties of a target network.

We propose a new Spherical LogNormal distribution which models a point cloud density with non-compact support and, hence, can be effectively used by a CNF model.

To the best of our knowledge, our work is the first approach to train a CNF as a target network, which reduces its training time and memory footprint by over an order of magnitude while preserving state-of-the-art generative capabilities.
2 Spherical LogNormal distribution and the triangulation trick
In this section, we introduce a Spherical LogNormal distribution that models the density of point clouds around the surfaces of 3D objects and show how it can be used to generate meshes via the so-called triangulation trick.
Since our approach relies on flow-based models, a density distribution has to fulfill several conditions to be used in practice. First of all, flow-based methods cannot be trained on probability distributions with compact support. For instance, it is not possible to train a flow-based model on a uniform distribution on a 3D ball, as proposed in HyperCloud spurek2020hypernetwork , since computing the log-likelihood cost function used in flows would return infinite values for this distribution. As a result, the model does not converge due to numerical instability. Secondly, we would like to model the probability distribution of the surface (mesh representation), which is two-dimensional (the boundary of a 3D object). Therefore, a Gaussian distribution in $\mathbb{R}^3$ is not a good choice, since it models a full three-dimensional volume. Finally, the density distribution should be topologically coherent with the density of the modeled object. More precisely, because of the way registration devices sample the space around object surfaces, point clouds have the highest density around object boundaries and lack points inside the object structure. Modeling this density with a distribution that does not allow discontinuities is infeasible as per Theorem 2.1 theorem1 .
Theorem 2.1.
There is no continuous invertible map between the 3-ball and the 2-sphere that respects the boundary.
For modeling the surface of an object with a continuous, invertible map, one shall consider the topology of the object rezende2015variational ; grathwohl2018ffjord ; behrmann2018invertible . To learn a transformation that is continuous, invertible and yields results close to the object boundary, one has to choose a prior that is topologically similar to the expected point cloud, i.e. has the same number of discontinuities. (Continuous normalizing flows such as FFJORD grathwohl2018ffjord are able to approximate discontinuous density functions. This, however, remains insufficient to model high-quality 3D point clouds while generating a continuous parametrization of object surfaces. Consequently, in our approach, we propose a density distribution without compact support and with a single discontinuity, which corresponds to the topology of 3D objects represented with point clouds.) Therefore, we construct a probability distribution concentrated on a sphere, without compact support.
Spherical LogNormal distribution in $\mathbb{R}^n$.
A probability distribution concentrated on a sphere in $\mathbb{R}^n$ can be constructed from a one-dimensional density distribution that takes only positive real values. In such a case, we can define the spherical density as:

(1) $P_f(x) = \dfrac{f(\|x\|)}{S_{n-1}\,\|x\|^{n-1}},$

where $S_{n-1}$ is the surface area of the $(n-1)$-dimensional unit sphere and $f$ is a one-dimensional density which takes only positive real values. In other words, we apply the one-dimensional density along the radius, uniformly in all directions. In our model, we use a Lognormal distribution

$f_{\mu,\sigma}(r) = \dfrac{1}{r\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{(\ln r - \mu)^2}{2\sigma^2}\right), \quad r > 0,$

that is, the continuous probability distribution of a random variable whose logarithm is normally distributed and which, hence, provides non-compact support.
Spherical LogNormal distribution in $\mathbb{R}^2$.
To develop an intuition behind the proposed distribution, we start with a simple visualization in $\mathbb{R}^2$. Fig. 2 shows level sets and samples from the Spherical LogNormal distribution with different parameters $\mu$ and $\sigma$. The Spherical LogNormal distribution does not have compact support and can therefore be used in a flow-based architecture. Furthermore, we can force the distribution to concentrate as close as possible to the boundary of a 2D circle.
In $\mathbb{R}^3$, our Spherical LogNormal distribution is defined as:

(2) $SLN_{\mu,\sigma}(x) = \dfrac{1}{4\pi\|x\|^2}\cdot\dfrac{1}{\|x\|\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{(\ln\|x\|-\mu)^2}{2\sigma^2}\right).$

In order to use our distribution in a flow-based model, we need to compute its log-likelihood function:

(3) $\ln SLN_{\mu,\sigma}(x) = -\ln(4\pi) - 3\ln\|x\| - \ln\!\left(\sigma\sqrt{2\pi}\right) - \dfrac{(\ln\|x\|-\mu)^2}{2\sigma^2}.$
Finally, sampling from the Spherical LogNormal distribution can be done with a simple procedure. First, sample a radius parameter $r$ from the one-dimensional Gaussian $\mathcal{N}(\mu,\sigma)$; then sample a direction $v$ from the three-dimensional Gaussian $\mathcal{N}(0, I)$. A sample $x$ from the Spherical LogNormal is obtained by the following equation: $x = e^{r}\,\frac{v}{\|v\|}.$
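For illustration, the sampling procedure and the log-density of Eq. (3) can be sketched in a few lines of NumPy (a minimal sketch, not the paper's implementation; the parameter values below are arbitrary):

```python
import numpy as np

def sample_sln(n, mu=0.0, sigma=0.25, seed=None):
    """Sample n points from the Spherical LogNormal in R^3: the radius is
    log-normal (exp of a 1D Gaussian), the direction is uniform on the
    unit sphere (a normalized 3D Gaussian)."""
    rng = np.random.default_rng(seed)
    r = rng.normal(mu, sigma, size=n)            # 1D Gaussian log-radius
    v = rng.normal(size=(n, 3))                  # 3D Gaussian direction
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.exp(r)[:, None] * v                # ||x|| = exp(r)

def log_sln(x, mu=0.0, sigma=0.25):
    """Log-density of Eq. (3): uniform-sphere factor 1/(4*pi*||x||^2)
    times the log-normal density of the radius."""
    r = np.linalg.norm(x, axis=1)
    return (-np.log(4 * np.pi) - 3 * np.log(r)
            - np.log(sigma * np.sqrt(2 * np.pi))
            - (np.log(r) - mu) ** 2 / (2 * sigma ** 2))

x = sample_sln(100_000, mu=0.0, sigma=0.25, seed=0)
radii = np.linalg.norm(x, axis=1)
# log of the radius should be approximately N(mu, sigma)
print(np.log(radii).mean(), np.log(radii).std())
```

Empirically, the log-radii of the samples recover the chosen $\mu$ and $\sigma$, confirming that the radius is Lognormal while the directions are uniform on the sphere.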
We avoid numerical instabilities during training by applying a straightforward strategy to find the right value of the $\sigma$ parameter: we start with an arbitrarily large value of $\sigma$ and reduce it linearly during training.
Triangulation trick
To model 3D object surfaces as meshes using the HyperFlow generative model, we need to investigate the relationship between point clouds and object surfaces. In principle, a point cloud representing a 3D object can be considered a set of samples located on the surface of the object with additive noise introduced by a registration device. We use the Spherical LogNormal to model this distribution, with peak density around object surfaces (in 2D around circle edges, in 3D close to the surface of the sphere) and limited by the radius of the distribution. Once we obtain a parametrized distribution of a point cloud, which models the object surface together with the registration noise, we can produce a mesh with a simple operation which we call the triangulation trick.
The triangulation trick involves transferring the vertices of a sphere mesh through the target network the same way as 3D points, as shown in Part C of Fig. 1. Since the target network transforms a sample from the Spherical LogNormal distribution into a 3D point cloud, when we feed it a sphere triangulation, it outputs a mesh. In fact, when we substitute samples from the Spherical LogNormal distribution with sphere vertices, we effectively assume minimal registration noise. Processing the vertices with a target network pretrained on point clouds allows us to directly generate a denoised mesh representation of object surfaces and obtain a high-quality 3D object rendering. The generative character of our HyperFlow model enables the construction of the entire mesh by processing only the vertices with the target network, without the need for additional information about the connections between them, as required by traditional rendering methods.
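The trick can be sketched as follows. Here `target_network` is a hypothetical stand-in for the trained CNF target network (any smooth map from $\mathbb{R}^3$ to $\mathbb{R}^3$), and a simple UV-sphere construction replaces a proper icosphere mesh loader; the connectivity of the sphere triangulation is reused unchanged:

```python
import numpy as np

def uv_sphere(n_theta=16, n_phi=32):
    """Unit-sphere vertices plus triangle indices (a simple UV-sphere
    stand-in; a real pipeline would load an icosphere mesh)."""
    theta = np.linspace(1e-3, np.pi - 1e-3, n_theta)
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    verts = np.stack([np.sin(T) * np.cos(P),
                      np.sin(T) * np.sin(P),
                      np.cos(T)], axis=-1).reshape(-1, 3)
    faces = []
    for i in range(n_theta - 1):
        for j in range(n_phi):
            a, b = i * n_phi + j, i * n_phi + (j + 1) % n_phi
            c, d = a + n_phi, b + n_phi
            faces += [(a, b, c), (b, d, c)]
    return verts, np.array(faces)

def target_network(points):
    """Hypothetical stand-in for the trained target network: any
    smooth map R^3 -> R^3 (here, an anisotropic squash)."""
    return points * np.array([1.0, 0.5, 0.25])

verts, faces = uv_sphere()
mesh_verts = target_network(verts)  # triangulation trick: map vertices only
# faces (connectivity) are carried over unchanged -> an object mesh
print(mesh_verts.shape, faces.shape)
```

The key point is that only the vertex positions are transformed; the triangle connectivity of the input sphere directly becomes the connectivity of the output mesh.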
Fig. 3 presents reconstructions obtained using the Gaussian and Spherical LogNormal distributions. We look at cross-sections of the reconstructions to observe the main differences in how the input distribution is transformed into the final model by the target network. For the Gaussian distribution, its tails are transformed into object details, such as wing tips and the airplane's rear aileron. Therefore, we cannot claim that the peak density models the surfaces of the object while its tails model the registration noise. For the Spherical LogNormal, the distribution tails are spread along object surfaces, modeling registration noise. This allows us to produce the final mesh through the triangulation trick, effectively denoising the 3D mesh-based object representation and yielding high-quality results, as shown in Fig. 4.
3 HyperFlow: hypernetwork and Continuous Normalizing Flows for generating 3D point clouds
In this section, we present our HyperFlow model that leverages a hypernetwork framework to train a Continuous Normalizing Flow grathwohl2018ffjord target network and generate 3D point clouds together with their mesh-based representations. Since HyperFlow encompasses the previously introduced autoencoder-based PointFlow yang2019pointflow with conditioned continuous normalizing flow modules, and the HyperCloud method spurek2020hypernetwork , which also leverages hypernetworks ha2016hypernetworks , we briefly describe these two approaches before presenting ours.
Autoencoderbased generative model for 3D Point Clouds
Let us first present the autoencoder architecture. The basic aim of an autoencoder is to transport the data through a typically, but not necessarily, lower-dimensional latent space while minimizing the reconstruction error. Thus, we search for encoder and decoder functions which minimize the reconstruction error. In an autoencoder-based generative model, we additionally ensure that the data transported to the latent space come from a prior distribution (typically a Gaussian one) kingma2013auto ; tolstikhin2017wasserstein ; tabor2018cramer .
Continuous normalizing flow
Generative models are one of the fastest growing areas of deep learning. Variational Autoencoders (VAE) kingma2013auto and Generative Adversarial Networks (GAN) goodfellow2014generative are the most popular approaches. Another model that has gained popularity is the Normalizing Flow (NF) rezende2015variational . A flow-based generative model is constructed from a sequence of invertible transformations. Unlike the other two methods mentioned previously, the model explicitly learns the data distribution and, therefore, the loss function is simply the negative log-likelihood.
Normalizing Flows rezende2015variational are able to model complex probability distributions. A normalizing flow transforms a simple prior distribution $p(z_0)$ (usually a Gaussian one) into a complex one (represented by the data distribution $p(x)$) by applying a sequence of invertible transformation functions: $x = z_K = f_K \circ \dots \circ f_1(z_0)$. Flowing through this chain of transformations, we obtain the probability distribution of the final target variable.
Then the probability density of the output variable is given by the change-of-variables formula:

(4) $\log p(x) = \log p(z_0) - \displaystyle\sum_{k=1}^{K} \log\left|\det\frac{\partial f_k}{\partial z_{k-1}}\right|,$

where $z_0$ can be computed from $x$ using the inverse flow: $z_0 = f_1^{-1} \circ \dots \circ f_K^{-1}(x)$. In such a framework, both the inverse map and the determinant of the Jacobian should be computable.
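A minimal one-dimensional illustration of the change-of-variables formula, using a hypothetical affine flow rather than a learned one:

```python
import numpy as np

# One-step flow f(z) = a*z + b on R. The change-of-variables formula gives
# log p_x(x) = log p_z(f^{-1}(x)) - log|df/dz| = log p_z((x - b)/a) - log|a|.
a, b = 2.0, 1.0
log_p_z = lambda z: -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # standard normal

def log_p_x(x):
    z = (x - b) / a                      # inverse flow
    return log_p_z(z) - np.log(abs(a))   # Jacobian correction

# x = a*z + b with z ~ N(0,1) is exactly N(b, a^2); compare with its density.
x = 0.7
exact = -0.5 * ((x - b) / a) ** 2 - 0.5 * np.log(2 * np.pi * a**2)
print(np.isclose(log_p_x(x), exact))  # True
```

The Jacobian term is what keeps the transformed density normalized; for the affine map it is simply the constant $\log|a|$.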
The continuous normalizing flow chen2018neural is a modification of the above approach where, instead of a discrete sequence of transformations, the transformation is defined by the solution of a differential equation $\frac{\partial z(t)}{\partial t} = f(z(t), t)$, where $f$ is a neural network with an unrestricted architecture. The Continuous Normalizing Flow (CNF) is the solution of this differential equation with the initial value $z(t_0) = z_0$. In such a case, we have:

(5) $\log p(z(t_1)) = \log p(z(t_0)) - \displaystyle\int_{t_0}^{t_1} \mathrm{Tr}\left(\frac{\partial f}{\partial z(t)}\right) dt,$

where $f$ defines the continuous-time dynamics of the flow and $z(t_1) = x$.
The log-probability cost function with a prior distribution of density $p$ can thus be computed as:

(6) $\log p(x) = \log p\big(z(t_0)\big) - \displaystyle\int_{t_0}^{t_1} \mathrm{Tr}\left(\frac{\partial f}{\partial z(t)}\right) dt,$

where $z(t_0)$ is obtained by integrating the dynamics backward in time from $z(t_1) = x$.
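For linear dynamics the trace integral has a closed form, which gives a quick numerical check of Eq. (5); this is a toy illustration with a fixed matrix, not FFJORD itself:

```python
import numpy as np
from scipy.linalg import expm

# Linear CNF dynamics dz/dt = A z: the Jacobian of f is A everywhere, so
# the trace integral in Eq. (5) is simply tr(A) * (t1 - t0).
A = np.array([[0.3, -0.2], [0.1, 0.4]])
t0, t1 = 0.0, 1.0

z0 = np.array([0.5, -1.0])
z1 = expm(A * (t1 - t0)) @ z0  # closed-form ODE solution

def log_std_normal(z):
    return -0.5 * z @ z - len(z) / 2 * np.log(2 * np.pi)

# Eq. (5): log p(z(t1)) = log p(z(t0)) - integral of Tr(df/dz) dt
log_p_z1 = log_std_normal(z0) - np.trace(A) * (t1 - t0)

# Cross-check via the discrete change-of-variables formula: the Jacobian of
# the flow map is expm(A), and det(expm(A)) = exp(tr(A)) (Liouville's formula).
log_p_check = log_std_normal(z0) - np.log(np.linalg.det(expm(A * (t1 - t0))))
print(np.isclose(log_p_z1, log_p_check))  # True
```

In a real CNF, $f$ is a neural network, so both $z(t)$ and the trace integral are computed with a numerical ODE solver instead of a matrix exponential.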
In PointFlow yang2019pointflow , the authors show that a CNF can be used for modeling 3D objects. Instead of directly parametrizing the distribution of points in a shape (a fixed-size 3D point cloud), PointFlow models this distribution as an invertible parameterized transformation of 3D points from a prior distribution (e.g., a 3D Gaussian). Intuitively, under this model, generating points for a given shape involves sampling points from a generic Gaussian prior and then moving them, according to this parameterized transformation, to their new locations in the target shape.
Hypernetwork
Hypernetworks, introduced in ha2016hypernetworks , are defined as neural models that generate weights for a separate target network solving a specific task. Making an analogy between hypernetworks and generative models, the authors of sheikh2017stochastic use this mechanism to generate a diverse set of target networks approximating the same function. Hypernetworks can also be used for functional representations of images klocek2019hypernetwork .
In the case of generating 3D point clouds, objects are represented by a neural network: an autoencoder-based architecture "produces" the neural network that transforms a prior distribution into elements of a point cloud. In HyperCloud spurek2020hypernetwork , the autoencoder-based architecture takes a point cloud as input and directly produces the weights of another neural network, which models elements of a 3D object.
HyperFlow
In this section, we present the details of our novel model, dubbed HyperFlow (we make our implementation available at https://github.com/maciejzieba/HyperFlow), which encompasses and extends prior works by training continuous normalizing flow modules to model 3D point cloud distributions within a hypernetwork framework. Our model is inspired by the Variational Autoencoder (VAE) kingma2013auto ; rezende2014stochastic framework that allows learning from a dataset of observations $x$. A VAE models the data distribution via a latent variable $z$ with a prior distribution $p(z)$, and a decoder $p(x|z)$ which reconstructs the distribution of $x$ conditioned on a given $z$. The model is trained together with an encoder $q(z|x)$ by maximizing the lower bound on the log-likelihood of the observations (ELBO).
Instead of using a simple Gaussian prior over shape representations as done in yang2019pointflow , we add another CNF to model a learnable prior $p_\psi(z)$. The corresponding ELBO cost function can be rewritten after yang2019pointflow as:

(7) $\mathcal{L}(X; \phi, \psi, \theta) = \mathbb{E}_{q_\phi(z|X)}\!\left[\log p_\psi(z) + \log p_\theta(X|z)\right] + H\!\left[q_\phi(z|X)\right],$

where $H$ is the entropy and $p_\psi(z)$ is the prior distribution with trainable parameters $\psi$.
We propose to adapt the above cost function to a hypernetwork framework. We therefore introduce our HyperFlow model that consists of two main parts, as shown in Fig. 1. The first one is a hypernetwork that outputs the weights (Fig. 1 Part A) of another neural network. The second one is a target network (Fig. 1 Part B) that models the distribution of elements on the surface of a 3D object. Using autoencoder terminology, we define three elements: an encoder, a decoder and a prior distribution. The encoder reduces data dimensionality by mapping the input to a lower-dimensional latent space. We follow achlioptas2017learning and use a simple permutation-invariant encoder to predict the latent code $z$.
We use the learnable prior over shape representations proposed by PointFlow yang2019pointflow . The assumed probability distribution on the latent space can be more complex than the commonly used Gaussian and need not be given in an explicit form. In such a framework, we use an additional continuous normalizing flow, which transfers the latent space into a Gaussian prior. Finally, we propose to use a decoder that returns the weights of the target network, instead of 3D points as done in yang2019pointflow ; stypulkowski2019conditional . The resulting hypernetwork contains an encoder, a decoder and a flow (Fig. 1 Part A).
The hypernetwork takes a point cloud as input and returns the weights of the function that defines the continuous-time dynamics of the target flow. The CNF takes an element from the prior distribution and transfers it to an element on the surface of the object, see Part B: target network in Fig. 1. In our work, we use Free-form Jacobian of Reversible Dynamics (FFJORD) grathwohl2018ffjord as the transformation between the Spherical LogNormal distribution and the 3D object. As presented in Sec. 2, this choice of distribution function allows one to create a continuous mesh representation with the triangulation trick.
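The hypernetwork mechanism itself can be sketched in a few lines; the dimensions, the single-layer linear hypernetwork and the tiny MLP target network below are toy stand-ins chosen purely for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target network: a tiny MLP R^3 -> R^3 whose weights are NOT learned
# directly but produced by the hypernetwork from a shape embedding z.
D_IN, D_H, D_OUT, D_Z = 3, 8, 3, 16
n_w = D_IN * D_H + D_H + D_H * D_OUT + D_OUT  # total target-net parameters

# Hypernetwork (a single linear layer here, for brevity): z -> weight vector
W_hyper = rng.normal(scale=0.1, size=(D_Z, n_w))

def target_apply(theta, x):
    """Unpack the flat weight vector theta and run the target MLP."""
    i = 0
    W1 = theta[i:i + D_IN * D_H].reshape(D_IN, D_H); i += D_IN * D_H
    b1 = theta[i:i + D_H]; i += D_H
    W2 = theta[i:i + D_H * D_OUT].reshape(D_H, D_OUT); i += D_H * D_OUT
    b2 = theta[i:]
    return np.tanh(x @ W1 + b1) @ W2 + b2

z = rng.normal(size=D_Z)             # embedding of one point cloud
theta = z @ W_hyper                  # hypernetwork output = target-net weights
points = rng.normal(size=(100, 3))   # e.g. samples from the prior
out = target_apply(theta, points)    # one dedicated target network per object
print(out.shape)
```

Each input point cloud thus yields its own target network: only the hypernetwork's parameters are shared across objects.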
The cost function of HyperFlow consists of two parts. The first one corresponds to the hypernetwork; this part of the architecture is similar to PointFlow. The second one is the cost function of the CNF corresponding to the target network. The final cost function of our HyperFlow model can be calculated using Eq. (7):

$\mathcal{L}(X; \phi, \psi, \theta) = \mathbb{E}_{q_\phi(z|X)}\!\left[\log p_\psi(z) + \log p_\theta(X|z)\right] + H\!\left[q_\phi(z|X)\right],$

where $H$ is the entropy function, $\log p_\theta(X|z)$ is the CNF cost function between the point cloud and the Spherical LogNormal density, and $\log p_\psi(z)$ is the CNF cost function between the latent representation and a Gaussian prior.
Airplane  Chair  Car  

Method  JSD  MMD  COV  JSD  MMD  COV  JSD  MMD  COV  
CD  EMD  CD  EMD  CD  EMD  CD  EMD  CD  EMD  CD  EMD  
lGAN  3.61  0.269  3.29  47.90  50.62  2.27  2.61  7.85  40.79  41.69  2.21  1.48  5.43  39.20  39.77 
PCGAN  4.63  0.287  3.57  36.46  40.94  3.90  2.75  8.20  36.50  38.98  5.85  1.12  5.83  23.56  30.29 
PointFlow  4.92  0.217  3.24  46.91  48.40  1.74  2.42  7.87  46.83  46.98  0.87  0.91  5.22  44.03  46.59 
HyperCloud  4.84  0.266  3.28  39.75  43.70  2.73  2.56  7.84  41.54  46.67  3.09  1.07  5.38  40.05  40.05 
HyperFlow  5.39  0.226  3.16  46.66  51.60  1.50  2.30  8.01  44.71  46.37  1.07  1.14  5.30  45.74  47.44 
4 Experiments
In this section, we evaluate our model against the competing methods on two tasks: 3D point cloud generation and 3D mesh generation. Furthermore, we test the efficiency of our approach in terms of training time and memory footprint. All experiments are done on a stationary unit with an Nvidia GeForce GTX 1080 GPU. If not stated otherwise, default parameters are used.
Generating 3D point clouds
We compare the generative capabilities of our model with those of the competing approaches: latent-GAN achlioptas2017learning , PC-GAN li2018point , PointFlow yang2019pointflow , and HyperCloud spurek2020hypernetwork . We follow the evaluation protocol of yang2019pointflow and train each model using point clouds from one of the three categories in the ShapeNet dataset shapenet : airplane, chair, and car. Tab. 1 presents the results and shows that HyperFlow obtains generative results comparable or superior to the state-of-the-art PointFlow method.
Generating 3D meshes
The main advantage of our method, when compared to the reference solutions, is the ability to generate high-quality 3D point clouds as well as meshes using the triangulation trick presented in Sec. 2. To evaluate the quality of the mesh representation, we follow the evaluation protocol of spurek2020hypernetwork . For PointFlow, we use the triangulation trick and create object meshes by feeding the target network a 3D sphere. For HyperCloud and our HyperFlow method, we use a sphere with radius $R=1$. As can be seen in Tab. 2, PointFlow, which uses a Gaussian distribution as a prior, provides results inferior to HyperCloud and HyperFlow, while our HyperFlow method offers the best performance, thanks to using the Spherical LogNormal as a prior instead of a compact-support distribution as in HyperCloud. More qualitative mesh results, as well as a detailed description of the metrics used in our experiments, can be found in the supplementary material.
Training time and memory footprint comparison
Airplane  Chair  Car  
Sphere R  JSD  MMD  COV  JSD  MMD  COV  JSD  MMD  COV  
CD  EMD  CD  EMD  CD  EMD  CD  EMD  CD  EMD  CD  EMD  
PointFlow  
R=2.795  22.26  0.49  6.65  44.69  20.74  19.28  4.28  13.38  36.85  20.84  16.59  1.6  8.00  20.17  17.04 
R=3.136  26.46  0.60  6.89  39.50  19.01  22.52  4.89  14.47  32.47  17.22  20.21  1.75  7.80  21.59  17.3 
R=3.368  29.65  0.68  6.84  40.49  16.79  24.68  5.36  14.97  31.41  17.06  24.10  1.96  8.35  18.75  17.04 
HyperCloud  
R=1  9.51  0.45  5.29  30.60  28.88  4.32  2.81  9.32  40.33  40.63  5.20  1.11  6.54  37.21  28.40 
HyperFlow  
R=1  6.55  0.38  3.65  40.49  48.64  4.26  3.33  8.27  41.99  45.32  5.77  1.39  5.91  28.40  37.21 
Fig. 5 displays a comparison between our HyperFlow method and the competing PointFlow. For a fair comparison, we evaluated the architectures used in the previous sections that obtain the best quantitative results. The models were trained on the car dataset. Our HyperFlow approach leads to a significant reduction in both training time and memory footprint, due to a more compact flow architecture enabled by the hypernetwork framework.
5 Conclusions
In this work, we introduce a novel HyperFlow method that uses a hypernetwork to model 3D objects as families of surfaces and, hence, allows building state-of-the-art point cloud reconstructions and mesh-based object representations. To model the distribution of a point cloud, we propose a new Spherical LogNormal distribution with non-compact support that can be effectively used by a CNF model. Finally, we believe our work is the first approach to train a CNF as a target network, which reduces training cost and opens new research paths for modeling complex 3D structures, such as indoor scenes.
Broader Impact
This research can be beneficial for researchers and engineers working in the space of 3D point clouds and related registration devices, such as LIDARs and depth cameras. As such, the proposed methods can be used in the context of autonomous driving and robotics. Further extensions of this work can be beneficial for people with disabilities, especially those related to sensory disorders, such as short-sightedness or blindness, as 3D capturing devices can effectively extend their way of interacting with and perceiving the external world. On the other hand, robotic automation resulting from this work can potentially put at a disadvantage people whose livelihoods depend on the manual execution of jobs that can be substituted with robotics. In case of system failure, the consequences include problems with handling the outputs of registration devices, such as LIDARs and depth cameras. Our method does not leverage any biases in the data.
6 Supplementary material
In this supplementary material, we first present the full description of evaluation metrics used in the experiments. We then describe two experiments showing the relationship between Gaussian distribution and Spherical LogNormal distribution proposed in our work. Finally, we show an extended set of visualizations obtained by HyperFlow.
6.1 Description of evaluation metrics
Following the methodology for evaluating generative fidelity and diversity among samples proposed in achlioptas2017learning and yang2019pointflow , we use the following evaluation metrics: Jensen-Shannon Divergence, Coverage, Minimum Matching Distance, and 1-Nearest Neighbor Accuracy.
Jensen-Shannon Divergence (JSD): a measure of the distance between two empirical distributions $P$ and $Q$, defined as:

$JSD(P \,\|\, Q) = \frac{1}{2} KL(P \,\|\, M) + \frac{1}{2} KL(Q \,\|\, M), \qquad M = \frac{1}{2}(P + Q).$
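In practice, the point sets are discretized on a voxel grid and the JSD is computed between the resulting occupancy histograms; the sketch below uses an $8^3$ grid for brevity (the grid resolution and sample sizes are illustrative choices, not the paper's exact protocol):

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Empirical distributions of two point sets, discretized on a 3D grid.
rng = np.random.default_rng(0)
a = rng.normal(size=(10_000, 3))
b = rng.normal(size=(10_000, 3))
edges = [np.linspace(-4, 4, 9)] * 3          # 8 bins per axis -> 8^3 cells
p, _ = np.histogramdd(a, bins=edges)
q, _ = np.histogramdd(b, bins=edges)
print(jsd(p.ravel(), q.ravel()))             # small but positive
print(jsd(p.ravel(), p.ravel()))             # ~0 for identical histograms
```

The JSD is symmetric and bounded, and it vanishes only when the two histograms coincide, which makes it a convenient global fidelity measure.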
Coverage (COV): a measure of generative capabilities in terms of the richness of samples generated by the model. For two point cloud sets $A$ and $B$, coverage is defined as the fraction of point clouds in $B$ that are, in the given metric, the nearest neighbor to some point cloud in $A$.
Minimum Matching Distance (MMD): since COV only takes the closest point clouds into account and does not depend on the distance between the matchings, an additional metric was introduced. For point cloud sets $A$ and $B$, MMD is a measure of the similarity between point clouds in $A$ and those in $B$.
We examine the generative capabilities of our HyperFlow model with respect to the existing reference approaches. We strictly follow the methodology presented in yang2019pointflow . We train each model using point clouds from one of the three categories in the ShapeNet dataset: airplane, chair, and car.
6.2 Scheduling parameters of Spherical LogNormal
In our model, we use the Spherical LogNormal density with parameters $\mu$ and $\sigma$. Using the Spherical LogNormal density with a small $\sigma$ might be unstable, since the density distribution has thin tails, see Fig. 2 (in the main paper). At the beginning of training, the log-likelihood cost function might be close to singular at some points (numerically unstable).
Therefore, in the training procedure, we start with a large $\sigma$ and reduce the parameter to its final value using linear scheduling: given a starting value $\sigma_{start}$, a final value $\sigma_{end}$ and $n$ epochs, we reduce the parameter by $(\sigma_{start} - \sigma_{end})/n$ in each epoch.
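The schedule can be sketched as follows; the concrete start/end values and epoch count below are placeholders, not the paper's hyperparameters:

```python
def sigma_schedule(sigma_start, sigma_end, n_epochs):
    """Linear schedule: reduce sigma by a constant step each epoch."""
    step = (sigma_start - sigma_end) / n_epochs
    return [sigma_start - step * epoch for epoch in range(n_epochs + 1)]

# Placeholder values: start large, end small, over a handful of epochs.
sigmas = sigma_schedule(sigma_start=1.0, sigma_end=0.1, n_epochs=9)
print(sigmas[0], round(sigmas[-1], 10))  # 1.0 0.1
```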
Our model is approximately 10 times faster than PointFlow (see the experimental section in the main paper) and can be easily trained on our density from scratch. The PointFlow architecture is larger, and it is difficult to train such a model on our distribution from scratch. This process can be accelerated by using a model pretrained on the classical Gaussian distribution. In such a case, we can start from a Spherical LogNormal distribution with parameters $\mu$ and $\sigma$ that approximate the Gaussian distribution (see Theorem 6.1). In Fig. 6, we present a comparison between samples from the Gaussian distribution and the Spherical LogNormal distribution with such parameters. Thanks to this solution, we can take a model already trained on the Gaussian distribution and train it further with our strategy.
Theorem 6.1.
The classical Gaussian distribution $\mathcal{N}(0, I)$ in $\mathbb{R}^3$ can be approximated by the Spherical LogNormal distribution (with a Lognormal radius) with parameters:

$\sigma^2 = \ln\!\left(1 + \frac{v}{m^2}\right), \qquad \mu = \ln m - \frac{\sigma^2}{2},$

where $m$ and $v$ denote the mean and variance of the radius of a standard Gaussian in $\mathbb{R}^3$.
Proof.
Observe that both the Gaussian and the Spherical LogNormal distributions are spherically symmetric. This means that to compare them it is enough to consider the distributions of the radius. In the case of a standard Gaussian in $\mathbb{R}^3$, the radius follows the $\chi$ distribution with three degrees of freedom, which has mean and variance given by

$m = \sqrt{2}\,\frac{\Gamma(2)}{\Gamma(3/2)} = 2\sqrt{\frac{2}{\pi}}, \qquad v = 3 - m^2 = 3 - \frac{8}{\pi}.$

On the other hand, the Lognormal (LN) distribution with parameters $\mu$ and $\sigma$ has mean and variance given by

$\mathbb{E}[LN] = e^{\mu + \sigma^2/2}, \qquad \mathrm{Var}[LN] = \left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2}.$

Matching the two pairs of moments and solving the resulting system of equations yields the parameters $\mu$ and $\sigma$ stated above.
∎
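The moment matching in the proof can be verified numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

# Moments of the radius of a standard Gaussian in R^3 (chi distribution, k=3)
m = 2 * np.sqrt(2 / np.pi)   # mean
v = 3 - 8 / np.pi            # variance

# Solve for the log-normal parameters matching these two moments
sigma2 = np.log(1 + v / m**2)
mu = np.log(m) - sigma2 / 2

# Check: a log-normal radius with (mu, sigma) reproduces the chi-3 moments
rng = np.random.default_rng(0)
r_ln = np.exp(rng.normal(mu, np.sqrt(sigma2), size=1_000_000))
r_chi = np.linalg.norm(rng.normal(size=(1_000_000, 3)), axis=1)
print(r_ln.mean(), r_chi.mean())  # both close to m ~ 1.596
print(r_ln.var(), r_chi.var())    # both close to v ~ 0.454
```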
6.3 Families of surfaces
In this section, we describe in more detail how HyperFlow approximates objects by families of surfaces. Let us recall that Fig. 3 of the main paper compares how the prior density is modified for the model with a Gaussian prior and with the Spherical LogNormal. For the Gaussian distribution, its tails are transformed into object details, such as wing tips and the airplane's rear aileron. Therefore, we cannot claim that the peak density models the surfaces of the object while its tails model the registration noise, as is the case for our Spherical LogNormal distribution. For the Spherical LogNormal, the distribution tails are spread along object surfaces, modeling registration noise. This allows us to produce the final mesh through the triangulation trick, effectively denoising the 3D mesh-based object representation and yielding high-quality results. In HyperFlow, we use a triangulation of the unit sphere, motivated by the fact that points on surfaces have symmetric (Gaussian) noise. Nevertheless, we can use triangulations of spheres with different radii (corresponding to different percentages of the density). To compare the models, for both of them we can draw the images of spheres which contain the same percentage of the data. In such a case, we obtain families of surfaces. In Fig. 7, we present meshes obtained with different radii, containing different percentages of the density. The Spherical LogNormal stabilizes the triangulation, while for the model with a normal prior relatively high fluctuations can be observed.
Usually, it is enough to use a triangulation of the unit sphere, but in some cases we can obtain better meshes by changing the radius of the sphere. For instance, some elements from ShapeNet do not contain only surfaces of objects: in the case of some cars, we have additional interior elements like the steering wheel, see Fig. 8. In such a case, we can use the triangulation trick with a larger-radius sphere to obtain a better mesh representation, see Fig. 8.
6.4 Visualization of mesh representation obtained by HyperFlow
Below we present an extended set of visualizations of mesh representations obtained by HyperFlow.
Overall, our HyperFlow method offers stable and highquality object meshes at significantly lower computation cost than the competing point cloud generative models.
References
 [1] Ben Kehoe, Sachin Patil, Pieter Abbeel, and Ken Goldberg. A survey of research on cloud robotics and automation. IEEE Transactions on automation science and engineering, 12(2):398–409, 2015.

[2]
Bin Yang, Wenjie Luo, and Raquel Urtasun.
Pixor: Realtime 3d object detection from point clouds.
In
Proceedings of the IEEE conference on Computer Vision and Pattern Recognition
, pages 7652–7660, 2018.  [3] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 165–174, 2019.
[4] Wang Yifan, Shihao Wu, Hui Huang, Daniel Cohen-Or, and Olga Sorkine-Hornung. Patch-based progressive 3D point set upsampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5958–5967, 2019.
[5] Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. PU-Net: Point cloud upsampling network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2790–2799, 2018.
[6] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912–1920, 2015.
[7] Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Advances in Neural Information Processing Systems, pages 82–90, 2016.
[8] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3D shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 945–953, 2015.
[9] Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221–231, 2012.
[10] Daniel Maturana and Sebastian Scherer. VoxNet: A 3D convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 922–928. IEEE, 2015.
 [11] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in neural information processing systems, pages 3391–3401, 2017.
[12] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017.
[13] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pages 5099–5108, 2017.
[14] Guandao Yang, Xun Huang, Zekun Hao, Ming-Yu Liu, Serge Belongie, and Bharath Hariharan. PointFlow: 3D point cloud generation with continuous normalizing flows. In Proceedings of the IEEE International Conference on Computer Vision, pages 4541–4550, 2019.
 [15] Michał Stypułkowski, Maciej Zamorski, Maciej Zięba, and Jan Chorowski. Conditional invertible flow for point cloud generation. arXiv preprint arXiv:1910.07344, 2019.
 [16] Przemysław Spurek, Sebastian Winczowski, Jacek Tabor, Maciej Zamorski, Maciej Zięba, and Tomasz Trzciński. Hypernetwork approach to generating point clouds. Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[17] Will Grathwohl, Ricky TQ Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. FFJORD: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.
 [18] Herbert Edelsbrunner. Triangulations and meshes in computational geometry. Acta Numerica, 9:133–213, 2000.
 [19] Austin Dill, ChunLiang Li, Songwei Ge, and Eunsu Kang. Getting topology and point cloud generation to mesh. CoRR, abs/1912.03787, 2019.
 [20] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[21] Jens Behrmann, Will Grathwohl, Ricky TQ Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. arXiv preprint arXiv:1811.00995, 2018.
 [22] David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
[23] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
 [24] Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein autoencoders. arXiv preprint arXiv:1711.01558, 2017.
[25] Jacek Tabor, Szymon Knop, Przemysław Spurek, Igor Podolak, Marcin Mazur, and Stanisław Jastrzębski. Cramer-Wold autoencoder. arXiv preprint arXiv:1805.09235, 2018.
[26] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[27] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pages 6571–6583, 2018.
[28] Abdul-Saboor Sheikh, Kashif Rasul, Andreas Merentitis, and Urs Bergmann. Stochastic maximum likelihood optimization via hypernetworks. arXiv preprint arXiv:1712.01141, 2017.
 [29] Sylwester Klocek, Lukasz Maziarka, Maciej Wolczyk, Jacek Tabor, Jakub Nowak, and Marek Smieja. Hypernetwork functional image representation. In International Conference on Artificial Neural Networks, pages 496–510. Springer, 2019.
 [30] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
 [31] Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3d point clouds. arXiv preprint arXiv:1707.02392, 2017.
[32] Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, and Ruslan Salakhutdinov. Point cloud GAN. arXiv preprint arXiv:1810.05795, 2018.
[33] Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qi-Xing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3D model repository. CoRR, abs/1512.03012, 2015.