Choosing a shape representation is a fundamental problem for any geometric task. In particular, with the advent of deep methods for geometry, it determines what operations are possible (e.g. convolution), what architectures can be used (e.g. graph or point networks [21, 22]), and what input modality (e.g. point clouds or images) can be used for training. Naturally, finding a proper differentiable representation for geometry has attracted much research interest recently, with a focus on 3D [18, 20, 8, 23, 21, 12]. A wide variety of 3D representations exist in the literature and are used for tasks ranging from surface reconstruction [13, 3, 14] to shape completion, predicting shape from images, semantic segmentation, and many more.
At a high level, geometric representations can be grouped into two families: explicit representations, where the surface of an object is represented directly using, for example, meshes, parameterized patches [12, 24], or point clouds [21, 22]; and implicit representations, where a 3D object is defined by a scalar function (for example by defining the surface as a level set of this function) [18, 8, 23, 11, 20, 4]. A recent trend is to use a neural network to represent this scalar function [8, 18, 20, 24]. Explicit representations have the benefit that they make surface extraction easy – e.g. via Marching Cubes – while implicit ones are easy to embed into a deep network with simple architectures. Recently, hybrid representations [10, 7] have been proposed to combine the best of both.
Of particular relevance to our work is CvxNet, which represents shapes as the intersection of a finite number of half-spaces. This representation is a universal approximator of convex domains – similar to ours – as well as of non-convex ones via composition. However, it remains implicit when it comes to modelling overlap: CvxNet is trained to make its decompositions non-overlapping through an additional loss term, and therefore offers no guarantee that the decomposition is also non-overlapping at inference time. While this can be of minor importance for reasoning tasks such as shape classification, it is problematic for others such as physical simulation.
Inspired by this line of work, we propose a novel representation that guarantees non-overlapping convexes. In other words, any network trained with our representation generates non-overlapping convexes by construction. We encode geometric information in the form of a point set, and generate the collection of convexes as the corresponding collection of Voronoi cells. This representation is hybrid: the position of the seeds is explicit, and extracting the surface only requires computing their Voronoi diagram – a task for which a number of robust and efficient software libraries exist. Note that, differently from iso-surface extraction, the Voronoi diagram is unique and resolution independent – no parameter needs to be selected to compute it. Interestingly for our purposes, the Voronoi diagram can be closely approximated by a differentiable implicit function, which is ideal for training.
We follow the recent trend of functional networks – where the output of our network is a function that can be queried at a desired location. Given the fixed label vector, we express this function as the piecewise constant function over the Voronoi diagram of the point set $\mathcal{S} = \{\mathbf{s}_k\}$, where the value of the function at points in the $k$-th cell is the site label $o_k$:
$$\mathcal{O}(\mathbf{x}) = o_{k^*(\mathbf{x})}, \qquad k^*(\mathbf{x}) = \operatorname{argmin}_k \|\mathbf{x} - \mathbf{s}_k\|^2, \tag{1}$$
where we fix half of the sites to represent the "inside" ($o_k = 1$) of a shape, and the other half to represent the "outside" ($o_k = 0$).
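As a minimal sketch of this piecewise constant function, (1) can be evaluated with a few lines of NumPy; the function name and the toy site configuration below are illustrative choices of ours, not part of the method:

```python
import numpy as np

def voronoi_occupancy(x, sites, labels):
    """Evaluate the piecewise-constant Voronoi occupancy O(x).

    x:      (M, d) query points
    sites:  (K, d) Voronoi sites
    labels: (K,)   per-site occupancy in {0, 1}
    """
    # Squared distance from every query to every site: (M, K).
    d2 = ((x[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    # Each query inherits the label of its nearest site (its Voronoi cell).
    return labels[np.argmin(d2, axis=1)]

# Toy example: one "inside" site at the origin, one "outside" site at (1, 1).
sites = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([1, 0])
queries = np.array([[0.1, 0.1], [0.9, 0.9]])
print(voronoi_occupancy(queries, sites, labels))  # → [1 0]
```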
Given an input $\mathbf{x}$ (e.g. image, point cloud, voxel grid) from a training dataset $\mathcal{X}$, an encoder $\mathcal{E}$ maps $\mathbf{x}$ to a latent code, which a decoder $\mathcal{D}$ maps to the collection of Voronoi sites: $\mathcal{S} = \mathcal{D}(\mathcal{E}(\mathbf{x}))$. Figure 1 illustrates this architecture visually. The parameters of encoder and decoder are then trained by minimizing a reconstruction loss:
$$\mathcal{L}_{\text{rec}} = \mathbb{E}_{\mathbf{x} \sim \mathcal{X}}\, \mathbb{E}_{\mathbf{p}} \left[ \left( \mathcal{O}_{\mathcal{S}}(\mathbf{p}) - \mathcal{O}^{gt}_{\mathbf{x}}(\mathbf{p}) \right)^2 \right], \tag{2}$$
where $\mathcal{O}^{gt}_{\mathbf{x}}$ is the ground truth occupancy function corresponding to $\mathbf{x}$. If we compare our representation to the one provided by ReLU functional networks [18, 8, 20], we differ in a fundamental way: our learnable parameters have localized support, while the transition boundaries of an MLP generally have global support.
While the reconstruction loss lies at the core of our method, minimizing it is ill-posed: there exists an infinite space of solutions in which every Voronoi cell agrees with the occupancy of the ground truth. To remedy this, we develop a number of regularizers that aid the training process. Notably, these losses do not prevent the network from reaching a global minimum of the reconstruction loss.
Lemma 1. Let $\mathcal{S}$ be a set of points such that half of them are labelled 1, and let $\mathcal{O}_{\mathcal{S}}$ be the occupancy function of the associated Voronoi diagram. Assume that there are three points labelled 1 such that the triangle they form is contained in the ground-truth shape. Then there exist an infinite number of minimizers of (2).
Proof. Assume without loss of generality that $\mathbf{s}_1, \mathbf{s}_2, \mathbf{s}_3$ are all labelled 1 and the triangle they form is inside the shape. Let $\mathbf{q}$ be any point inside this triangle, label it with 1, and define $\mathcal{S}' = \mathcal{S} \cup \{\mathbf{q}\}$. Then $\mathcal{S}'$ is a minimizer of (2) whenever $\mathcal{S}$ is, since $\mathcal{S}'$ produces the same occupancy function as $\mathcal{S}$. ∎
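The lemma can be checked numerically on a toy configuration (the sites, labels, and grid resolution below are arbitrary choices of ours): adding an extra inside-labelled site at the centroid of a triangle of inside-labelled sites leaves the occupancy function unchanged on a dense sample grid.

```python
import numpy as np

def occ(x, sites, labels):
    # Hard Voronoi occupancy: each query takes the label of its nearest site.
    d2 = ((x[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    return labels[np.argmin(d2, axis=1)]

# Three "inside" sites forming a triangle; "outside" sites farther away.
ones = np.array([[-0.5, 0.0], [0.5, 0.0], [0.0, 0.5]])
zeros = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
sites = np.vstack([ones, zeros])
labels = np.array([1, 1, 1, 0, 0, 0, 0])

# Augment with one extra "inside" site at the triangle's centroid.
q = ones.mean(axis=0, keepdims=True)
sites2 = np.vstack([sites, q])
labels2 = np.append(labels, 1)

# Compare the two occupancy functions on a dense grid of the domain.
g = np.linspace(-1.5, 1.5, 61)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
print(np.array_equal(occ(grid, sites, labels), occ(grid, sites2, labels2)))  # True
```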
To differentiate through our Voronoi function, we generalize (1) by replacing the argmin with a soft-argmin. Given a query point $\mathbf{x}$, we first define a weight vector $\mathbf{w}(\mathbf{x})$:
$$w_k(\mathbf{x}) = \frac{e^{-\beta \|\mathbf{x} - \mathbf{s}_k\|^2}}{\sum_j e^{-\beta \|\mathbf{x} - \mathbf{s}_j\|^2}},$$
where $\beta$ is a temperature parameter, and then formulate the soft version of (1) as:
$$\tilde{\mathcal{O}}(\mathbf{x}) = \sum_k w_k(\mathbf{x})\, o_k;$$
hence the temperature hyper-parameter $\beta$ controls how closely the soft-argmin approximates the argmin. We use the same fixed value of $\beta$ in all experiments in the paper.
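The soft relaxation above can be sketched in NumPy as follows (function names are illustrative); as the temperature grows, the output approaches the hard occupancy of (1):

```python
import numpy as np

def soft_occupancy(x, sites, labels, beta):
    """Soft-argmin relaxation of the hard Voronoi occupancy."""
    d2 = ((x[:, None, :] - sites[None, :, :]) ** 2).sum(-1)  # (M, K)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return w @ labels  # convex combination of the per-site labels

sites = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([1.0, 0.0])
x = np.array([[0.2, 0.2]])  # closer to the "inside" site
for beta in (1.0, 10.0, 100.0):
    # Output tends to the hard value 1 as beta increases.
    print(beta, soft_occupancy(x, sites, labels, beta))
```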
We naturally want to prevent our Voronoi sites from drifting far away from the data, which can be enforced in a smooth way via a hinge penalty on site coordinates that leave the domain:
$$\mathcal{L}_{\text{domain}} = \sum_k \sum_d \max\left(0, \left|[\mathbf{s}_k]_d - \tfrac{1}{2}\right| - \tfrac{1}{2}\right)^2,$$
where $[\cdot]_d$ extracts the $d$-th coordinate and the domain is assumed normalized to $[0,1]^d$. We favor this over output layers with bounded range (e.g. sigmoid), noting that these can suffer from vanishing gradients.
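A minimal sketch of such a hinge-style penalty, assuming a domain normalized to [0, 1] per axis (the function name and bounds are our illustrative choices):

```python
import numpy as np

def domain_loss(sites, lo=0.0, hi=1.0):
    """Hinge penalty on site coordinates leaving an assumed [lo, hi] domain.

    Zero inside the domain, quadratic outside -- so the gradient grows with
    the violation instead of saturating as bounded output layers can.
    """
    below = np.maximum(0.0, lo - sites)
    above = np.maximum(0.0, sites - hi)
    return (below ** 2 + above ** 2).sum()

sites = np.array([[0.5, 0.5],    # inside: no penalty
                  [1.2, -0.1]])  # outside on both axes: penalized
print(domain_loss(sites))  # ≈ 0.2² + 0.1² = 0.05
```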
Signed distance loss
As we prescribe the Voronoi (inside/outside) classes rather than optimizing them, it is clear that if $o_k = 1$, then the corresponding site $\mathbf{s}_k$ should lie inside the shape (and symmetrically for $o_k = 0$). Hence, we can define a loss that induces strong gradients towards the satisfaction of this property. Let $\Phi$ denote the distance function to the shape $\Omega$, and $\bar{\Phi}$ the distance function to its complement $\bar{\Omega}$; we then define:
$$\mathcal{L}_{\text{sdf}} = \sum_{k : o_k = 1} \Phi(\mathbf{s}_k)^2 + \sum_{k : o_k = 0} \bar{\Phi}(\mathbf{s}_k)^2.$$
Note that all correct approximations of the ground truth occupancy lie in the null space of this loss. Thus, it simply accelerates training and does not prevent the network from finding a global minimum of the reconstruction problem.
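The following sketch instantiates this kind of loss for a disk-shaped ground truth (the disk, the squared penalty, and all names are our illustrative assumptions standing in for an arbitrary shape and its distance functions): only sites whose label disagrees with the side of the boundary they sit on are penalized.

```python
import numpy as np

# Ground-truth shape: a disk of radius 0.5 (a stand-in for any solid shape).
R = 0.5
def dist_to_shape(p):       # Phi: zero inside the disk
    return np.maximum(0.0, np.linalg.norm(p, axis=-1) - R)
def dist_to_complement(p):  # Phi-bar: zero outside the disk
    return np.maximum(0.0, R - np.linalg.norm(p, axis=-1))

def sdf_loss(sites, labels):
    """Penalize inside-labelled sites lying outside the shape, and vice versa."""
    inside, outside = sites[labels == 1], sites[labels == 0]
    return (dist_to_shape(inside) ** 2).sum() + (dist_to_complement(outside) ** 2).sum()

sites = np.array([[0.0, 0.0], [0.7, 0.0], [0.2, 0.0]])
labels = np.array([1, 0, 0])  # third site is labelled 0 but lies inside the disk
print(sdf_loss(sites, labels))  # ≈ only the third site is penalized: (0.5 - 0.2)² = 0.09
```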
Centroidal Voronoi loss
To remedy the ill-posedness (Lemma 1) of the reconstruction loss (2), we add a loss that pushes each Voronoi site towards the centroid of its corresponding cell. A Voronoi diagram whose sites lie at the centroids of their cells is known as centroidal. Centroidal Voronoi tessellations have cells with roughly equal shape and have been used for many years in graphics to generate high-quality tessellations of space [2, 6, 16]. Asking the Voronoi diagram to be as centroidal as possible prevents sites from clustering and introduces a unique reconstruction minimum. Given the Voronoi sites $\mathcal{S}$, we augment them with points on the domain boundary labelled 0 (outside), compute their Delaunay triangulation, and express the corresponding graph Laplacian operator via a sparse matrix $\mathbf{L}$; a CVT-like loss can then be expressed as:
$$\mathcal{L}_{\text{cvt}} = \|\mathbf{L}\,\mathcal{S}\|_F^2.$$
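A SciPy-based sketch of such a Delaunay graph Laplacian penalty, assuming a uniform (unweighted) Laplacian that pulls each site towards the average of its Delaunay neighbors; the function name and normalization are our illustrative choices:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.spatial import Delaunay

def cvt_loss(sites):
    """CVT-like loss: residual of each site w.r.t. its Delaunay neighbor average."""
    tri = Delaunay(sites)
    # Collect undirected edges from the triangulation's simplices.
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                edges.add((int(simplex[i]), int(simplex[j])))
                edges.add((int(simplex[j]), int(simplex[i])))
    rows, cols = zip(*edges)
    n = len(sites)
    A = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    deg = np.asarray(A.sum(axis=1)).ravel()
    # Normalized graph Laplacian applied to the site positions:
    # (L S)_k = s_k - mean of s_k's Delaunay neighbors.
    LS = sites - (A @ sites) / deg[:, None]
    return (LS ** 2).sum()

np.random.seed(0)
print(cvt_loss(np.random.rand(32, 2)) > 0.0)  # True: scattered sites have nonzero residual
```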
3 Experiments and Results
Overfitting a Sphere
We start by evaluating the reconstruction power of our network in terms of the number of degrees of freedom used, on a simple 3D dataset (Figure 4). We compare our method to the state-of-the-art OccNet and DeepSDF. Note that while both OccNet and DeepSDF guarantee continuity, the number of neurons necessary to generate reconstructions comparable to Voronoi networks in terms of Hausdorff distance to the ground truth is three orders of magnitude larger than with our approach. Figure 3 plots the number of parameters of the function versus the Hausdorff distance from the ground truth for all three methods. Figure 4 shows the reconstructions for each method visually.
We evaluate our formulation on the MNIST dataset by treating the digits as an occupancy function over the image domain that needs to be predicted. We compare our method against OccNet. Both methods use a 4-layer fully connected encoder with 1024 neurons per layer. The encoder maps an MNIST digit image to a 16-dimensional latent variable. The decoder for our method is a 3-layer fully connected network with 1024 neurons per layer, which maps the latent code to 128 Voronoi cells. The decoder for OccNet has one hidden layer with a varying number of neurons, and maps a latent code and a query point to a probability of occupancy.
We start by visualizing the tSNE embedding in Figure 6. Notice that while the method was trained in a self-supervised fashion, the latent space was able to organize the various digits by clearly separating the semantic classes. It is interesting to note how part of the “8” embedding space is wedged between the “3” and the “5”, reflecting the geometric similarity between these characters, and the required topological changes to interpolate between them. To show this, we also visualize a path in the embedding space by encoding two digits, and then interpolating their latent codes; see Figure 2. Notice how the topology of the “9” is first converted into the one of a “5”, then into a “6” and finally smoothly deformed towards the target configuration.
We conclude our experiments by evaluating (on the test set) the auto-encoding performance on MNIST. Note that in this comparison we keep the capacity of the encoder portion of our auto-encoder consistent across the various baselines. In particular, we compare our Voronoi decoders to popular implicit pipelines that use a multi-layer perceptron as a (conditional) implicit decoder [18, 20, 8]. Figure 5 shows randomly drawn results illustrating how the Voronoi decoder allows for a significantly more compact representation of occupancy than OccNet. Table 1 compares statistics of Voronoi reconstructions versus OccNet on the test set with a varying number of degrees of freedom.
4 Conclusion and Future Work
We introduced a new differentiable layer for solid geometry representation leveraging the Voronoi diagram. Similarly to [18, 20, 8], we expect our solution to scale to the modeling of 3D objects with minor modifications; the challenge will be the identification of a random sampling scheme tailored to evaluating the expectation in the reconstruction loss (2). While CvxNet introduced the idea of hybrid representation learning, where training is performed in the implicit domain and inference in the explicit domain (i.e. it generates meshes), our network can infer discrete geometry as the crust separating the inside/outside Voronoi cells, removing the need for iso-surfacing post-processing (e.g. marching cubes).
Our work is at an early stage. As future work, we plan to apply our method to higher-dimensional data, to produce meshings of volumes and not only surfaces, and to analyze the benefits it brings to physical simulation.
-  Narendra Ahuja, Byong An, and Bruce Schachter. Image representation using voronoi tessellation. Computer Vision, Graphics, and Image Processing, 29(3):286–295, 1985.
-  Pierre Alliez, Éric Colin de Verdière, Olivier Devillers, and Martin Isenburg. Isotropic surface remeshing. In Shape Modeling International, 2003.
-  Matan Atzmon and Yaron Lipman. Sal: Sign agnostic learning of shapes from raw data. arXiv:1911.10414, 2019.
-  Matan Atzmon, Haggai Maron, and Yaron Lipman. Point convolutional neural networks by extension operators. CoRR, abs/1803.10091, 2018.
-  C Bradford Barber, David P Dobkin, and Hannu Huhdanpaa. The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software (TOMS), 1996.
-  Mario Botsch and Leif Kobbelt. A remeshing approach to multiresolution modeling. In Proceedings of the symposium on Geometry processing, 2004.
-  Zhiqin Chen, Andrea Tagliasacchi, and Hao Zhang. Bsp-net: Generating compact meshes via binary space partitioning. arXiv:1911.06971, 2019.
-  Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. Proc. of Comp. Vision and Pattern Recognition (CVPR), 2019.
-  Angela Dai and Matthias Nießner. Scan2mesh: From unstructured range scans to 3d meshes. In Proc. of Comp. Vision and Pattern Recognition (CVPR), 2019.
-  Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey Hinton, and Andrea Tagliasacchi. Cvxnet: Learnable convex decomposition. arXiv:1909.05736, 2019.
-  Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. arXiv:1904.06447, 2019.
-  Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. Atlasnet: A papier-mache approach to learning 3d surface generation. arXiv:1802.05384, 2018.
-  Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle. Surface reconstruction from unorganized points. 1992.
-  Michael Kazhdan, Matthew Bolitho, and Hugues Hoppe. Poisson surface reconstruction. In Proceedings of Symposium on Geometry processing, 2006.
-  Ilya Kostrikov, Joan Bruna, Daniele Panozzo, and Denis Zorin. Surface networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2540–2548, 2018.
-  Yang Liu, Wenping Wang, Bruno Lévy, Feng Sun, Dong-Ming Yan, Lin Lu, and Chenglei Yang. On centroidal voronoi tessellation—energy smoothness and fast computation. ACM Trans. on Graphics (Proc. of SIGGRAPH), 2009.
-  William E Lorensen and Harvey E Cline. Marching cubes: A high resolution 3d surface construction algorithm. In ACM Trans. on Graphics (Proc. of SIGGRAPH), 1987.
-  Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. arXiv:1812.03828, 2018.
-  Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proc. of Comp. Vision and Pattern Recognition (CVPR), 2017.
-  Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. Proc. of Comp. Vision and Pattern Recognition (CVPR), 2019.
-  Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proc. of Comp. Vision and Pattern Recognition (CVPR), 2017.
-  Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. CoRR, abs/1706.02413, 2017.
-  Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. arXiv:1905.05172, 2019.
-  Francis Williams, Teseo Schneider, Cláudio T. Silva, Denis Zorin, Joan Bruna, and Daniele Panozzo. Deep geometric prior for surface reconstruction. CoRR, abs/1811.10943, 2018.