VoronoiNet: General Functional Approximators with Local Support

12/08/2019 ∙ by Francis Williams, et al.

Voronoi diagrams are highly compact representations that are used in various Graphics applications. In this work, we show how to embed a differentiable version of it – via a novel deep architecture – into a generative deep network. By doing so, we achieve a highly compact latent embedding that is able to provide much more detailed reconstructions, both in 2D and 3D, for various shapes. In this tech report, we introduce our representation and present a set of preliminary results comparing it with recently proposed implicit occupancy networks.


1 Introduction

Choosing a shape representation is a fundamental problem for any geometric task. In particular, with the advent of deep methods for geometry, the choice of representation determines what operations are possible (e.g. convolution), what architectures can be used (e.g. graph [19] or point networks [21, 22]), and what input modality (e.g. point clouds or images) can be used for training. Naturally, finding a proper differentiable representation for geometry has received much research interest recently, with a strong focus on 3D [18, 20, 8, 23, 21, 12]. A wide variety of 3D representations exist in the literature and are used for tasks ranging from surface reconstruction [13, 3, 14], shape completion [9], and shape prediction from images [8] to semantic segmentation [21] and many more.

At a high level, geometric representations can be grouped into two families: explicit representations, where the surface of an object is represented directly using, for example, meshes [15], parameterized patches [12, 24], or point clouds [21, 22]; and implicit representations, where a 3D object is defined by a scalar function over space, for example by defining the surface as a level set of this function [18, 8, 23, 11, 20, 4]. With deep networks, a recent trend is to use a neural network to represent the scalar function of a shape [8, 18, 20, 24]. Explicit representations have the benefit that the surface is immediately available, while implicit ones are easy to embed into a deep network with simple architectures, at the cost of requiring an iso-surfacing step – e.g. Marching Cubes [17] – to extract the surface. Recently, hybrid representations [10, 7] have been proposed to combine the best of both.


Figure 1: We propose a new differentiable implicit representation of solid objects based on Voronoi diagrams. An encoder generates a latent representation, which a decoder converts into a collection of sites. Our layer receives these sites as input and generates a function that can be evaluated at any query point.

Of particular relevance to our work is CvxNet [10], which represents shapes as the intersection of a finite number of half-spaces. This representation is a universal approximator of convex domains – similar to ours – as well as of non-convex ones via composition. However, non-overlap between convexes is only encouraged, not guaranteed: CvxNet pushes its decompositions to be non-overlapping through an additional loss term during training, so there is no guarantee that they remain non-overlapping at inference time. While this can be of minor importance for reasoning tasks such as shape classification, it is problematic for others such as physical simulation.


Figure 2: We encode the leftmost (3) and rightmost (5) digits in latent space and then linearly interpolate the corresponding latent codes.

Inspired by [1], we propose a novel representation that guarantees non-overlapping convexes. In other words, any network trained with our representation generates non-overlapping convexes by construction. We encode geometric information in the form of a point set $\mathcal{S}=\{\mathbf{s}_k\}$, and generate the collection of convexes as the corresponding collection of Voronoi cells $\{\mathcal{V}_k\}$. This representation is hybrid: the position of the sites is explicit, and extracting the surface only requires computing their Voronoi diagram – a task for which a number of robust and efficient software libraries exist [5]. Note that, differently from iso-surface extraction, the Voronoi diagram is unique and resolution independent – no parameter needs to be selected to compute it. Interestingly for our purposes, it is possible to closely approximate the Voronoi diagram with a differentiable implicit function, which is ideal for training.
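To make the explicit side of the representation concrete, the following is a minimal sketch – not the authors' implementation – of how the crust between inside and outside cells could be extracted from labelled sites with an off-the-shelf Voronoi library (SciPy, here in 2D); the function name and the toy site configuration are ours.

```python
# Minimal sketch (not the authors' code): extract the "crust" between
# inside/outside Voronoi cells of labelled sites using SciPy, in 2D.
import numpy as np
from scipy.spatial import Voronoi

def voronoi_crust(sites, labels):
    """Return the Voronoi ridges separating sites labelled 1 (inside)
    from sites labelled 0 (outside). Each ridge is an array of vertex
    coordinates, or None for ridges that extend to infinity."""
    vor = Voronoi(sites)
    crust = []
    # ridge_points[i] holds the indices of the two sites sharing ridge i
    for (a, b), verts in zip(vor.ridge_points, vor.ridge_vertices):
        if labels[a] != labels[b]:          # boundary between inside and outside
            if -1 in verts:                 # unbounded ridge
                crust.append(None)
            else:
                crust.append(vor.vertices[np.asarray(verts)])
    return crust

# Toy example: 4 "inside" sites surrounded by 8 "outside" sites.
inside = np.array([[0.4, 0.4], [0.6, 0.4], [0.4, 0.6], [0.6, 0.6]])
outside = np.array([[0.0, 0.0], [0.5, -0.2], [1.0, 0.0], [1.2, 0.5],
                    [1.0, 1.0], [0.5, 1.2], [0.0, 1.0], [-0.2, 0.5]])
sites = np.vstack([inside, outside])
labels = np.array([1] * len(inside) + [0] * len(outside))
print(len(voronoi_crust(sites, labels)), "boundary ridges")
```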

2 Method

We follow the trend pioneered by [8] and seek functional networks – where the output of our network is a function that can be queried at a desired location $\mathbf{p}$. Given the fixed vector of labels $\mathbf{o}=(o_1,\dots,o_N)$, we express this function as the piecewise constant function over the Voronoi diagram of the point set $\mathcal{S}=\{\mathbf{s}_k\}_{k=1}^{N}$, where the value of the function at points in the cell $\mathcal{V}_k$ is $o_k$:

$\mathcal{O}(\mathbf{p} \mid \mathcal{S}) = o_{k^*(\mathbf{p})}$, with $k^*(\mathbf{p}) = \operatorname{argmin}_k \|\mathbf{p}-\mathbf{s}_k\|^2$   (1)

where we assume that $o_k=1$ for $k \le N/2$ and $o_k=0$ otherwise – in other words, we fix half of the sites to represent the “inside” (1) of a shape, and the other half to represent the “outside” (0) of a shape.
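As a concrete reference, a minimal NumPy sketch of the hard occupancy in (1) could look as follows; the function and variable names are ours, not the paper's.

```python
import numpy as np

def voronoi_occupancy(p, sites, labels):
    """Hard Voronoi occupancy of Eq. (1): the value at each query point is the
    label (1 = inside, 0 = outside) of its nearest site.
    p: (M, d) query points, sites: (N, d), labels: (N,) in {0, 1}."""
    d2 = ((p[:, None, :] - sites[None, :, :]) ** 2).sum(-1)   # (M, N) squared distances
    return labels[np.argmin(d2, axis=1)]                      # label of nearest site

# Example: 2 inside sites and 2 outside sites in 2D.
sites = np.array([[0.4, 0.5], [0.6, 0.5], [0.0, 0.5], [1.0, 0.5]])
labels = np.array([1, 1, 0, 0])
queries = np.array([[0.5, 0.5], [0.05, 0.5]])
print(voronoi_occupancy(queries, sites, labels))   # -> [1 0]
```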

Given an input $\mathbf{x}$ (e.g. image, point cloud, voxel grid) from a training dataset $\mathcal{X}$, an encoder $\mathcal{E}$ maps $\mathbf{x}$ to a latent code $\mathbf{z}$, which a decoder $\mathcal{D}$ maps to the collection of Voronoi sites: $\mathcal{S}=\mathcal{D}(\mathcal{E}(\mathbf{x}))$. Figure 1 illustrates this architecture visually. The parameters of encoder and decoder are then trained by minimizing a reconstruction loss:

$\mathcal{L}_{\text{rec}} = \mathbb{E}_{\mathbf{x}\sim\mathcal{X}}\, \mathbb{E}_{\mathbf{p}} \big[ \big( \mathcal{O}(\mathbf{p} \mid \mathcal{D}(\mathcal{E}(\mathbf{x}))) - \mathcal{O}_{\mathbf{x}}(\mathbf{p}) \big)^2 \big]   (2)

where $\mathcal{O}_{\mathbf{x}}$ is the ground truth occupancy function (the indicator of the solid region $\Omega$) corresponding to $\mathbf{x}$. If we compare our representation to the one provided by ReLU functional networks [18, 8, 20], we differ in a fundamental way: our learnable parameters have localized support, while the transition boundaries of an MLP generally have global support.
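A hedged sketch of how one training step could estimate the expectation in (2) from sampled query points is shown below; it already uses the differentiable soft occupancy introduced in the Soft-Voronoi paragraph, and all names, layer sizes, and the toy data are our own assumptions rather than the paper's exact setup.

```python
import torch

def soft_occupancy(p, sites, labels, tau=1e-2):
    """Differentiable Voronoi occupancy (see Eqs. 3-4): softmax over negative
    squared distances, blended with the fixed inside/outside labels."""
    d2 = ((p[:, :, None, :] - sites[:, None, :, :]) ** 2).sum(-1)   # (B, M, N)
    w = torch.softmax(-d2 / tau, dim=-1)
    return (w * labels).sum(-1)                                      # (B, M)

def training_step(encoder, decoder, optimizer, x, p, gt_occ, labels, n_sites, dim):
    """x: (B, ...) inputs, p: (B, M, dim) sampled query points,
    gt_occ: (B, M) ground-truth occupancy at p, labels: (N,) fixed 0/1 labels."""
    sites = decoder(encoder(x)).view(x.shape[0], n_sites, dim)
    loss = ((soft_occupancy(p, sites, labels) - gt_occ) ** 2).mean()  # Monte-Carlo Eq. (2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data (shapes only; not meaningful training).
B, M, N, dim = 4, 256, 16, 2
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 16))
dec = torch.nn.Sequential(torch.nn.Linear(16, N * dim))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
x = torch.rand(B, 1, 28, 28)
p = torch.rand(B, M, dim)
gt = (p[..., 0] > 0.5).float()
labels = torch.cat([torch.ones(N // 2), torch.zeros(N // 2)])
print(training_step(enc, dec, opt, x, p, gt, labels, N, dim))
```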

Regularizers

While the reconstruction loss lies at the core of our method, minimizing it alone is ill-posed. In particular, there exists an infinite space of solutions in which every Voronoi cell agrees with the occupancy of the ground truth. To remedy this, we introduce a number of regularizers that aid the training process. Notably, these losses typically do not yield merely Pareto-optimal variants of the trained network: regularity is gained without sacrificing reconstruction quality.

Lemma 1.

Let $\mathcal{S}$ be a set of points such that half of them are labelled 1, and let $\mathcal{O}(\cdot \mid \mathcal{S})$ be the occupancy function of the associated Voronoi diagram. Assume that there are three points labelled 1 such that the triangle they form is contained in $\Omega$. Then there exists an infinite number of minimizers of (2).

Proof.

Assume without loss of generality that $\mathbf{s}_1, \mathbf{s}_2, \mathbf{s}_3$ are all labelled 1 and that the triangle they form is inside $\Omega$. Let $\mathbf{q}$ be any point inside this triangle, label it 1, and define $\mathcal{S}' = \mathcal{S} \cup \{\mathbf{q}\}$ by adding this labelled point to $\mathcal{S}$. Then $\mathcal{S}'$ is a minimizer of (2) whenever $\mathcal{S}$ is: in fact, $\mathcal{S}'$ produces the same occupancy function as $\mathcal{S}$. Since $\mathbf{q}$ is arbitrary, this yields an infinite family of minimizers. ∎

Soft-Voronoi

To differentiate through our Voronoi function, we generalize (1) by replacing the argmin with a soft-argmin. Given a query point $\mathbf{p}$, we first define a vector of weights $\mathbf{w}(\mathbf{p})$:

$w_k(\mathbf{p}) = \exp(-\|\mathbf{p}-\mathbf{s}_k\|^2/\tau) \,/\, \sum_j \exp(-\|\mathbf{p}-\mathbf{s}_j\|^2/\tau)$   (3)

where $\tau$ is a temperature parameter, and then formulate the soft version of (1) as:

$\tilde{\mathcal{O}}(\mathbf{p} \mid \mathcal{S}) = \sum_k w_k(\mathbf{p})\, o_k$   (4)

hence the temperature hyper-parameter controls how closely the soft-argmin approximates the argmin. In all experiments in the paper we keep $\tau$ fixed.
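The effect of the temperature can be illustrated with a small self-contained example (our own, with arbitrary toy values): as $\tau$ decreases, the soft occupancy of (4) converges to the hard assignment of (1).

```python
import numpy as np

def soft_occupancy(p, sites, labels, tau):
    """Soft Voronoi occupancy of Eqs. (3)-(4): softmax weights over
    negative squared distances, blended with the site labels."""
    d2 = ((p[None, :] - sites) ** 2).sum(-1)          # (N,) squared distances
    w = np.exp(-d2 / tau)
    w /= w.sum()                                      # Eq. (3): soft-argmin weights
    return (w * labels).sum()                         # Eq. (4)

sites = np.array([[0.3, 0.5], [0.7, 0.5]])
labels = np.array([1.0, 0.0])
p = np.array([0.45, 0.5])                             # closer to the "inside" site
for tau in [1.0, 0.1, 0.01, 0.001]:
    print(tau, soft_occupancy(p, sites, labels, tau))
# As tau -> 0 the value approaches the hard assignment (1.0) of Eq. (1).
```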

Bounds loss

We naturally want to prevent our Voronoi sites from drifting far away from the data, which can be enforced in a smooth way via [7]:

$\mathcal{L}_{\text{bound}} = \sum_k \sum_d \max\big(b^{(d)}_{\min} - s^{(d)}_k,\; s^{(d)}_k - b^{(d)}_{\max},\; 0\big)^2$   (5)

where $(\cdot)^{(d)}$ extracts the $d$-th coordinate and $[b^{(d)}_{\min}, b^{(d)}_{\max}]$ is the extent of the data bounding box along that dimension. We favor this over the use of output layers with bounded ranges as in [7], noting how these can suffer from vanishing gradients.
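A possible smooth out-of-bounds penalty in the spirit of (5) is sketched below; the unit bounding box is an assumption for illustration, not necessarily the bounds used in the paper.

```python
import torch

def bounds_loss(sites, lo=0.0, hi=1.0):
    """Smooth penalty (in the spirit of Eq. 5) keeping Voronoi sites near the data:
    squared distance of every coordinate to the assumed bounding box [lo, hi]^d.
    The box is an assumption here; the paper's exact bounds may differ."""
    below = torch.clamp(lo - sites, min=0.0)    # how far each coordinate is below the box
    above = torch.clamp(sites - hi, min=0.0)    # how far each coordinate is above the box
    return (below ** 2 + above ** 2).sum(-1).mean()

sites = torch.tensor([[0.5, 0.5], [1.3, -0.2]])   # second site has drifted outside
print(bounds_loss(sites))                          # > 0, gradients pull it back in
```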

Figure 3: Plot of the number of parameters (x-axis) vs. Hausdorff distance (y-axis) from the ground truth for the overfitted sphere example using (left) Voronoi and (right) OccNet.


Figure 4: We compare the reconstruction power, in terms of neural capacity, of our VoronoiNet (top) vs. that of the traditional multi-layer perceptrons used in OccNet [18] (bottom) on a simple 3D sphere – note these are overfitting results on a single example.

Signed distance loss

As we prescribe the Voronoi (inside/outside) labels rather than optimizing them, it is clear that if $o_k = 1$, then the corresponding site $\mathbf{s}_k$ should lie inside $\Omega$, i.e. its distance to $\Omega$ should be zero (and symmetrically for $o_k = 0$). Hence, we can define a loss that induces strong gradients towards the satisfaction of this property. Let us denote with $d_{\Omega}(\cdot)$ the distance function to $\Omega$, and with $d_{\bar{\Omega}}(\cdot)$ the distance function to its complement $\bar{\Omega}$, and then define:

$\mathcal{L}_{\text{sdf}} = \sum_{k : o_k=1} d_{\Omega}(\mathbf{s}_k)^2 + \sum_{k : o_k=0} d_{\bar{\Omega}}(\mathbf{s}_k)^2$   (6)

Note that all correct approximations of the ground truth occupancy lie in the null space of this loss. Thus, (6) simply accelerates training and does not prevent the network from finding a global minimum of the problem.
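The following sketch illustrates (6) on an analytic ground-truth shape (a disk); the distance functions and the example configuration are ours and only stand in for the distances derived from the actual training data.

```python
import torch

def sdf_loss(sites, labels, dist_inside, dist_outside):
    """Sketch of the signed-distance regularizer (Eq. 6): sites labelled 1 are
    penalized by their distance to the inside region, sites labelled 0 by their
    distance to the outside region. dist_* are assumed callables supplied by
    the ground-truth shape (here: an analytic example, not the paper's data)."""
    d_in = dist_inside(sites)     # 0 for sites already inside
    d_out = dist_outside(sites)   # 0 for sites already outside
    return (labels * d_in ** 2 + (1 - labels) * d_out ** 2).mean()

# Example ground truth: a disk of radius 0.3 centered at (0.5, 0.5).
center, r = torch.tensor([0.5, 0.5]), 0.3
dist_inside = lambda s: torch.clamp((s - center).norm(dim=-1) - r, min=0.0)
dist_outside = lambda s: torch.clamp(r - (s - center).norm(dim=-1), min=0.0)

sites = torch.tensor([[0.5, 0.5], [0.9, 0.5]])    # first inside, second outside
labels = torch.tensor([1.0, 0.0])                  # consistent labels -> zero loss
print(sdf_loss(sites, labels, dist_inside, dist_outside))     # tensor(0.)
labels_bad = torch.tensor([0.0, 1.0])              # swapped labels -> positive loss
print(sdf_loss(sites, labels_bad, dist_inside, dist_outside))
```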

Centroidal Voronoi loss

To remedy the ill-posedness (Lemma 1) of the reconstruction loss (2), we add a loss that pushes each Voronoi site towards the centroid of its corresponding cell. A Voronoi diagram whose sites lie at the centroids of their cells is known as centroidal. Centroidal Voronoi tessellations have cells with roughly equal shape and have been used for many years in graphics to generate high quality tessellations of space [2, 6, 16]. Asking the Voronoi diagram to be as centroidal as possible prevents sites from clustering and introduces a unique reconstruction minimum. Given the Voronoi sites $\mathcal{S}$, we augment them with points on the domain boundary labelled 0 (outside), compute their Delaunay triangulation, and express its corresponding graph Laplacian operator via a sparse matrix $\mathbf{L}$; a CVD-like loss can then be expressed by:

$\mathcal{L}_{\text{cvd}} = \|\mathbf{L}\,\mathbf{S}\|_F^2$   (7)

where $\mathbf{S}$ stacks the (augmented) site positions as rows.
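A possible instantiation of such a Laplacian-based regularizer is sketched below; the unweighted Laplacian, the normalization by vertex degree, and the omission of the boundary augmentation are simplifying assumptions of ours.

```python
import numpy as np
from scipy.spatial import Delaunay

def cvd_loss(sites):
    """Sketch of a centroidal-Voronoi-style regularizer (in the spirit of Eq. 7):
    build the Delaunay triangulation of the sites, form the unweighted graph
    Laplacian L, and penalize the (degree-normalized) offset of each site from
    the mean of its Delaunay neighbours."""
    tri = Delaunay(sites)
    n = len(sites)
    L = np.zeros((n, n))
    # Delaunay edges from the simplices of the triangulation.
    for simplex in tri.simplices:
        for i in simplex:
            for j in simplex:
                if i != j:
                    L[i, j] = -1.0
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))          # vertex degree on the diagonal
    delta = (L @ sites) / np.maximum(np.diag(L)[:, None], 1.0)  # site - neighbour mean
    return (delta ** 2).sum()

rng = np.random.default_rng(0)
sites = rng.random((32, 2))
print(cvd_loss(sites))   # decreases as the sites spread out more evenly
```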


Figure 5: A qualitative comparison of the representation power of different neural decoders as the number of degrees of freedom is increased.


Figure 6: A tSNE embedding of our latent codes on the MNIST dataset, colored by ground-truth class.

3 Experiments and Results

Overfitting a Sphere

We start by evaluating the reconstruction power of our network in terms of the number of degrees of freedom used, on a simple 3D dataset (Figure 4). We compare our method to the state-of-the-art OccNet [18] and DeepSDF [20]. Note that while both OccNet and DeepSDF guarantee continuity, the number of neurons necessary to generate reconstructions comparable to Voronoi networks, in terms of Hausdorff distance to the ground truth, is three orders of magnitude larger than with our approach. Figure 3 plots the number of parameters of the function versus the Hausdorff distance from the ground truth for all three methods. Figure 4 shows the reconstructions for each method visually.

MNIST

We evaluate our formulation on the MNIST dataset by treating the digits as occupancy functions over the image domain that need to be predicted. We compare our method against OccNet [18]. Both methods use a 4-layer fully connected encoder with 1024 neurons per layer. The encoder maps an MNIST digit image to a 16-dimensional latent variable. The decoder for our method is a 3-layer fully connected network with 1024 neurons per layer, which maps the latent code to 128 Voronoi cells. The decoder for OccNet has one hidden layer with a varying number of neurons, and maps a latent code and a query point to a probability of occupancy.
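For reference, a sketch of this architecture in PyTorch could look as follows; the activations, the exact placement of the final projections, and the 2D output layout are our reading of the description above, not the authors' released code.

```python
# Sketch of the MNIST auto-encoder described above (activations and output
# layout are assumptions of ours).
import torch.nn as nn

N_CELLS, DIM, LATENT = 128, 2, 16

encoder = nn.Sequential(                  # 4 fully connected layers, 1024 neurons each
    nn.Flatten(),
    nn.Linear(28 * 28, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, LATENT),              # 16-dimensional latent code
)

decoder = nn.Sequential(                  # 3 fully connected layers, 1024 neurons each
    nn.Linear(LATENT, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, N_CELLS * DIM),       # 128 Voronoi sites in 2D
)
```

Following (1), half of the 128 predicted sites would then carry the fixed label 1 (inside) and the other half the label 0 (outside).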

Embedding space

We start by visualizing the tSNE embedding in Figure 6. Notice that while the method was trained in a self-supervised fashion, the latent space was able to organize the various digits by clearly separating the semantic classes. It is interesting to note how part of the “8” embedding space is wedged between the “3” and the “5”, reflecting the geometric similarity between these characters, and the required topological changes to interpolate between them. To show this, we also visualize a path in the embedding space by encoding two digits, and then interpolating their latent codes; see Figure 2. Notice how the topology of the “9” is first converted into the one of a “5”, then into a “6” and finally smoothly deformed towards the target configuration.
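The interpolation itself only requires encoding the two endpoint digits and decoding convex combinations of their latent codes; a minimal, hypothetical sketch (placeholder encoder/decoder names) is:

```python
# Hypothetical sketch: encode two digit images, linearly interpolate the
# latent codes, and decode each interpolated code to Voronoi sites (Figure 2).
import torch

def interpolate_latents(encoder, decoder, img_a, img_b, steps=8):
    with torch.no_grad():
        z_a, z_b = encoder(img_a), encoder(img_b)
        return [decoder((1 - t) * z_a + t * z_b)   # sites along the latent path
                for t in torch.linspace(0.0, 1.0, steps)]
```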

We conclude our experiments by evaluating (on the test set) the auto-encoding performance on MNIST. Note that in this comparison we keep the capacity of the encoder portion of our auto-encoder consistent across the various baselines. In particular, we compare our Voronoi decoder to popular implicit pipelines that use a multi-layer perceptron as a (conditional) implicit decoder [18, 20, 8]. Figure 5 shows randomly drawn results illustrating how the Voronoi decoder allows for a significantly more compact representation of occupancy than OccNet. Table 1 compares statistics of Voronoi reconstructions versus OccNet on the test set with a varying number of degrees of freedom.

Method        Mean       Std        Median
OccNet 128    83.803001  28.211296  85.692169
OccNet 512    76.165771  28.211296  75.422150
OccNet 16k    52.658348  14.332524  53.036644
Voronoi 128   57.996124  17.018425  58.294270
Table 1: Auto-encoder reconstruction statistics for different methods with different degrees of freedom. Note how Voronoi with 128 cells is comparable to OccNet with 4 orders of magnitude more parameters.

4 Conclusion and Future Work

We introduced a new differentiable layer for solid geometry representation leveraging the Voronoi diagram. Similarly to [18, 20, 8], we expect our solution to scale to the modeling of 3D objects with minor modifications; the challenge will be the identification of a random sampling strategy tailored to evaluating the expectation in the reconstruction loss (2). While CvxNet [10] introduced the idea of hybrid representation learning, where training is performed in the implicit domain and inference in the explicit domain (i.e. it generates meshes), our network can infer discrete geometry directly as the crust separating the inside/outside Voronoi cells, removing the need for iso-surfacing post-processing (e.g. Marching Cubes [17]).

Our work is at an early stage. As future work, we plan to apply our method to higher dimensional data, to produce meshings of volumes and not only surfaces, and to analyze the benefits it brings to physical simulation.

References