3D Shape Variational Autoencoder Latent Disentanglement via Mini-Batch Feature Swapping for Bodies and Faces

11/24/2021
by   Simone Foti, et al.
UCL

Learning a disentangled, interpretable, and structured latent representation in 3D generative models of faces and bodies is still an open problem. The problem is particularly acute when control over identity features is required. In this paper, we propose an intuitive yet effective self-supervised approach to train a 3D shape variational autoencoder (VAE) which encourages a disentangled latent representation of identity features. Curating the mini-batch generation by swapping arbitrary features across different shapes allows us to define a loss function leveraging known differences and similarities in the latent representations. Experimental results conducted on 3D meshes show that state-of-the-art methods for latent disentanglement are not able to disentangle identity features of faces and bodies. Our proposed method properly decouples the generation of such features while maintaining good representation and reconstruction capabilities.


1 Introduction

The generation of 3D human faces and bodies is a complex task with multiple potential applications, ranging from movie and game productions to augmented and virtual reality, as well as healthcare. Currently, the generation procedure is either manually performed by highly skilled artists or it involves semi-automated avatar design tools. Even though these tools greatly simplify the design process, they are usually limited in flexibility because of the intrinsic constraints of the underlying generative models [gruber2020interactive]. Blendshapes [loper2015smpl, osman2020star, tena2011interactive], 3D morphable models [blanz1999morphable, ploumpis2019combining, li2017learning], autoencoders [ranjan2018generating, gong2019spiralnet++, aumentado2019geometric, cosmo2020limp], and generative adversarial networks [cheng2019meshgan, gecer2020synthesizing, li2020learning, abrevaya2019decoupled] are currently the most used generative models, but they all share one particular issue: the creation of local features is difficult or even impossible. In fact, not only do generative coefficients (or latent variables) lack any semantic meaning, but they also create global changes in the output shape. For this reason, we focus on the problem of 3D shape creation by enforcing disentanglement among sets of generative coefficients controlling the identity of a character.

Following [bengio2013representation, higgins2016beta, kim2018disentangling], we define a disentangled latent representation as one where changes in one latent unit affect only one factor of variation while being invariant to changes in other factors. More interpretable and structured latent representations of data that expose their semantic meaning have been widely researched in the artificial intelligence community [higgins2016beta, kim2018disentangling, kulkarni2015deep, esmaeili2019structured, ding2020guided], but this is still an open problem, especially for generative models of 3D shapes [aumentado2019geometric]. Given their superior representation capabilities, reduced number of parameters, and stable training procedures, we decide to focus our study on deep-learning-based generative models, and in particular on variational autoencoders (VAEs). In this field, recent work has tried to address the latent disentanglement problem for 3D shapes and managed to decouple the control over identity and expression (or pose) [aumentado2019geometric, cosmo2020limp, abrevaya2019decoupled], but these methods are still unable to properly disentangle identity features. Some success has been achieved in the generation of 3D shapes of furniture [nash2017shape, yang2020dsm], but the structural variability of the data requires complex architectures with multiple encoders and decoders for different furniture parts. In contrast, our method relies on a single VAE which is trained by curating the mini-batch generation procedure and adding a loss term. The intuition behind our method is that if we swap features (e.g. nose, ears, legs, arms, etc.) across the input data in a controlled manner (Fig. 1, Left), we not only know a priori which shapes within a mini-batch have (or do not have) the same feature, but we also know which are (or are not) created from the same face (or body). These differences and similarities across shapes should be captured in the latent representation. Therefore, assuming that different subsets of latent variables correspond to different features, we can partition the latent space and leverage the structure of the input batch to encourage a more disentangled, interpretable, and structured representation.

With the objective of building a model capable of generating 3D meshes, we define our VAE architecture by extending [gong2019spiralnet++]. This state-of-the-art model proved to be fast and capable of better capturing non-linear representations of 3D meshes, while leveraging very intuitive convolutional operators characterised by a reduced number of parameters. Nonetheless, the network choice is arbitrary and we expect our method to work with other network configurations and operators as well. Even though we consider meshes as our primary data structure, it is also worth noting that, given semantic segmentations of the different features, our method is applicable to voxel- or point-cloud-based generative models. We believe that the generality of the proposed method is particularly important in the current geometric deep learning field, where definitions of 3D convolutions and pooling operators are still an open problem.

To summarise, the key contributions of our approach are: (i) the definition of a new mini-batching procedure based on feature swapping, (ii) the introduction of a novel loss function capable of leveraging shape differences and similarities within each mini-batch, and (iii) the consequent creation of a 3D-VAE capable of generating 3D meshes from a more interpretable and structured latent representation. Our code and pre-trained models will be made available online.

2 Related Work

In this section, we first discuss existing work on 3D generative models of faces and bodies, followed by state-of-the-art approaches for latent disentanglement of autoencoder-based generative models.

Generative Models

Blendshape models manually created by artists linearly interpolate local features between two or more manually selected shapes. These models are common as consumer-level avatar design tools adopted by several videogame engines. Even though they guarantee control over the generation of local features, they are very large models usually built with only a few subjects, and they offer only very limited flexibility and expressivity [gruber2020interactive]. A widespread approach to overcome these limitations is to rely on linear statistical 3D morphable models (3DMM). These models are based upon the identity space of a population of 3D shapes, and are usually built by applying a principal component analysis (PCA) over the entire dataset. They are always built with the assumption that shapes are registered with each other and in dense point correspondence. This allows the generation of meaningful and morphologically realistic shapes as linear combinations of training data. This technique was pioneered by [blanz1999morphable] and further developed and adopted by many researchers [egger20203d]. Interestingly, [gruber2020interactive] divided the face into different local patches and trained a PCA model for each region in order to control the generation of different facial features. The generation of new faces and interactive face sculpting are then achieved through a constrained optimisation. Recently, [ploumpis2019combining, ploumpis2020towards] combined multiple 3DMMs to create the first combined, large-scale, full-head morphable model. In particular, the universal head model (Uhm) [ploumpis2019combining] combines the Large-Scale Face Model (LSFM) [booth20163d], which was built with face scans from subjects, with the LYHM head model [dai2020statistical]. In [ploumpis2020towards], it was extended with a detailed ear model, eye and eye-region models, as well as basic models for the mouth, teeth, tongue, and inner mouth cavity. As further detailed in Sec. 4, given the high diversity of Uhm, we decided to train our face model on heads from [ploumpis2019combining].
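The sampling step shared by these linear models can be sketched in a few lines. The function below is a minimal NumPy illustration (names and the Gaussian coefficient prior are our assumptions, not an API of any of the cited models): a new shape is the mean shape plus a random linear combination of the principal directions.

```python
import numpy as np

def sample_3dmm(mean_shape, components, sigmas, rng=None):
    """Draw a random shape from a linear PCA morphable model.

    mean_shape: (3N,) flattened mean shape
    components: (K, 3N) principal directions, one per row
    sigmas:     (K,) per-component standard deviations
    """
    if rng is None:
        rng = np.random.default_rng()
    # sample coefficients from the Gaussian implied by the PCA
    coeffs = rng.standard_normal(len(sigmas)) * sigmas
    return mean_shape + coeffs @ components
```

Reshaping the returned vector to (N, 3) recovers per-vertex coordinates; this is the kind of generator we use to build our training datasets from Uhm and Star (Sec. 4).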

PCA-based models and blendshapes are often combined. For instance, Smpl [loper2015smpl] learns linear PCA models of male and female body shapes from approximately scans per gender, and subsequently uses the resulting principal components as body shape blendshapes capable of efficiently controlling the identity of a subject. The same approach is also used by Star [osman2020star], which not only creates more realistic pose deformations than [loper2015smpl], but also leverages additional scans to improve the generalisation capabilities of the model. Given its better generalisation with respect to other state-of-the-art methods, we trained our body model on shapes generated from Star.

Recently, advances in the geometric deep learning community have made it possible to efficiently define convolutional operators on 3D data such as meshes and point clouds. [ranjan2018generating] is the first AE for 3D meshes of faces based on a graph convolutional neural network. This model was built using significantly fewer parameters than state-of-the-art PCA-based models and showed lower reconstruction errors as well as better generalisation to unseen faces. Other AE-based architectures leveraging different convolutional operators over different datasets were subsequently introduced [aumentado2019geometric, cosmo2020limp, litany2018deformable, yuan2020mesh, zhou2020fully]. Despite the remarkable performance of these models, we decided to adopt the base architecture of [gong2019spiralnet++], which further improved upon previous methods by defining a more intuitive convolutional operator based on dilated spiral convolutions (i.e. spiral++ convolution).

An alternative line of work considers generative adversarial networks (GANs) instead of autoencoders. The first GAN operating on 3D meshes was proposed in [cheng2019meshgan]; it made it possible to disentangle identity from expression generative factors. Other methods usually map 3D shapes to the image domain and then train adversarial networks with traditional 2D convolutions [abrevaya2019decoupled, gecer2020synthesizing, li2020learning]. GAN models are generally able to generate more detailed and realistic 3D shapes than autoencoders, at the cost of being more unstable and difficult to train.

As mentioned in Sec. 1, with the exception of artistically created blendshape models and [gruber2020interactive], none of the other methods described here allows control over local changes during the generation process, because their generative coefficients lack semantic meaning, are not easily interpretable, and are not properly disentangled.

Autoencoder Latent Disentanglement

Latent disentanglement for the generation of 3D shapes has been explored mostly in relation to the disentanglement of identity and pose generative factors. [aumentado2019geometric] created a two-level architecture combining a point-cloud AE with a VAE, where the latent space is successfully partitioned by relying on multiple geometric losses and disentanglement penalties. [cosmo2020limp] achieves similar results by training a point-cloud VAE while controlling the amount of distortion incurred in the construction of the latent space. As mentioned in Sec. 1, these methods are not capable of disentangling the generative factors controlling the identity of different subjects. Methods such as [yang2020dsm, nash2017shape], on the other hand, are able to control different parts of furniture meshes, but they require complex architectures with multiple encoders and decoders controlling the different parts. Unlike faces and bodies, discontinuities between different parts are not a problem when generating furniture, even though part hierarchies have to be considered in the model formulation.

Research on latent disentanglement of AEs often focuses on the scenario in which only raw observations are available without any supervision about the generative factors, and it is usually performed on images. [higgins2016beta] proposed a simple modification to the VAE [kingma2013auto]: by increasing the weight of the Kullback–Leibler (KL) divergence, the β-VAE showed better latent disentanglement properties at the expense of a reduced quality of the generated samples. Subsequent work, such as [kumar2017variational, kim2018disentangling], tried to overcome this limitation. The DIP-VAE [kumar2017variational] leverages an additional regularisation term on the expectation of the approximate posterior over observed data. The Factor VAE [kim2018disentangling] encourages the latent distribution to be factorial, and therefore independent across dimensions, by using a latent discriminator and by adding a total correlation term to the VAE loss function. An interesting approach to encourage latent variables to represent predefined transformations was proposed in [kulkarni2015deep], where mini-batches are created by combining active and inactive transformations, and gradients influencing the latent variables are modified during backpropagation. However, this method requires synthetic datasets created with known properties that can be used during training to achieve the disentanglement. Recently, [esmaeili2019structured] proposed a VAE in which the objective function is hierarchically decomposed to control the relative levels of statistical independence between groups of variables and for individual variables in the same group. The recursive formulation of the loss introduces additional terms for every variable that has to be disentangled and works only where the factors of variation are uncorrelated scalar variables, a requirement that hampers the applicability of the model in real-world scenarios. Finally, the Guided-VAE [ding2020guided], in its unsupervised setting, leverages a secondary decoder that learns a set of PCA bases used to guide the training over simple geometrical shapes. Nevertheless, since the secondary decoder is based on PCA, its latent variables suffer from the same problems as PCA models.

Among the aforementioned methods for latent disentanglement, the DIP-VAE [kumar2017variational] and Factor VAE [kim2018disentangling] showed good disentanglement performance on in-the-wild image datasets as well, while requiring only minor modifications to the VAE formulation. For this reason, we implemented a DIP-VAE and a Factor VAE operating on meshes and compared them against our method.

3 Method

The proposed method (Fig. 1) allows us to obtain more interpretable and structured latent representations for self-supervised 3D generative models. This is achieved by training a mesh-convolutional variational autoencoder (Sec. 3.1) with a mini-batch controlled feature swapping procedure and a latent consistency loss (Sec. 3.2).

Figure 2: Examples of feature swapping for different features and different subjects.

3.1 Mesh Variational Autoencoder

A 3D manifold triangle mesh is defined as M = {V, E, F}, where V is its vertex embedding, E is the edge connectivity that defines its topology, and F are its triangular faces. Assuming that meshes share the same topology across the entire dataset, E and F are constant and meshes differ from one another only in the position of their vertices, which are assumed to be consistently aligned, scaled, and with point-wise correspondences. Since traditional convolutional operators are not compatible with the non-Euclidean nature of meshes, we build our generative model with the simple yet efficient approach defined in [gong2019spiralnet++]. Convolution operators are thus defined as learnable functions over pre-computed dilated spiral sequences, while pooling and un-pooling operators are defined as sparse matrix multiplications with pre-computed transformations that are obtained with a quadric sampling procedure [gong2019spiralnet++, ranjan2018generating].
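The two operators can be sketched as follows. This is an illustrative NumPy reduction of the idea (function names, dense matrices, and the absence of bias/activation are our simplifications, not the SpiralNet++ implementation): a spiral convolution gathers each vertex's pre-computed spiral neighbourhood, flattens it, and applies one shared linear map; pooling is a multiplication with a pre-computed sampling transform.

```python
import numpy as np

def spiral_conv(x, spiral_idx, weight):
    """x: (V, C_in) per-vertex features; spiral_idx: (V, L) pre-computed
    dilated spiral indices; weight: (L * C_in, C_out) shared linear map."""
    V = x.shape[0]
    gathered = x[spiral_idx].reshape(V, -1)  # (V, L * C_in)
    return gathered @ weight                  # (V, C_out)

def pool(x, transform):
    """Pooling/un-pooling as a matrix multiplication with a pre-computed
    (V_out, V_in) sampling transform (sparse in practice)."""
    return transform @ x
```

In the real model the transforms come from quadric sampling and the matrices are sparse; here they are dense only for brevity.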

Our 3D-VAE is built as an encoder-decoder pair (Fig. 1, Centre), where the decoder is used as a generative model and is referred to as the generator. Following this convention, we define our architecture as a pair of non-linear functions (E, G). Letting X be the vertex embedding domain and Z the latent distribution domain, the encoder E: X → Z is defined as a variational distribution that approximates the intractable model posterior distribution, and the generator G: Z → X is described by the likelihood. Throughout the entire network, each spiral++ convolutional layer is followed by an ELU activation function; however, in E they are interleaved with pooling layers and in G with un-pooling layers. There are also three fully connected layers: two of them are the last layers of E, predicting the mean and the diagonal covariance of the variational distribution; the other is the first layer of G and transforms the latent vector back into a low-dimensional mesh that can be processed by mesh convolutions.

During training, the following loss is minimised:

L = L_rec + w_KL L_KL + w_lap L_lap    (1)

where w_KL and w_lap are weighting constants. L_rec is the mean squared error between the input (X) and the corresponding output (X̂) vertices. This reconstruction loss encourages the output of the VAE to be as close as possible to its input. L_KL is the Kullback–Leibler (KL) divergence pushing the variational distribution towards the prior distribution, which is defined as a standard spherical Gaussian distribution. Finally, L_lap is a smoothing term based on the uniform Laplacian [nealen2006laplacian] that is computed on the output vertices as the mean squared norm of the per-vertex Laplacians:

δ_n = x̂_n − (1/|N(n)|) Σ_{m∈N(n)} x̂_m,

where δ_n is the Laplacian of the n-th output vertex and N(n) is the set of its neighbouring vertices with cardinality |N(n)|. L_lap is efficiently computed by relying on matrix operators. Concretely, we have Δ = L_rw X̂, where L_rw = I − D⁻¹A is the Laplacian operator with random-walk normalisation, A is the adjacency matrix, and D is the diagonal degree matrix with D_ii = Σ_j A_ij. Note that vertices are normalised by subtracting the per-vertex mean of the training set and dividing the result by the per-vertex standard deviation of the training set, thus the losses in Eq. 1 are computed on normalised vertices. Also, all loss terms are reduced across mini-batches with a mean reduction.
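The two regularisers can be sketched in NumPy as follows (an illustrative sketch with a dense adjacency matrix and our own function names; the paper's exact weighting and reductions are described in the text above):

```python
import numpy as np

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def laplacian_smoothing(verts, adj):
    """Uniform-Laplacian smoothing on output vertices.

    verts: (V, 3) vertex positions; adj: (V, V) binary adjacency matrix.
    delta = (I - D^{-1} A) verts, i.e. random-walk normalisation: each
    vertex minus the mean of its neighbours.
    """
    deg = adj.sum(axis=1, keepdims=True)
    delta = verts - (adj @ verts) / deg
    return float(np.mean(np.sum(delta**2, axis=1)))
```

A vertex lying at the centroid of its neighbours contributes nothing to the smoothing term, which is why this regulariser removes the surface discontinuities introduced by feature swapping (Sec. 3.2).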

Figure 3: Random samples and vertex-wise distances showing the effects of traversing three randomly selected latent variables (see Supplementary Material for the effects of all the latent variables).

3.2 Mini-Batch Feature Swapping and Latent Consistency Loss

We aim to obtain a generative model where vertices corresponding to specific mesh features are controlled by a predefined set of latent variables. Therefore, we start by defining arbitrary mesh features on a mesh template (Fig. 1, Right). Features are manually defined by colouring mesh vertices. Since vertices have point-wise correspondences (Sec. 3.1), features can be easily identified for every other mesh in the dataset without manually segmenting them. This allows us to swap features from one mesh to another by replacing the vertices corresponding to the selected feature (Fig. 2).

Feature swapping is at the core of our method: it allows us to curate the mini-batch generation in order to properly shape and constrain the latent representation of each mesh. Each mini-batch can be thought of as a square matrix, where each element is the vertex embedding of a different mesh. As can be seen in Fig. 1 (Left), while elements on the diagonal of this matrix are loaded from the dataset, the remaining elements are created online by swapping features. Every time a mini-batch is created, a feature is randomly selected and swapped. Therefore, each row of the matrix contains the same mesh with different features, while each column contains different meshes with the same feature. Interestingly, the naive implementation of the feature swapping causes visible surface discontinuities in most input meshes (Fig. 2), but the discontinuities are not present in reconstructed meshes thanks to the Laplacian regulariser in Eq. 1.
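The mini-batch construction can be sketched as follows (an illustrative NumPy sketch under the paper's assumptions of shared topology and point-wise correspondence; the function name and dense representation are ours):

```python
import numpy as np

def build_swapped_batch(meshes, feat_mask):
    """Build the square mini-batch matrix by feature swapping.

    meshes:    (B, V, 3) meshes loaded from the dataset (the diagonal)
    feat_mask: (V,) boolean mask of the vertices belonging to the
               randomly selected feature
    Returns (B, B, V, 3), where element (i, j) is mesh i carrying mesh
    j's feature: rows share a mesh, columns share a feature, and the
    diagonal is left untouched.
    """
    B = meshes.shape[0]
    batch = np.repeat(meshes[:, None], B, axis=1)  # batch[i, j] = mesh i
    # overwrite the feature vertices of every row element with those of
    # the column's source mesh (broadcast over the row index)
    batch[:, :, feat_mask] = meshes[:, feat_mask][None]
    return batch
```

Because the mask is defined once on the template, the same line of code swaps noses, ears, or legs depending on which feature was drawn for the current mini-batch.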

Obviously, when a mini-batch is encoded we obtain a corresponding matrix of latent representations. As we can see in Fig. 1 (Centre), each mesh in the mini-batch has a corresponding latent vector, which is evenly split into subsets of latent variables, one for each mesh feature. Note that even though every latent subset has the same number of variables, uneven splits are also admissible.

Every time a mini-batch is created by swapping a feature F, we can split each latent vector as z = [z_F, z_F̄]. z_F is the subset of latent variables controlling the feature swapped across the current mini-batch; z_F̄ is the part that controls everything else. Inspired by both triplet losses and [sanyal2019learning], and thanks to our curated mini-batching, we can enforce differences and similarities in the latent representations by requiring matched pairs to have a distance in latent space that is smaller, by a margin η, than the distance for unmatched pairs. We traverse the diagonal of the mini-batch latent matrix and compare all the elements on the row containing the diagonal element z_ii with those in the column containing it (z_ij and z_ji, with j ≠ i). When considering z_F we enforce latent similarities across columns and latent differences across rows by evaluating max(0, ‖z_F,ii − z_F,ji‖² − ‖z_F,ii − z_F,ij‖² + η), with j ≠ i. This is justified by the fact that elements have the same mesh feature across columns and different mesh features across rows. Vice versa, when considering z_F̄, which controls all the other mesh features for the current mini-batch, we enforce similarities row-wise and differences column-wise by evaluating max(0, ‖z_F̄,ii − z_F̄,ij‖² − ‖z_F̄,ii − z_F̄,ji‖² + η), with j ≠ i. We thus define our latent consistency loss as:

L_c = (1/C) Σ_i Σ_{j≠i} [ max(0, ‖z_F,ii − z_F,ji‖² − ‖z_F,ii − z_F,ij‖² + η) + max(0, ‖z_F̄,ii − z_F̄,ij‖² − ‖z_F̄,ii − z_F̄,ji‖² + η) ]    (2)

where C is a batch normalisation term that counts all the latent distance comparisons performed while computing L_c. Combining Eq. 1 with Eq. 2, with w_c a weighting coefficient, we can formulate the total loss as:

L_tot = L + w_c L_c    (3)
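The row/column comparisons of the latent consistency loss can be sketched as follows (an illustrative NumPy sketch; the function name, the squared-Euclidean distance, and the uniform averaging are our simplifications of the scheme described above):

```python
import numpy as np

def latent_consistency(z, feat_dims, margin=0.5):
    """Triplet-style consistency loss on the mini-batch latent matrix.

    z:         (B, B, D) latents, where z[i, j] encodes mesh i carrying
               mesh j's feature
    feat_dims: (D,) boolean mask selecting the variables assigned to the
               swapped feature
    For each diagonal element, the feature variables should be close
    along the column (same feature) and far along the row (different
    features); the complementary variables behave the other way round.
    """
    B = z.shape[0]
    zf, zr = z[..., feat_dims], z[..., ~feat_dims]
    d = lambda a, b: float(np.sum((a - b) ** 2))
    total, count = 0.0, 0
    for i in range(B):
        for j in range(B):
            if i == j:
                continue
            # swapped feature: same down the column, different across the row
            total += max(0.0, d(zf[i, i], zf[j, i]) - d(zf[i, i], zf[i, j]) + margin)
            # everything else: same across the row, different down the column
            total += max(0.0, d(zr[i, i], zr[i, j]) - d(zr[i, i], zr[j, i]) + margin)
            count += 2
    return total / count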

4 Experiments

Figure 4: Effects of traversing each latent variable across different mesh features. For each latent variable (abscissas) we represent the per-feature mean distances computed after traversing the latent variable from its minimum to its maximum value. For each latent variable, we expect a high mean distance in one single feature and low values for all the others.

Datasets

Our main objective is to train a generative model capable of generating different identities from a set of feature-disentangled latent variables. For our experiments we require datasets containing as many subjects as possible in a neutral expression. However, most open source datasets for 3D shapes of faces, bodies, or animals contain only a limited number of subjects captured in different expressions or poses (e.g.

Mpi-Dyna [pons2015dyna], Smpl [loper2015smpl], Surreal [varol2017learning], Coma [ranjan2018generating], Smal [zuffi20173d], etc.). For this reason, we rely on two linear models that were built using a conspicuous number of subjects and that are released for non-commercial scientific research purposes: Uhm [ploumpis2019combining] and Star [osman2020star] (Sec. 2). From these models we randomly generate meshes and create one dataset for faces and one for bodies. We use of the data for training, for validation, and for testing.

Implementation Details

All networks were implemented in PyTorch and trained for

epochs using the Adam optimiser [kingma2014adam] with a fixed learning rate of and mini-batch size (note that the feature swapping is applied to our method only). Spiral convolutions 111The SpiralNet++ implementation was made available with an MIT license. had spiral length of and spiral dilation of . The last convolutional layer of and the first of had features while all the others . The sampling factors used during the quadric sampling for the creation of the up- and down-sampling transformation matrices were set to . Since the two datasets have a significantly different number of vertices ( and ), networks operating on faces have convolutional layers interleaved with sampling operators in both and , while networks operating on bodies have only . For the same reason latent sizes are different: variables for faces and for bodies. Considering that the face template was segmented in regions and the body template in , each has variables for faces and for bodies. The weight of the Laplacian regulariser was set to , while the latent consistency weight was for faces and for bodies. and were set to for both datasets. Training was performed on a single Nvidia Quadro P5000 for faces and on an Nvidia GeForce GTX 1050Ti for bodies. We run approximately experiments in GPU days.

Comparison with Other Methods

Method Mean Recon. () Max Recon. () Diversity () JSD () MMD () COV(%, ) 1-NNA (%, )
CD EMD CD EMD CD EMD
VAE (
VAE (
DIP-VAE-I
DIP-VAE-II
Factor VAE
Proposed
Table 1: Quantitative comparison between our model and other state-of-the-art methods for self-supervised latent disentanglement. All methods were trained on the same dataset. Mean and Max Recon. refer to the the mean and maximum average per-vertex errors across the test set. Values are computed in millimetres. The diversity is computed as detailed in Sec. 4. All the other metrics for the evaluation of the generation capabilities were introduced in [yang2019pointflow].

We compare our method with other self-supervised methods based on encoder-decoder pairs. For a fair comparison, all methods share the same underlying architecture, which we refer to as VAE and which is already detailed in Sec. 3.1. Consistently with the current literature [ranjan2018generating, litany2018deformable, foti2020intraoperative, yuan2020mesh], we found that the weight coefficient () on the KL divergence in VAEs for meshes is smaller than the one used for images. In fact, with the VAE is not able to reconstruct the data. Thus, we report results on VAEs with . It is worth noting that the discrepancy between meshes and images does not allow to define a -VAE with the same criteria used in the literature ([higgins2016beta]. We also compare our method with the DIP-VAE-I, DIP-VAE-II, and Factor VAE. To the best of our knowledge this is the first attempt to use them in the mesh domain. Therefore, for the two DIP-VAEs, we set

and, following the hyperparameter tuning strategy adopted in the original implementation 

[kumar2017variational], we tune and . Here we report results for DIP-VAE-I with and as well as for DIP-VAE-II with and , which qualitatively showed better disentanglement performances. Factor VAE is trained with a discriminator learning rate of , and a total correlation weight .

We first evaluate the quality of the different models in terms of reconstruction errors, diversity of the generated samples, Jensen-Shannon Divergence (JSD) [achlioptas2018learning], Coverage (COV) [achlioptas2018learning], Minimum matching distance (MMD) [achlioptas2018learning], and 1-nearest neighbour accuracy (1-NNA) [yang2019pointflow] (Tab. 1). Mean and maximum reconstruction errors are computed with respect to mean per-vertex errors across the test set. The diversity is computed as the mean of mean per-vertex distances among pairs of meshes randomly generated with the model. The other metrics are computed by leveraging the Chamfer (CD) and Earth Mover (EMD) distances on randomly selected pairs of vertices. Note that since the original formulation of 1-NNA expects scores converging to , in Tab. 1 we report absolute differences between the original score and the target value. From Tab. 1 we observe that while most methods for latent disentanglement have significantly increased reconstruction errors, our method closely match the VAE. We also notice that while most models have similar diversity, Factor VAE is able to generate more diverse data. While this property seems to be desirable, observing some randomly generated sample (Fig. 3), we argue that sampled faces are less realistic. The other metrics used to evaluate the generation capabilities of the different models show that our method is comparable with the others, thus proving that our mini-batching procedure and latent consistency loss do not negatively affect the generation capabilities.

Figure 5: Results of our method on bodies. A: samples randomly generated with the proposed method trained on body meshes. B: visual representation of all the different body features for which we seek to obtain a disentangled latent representation. C: effects of latent variable traversals for each latent variable across different body features. D: vertex-wise distances showing the effects of traversing five latent variables (see Supplementary Material for all the latent variables).

Evaluation of Latent Disentanglement

Previous work evaluated the latent disentanglement on either datasets where labelled data were available or on custom-made datasets of images whose generative factors could be used as labels. Examples of such datasets [higgins2016beta, kumar2017variational, kim2018disentangling] are binary images of geometric shapes (e.g. circles, rectangles, etc.) where shape deformation parameters are known, or images rendered with controlled camera and lighting positions. Even though both our datasets are generated from existing models, these models lack control over the generative factors, thus traditional metrics such as Z-Diff [higgins2016beta], SAP [kumar2017variational], and Factor [kim2018disentangling] scores cannot be computed. In addition, the few unsupervised disentanglement metrics currently existing [zaidi2020measuring], are not suitable for our evaluation because [liu2020metrics] is tailored for the evaluation of the disentanglement of style and content information, while [duan2019unsupervised] is used for model and hyperparameter selection thus requiring multiple computationally expensive hyperparameter sweeps. Therefore, we decide to evaluate the effects caused on the generated meshes while traversing each latent variable. We generate two meshes corresponding to each latent variable: one is created setting one latent variable to its minimum () and all the remaining to their mean value (), the other replacing the minimum with the maximum value (). The per-vertex Euclidean distances between the two shapes represent the effects of perturbing a single latent variable. These effects can be qualitatively assessed by observing meshes rendered with vertex colours proportional to the distances (Fig. 3, and Fig. 5 D). Alternatively, distances corresponding to each feature (Fig. 1, Right and Fig. 5 A) can be averaged and subsequently plotted as in Fig. 4 and Fig. 5 C. This representation clearly highlights how perturbing each latent variable affects the different features. 
While most methods appear to be difficult to interpret and mostly entangled, our method shows a significantly more structured, interpretable, and disentangled latent representation than other methods. Interestingly, in the VAE with we observe a polarised regime in which only a subset of latent variables control the generated shapes. However, these variables are also controlling the same features, thus disentanglement is not achieved. Since the polarised regime occurs in -VAEs [rolinek2019variational], we can consider this VAE to be a -VAE operating on meshes.

Direct Manipulation

Similarly to [gruber2020interactive] our method supports direct manipulation of the generated 3D meshes. A user is thus able to select one or multiple vertices, specify their new desired location, and our method automatically generates a new mesh locally deformed to satisfy the user edit. This is achieved through a small optimisation procedure over the latent representation. We use the Adam optimiser for iterations and with a fixed learning rate of . Given the subset of vertices manually selected from the currently generated mesh, and their desired positions , with representing the number of selected vertices, we optimise: . Note that the optimisation over guarantees the locality of the manipulation (Fig. 6, IIa) and it is achieved by setting to zero the gradients computed over . This is made possible by our method and its improved latent disentanglement. An optimisation over the entire latent representation would cause visible global changes (Fig. 6, IIb), thus making impossible the direct manipulation.

Figure 6: Direct manipulation of the generated mesh. (I) The user selects an arbitrary number of vertices (blue) and their new desired position (red), then our method generates a locally edited mesh fitting the desired locations. Results are reported optimising only (IIa) and optimising the entire (IIb).

5 Conclusion

We proposed a novel approach to learn a more disentangled, interpretable, and structured latent representation for 3D VAEs. This is achieved by curating the mini-batching procedure with feature swapping and introducing an additional latent consistency loss. Even though our method is able to disentangle predefined subsets of latent variables, we do not guarantee orthogonality and disentanglement among the variables within each subsets. Nonetheless, we can increase the number of subsets to achieve finer control over the generated model. The main limitations of our work are the assumptions made on the training data. A consistent scaling and alignment, as well as dense-point correspondences, and a fixed mesh topology are common for generative models of 3D faces (and bodies) and useful for an efficient feature swapping. However, this assumption could be relaxed to make our method suitable for more general 3D problems if a different architecture was implemented and semantic segmentations of each 3D shape were available. As future work, we aim at introducing and properly disentangling expressions (or poses) while retaining the superior latent disentanglement over identity features made possible by our method.

Acknowledgement

This work was supported by the Wellcome Trust/EPSRC [203145Z/16/Z]. The views expressed in this publication are those of the author(s) and not necessarily those of the Wellcome Trust.

References