polygen_pytorch
PolyGen implementation in pytorch.
Polygon meshes are an efficient representation of 3D geometry, and are of central importance in computer graphics, robotics and games development. Existing learning-based approaches have avoided the challenges of working with 3D meshes, instead using alternative object representations that are more compatible with neural architectures and training approaches. We present an approach which models the mesh directly, predicting mesh vertices and faces sequentially using a Transformer-based architecture. Our model can condition on a range of inputs, including object classes, voxels, and images, and because the model is probabilistic it can produce samples that capture uncertainty in ambiguous scenarios. We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task. We also evaluate the conditional models on surface reconstruction metrics against alternative methods, and demonstrate competitive performance despite not training directly on this task.
Polygon meshes are an efficient representation of 3D geometry, and are widely used in computer graphics to represent virtual objects and scenes. Automatic mesh generation enables more rapid creation of the 3D objects that populate virtual worlds in games, film, and virtual reality. In addition, meshes are a useful output in computer vision and robotics, enabling planning and interaction in 3D space.
Existing approaches to 3D object synthesis rely on the recombination and deformation of template models (DBLP:journals/tog/KalogerakisCKK12; DBLP:journals/tog/ChaudhuriKGK11), or a parametric shape family (DBLP:journals/cgf/SmelikTBB14). Meshes are challenging for deep learning architectures to work with because of their unordered elements and discrete face structures. Instead, recent deep learning approaches have generated 3D objects using alternative representations of object shape: voxels (DBLP:conf/eccv/ChoyXGCS16), point clouds, occupancy functions (DBLP:conf/cvpr/MeschederONNG19), and surfaces (DBLP:conf/cvpr/GroueixFKRA18). However, mesh reconstruction is then left as a post-processing step and can yield results of varying quality. This contrasts with the human approach to mesh creation, where the mesh itself is the central object, and is created directly with 3D modelling software. Human-created meshes are compact, and reuse geometric primitives to efficiently represent real-world objects.

Neural autoregressive models have demonstrated a remarkable capacity to model complex, high-dimensional data including images
(DBLP:conf/icml/OordKK16), text (radford2019language) and raw audio waveforms (DBLP:conf/ssw/OordDZSVGKSK16). Inspired by these methods we present PolyGen, a neural generative model of meshes that autoregressively estimates a joint distribution over mesh vertices and faces.
PolyGen consists of two parts: a vertex model, which unconditionally models mesh vertices, and a face model, which models the mesh faces conditioned on input vertices. Both components make use of the Transformer architecture (DBLP:conf/nips/VaswaniSPUJGKP17), which is effective at capturing the long-range dependencies present in mesh data. The vertex model uses a masked Transformer decoder to express a distribution over the vertex sequences. For the face model we combine Transformers with pointer networks (DBLP:conf/nips/VinyalsFJ15) to express a distribution over variable-length vertex sequences.
We evaluate the modelling capacity of PolyGen using log-likelihood and predictive accuracy as metrics, and compare statistics of generated samples to real data. We demonstrate conditional mesh generation with object class, images and voxels as input, and compare to existing mesh generation methods. Overall, we find that our model is capable of creating diverse and realistic geometry that is directly usable in graphics applications.
Our goal is to estimate a distribution over meshes $\mathcal{M}$ from which we can generate new examples. A mesh is a collection of 3D vertices $\mathcal{V}$ and polygon faces $\mathcal{F}$ that define the shape of a 3D object. We split the modelling task into two parts: i) generating mesh vertices $\mathcal{V}$, and ii) generating mesh faces $\mathcal{F}$ given vertices. Using the chain rule we have:

(1)  $p(\mathcal{M}) = p(\mathcal{V}, \mathcal{F})$
(2)  $p(\mathcal{V}, \mathcal{F}) = p(\mathcal{F} \mid \mathcal{V})\, p(\mathcal{V})$
We use separate vertex and face models, both of which are autoregressive, factoring the joint distribution over vertices and faces into a product of conditional distributions. To generate a mesh we first sample the vertex model, and then pass the resulting vertices as input to the face model, from which we sample faces (see Figure 2). In addition, we optionally condition both the vertex and face models on a context, such as the mesh class identity, an input image, or a voxelized shape.
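This two-stage factorization can be sketched as follows; `vertex_model` and `face_model` are hypothetical objects standing in for the trained components, not the released API:

```python
# Hypothetical two-stage sampling loop for a PolyGen-style model.
# `vertex_model` and `face_model` are assumed to expose `.sample()` methods;
# the names and signatures are illustrative, not from the original code.
def sample_mesh(vertex_model, face_model, context=None):
    # Stage 1: sample a variable-length vertex list from p(V | context).
    vertices = vertex_model.sample(context=context)       # [[x, y, z], ...]
    # Stage 2: sample faces conditioned on those vertices, p(F | V, context).
    faces = face_model.sample(vertices, context=context)  # list of index tuples
    return vertices, faces
```

The key design point is that the face model never sees raw geometry it did not condition on: its entire vocabulary is the vertex set produced in stage 1.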
3D meshes typically consist of collections of triangles, but many meshes can be more compactly represented using polygons of variable sizes. Meshes with variable-length polygons are called n-gon meshes:

(3)  $f_i^{\text{tri}} = (v_1^i, v_2^i, v_3^i)$
(4)  $f_i^{\text{n-gon}} = (v_1^i, v_2^i, \ldots, v_{N_i}^i)$
where $N_i$ is the number of vertices in the $i$-th polygon and can vary for different faces. This means that large flat surfaces can be represented with a single polygon, e.g. the top of the circular table in Figure 3. In this work we opt to represent meshes using n-gons rather than triangles. This has two main advantages: the first is that it reduces the size of meshes, as flat surfaces can be specified with a reduced number of faces. Secondly, large polygons can be triangulated in many ways, and these triangulations can be inconsistent across examples. By modelling n-gons we factor out this triangulation variability.

A caveat to this approach is that n-gons do not uniquely define a 3D surface when $N_i$ is greater than 3, unless the vertices they reference are planar. When rendering non-planar n-gons, polygons are first triangulated by e.g. projecting vertices to a plane (DBLP:journals/algorithmica/Held01), which can cause artifacts if the polygon is highly non-planar. In practice we find that most of the n-gons produced by our model are either planar, or close to planar, such that this is a minor issue. Triangle meshes are a subset of n-gon meshes, and PolyGen can therefore be used to model them if required.
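As a concrete illustration of the triangulation step, the following sketch fan-triangulates an n-gon. This is a simple alternative to the plane-projection method cited above, and is only valid for convex, near-planar polygons:

```python
def triangulate_ngon(face):
    """Fan-triangulate an n-gon given as a list of vertex indices.

    Splits the polygon into triangles that all share the first vertex.
    A simple stand-in for the plane-projection triangulation cited in
    the text; correct only for convex, near-planar polygons.
    """
    v0 = face[0]
    return [(v0, face[i], face[i + 1]) for i in range(1, len(face) - 1)]
```

For an n-gon with N vertices this always yields N - 2 triangles, so a triangle passes through unchanged.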
The goal of the vertex model is to express a distribution over sequences of vertices. We order the vertices from lowest to highest by z coordinate, where z represents the vertical axis. If there are vertices with the same z value, we order by y and then by x value. After reordering, we obtain a flattened sequence by concatenating tuples of coordinates. Meshes have variable numbers of vertices, so we use a stopping token s to indicate the end of the vertex sequence. We denote the flattened vertex sequence $\mathcal{V}^{\text{seq}}$ and its elements as $v_n$, $n = 1, \ldots, N_V$. We decompose the joint distribution over $\mathcal{V}^{\text{seq}}$ as the product of a series of conditional vertex distributions:

(5)  $p(\mathcal{V}^{\text{seq}}; \theta) = \prod_{n=1}^{N_V} p(v_n \mid v_{<n}; \theta)$
We model this distribution using an autoregressive network that outputs at each step the parameters of a predictive distribution for the next vertex coordinate. This predictive distribution is defined over the vertex coordinate values as well as over the stopping token s. The model is trained to maximize the log-probability of the observed data with respect to the model parameters $\theta$.
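The ordering and flattening described above can be sketched as follows; the per-vertex interleaving order (z, y, x) and the stop-token value are assumptions for illustration:

```python
STOP = 256  # stopping token, outside the 8-bit coordinate range [0, 255]

def flatten_vertices(vertices):
    """Sort vertices by (z, y, x) and flatten to a token sequence.

    `vertices` is a list of (x, y, z) integer triples. The sequence
    interleaves z, y, x per vertex and ends with a stopping token; the
    exact per-vertex coordinate order is an assumption made here for
    illustration.
    """
    ordered = sorted(vertices, key=lambda v: (v[2], v[1], v[0]))
    seq = []
    for x, y, z in ordered:
        seq.extend([z, y, x])
    seq.append(STOP)
    return seq
```

Because the ordering is deterministic, each mesh maps to exactly one canonical sequence, which removes permutation ambiguity from the training data.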
Architecture. The basis of the vertex model architecture is a Transformer decoder (DBLP:conf/nips/VaswaniSPUJGKP17), a simple and expressive model that has demonstrated significant modelling capacity in a range of domains (DBLP:journals/corr/abs190410509; DBLP:conf/iclr/HuangVUSHSDHDE19; DBLP:conf/icml/ParmarVUKSKT18). Mesh vertices have strong non-local dependencies, with object symmetries and repeating parts, and the Transformer's ability to aggregate information from any part of the input enables it to capture these dependencies. We use the improved Transformer variant with layer normalization inside the residual path, as in (DBLP:journals/corr/abs190410509; DBLP:journals/corr/abs191006764). See Figure 12 in the appendix for an illustration of the vertex model and appendix C for a full description of the Transformer blocks.
Vertices as discrete variables.
We apply 8-bit uniform quantization to the mesh vertices. This reduces the size of meshes, as nearby vertices that fall into the same bin are merged. We model the quantized vertex values using a Categorical distribution, and output at each step the logits of the distribution. This approach has been used to model discretized continuous signals in PixelCNN (DBLP:conf/icml/OordKK16) and WaveNet (DBLP:conf/ssw/OordDZSVGKSK16), and has the benefit of being able to express distributions without shape limitations. Mesh vertices have strong symmetries and complex dependencies, so the ability to express arbitrary distributions is important. We find 8-bit quantization to be a good trade-off between mesh fidelity and mesh size. However, it should be noted that 14 bits or higher is typical for lossy mesh compression, and in future work it would be desirable to extend our methods to higher-resolution meshes.

Embeddings. We found the approach of using learned position and value embeddings proposed in (DBLP:journals/corr/abs190410509) to work well. We use three embeddings for each input token: a coordinate embedding, that indicates whether the input token is an x, y, or z coordinate; a position embedding, that indicates which vertex in the sequence the token belongs to; and a value embedding, which expresses a token's quantized coordinate value. We use learned discrete embeddings in each case.
Improving efficiency. One of the downsides of using Transformers for modelling sequential data is that they incur significant computational costs due to the quadratic nature of the attention operation. This presents issues when it comes to scaling our models to larger meshes. To address this, we explored several modifications of the model inspired by (Salimans17). All of them relieve the computational burden by chunking the sequence into triplets of vertex coordinates and processing each triplet at once. The first variant uses a mixture of discretized logistics to model whole 3D vertices. The second replaces the mixture with a MADE-based decoder (Germain15). Finally, we present variants that use a Transformer decoder but rely on different vertex embedding schemes. These modifications are described in more detail in appendix E.
The face model expresses a distribution over a sequence of mesh faces conditioned on the mesh vertices. We order the faces by their lowest vertex index, then by their next lowest vertex, and so on, where the vertices have been ordered from lowest to highest as described in Section 2.2. Within a face we cyclically permute the face indices so that the lowest index is first. As with the vertex sequences, we concatenate the faces to form a flattened sequence, with a final stopping token. We write $\mathcal{F}^{\text{seq}}$ for this flattened sequence, with elements $f_n$, $n = 1, \ldots, N_F$.
(6)  $p(\mathcal{F}^{\text{seq}} \mid \mathcal{V}; \theta) = \prod_{n=1}^{N_F} p(f_n \mid f_{<n}, \mathcal{V}; \theta)$
As with the vertex model, we output a distribution over the values of $f_n$ at each step, and train by maximizing the log-likelihood of the observed data over the training set. The distribution is a categorical defined over $\{1, \ldots, N_V + 2\}$, where $N_V$ is the number of input vertices, and we include two additional values for the end-face and stopping tokens.
Mesh pointer networks. The target distribution is defined over the indices of an input set of vertices, which poses the challenge that the size of this set varies across examples. Pointer networks (DBLP:conf/nips/VinyalsFJ15)
propose an elegant solution to this issue: first, the input set is embedded using an encoder, and then at each step an autoregressive network outputs a pointer vector that is compared to the input embeddings via a dot product. The resulting scores are then normalized using a softmax to form a valid distribution over the input set.
In our case we obtain contextual embeddings of the input vertices using a Transformer encoder $E$. This has the advantage of bidirectional information aggregation compared to the LSTM used by the original pointer networks. We jointly embed new-face and stopping tokens with the vertices, to obtain a total of $N_V + 2$ input embeddings. A Transformer decoder $D$ operates on the sequence of faces and outputs pointers $p_n$ at each step. The target distribution can be obtained as
(7)  $\{e_v\}_{v=1}^{N_V+2} = E(\mathcal{V}; \theta_E)$
(8)  $p_n = D(f_{<n}, \mathcal{V}; \theta_D)$
(9)  $p(f_n = k \mid f_{<n}, \mathcal{V}; \theta) = \operatorname{softmax}_k\left(p_n^{\top} e_k\right)$
See Figure 5 for an illustration of the pointer mechanism and Figure 13 in the appendix for an illustration of the whole face model. The decoder $D$ is a masked Transformer decoder that operates on sequences of embedded face tokens. It conditions on the input vertices in two ways: via dynamic face embeddings, as explained in the next section, and optionally through cross-attention into the sequence of vertex embeddings.
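The pointer mechanism itself reduces to a dot product followed by a softmax; a minimal sketch:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def pointer_distribution(pointer, embeddings):
    """Pointer-network output distribution: compare a decoder pointer vector
    against the contextual input embeddings by dot product and normalize
    with a softmax, yielding one probability per input vertex/token.
    """
    scores = embeddings @ pointer
    return softmax(scores)
```

Because the output dimension equals the number of rows in `embeddings`, the same decoder handles meshes with any number of vertices.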
Embeddings. As with the vertex model we use learned position and value embeddings. We decompose a token's position into the index of the face it belongs to, as well as the location of the token within that face, using separate learned embeddings for both. For value embeddings we follow the approach of pointer networks and simply embed the vertex indices by indexing into the contextual vertex embeddings output by the vertex encoder.
For both the vertex and face model only certain predictions are valid at each step. For instance, the z coordinates must increase monotonically, and the stopping token can only be placed after an x coordinate. Similarly, mesh faces cannot have duplicate indices, and every vertex index must be referenced by at least one face. When evaluating the model we mask the predicted logits to ensure that the model can only make valid predictions. This has a non-negative effect on the model's log-likelihood scores, as it reassigns probability mass from the invalid region to values in the valid region (Table 1). Surprisingly, we found that masking during training worsens performance to a small degree, so we always train without masking. For a complete description of the masks used, see appendix F.
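Masking invalid predictions amounts to setting the corresponding logits to negative infinity before the softmax, so that all probability mass falls in the valid region; a sketch:

```python
import numpy as np

def mask_logits(logits, valid):
    """Set logits of invalid predictions to -inf. `valid` is a boolean
    array of the same shape as `logits`."""
    return np.where(valid, logits, -np.inf)

def masked_probs(logits, valid):
    """Softmax over the masked logits; invalid entries get exactly zero
    probability and the valid entries renormalize among themselves."""
    m = mask_logits(logits, valid)
    e = np.exp(m - m[np.asarray(valid)].max())  # exp(-inf) == 0
    return e / e.sum()
```

This is why masking can only help the reported log-likelihood: probability previously wasted on impossible tokens is reassigned to possible ones.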
Table 1:
                          Bits per vertex       Accuracy
Model                     Vertices   Faces      Vertices   Faces
Uniform                   24.08      39.73      0.004      0.002
Valid predictions         21.41      25.79      0.009      0.038
Draco* (draco)            Total: 27.68          -          -
PolyGen                   2.46       1.79       0.851      0.900
- valid predictions       2.47       1.82       0.851      0.900
- discr. embed. (V)       2.56       -          0.844      -
- data augmentation       3.39       2.52       0.803      0.868
+ cross attention (F)     -          1.87       -          0.899
Table 2:
Model           Bits per vertex   Accuracy   Steps per sec
Mixture         3.01              -          7.19
MADE decoder    2.65              0.844      7.02
Tr. decoder     2.50              0.851      4.07
+ Tr. embed.    2.48              0.851      4.60
Base model      2.46              0.851      2.98
We can guide the generation of mesh vertices and faces by conditioning on a context. For instance, we can output vertices consistent with a given object class, or infer the mesh associated with an input image. It is straightforward to extend the vertex and face models to condition on a context. We incorporate context in two ways, depending on the domain of the input. For global features like class identity, we project learned class embeddings to a vector that is added to the intermediate Transformer representations following the self-attention layer in each block. For high-dimensional inputs like images or voxels, we jointly train a domain-appropriate encoder that outputs a sequence of context embeddings. The Transformer decoder then performs cross-attention into the embedding sequence, as in the original machine translation Transformer model.
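The global-feature conditioning can be sketched as follows; the projection matrix and variable names are illustrative, not from the released code:

```python
import numpy as np

def add_global_context(h_attn, class_embedding, w_proj):
    """Global conditioning sketch: project a learned class embedding and add
    it to the intermediate representation after self-attention. `w_proj`
    maps the embedding to the model width; all names are hypothetical.
    """
    ctx = w_proj @ class_embedding   # [d_model]
    return h_attn + ctx[None, :]     # broadcast over all sequence positions
```

Since the same projected vector is added at every position, this conveys only global information; spatially structured inputs instead use the cross-attention path described above.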
For image inputs we use an encoder consisting of a series of downsampling residual blocks. We use pre-activation residual blocks (DBLP:conf/eccv/HeZRS16), and downsample three times using convolutions with stride 2, mapping input images to spatial feature maps whose channel dimension matches the embedding dimensionality of the model. For voxel inputs we use a similar encoder with 3D convolutions that maps input voxel grids to a grid of spatial embeddings. For both input types we add coordinate embeddings to the feature maps before flattening the spatial dimensions. For more architecture details see appendix C.

Our primary evaluation metric is log-likelihood, which we find to correlate well with sample quality. We also report summary statistics for generated meshes, and compare our model to existing approaches using chamfer distance in the image- and voxel-conditioned settings.
We train all our models on the ShapeNet Core V2 dataset (shapenet2015), which we subdivide into training, validation and testing splits. The training set is augmented as described in Section 3.2. In order to reduce the memory requirements of long sequences we filter out meshes with more than 800 vertices, or more than 2800 face indices after pre-processing. We train the vertex and face models for a fixed number of weight updates each, using four V100 GPUs per training run for a total batch size of 16. We use the Adam optimizer with a gradient clipping norm of 1.0, and perform cosine annealing from the maximum learning rate, with a linear warm-up period of 5000 steps. We use a dropout rate of 0.2 for all models.

In general we observed significant overfitting due to the relatively small size of the ShapeNet dataset, which is exacerbated by the need to filter out large meshes. In order to reduce this effect, we augmented the input meshes by scaling the vertices independently on each axis, using a random piecewise-linear warp for each axis, and by varying the decimation angle used to create n-gon meshes. For each input mesh we create 50 augmented versions which are then quantized (Section 2.2) for use during training. We found that augmentation was necessary to obtain good performance (Table 1). For full details of the augmentations and parameter settings see appendix A.
Rendering. In order to train image-conditional models we create renders of the processed ShapeNet meshes using Blender (Blender). For each augmented mesh, and each validation and test-set mesh, we create renders using randomly chosen lighting, camera and mesh material settings. For more details see appendix B.
We compare unconditional models trained under varying conditions. As evaluation metrics we report the negative loglikelihood obtained by the models, reported in bits per vertex, as well as the accuracy of next step predictions. For vertex models this is the accuracy of next vertex coordinate predictions, and for face models this is the accuracy of the next vertex index predictions. In particular we compare the effect of masking invalid predictions (Section 2.4), of using discrete rather than continuous coordinate embeddings in the vertex model (Section 5), of using data augmentation (Section 3.2), and finally of using crossattention in the face model. Unless otherwise specified we use embeddings of size 256, fully connected layers of size 1024, and 18 and 12 Transformer blocks for the vertex and face models respectively. As there are no existing methods that directly model mesh vertices and faces, we report the scores obtained by models that allocate uniform probability to the whole data domain, as well as models that are uniform over the region of valid predictions. We additionally report the compression rate obtained by Draco (draco), a mesh compression library. For details of the Draco compression settings see appendix G.
Table 1 shows the results obtained by the various models. We find that our models achieve significantly better modelling performance than the uniform and Draco baselines, which illustrates the gains achievable by a learned predictive model. We find that restricting the model's predictions to the range of valid values results in a minor improvement in modelling performance, which indicates that the model is effective at assigning low probability to the invalid regions. Using discrete rather than continuous embeddings for vertex coordinates provides a significant improvement, improving bits-per-vertex from 2.56 to 2.46. Surprisingly, using cross-attention in the face model harms performance, which we attribute to overfitting. Data augmentation has a strong effect on performance, with models trained without augmentation losing 1.64 bits per vertex on average. Overall, our best model achieves a log-likelihood score of 4.26 bits per vertex, and 0.851 and 0.900 predictive accuracy for the vertex and face models respectively. Figure 14 in the appendix shows random unconditional samples from the best performing model.
Table 2 presents a comparison of different variants of the vertex model as discussed in Section 5. The results suggest that the proposed variants can achieve a substantial reduction in training time with a minimal sacrifice in performance. Note that these models used different hyperparameter settings as detailed in appendix E.
We compare the distribution of certain mesh summaries for samples from our model against the ShapeNet test set. If our model has closely matched the true data distribution then we expect these summaries to have similar distributions. We draw 1055 samples from our best unconditional model, and discard samples that do not produce a stopping token within 1200 vertices, or 800 faces. We use nucleus sampling (DBLP:journals/corr/abs190409751), which we found to be effective at maintaining sample diversity while reducing the presence of degraded samples. Nucleus sampling reduces sampling degradation by sampling from the smallest subset of tokens that account for the top p of probability mass.
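A minimal sketch of nucleus (top-p) sampling over a categorical distribution:

```python
import numpy as np

def nucleus_sample(probs, top_p, rng):
    """Nucleus (top-p) sampling: restrict to the smallest set of tokens whose
    cumulative probability reaches `top_p`, renormalize, and sample.
    """
    order = np.argsort(probs)[::-1]           # tokens, most probable first
    cum = np.cumsum(probs[order])
    # Keep tokens up to and including the first one that crosses top_p.
    cutoff = int(np.searchsorted(cum, top_p)) + 1
    kept = order[:cutoff]
    p = probs[kept] / probs[kept].sum()
    return int(rng.choice(kept, p=p))
```

Truncating the low-probability tail is what suppresses the occasional degenerate token while leaving the high-probability head, and hence sample diversity, largely untouched.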
Figure 7 shows the distribution of a number of mesh summaries, for samples from PolyGen as well as the true data distribution. In particular we show: the number of vertices, number of faces, node degree, average face area and average edge length for sampled and true meshes. Although these are coarse descriptions of a 3D mesh, we find our model's samples to have a similar distribution for each mesh statistic. We observe that nucleus sampling helps to align the model distributions with the true data for a number of statistics. Figure 8 shows an example 3D mesh generated by our model compared to a mesh obtained through post-processing an occupancy function (DBLP:conf/cvpr/MeschederONNG19). We note that the statistics of our mesh resemble human-created meshes to a greater extent.
Table 3:
               Bits per vertex             Accuracy
Context        Vertices   Faces   Total    Vertices   Faces
None           2.46       1.79    4.26     0.851      0.900
Class          2.43       1.81    4.24     0.853      0.899
Image          2.30       1.81    4.11     0.857      0.900
+ pooling      2.35       1.78    4.13     0.856      0.900
Voxels         2.19       1.82    4.01     0.859      0.900
+ pooling      2.28       1.79    4.07     0.856      0.900
We train vertex and face models with three kinds of conditioning: class labels, images, and voxels. We use the same settings as the best unconditional model: discrete vertex embeddings with no cross-attention in the face model. As with the unconditional models we use 18 layers for the vertex model and 12 layers for the face model. Figures 1 and 4 show class-conditional samples. Figures 6 and 10 show samples from image- and voxel-conditional models respectively. Note that while we train on the ShapeNet dataset, we show ground truth meshes and inputs for a selection of representative meshes collected from the TurboSquid online object repository.
Table 3 shows the impact of conditioning on predictive performance in terms of bits-per-vertex and accuracy. We find that for vertex models, voxel conditioning provides the greatest improvement, followed by images, and then by class labels. This confirms our expectations, as voxels characterize the coarse shape unambiguously, whereas images can be ambiguous depending on the object pose and lighting. However the additional context does not lead to improvements for the face model, with all conditional face models performing slightly worse than the best unconditional model. This is likely because mesh faces are to a large extent determined by the input vertices, and the conditioning context provides relatively little additional information. In terms of predictive accuracy, we see similar effects, with accuracy improving with richer contexts for vertex models, but not for face models. We note that the accuracy ceiling is less than one, due to the inherent entropy of the vertex and face distributions, and so we expect diminishing gains as models approach this ceiling.
For image- and voxel-conditional models, we also compare to architectures that apply global average pooling to the outputs of the input encoders. We observe that pooling in this way negatively affects the vertex models' performance, but has a small positive effect on the face models' performance.
We additionally evaluate the image- and voxel-conditioned models on mesh reconstruction, where we use symmetric chamfer distance as the reconstruction metric. The symmetric chamfer distance is a distance metric between two point sets $P$ and $Q$. It is defined as:

(10)  $d_{\text{chamfer}}(P, Q) = \sum_{p \in P} \min_{q \in Q} \lVert p - q \rVert_2^2 + \sum_{q \in Q} \min_{p \in P} \lVert p - q \rVert_2^2$
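A direct NumPy sketch of the symmetric chamfer distance (using summed squared Euclidean distances, an assumption consistent with common practice):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric chamfer distance between point sets P and Q: for each
    point, the squared distance to its nearest neighbour in the other
    set, summed over both directions.
    """
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    # Pairwise squared distances, shape [len(p), len(q)].
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()
```

The broadcasting approach is O(|P| x |Q|) in memory, which is fine for the few thousand sampled points used in this evaluation; larger sets would call for a KD-tree.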
For each example in the test set we draw samples from the conditional model. We sample 2500 points uniformly on the sampled and target mesh and compute the corresponding chamfer distance. We compare our model to AtlasNet (DBLP:conf/cvpr/GroueixFKRA18), a conditional model that defines a mesh surface using a number of patches that have been transformed using a deep network. AtlasNet outputs point clouds and is trained to minimize the chamfer distance to a target point cloud conditioned on image or point-cloud inputs. Compared to alternative methods, AtlasNet achieves good mesh reconstruction performance, and we therefore view it as a strong baseline. We train AtlasNet models in the image- and voxel-conditioned settings, adapted to use image and voxel encoders equivalent to those used for our model. For more details see appendix D.
Figure 9 shows the mesh reconstruction results. We find that when making a single prediction, our model performs worse than AtlasNet. This is not unexpected, as AtlasNet optimizes the evaluation metric directly, whereas our model does not. When allowed to make 10 predictions, our model achieves slightly better performance than AtlasNet. Overall we find that while our model does not always produce good mesh reconstructions, it typically produces a very good reconstruction within 10 samples, which may be sufficient for many practical applications.
Generative models of 3D objects exist in a variety of forms, including ordered (DBLP:journals/cgf/NashW17) and unordered (DBLP:conf/iclr/LiZZPS19; DBLP:journals/corr/abs190612320) point clouds, and voxels (DBLP:conf/eccv/ChoyXGCS16; DBLP:conf/nips/0001ZXFT16; DBLP:conf/iccv/TatarchenkoDB17; DBLP:conf/nips/RezendeEMBJH16). More recently there has been significant progress using functional representations, such as signed distance functions (DBLP:conf/cvpr/ParkFSNL19), and other implicit functions (DBLP:conf/cvpr/MeschederONNG19). There are relatively few examples of methods that explicitly generate a 3D mesh. Such works primarily use parameterized deformable meshes (DBLP:conf/cvpr/GroueixFKRA18), or form meshes through a collection of mesh patches. Our methods are distinguished in that we directly model the mesh data created by people, rather than alternative representations or parameterizations. In addition, our model is probabilistic, which means we can produce diverse output, and respond to ambiguous inputs in a principled way.
PolyGen's vertex model is similar to PointGrow (sun2018pointgrow), which uses an autoregressive decomposition to model 3D point clouds, outputting discrete coordinate distributions using a self-attention-based architecture. PointGrow operates on fixed-length point clouds rather than variable vertex sequences, and uses a bespoke self-attention architecture that is relatively shallow in comparison to modern autoregressive models in other domains. By contrast, we use state-of-the-art deep architectures, and model both vertices and faces, enabling us to generate high-quality 3D meshes.
This work borrows from architectures developed for sequence modelling in natural language processing. This includes the sequence-to-sequence training paradigm (DBLP:conf/nips/SutskeverVL14), the Transformer architecture (DBLP:conf/nips/VaswaniSPUJGKP17; DBLP:journals/corr/abs190410509; DBLP:journals/corr/abs191006764), and pointer networks (DBLP:conf/nips/VinyalsFJ15). In addition our work is inspired by sequential models of raw data, like WaveNet (DBLP:conf/ssw/OordDZSVGKSK16), PixelRNN and its variants (DBLP:conf/nips/OordKEKVG16; DBLP:conf/iclr/MenickK19), and Music Transformer (DBLP:conf/iclr/HuangVUSHSDHDE19).

Our work is also related to PolygonRNN (DBLP:conf/cvpr/CastrejonKUF17; DBLP:conf/cvpr/AcunaLKF18), a method for efficient segmentation in computer vision using polygons. PolygonRNN takes an input image and autoregressively outputs a sequence of coordinates that implicitly define a segmented region. PolyGen, by contrast, operates in 3D space, and explicitly defines the connectivity of several polygons.
Finally, our work is related to generative models of graph-structured data such as GraphRNN (DBLP:conf/icml/YouYRHL18) and GRAN (DBLP:journals/corr/abs191000760), in that meshes can be thought of as attributed graphs. These works focus on modelling graph connectivity rather than graph attributes, whereas we model both the node attributes (vertex positions) and incorporate these attributes in our model of the connectivity.
In this work we present PolyGen, a deep generative model of 3D meshes. We pose the problem of mesh generation as autoregressive sequence modelling, and combine the benefits of Transformers and pointer networks in order to flexibly model variable-length mesh sequences. PolyGen is capable of generating coherent and diverse mesh samples, and we believe that it will unlock a range of applications in computer vision, robotics, and 3D content creation.
The authors thank Dan Rosenbaum, Sander Dieleman, Yujia Li and Craig Donner for useful discussions.
For each input mesh from the ShapeNet dataset we create 50 augmented versions which are used during training (Figure 11). We start by normalizing the meshes such that the length of the long diagonal of the mesh bounding box is equal to 1. We then apply the following augmentations, performing the same bounding box normalization after each. All augmentations and mesh rendering are performed prior to vertex quantization.
Axis scaling. We scale each axis independently, uniformly sampling a separate scaling factor for each axis from a fixed interval.
Piecewise-linear warping. We define a continuous, piecewise-linear warping function by dividing the interval into 5 even sub-intervals, sampling gradients for each sub-interval from a log-normal distribution with variance 0.5, and composing the segments. For the x and y coordinates, we ensure the warping function is symmetric about zero, by reflecting a warping function with three sub-intervals about 0.5. This preserves symmetries in the data which are often present for these axes.

Planar mesh decimation. We use Blender's planar decimation modifier (https://docs.blender.org/manual/en/latest/modeling/modifiers/generate/decimate.html) to create n-gon meshes. This merges adjacent faces where the angle between surfaces is greater than a certain tolerance. Different tolerances result in meshes of different sizes with differing connectivity, due to varying levels of decimation. We use this property for data augmentation and sample the tolerance angle uniformly from a fixed interval.
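The piecewise-linear warp can be sketched as follows; renormalizing so the warp maps [0, 1] onto [0, 1] is an assumption standing in for the bounding-box normalization described above:

```python
import numpy as np

def piecewise_linear_warp(x, gradients):
    """Warp coordinates in [0, 1] with a continuous piecewise-linear function
    built from per-subinterval gradients (e.g. log-normal samples), then
    renormalized so that [0, 1] maps onto [0, 1]. A sketch; the
    renormalization stands in for the bounding-box normalization.
    """
    g = np.asarray(gradients, dtype=np.float64)
    n = len(g)
    # Cumulative heights of the warp at the subinterval boundaries.
    knots = np.concatenate([[0.0], np.cumsum(g) / n])
    knots /= knots[-1]                  # renormalize so the warp ends at 1
    xs = np.linspace(0.0, 1.0, n + 1)
    return np.interp(x, xs, knots)
```

With all gradients equal the warp is the identity; unequal gradients stretch some coordinate ranges and compress others while keeping the function continuous and monotonic.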
We use Blender to create rendered images of the 3D meshes in order to train image-conditional models (Figure 11). We use Blender's Cycles (https://docs.blender.org/manual/en/latest/render/cycles/index.html) path-tracing renderer, and randomize the lighting, camera, and mesh materials. In all scenes we place the input meshes at the origin, scaled so that bounding boxes are 1m on the long diagonal.
Lighting. We use a 20W area light located 1.5m above the origin, with rectangle size 2.5m, and sample a number of 15W point lights from a fixed range. We choose the location of each point light independently, sampling the horizontal coordinates and the height uniformly from fixed intervals.
Camera. We position the camera at a distance from the center of the mesh sampled uniformly from a fixed range, at a randomly sampled elevation and rotation. We also sample a focal length for the camera, as well as a filter size (https://docs.blender.org/manual/en/latest/render/cycles/render_settings/film.html), which adds a small degree of blur.
Object materials. We found that ShapeNet materials and textures were applied inconsistently across different examples when loaded in Blender, and in many cases no textures loaded at all. Rather than use the inconsistent textures, we randomly generate materials for the 3D meshes in order to produce a degree of visual variability. For each texture group in the mesh we sample a new material. Materials are constructed by linking Blender nodes (https://docs.blender.org/manual/en/latest/render/shader_nodes/introduction.html#textures). In particular, we use a noise shader with detail = 16 and a randomly sampled scale. The noise shader is used as input to a color ramp node, which interpolates between the input color and white. The color ramp node then sets the color of a diffuse BSDF material (https://docs.blender.org/manual/en/latest/render/shader_nodes/shader/diffuse.html), which is applied to the faces within the texture group.

We use the improved Transformer variant with layer normalization moved inside the residual path, as in (DBLP:journals/corr/abs190410509; DBLP:journals/corr/abs191006764). In particular, we compose the Transformer blocks as follows:
(11)  H_MMH^(n) = MaskedMultiHead(LayerNorm(R^(n-1)))
(12)  R_MMH^(n) = R^(n-1) + H_MMH^(n)
(13)  H_FC^(n) = FC(LayerNorm(R_MMH^(n)))
(14)  R^(n) = R_MMH^(n) + H_FC^(n)
Here, R^(n) and H^(n) are residuals and intermediate representations in the n'th block, and the subscripts FC and MMH denote the outputs of fully connected and masked multi-head self-attention layers respectively. We apply dropout immediately following the ReLU activation, as this performed well in initial experiments.
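The pre-layer-norm block structure described above can be sketched in NumPy (single-head attention and unbatched inputs for brevity; function and parameter names are ours, and dropout is omitted):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each position's features to zero mean, unit variance."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def masked_self_attention(h, wq, wk, wv):
    """Single-head causal self-attention (a stand-in for MaskedMultiHead)."""
    q, k, v = h @ wq, h @ wk, h @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Causal mask: position i may only attend to positions <= i.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v

def transformer_block(r, params):
    """One block with layer normalization inside the residual path:
    self-attention residual, then a ReLU feed-forward residual."""
    h_mmh = masked_self_attention(layer_norm(r), *params["attn"])
    r = r + h_mmh                                    # self-attention residual
    w1, w2 = params["fc"]
    h_fc = np.maximum(layer_norm(r) @ w1, 0.0) @ w2  # ReLU feed-forward
    return r + h_fc                                  # feed-forward residual
```

The causal mask is what makes the block suitable for autoregressive mesh generation: changing a later token cannot affect the outputs at earlier positions.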
Conditional models. As described in Section 2.5, for global features like class identity we project learned class embeddings to a vector that is added to the intermediate Transformer representations following the self-attention layer in each block:
(15)  H_MMH^(n) = MaskedMultiHead(LayerNorm(R^(n-1)))
(16)  R_MMH^(n) = R^(n-1) + H_MMH^(n) + Linear(e_class),  where e_class is the learned class embedding
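A minimal sketch of this global conditioning, assuming a learned class-embedding table and a per-block linear projection (all names and the class count of 55, matching ShapeNet's category count, are illustrative stand-ins for trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, embed_dim, model_dim, seq_len = 55, 64, 128, 10

class_table = 0.02 * rng.normal(size=(num_classes, embed_dim))  # learned in practice
w_proj = 0.02 * rng.normal(size=(embed_dim, model_dim))          # learned per-block projection

def add_class_conditioning(r, h_mmh, class_id):
    """After the self-attention sub-layer, add the projected global class
    embedding to the residual stream; the same vector is broadcast to
    every sequence position."""
    g = class_table[class_id] @ w_proj  # (model_dim,) global feature
    return r + h_mmh + g                # broadcast over all positions
```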
For high-dimensional inputs like images or voxels, we jointly train a domain-appropriate encoder that outputs a sequence of context embeddings. The Transformer decoder performs cross-attention into this embedding sequence after the self-attention layer, as in the original machine-translation Transformer model:
(17)  H_CMH^(n) = CrossMultiHead(LayerNorm(R_MMH^(n)), E_context)
(18)  R_CMH^(n) = R_MMH^(n) + H_CMH^(n),  where E_context is the sequence of context embeddings
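Cross-attention differs from the masked self-attention above only in where keys and values come from; a single-head sketch (our naming, unbatched):

```python
import numpy as np

def cross_attention(h_dec, context, wq, wk, wv):
    """Decoder positions form the queries; the encoder's context embedding
    sequence provides the keys and values. No causal mask is needed
    because the conditioning context is fully observed."""
    q, k, v = h_dec @ wq, context @ wk, context @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)  # softmax over context positions
    return w @ v
```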
The image and voxel encoders are both pre-activation ResNets, with 2D and 3D convolutions respectively. The full architectures are described in Table 4.
We use the same image and voxel encoders (Table 4) as for the conditional PolyGen models. For consistency with the original method, we project the final feature maps to 1024 dimensions before applying global average pooling to obtain a vector shape representation. As in the original method, the decoder is an MLP with four fully connected layers of size 1024, 512, 256 and 128, with ReLU non-linearities on the first three layers and tanh on the final output layer. The decoder takes the shape representation as well as 2D points as input, and outputs a 3D vector. We use 25 patches, and train with the same optimization settings as PolyGen (Section 3), but for a different number of steps.
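The decoder described above can be sketched as a plain MLP (the function name and weight layout are ours; real weights would be trained, and the hidden sizes here are small stand-ins for 1024, 512 and 256):

```python
import numpy as np

def atlasnet_decoder(shape_repr, points_2d, layers):
    """Patch-decoder sketch: concatenate the pooled shape representation
    with each 2D patch point, apply ReLU hidden layers, and map to a 3D
    point with a final tanh (so outputs lie in [-1, 1]^3)."""
    n = len(points_2d)
    tiled = np.broadcast_to(shape_repr, (n, shape_repr.shape[0]))
    h = np.concatenate([tiled, points_2d], axis=1)
    for w, b in layers[:-1]:
        h = np.maximum(h @ w + b, 0.0)  # ReLU hidden layers
    w, b = layers[-1]
    return np.tanh(h @ w + b)           # 3D output in [-1, 1]
```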
Chamfer distance. To evaluate the Chamfer distance for AtlasNet models, we first generate a mesh by passing 2D triangulated meshes through each of the AtlasNet patch models, as described in (DBLP:conf/cvpr/GroueixFKRA18). We then sample points on the resulting 3D mesh.
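Area-weighted point sampling and the symmetric Chamfer distance can be sketched as follows (a minimal NumPy version; naming and the exact Chamfer normalization are ours, and different papers vary in whether they use squared distances):

```python
import numpy as np

def sample_points_on_mesh(vertices, faces, n_points, seed=0):
    """Sample points uniformly by area over a triangle mesh."""
    rng = np.random.default_rng(seed)
    tris = vertices[faces]                          # (F, 3, 3)
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]  # fold into the triangle
    return a[idx] + u[:, None] * (b - a)[idx] + v[:, None] * (c - a)[idx]

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two point sets."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```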
In this section we provide further details of the more efficient vertex-model variants mentioned in Section 5.
In the first variant, instead of processing z, y and x coordinates in sequence, we concatenate their embeddings together and pass them through a linear projection. This forms the input sequence for a Transformer which we call the torso. Following (Salimans17), we output the parameters of a mixture of discretized logistics describing the joint distribution of a full 3D vertex. The main benefit of this model is that self-attention is now performed over sequences that are three times shorter, which manifests in a much improved training time (see Table 2). Unfortunately, the speed-up comes at the price of significantly reduced performance. This may be because the underlying continuous components are not well suited to the peaky and multi-modal vertex distributions.
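A discretized logistic mixture assigns each quantized bin the difference of a logistic CDF at the bin edges (the PixelCNN++-style likelihood of Salimans17); a 1-D sketch, with our own naming and a fixed [-1, 1] bin range:

```python
import numpy as np

def discretized_logistic_mixture_pmf(means, log_scales, logit_weights, num_bins=256):
    """Probability mass over {0, ..., num_bins-1} for a 1-D mixture of
    discretized logistics: each bin's mass is the difference of the
    logistic CDF evaluated at the bin edges."""
    edges = np.linspace(-1.0, 1.0, num_bins + 1)  # bin edges on [-1, 1]

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    cdf = sigmoid((edges[None, :] - means[:, None]) / np.exp(log_scales)[:, None])
    cdf[:, 0], cdf[:, -1] = 0.0, 1.0              # fold tail mass into edge bins
    pmf_per_comp = np.diff(cdf, axis=1)           # (num_components, num_bins)
    w = np.exp(logit_weights - logit_weights.max())
    w /= w.sum()                                  # softmax over mixture weights
    return w @ pmf_per_comp
```

A 3D vertex would use three such factors (or a joint parameterization); this sketch shows only the per-coordinate construction.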
In the second variant, we lift the parametric-distribution assumption and use a MADE-style masked MLP (Germain15) with residual blocks to decode each output of the torso into a sequence of three conditional discrete distributions:
(19)  p(z_k, y_k, x_k | h_k) = p(z_k | h_k) p(y_k | z_k, h_k) p(x_k | z_k, y_k, h_k),  where h_k is the torso output for the k'th vertex
As expected, this change improves the test-data likelihood while simultaneously increasing the computation cost. Note that, unlike the base model, the MADE decoder has direct access only to the coordinate components within a single vertex, and must rely on the output of the torso for information about the components of previously generated vertices.
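The MADE masking idea can be sketched for the three-coordinate case: assign each input, hidden unit and output a "degree", and keep a connection only when it cannot leak information about a coordinate into its own (or an earlier) conditional. Names and degree assignment are ours, following Germain15's construction:

```python
import numpy as np

def made_masks(hidden=8, seed=0):
    """MADE-style masks for a 3-input (z, y, x) -> 3-output network, so
    that the output for coordinate d depends only on coordinates that
    precede it within the vertex."""
    rng = np.random.default_rng(seed)
    deg_in = np.array([0, 1, 2])                     # degree of each input
    deg_hidden = rng.integers(0, 3, size=hidden)     # random hidden degrees
    # Hidden unit j may see input i iff deg_hidden[j] >= deg_in[i].
    mask1 = (deg_hidden[:, None] >= deg_in[None, :]).astype(float)
    deg_out = np.array([0, 1, 2])
    # Output d may see hidden unit j iff deg_out[d] > deg_hidden[j].
    mask2 = (deg_out[:, None] > deg_hidden[None, :]).astype(float)
    return mask1, mask2

def made_forward(x, mask1, mask2, w1, w2):
    """One masked hidden layer followed by a masked output layer."""
    h = np.maximum((w1 * mask1) @ x, 0.0)
    return (w2 * mask2) @ h
```

Composing the two masks guarantees input i can only reach output d when i < d, which is exactly the autoregressive factorization over (z, y, x).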
In the third variant, we let the decoder attend to all the generated coordinates directly. We replace the MADE decoder with a Transformer which is conditioned on the torso output and operates on a flattened sequence of vertex components (similarly to the base model). The conditioning is done by adding the torso output for each vertex to the embeddings of its z, y and x coordinates. While slower than the MADE version, the resulting network is significantly closer in performance to the base model.
Finally, we make the model even more powerful by using a Transformer, instead of simple concatenation, to embed each triplet of vertex coordinates. Specifically, we sum-pool the outputs of that Transformer within every vertex. In this variant we reduce the depth of the torso. This results in a test likelihood similar to that of the base model.
As mentioned in Section 2.2, we mask invalid predictions when evaluating our models. We identify a number of hard constraints that exist in the data, and mask model predictions that violate these constraints, distributing the masked probability mass uniformly across the remaining valid values. We use the following masks:
Vertex model.
The stopping token s can only occur after an x coordinate, i.e. after a complete (z, y, x) triple:

(20)  p(v_i = s | v_<i) = 0  unless v_(i-1) is an x coordinate
z coordinates are non-decreasing:

(21)  z_k ≥ z_(k-1)
y coordinates are non-decreasing if their associated z coordinates are equal:

(22)  y_k ≥ y_(k-1)  if  z_k = z_(k-1)
x coordinates are increasing if their associated z and y coordinates are equal:

(23)  x_k > x_(k-1)  if  z_k = z_(k-1) and y_k = y_(k-1)
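The vertex-model masks can be implemented as a valid-next-token mask over the flattened coordinate sequence. A sketch, assuming the flattening order is (z, y, x) per vertex with quantized values in [0, num_values) and the stopping token as the final index (naming is ours):

```python
import numpy as np

def vertex_value_mask(seq, num_values=256):
    """Boolean mask of valid next tokens for a flattened
    (z, y, x, z, y, x, ...) vertex sequence; index num_values is the
    stopping token."""
    stop = num_values
    mask = np.ones(num_values + 1, dtype=bool)
    pos = len(seq) % 3                         # 0 -> z, 1 -> y, 2 -> x
    mask[stop] = (pos == 0 and len(seq) > 0)   # stop only after a complete vertex
    if pos == 0 and len(seq) >= 3:
        mask[:seq[-3]] = False                 # z is non-decreasing
    elif pos == 1 and len(seq) >= 4 and seq[-1] == seq[-4]:
        mask[:seq[-3]] = False                 # y non-decreasing when z ties
    elif pos == 2 and len(seq) >= 5 and seq[-2] == seq[-5] and seq[-1] == seq[-4]:
        mask[:seq[-3] + 1] = False             # x strictly increasing when z and y tie
    return mask
```

During sampling, logits at masked positions would be set to -inf before the softmax, which renormalizes the remaining mass over valid values.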
Face model.
New-face tokens n cannot be repeated:

(24)  f_i ≠ n  if  f_(i-1) = n
The first vertex index of a new face is not less than the first index of the previous face (writing f_j^(k) for the k'th vertex index of the j'th face):

(25)  f_j^(1) ≥ f_(j-1)^(1)
Vertex indices within a face are greater than the first index in that face:

(26)  f_j^(k) > f_j^(1)  for k > 1
Vertex indices within a face are unique:

(27)  f_j^(k) ≠ f_j^(l)  for k ≠ l
The first index of a new face is not greater than the lowest unreferenced vertex index:

(28)  f_j^(1) ≤ min { v : v not referenced by faces 1, …, j−1 }
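The face-model masks can likewise be computed from the partial flattened sequence. A sketch with our own token encoding (the new-face token as -1), which additionally assumes a face needs at least three indices before it can be closed:

```python
import numpy as np

NEW_FACE = -1  # illustrative encoding of the new-face token

def face_index_mask(flat, num_vertices):
    """Valid next vertex indices for a flattened face sequence, plus a
    flag saying whether the new-face token may be emitted next."""
    faces = [[]]
    for t in flat:
        if t == NEW_FACE:
            faces.append([])
        else:
            faces[-1].append(t)
    current = faces[-1]
    mask = np.ones(num_vertices, dtype=bool)
    if not current:                          # choosing the first index of a face
        new_face_ok = False                  # new-face tokens cannot repeat
        if len(faces) > 1:
            mask[:faces[-2][0]] = False      # >= previous face's first index
        used = {i for f in faces for i in f}
        lowest_unref = next(i for i in range(num_vertices + 1) if i not in used)
        mask[lowest_unref + 1:] = False      # <= lowest unreferenced vertex
    else:
        new_face_ok = len(current) >= 3      # assumed minimum face size
        mask[:current[0] + 1] = False        # later indices > first index
        mask[list(current)] = False          # indices within a face are unique
    return mask, new_face_ok
```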
We compare our model in Table 1 to Draco (draco), a performant 3D mesh compression library developed by Google. We use the highest compression setting and quantize the positions to 8 bits, to compare with the 8-bit mesh representations that our model operates on. Note that the quantization performed by Draco is not identical to our uniform quantization, so the reported scores are not directly comparable; rather, they serve as a ballpark estimate of the degree of compression obtained by existing methods.
Figure 14 shows a random batch of unconditional samples generated using PolyGen with nucleus sampling. The figure highlights several characteristics of the model's outputs. Firstly, the model learns to mostly output objects consistent with a shape class. Secondly, the samples contain a large proportion of certain object classes, including tables, chairs and sofas; this reflects the significant class imbalance of the ShapeNet dataset, with many classes being under-represented. Finally, certain failure modes are present in the collection: meshes with disconnected components, meshes that produced the stopping token too early, resulting in incomplete objects, and meshes that lack a distinct form recognizable as one of the shape classes.

