1 Introduction
Merging abstract concepts, also known as creative blending, is frequently seen as a fundamental component of creativity, as this merging can allow novel concepts to emerge from simple components Pereira (2007). One computational way to approach this is interpolation, a process that smoothly transitions from one instance to another.
In order to generate novel 3D shapes with machine learning, one must allow for such interpolations. The typical approach for incorporating this creative process is to interpolate in a learned latent space, exploiting the model's learned structure to avoid generating unrealistic instances. For 2D images, this often utilizes a trained Generative Adversarial Network Radford et al. (2015); Brock et al. (2018) or Autoencoder Kingma and Welling (2013); Tolstikhin et al. (2017), and has shown promising results in creative generation Carter and Nielsen (2017). As a basic requirement, the interpolation is expected to form a semantically smooth morphing Berthelot et al. (2018). While this approach is sound for synthesizing realistic media such as lifelike portraits or new designs for everyday objects, it arguably fails to directly model the unexpected, unrealistic, or creative Bidgoli and Veloso (2019); Kingma and Dhariwal (2018).
In this work, we present a method for learning how to interpolate point clouds. By encoding prior knowledge about real-world objects, the intermediate forms are both realistic and unlike any existing forms. We show not only how this method can be used to generate "creative" point clouds, but also how it can be leveraged to generate 3D models suitable for sculpture.
2 Interpolation as Generation
2.1 Naive Interpolation
Consider two point clouds $A = \{a_i\}_{i=1}^{n}$ and $B = \{b_i\}_{i=1}^{n}$, where each $a_i, b_i \in \mathbb{R}^3$ represents a 3-dimensional point. To generate a point cloud that semantically lies in between the inputs $A$ and $B$, a straightforward method is to generate, for an interpolation parameter $t \in [0, 1]$, the point cloud

$$C(t) = \{(1 - t)\,a_i + t\,b_i\}_{i=1}^{n}.$$
Intuitively, this represents drawing a line between pairs of points in $A$ and $B$ and returning the point that lies a fraction $t$ of the way along it. While this is simple to implement, it does not produce results that are semantically in between the objects the point clouds represent, as can be seen in Figure 2.
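Assuming both clouds are stored as $(n, 3)$ arrays whose points are already paired by index, this naive scheme can be sketched as follows (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def naive_interpolation(A, B, t):
    """Linear interpolation between paired points: each output point
    lies a fraction t of the way from A[i] to B[i]."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    assert A.shape == B.shape and A.shape[1] == 3
    return (1.0 - t) * A + t * B

# Halfway between a cloud at the origin and one at (1, 1, 1):
A = np.zeros((4, 3))
B = np.ones((4, 3))
C = naive_interpolation(A, B, 0.5)  # every point at (0.5, 0.5, 0.5)
```

Note that the result depends on how points happen to be paired across the two clouds, which is one reason the outputs are rarely semantically meaningful.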
2.2 Learned Point Cloud Interpolation
While prior algorithms for generating creative point clouds have relied on a pretrained model, our method directly learns a transformation from point clouds to point clouds by using interpolation as the guiding framework. We parameterize interpolation from a start point cloud $A$ to a goal point cloud $B$ as a sequence of $T$ time steps: at each step, we apply a learned transformation to each point independently, conditioned on an encoding of $B$, denoted by $g(B)$.
In each of the above transformation steps, the function applied to the points is parameterized as a multilayer perceptron Rosenblatt (1958). All of the transformations are trained so that the set produced after $T$ transformations and the goal point cloud are close in Chamfer distance, an error metric frequently used for set generation tasks Achlioptas et al. (2017). The encoding network is parameterized as a Deep Sets model Zaheer et al. (2017). While this formulation allows us to visualize the trajectory of each point as it is transformed and gives us enough expressivity to approximate the goal point clouds, it does not enforce the requirement that each intermediate point cloud be realistic. For example, it could allow a mapping through a completely meaningless intermediate state that would not be recognized as a plausible (if unusual) 3D object.
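The paper's exact architecture is not reproduced here; the following NumPy sketch only illustrates the overall shape of the method under stated assumptions: a permutation-invariant, Deep Sets-style encoder pools per-point features of the goal cloud, a small per-point update is applied for $T$ steps, and the final cloud is scored against the goal with the Chamfer distance. All layer sizes, nonlinearities, and names are hypothetical, and the weights below are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_goal(B, W_embed, W_pool):
    """Deep Sets-style encoding: embed each point, sum-pool over the set,
    then map the pooled vector to a fixed-size code (permutation invariant)."""
    h = np.maximum(B @ W_embed, 0.0)        # (m, d_h) per-point features
    return np.tanh(h.sum(axis=0) @ W_pool)  # (d_c,) cloud code

def transform_step(X, code, W_x, W_c):
    """One learned transformation: every point is moved independently,
    conditioned on the encoding of the goal cloud (residual update)."""
    return X + np.tanh(X @ W_x + code @ W_c)

def chamfer_distance(X, Y):
    """Average nearest-neighbour squared distance, in both directions."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # (n, m)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy forward pass with hypothetical sizes: 3D points, 16-dim features,
# 8-dim code, T = 4 transformation steps.
d_h, d_c, T = 16, 8, 4
W_embed = 0.1 * rng.normal(size=(3, d_h))
W_pool = 0.1 * rng.normal(size=(d_h, d_c))
steps = [(0.1 * rng.normal(size=(3, 3)), 0.1 * rng.normal(size=(d_c, 3)))
         for _ in range(T)]

A = rng.normal(size=(32, 3))   # start cloud
B = rng.normal(size=(32, 3))   # goal cloud
code = encode_goal(B, W_embed, W_pool)

X = A
intermediates = [X]
for W_x, W_c in steps:
    X = transform_step(X, code, W_x, W_c)
    intermediates.append(X)     # each step yields one interpolation frame

loss = chamfer_distance(X, B)   # training would minimize this
```

Because each step's output is itself a point cloud, the list of intermediates is exactly the interpolation trajectory that can be visualized or rendered.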
For this reason we introduce an additional loss term motivated by work in computer graphics Ezuz et al. (2018).
With this added term, we are able to maintain the topology of the starting object, causing the network to find the most plausible correspondence between the source object and the target object. This loss function only penalizes the final output of the algorithm, but it has the side effect of ensuring each intermediate step is topologically consistent as well.
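The exact term from Ezuz et al. (2018) is not restated here. As a rough illustration of what a correspondence-smoothness penalty can look like, one could penalize each point's displacement for deviating from those of its k nearest neighbours in the source cloud; this sketch is purely illustrative and is not the paper's loss:

```python
import numpy as np

def smoothness_penalty(A, X, k=4):
    """Illustrative regularizer (not the paper's exact loss): points that
    are neighbours in the source cloud A should move similarly, so penalize
    the deviation of each point's displacement from its k nearest
    neighbours' displacements."""
    disp = X - A                                   # (n, 3) displacements
    d = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                    # exclude self-matches
    nbrs = np.argsort(d, axis=1)[:, :k]            # (n, k) neighbour ids
    return ((disp[:, None, :] - disp[nbrs]) ** 2).mean()

# A rigid translation moves every point identically, so the penalty is zero:
A = np.random.default_rng(1).normal(size=(10, 3))
assert smoothness_penalty(A, A + np.array([1.0, 2.0, 3.0])) < 1e-12
```

The intuition matches the text above: penalties of this kind discourage mappings that tear the source object apart, while leaving coherent deformations cheap.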
2.3 Mesh Generation
This technique allows one to use the vertices of a mesh as input, providing us with the correspondence needed to mesh the output. This removes the problem of meshing for creative sculpture generation, a motivating factor for creative sculpture-generating algorithms Ge et al. (2019). In the authors' opinion, the conflict between the representational advantages of point clouds for machine learning tools and the artist's frequent need for solid 3D shapes has limited the adoption of generative models for sculptural art. Our method therefore represents an important step forward for creative AI.
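Because every vertex is transformed but never re-indexed, the source mesh's connectivity can be carried over directly. A minimal sketch (the function name and the identity of the transformation are hypothetical):

```python
import numpy as np

def mesh_from_interpolation(V_src, F_src, transform):
    """Apply a point-wise transformation to a mesh's vertices and reuse its
    face list: the per-point correspondence preserved by the method is
    exactly what is needed for the output to remain a valid mesh."""
    V_out = transform(np.asarray(V_src, dtype=float))
    return V_out, F_src  # same triangles, new vertex positions

# A unit triangle scaled by 2: connectivity is unchanged.
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
F = np.array([[0, 1, 2]])
V2, F2 = mesh_from_interpolation(V, F, lambda v: 2.0 * v)
```

In practice the transformation slot would be filled by the learned interpolation network, applied to the mesh's vertices as the input point cloud.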
References
 Achlioptas et al. (2017) Learning representations and generative models for 3D point clouds. arXiv preprint arXiv:1707.02392. Cited by: §2.2.
 Berthelot et al. (2018) Understanding and improving interpolation in autoencoders via an adversarial regularizer. arXiv preprint arXiv:1807.07543. Cited by: §1.
 Bidgoli and Veloso (2019) DeepCloud: the application of a data-driven, generative model in design. arXiv preprint arXiv:1904.01083. Cited by: §1.
 Brock et al. (2018) Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. Cited by: §1.
 Carter and Nielsen (2017) Using artificial intelligence to augment human intelligence. Distill 2 (12), pp. e9. Cited by: §1.
 Ezuz et al. (2018) Reversible harmonic maps between discrete surfaces. arXiv preprint arXiv:1801.02453. Cited by: §2.2.
 Ge et al. (2019) Developing creative AI to generate sculptural objects. arXiv preprint arXiv:1908.07587. Cited by: §2.3.
 Kingma and Welling (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114. Cited by: §1.
 Kingma and Dhariwal (2018) Glow: generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215–10224. Cited by: §1.
 Pereira (2007) Creativity and artificial intelligence: a conceptual blending approach. Vol. 4, Walter de Gruyter. Cited by: §1.
 Radford et al. (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §1.
 Rosenblatt (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65 (6), pp. 386. Cited by: §2.2.
 Tolstikhin et al. (2017) Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558. Cited by: §1.
 Zaheer et al. (2017) Deep sets. In Advances in Neural Information Processing Systems, pp. 3391–3401. Cited by: §2.2.