Hallucinating Point Cloud into 3D Sculptural Object

11/13/2018 · Chun-Liang Li et al. · Carnegie Mellon University

Our team of artists and machine learning researchers designed a creative algorithm that can generate authentic sculptural artworks. These artworks do not mimic any given forms and cannot be easily categorized into the dataset categories. Our approach extends DeepDream from images to 3D point clouds. The proposed algorithm, Amalgamated DeepDream (ADD), leverages the properties of point clouds to create objects of better quality than the naive extension. ADD shows promise for machine creativity: the kind of creativity that pushes artists to explore novel methods or materials and to create new genres, rather than variations on existing forms or styles within one genre, as in the shifts from Realism to Abstract Expressionism or to Minimalism. Lastly, we present sculptures that were 3D printed from the point clouds created by ADD.


1 Introduction

Many artworks generated with the assistance of machine learning algorithms have appeared in the last three years. The results include aesthetically impressive images that belong to the traditional art area of painting and have received global recognition. The generated images from Elgammal et al. (2017) were favored by human viewers over paintings shown at the prestigious Art Basel in Miami. CloudPainter, the top prize winner of the 2018 International Robot Art Competition, showed that a machine can reach the aesthetic level of professional painters. Recently, a painting generated by GANs (Goodfellow et al., 2014) appeared at a Christie's art auction. At the same time, there has been very little exploration in the area of 3D objects, which traditionally belong to the area of sculpture. The creative 3D-object generation of Lehman et al. (2016) successfully produced "evolved" forms; however, the final forms did not depart far from the originals and remained within the range of mimicking them. Another relevant work, the Neural 3D Mesh Renderer (Kato et al., 2018), focuses on adjusting a rendered mesh based on DeepDream (Mordvintsev et al., 2015) textures.

Our team of artists and machine learning researchers designed a creative algorithm that can generate authentic sculptural artworks. These artworks do not mimic any given forms and cannot be easily categorized into the dataset categories. Our approach extends DeepDream (Mordvintsev et al., 2015) from images to 3D point clouds. The proposed algorithm, Amalgamated DeepDream (ADD), leverages the properties of point clouds to create objects of better quality than the naive extension. ADD shows promise for machine creativity: the kind of creativity that pushes artists to explore novel methods or materials and to create new genres, rather than variations on existing forms or styles within one genre, as in the shifts from Realism to Abstract Expressionism or to Minimalism. Lastly, we present sculptures that were 3D printed from the point clouds created by ADD.

2 Learning to Generate Creative Point Clouds

Using deep learning to generate new objects has been studied for different data types, such as music (Van Den Oord et al., 2016), images (Goodfellow et al., 2014), 3D voxels (Wu et al., 2016), and point clouds (Achlioptas et al., 2017; Li et al., 2018). However, these models are trained to generate examples from the "same" distribution as the given training data, rather than to generate "creative" objects (Elgammal et al., 2017). One alternative is to decode the combination of the latent codes of two objects in an autoencoder. Empirical evidence shows that decoding mixed codes usually produces semantically meaningful objects carrying features of both source objects. This approach has also been applied to image creation (Carter and Nielsen, 2017). Encoding-decoding algorithms for point clouds (Yang et al., 2018; Groueix et al., 2018; Li et al., 2018) can be adopted for the same idea. Sampling and interpolation results based on Li et al. (2018) are shown in Figure 1.

Figure 1: Results of Li et al. (2018). (a) Sampling; (b) interpolation.
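To make the latent-code mixing concrete, the following is a minimal sketch, assuming a generic point-cloud autoencoder with hypothetical encoder and decoder modules (not the authors' exact architecture):

import torch

def mix_and_decode(encoder, decoder, pc_a, pc_b, alpha=0.5):
    """Decode a convex combination of two latent codes.

    encoder/decoder stand in for any point-cloud autoencoder
    (e.g., in the spirit of Achlioptas et al., 2017, or Li et al., 2018).
    pc_a, pc_b: (N, 3) tensors holding the two input point clouds.
    """
    with torch.no_grad():
        z_a = encoder(pc_a.unsqueeze(0))            # (1, d) latent code of object A
        z_b = encoder(pc_b.unsqueeze(0))            # (1, d) latent code of object B
        z_mix = alpha * z_a + (1 - alpha) * z_b     # interpolate in latent space
        return decoder(z_mix).squeeze(0)            # (M, 3) decoded "mixed" object

Sweeping alpha from 0 to 1 traces an interpolation path like the one in Figure 1(b).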

DeepDream for Point Clouds

In addition to simple generative models and interpolations, DeepDream leverages trained deep neural networks to create psychedelic and surreal images by enhancing patterns in the input. Given a trained neural network $f$ and an input image $x$, DeepDream aims to modify $x$ to maximize (amplify) $a(f(x))$, where $a$ is an activation function of $f$. Algorithmically, we iteratively modify $x$ via the gradient update

$x_t = x_{t-1} + \gamma \nabla_{x_{t-1}} a(f(x_{t-1}))$   (1)
Figure 2: Naive DeepDream.

One special case of DeepDream uses a classification network as $f$ and replaces $a(f(x))$ with the output of the final layer corresponding to a certain label. This is related to adversarial attacks (Szegedy et al., 2013) and unrestricted adversarial examples (Brown et al., 2018; Liu et al., 2018). However, directly extending DeepDream to point clouds with neural networks for sets (Qi et al., 2017; Zaheer et al., 2017) results in undesirable point clouds with sparse areas, as shown in Figure 2.
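A minimal sketch of this naive extension, written in PyTorch and assuming a PointNet-style set network f and an activation functional a (both placeholders, not the authors' exact models):

import torch

def deepdream_step(f, a, x, gamma=0.01):
    """One gradient update of Eq. (1) on a point cloud x of shape (N, 3)."""
    x = x.detach().requires_grad_(True)
    activation = a(f(x.unsqueeze(0)))   # scalar activation to amplify
    activation.backward()
    with torch.no_grad():
        x = x + gamma * x.grad          # gradient ascent on the point coordinates
    return x.detach()

def naive_deepdream(f, a, x, steps=10, gamma=0.01):
    # Repeated updates deform the cloud toward high-activation shapes,
    # but tend to leave the sparse areas visible in Figure 2.
    for _ in range(steps):
        x = deepdream_step(f, a, x, gamma)
    return x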

2.1 Amalgamated DeepDream (ADD)

To avoid sparsity in the generated point cloud, one compromise is to run DeepDream for fewer iterations or with a smaller learning rate, but this yields only a limited difference from the input point cloud. Another approach is post-processing that adds points to the sparse areas. However, without prior knowledge of the geometry of the object, producing smooth post-processed results is non-trivial (Hoppe et al., 1992).

Input: trained network $f$ and input point cloud $X_0$
for $t = 1, \dots, T$ do
      $\tilde{X}_t \leftarrow X_{t-1} + \gamma \nabla_{X_{t-1}} a(f(X_{t-1}))$   (DeepDream update (1))
      $X_t \leftarrow \tilde{X}_t \cup X_0$   (amalgamate with the input)
end for
Algorithm 1: Amalgamated DeepDream (ADD)

As opposed to images, mixing two point clouds is a surprisingly simple task: because point clouds are permutation invariant, we can simply take the union of the two sets. We propose a novel DeepDream variant for point clouds that uses this set-union operation in the creation process to address the drawbacks of the naive extension. Specifically, in each iteration, after running the gradient update (1), we amalgamate the transformed point cloud with the input point cloud (we down-sample the point clouds periodically to keep the number of points from exploding). We call the proposed algorithm Amalgamated DeepDream (ADD), shown in Algorithm 1 and sketched in code below. A running example of ADD transforming a bottle into a cone in ModelNet40 (Wu et al., 2015), the same setting as Figure 2, is shown in Figure 3. In each iteration, ADD enjoys DeepDream's advantage of deforming objects via gradient updates with respect to a trained neural network, without creating obviously sparse areas. At the same time, the amalgamation operation better preserves the features of the input point cloud while new features are created by the DeepDream updates. More created point clouds are shown in Figure 4.
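A minimal sketch of the ADD loop, reusing the deepdream_step above; the random down-sampling is an assumption standing in for the paper's unspecified periodic down-sampling:

import torch

def add_deepdream(f, a, x0, steps=10, gamma=0.01, max_points=4096):
    """Amalgamated DeepDream (Algorithm 1), as a hedged sketch."""
    x = x0
    for _ in range(steps):
        x = deepdream_step(f, a, x, gamma)   # DeepDream update, Eq. (1)
        x = torch.cat([x, x0], dim=0)        # set union = concatenation of point sets
        if x.shape[0] > max_points:          # periodic down-sampling to bound size
            keep = torch.randperm(x.shape[0])[:max_points]
            x = x[keep]
    return x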

Figure 3: Transforming a bottle into a cone via ADD. (a) Input; (b) iteration 5; (c) iteration 10.
Figure 4: ADD with single object input.
Figure 5: ADD with dual and triple object input.

In addition to applying the union within the transformation process at every iteration, we can push this idea further by using amalgamated point clouds as the input instead of a single object, as in the snippet below. Results of ADD with the union of two or three objects are shown in Figure 5. Compared with Figure 4, ADD with multiple inputs creates objects with more geometric details, benefiting from a more versatile input space.
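Because of permutation invariance, a multi-object input is again just a concatenation; bottle_pc and cone_pc below are hypothetical (N, 3) tensors:

# Union of two objects as the input to ADD.
dual_input = torch.cat([bottle_pc, cone_pc], dim=0)
dreamed = add_deepdream(f, a, dual_input)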

Figure 6: Created sculptures from ADD.

From point clouds to realizable 3D sculptures

Our final goal is to create 3D sculptures in the real world. We use the standard software packages MeshLab and Meshmixer to reconstruct a mesh from the point clouds created by ADD. We then use a Lulzbot TAZ 3D printer with a dual extruder, transparent PLA, and dissolvable support material to print the sculpture of the reconstructed mesh. The printed sculptures of the point cloud in Figure 5 (right) are shown in Figure 6.
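The paper's mesh reconstruction is done interactively in MeshLab and Meshmixer; as an illustrative scripted alternative (an assumption, not the authors' pipeline), Poisson surface reconstruction in Open3D can turn an ADD point cloud into a printable mesh:

import numpy as np
import open3d as o3d

# points: (N, 3) NumPy array exported from ADD (hypothetical variable).
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals()  # Poisson reconstruction requires oriented normals

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("sculpture.stl", mesh)  # STL for slicing and printing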
