Developing Creative AI to Generate Sculptural Objects

08/20/2019 ∙ by Songwei Ge, et al. ∙ Carnegie Mellon University

We explore the intersection of human and machine creativity by generating sculptural objects through machine learning. This research raises questions about both the technical details of automatic art generation and the interaction between AI and people, as both artists and the audience of art. We introduce two algorithms for generating 3D point clouds and then discuss their actualization as sculpture and incorporation into a holistic art installation. Specifically, the Amalgamated DeepDream (ADD) algorithm solves the sparsity problem caused by the naive DeepDream-inspired approach and generates creative and printable point clouds. The Partitioned DeepDream (PDD) algorithm further allows us to explore more diverse 3D object creation by combining point cloud clustering algorithms and ADD.


Introduction

Will Artificial Intelligence (AI) replace human artists, or will it show us a new perspective on creativity? Our team of artists and AI researchers explores artistic expression using Machine Learning (ML) and designs creative ML algorithms as possible co-creators for human artists.

In terms of AI-generated and AI-enabled visual artwork, the past three years have seen considerable exploration of 2D images, an area traditionally belonging to the realm of painting. Meanwhile, there has been very little exploration of 3D objects, which traditionally belong to the realm of sculpture and extend naturally into art installations. The creative 3D object generation research by Lehman et al. successfully generated "evolved" forms; however, the final forms were not far from the original and could be said to merely mimic it [22]. Another relevant study, Neural 3D Mesh Renderer [20], focuses on adjusting a rendered mesh based on DeepDream [34] textures. In the art field, artist Egor Kraft's Content Aware Studies project [11] explores the possibilities of AI for reconstructing lost parts of antique Greek and Roman sculptures. Nima et al. [27] introduced Vox2Net, adapted from the pix2pix GAN, which can extract the abstract shape of an input sculpture. DeepCloud [5] exploited an autoencoder to generate point clouds and constructed a web interface with analog input devices for users to interact with the generation process. Similar to the work of Lehman et al. [22], the results generated by DeepCloud are limited to previously seen objects.

Figure 1: Sculpture generated by creative AI, PDD.

In this paper, we introduce two ML algorithms for creating original sculptural objects. As opposed to previous methods, our results do not mimic any given form and are original enough not to fall into the categories of the training dataset. Our method extends an idea inspired by DeepDream for 2D images to 3D point clouds. However, naively translating the method generates low-quality objects with local and global sparsity issues that undermine their physical reconstruction by 3D printing. Instead, we propose Amalgamated DeepDream (ADD), which uses union operations during the generation process to produce objects that are both creative and realizable. We also design a second generation algorithm, Partitioned DeepDream (PDD), which allows multiple transformations on a single object and thereby creates more diverse objects. With the aid of mesh generation software and 3D printing technology, the generated objects can be physically realized; in our latest artwork, they have been incorporated into an interactive art installation.

Creative AI: ADD and PDD

Artistic images created by AI algorithms have drawn great attention from artists. AI algorithms are also attractive for their ability to generate 3D objects that can assist the process of sculpture creation. In graphics, there are many ways to represent a 3D object, such as meshes, voxels and point clouds. In this paper, we focus on point cloud data obtained from two large-scale 3D CAD model datasets: ModelNet40 [39] and ShapeNet [9]. These datasets are preprocessed by uniformly sampling from the surfaces of the CAD models to attain the desired number of points. A point cloud is a compact way to represent a 3D object as a set of points on its external surfaces together with their coordinates, such as those shown in Figures 2 and 3. We introduce two algorithms, ADD and PDD, inspired by 2D DeepDream, to generate creative and realizable point clouds. In this section, we briefly discuss existing 3D object generation methods, then elaborate the ADD and PDD algorithms and present some generated results.

Figure 2: ModelNet40 [39]
Figure 3: ShapeNet [9]
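As a concrete illustration of the preprocessing step above, the sketch below samples a fixed number of points uniformly from a triangle mesh via area-weighted triangle selection and uniform barycentric coordinates. The function and array names are our assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def sample_surface(vertices: np.ndarray, faces: np.ndarray, n_points: int) -> np.ndarray:
    """Uniformly sample n_points from the surface of a triangle mesh."""
    tris = vertices[faces]  # (F, 3, 3): corner coordinates of every triangle
    # Triangle areas from the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Choose triangles with probability proportional to area, so sampling
    # is uniform over the surface rather than over triangles.
    idx = np.random.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates, reflected back inside the triangle.
    u, v = np.random.rand(n_points, 1), np.random.rand(n_points, 1)
    flip = (u + v) > 1.0
    u, v = np.where(flip, 1.0 - u, u), np.where(flip, 1.0 - v, v)
    t = tris[idx]
    return t[:, 0] + u * (t[:, 1] - t[:, 0]) + v * (t[:, 2] - t[:, 0])
```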

Learning to Generate Creative Point Clouds

Using deep learning to generate new objects has been studied for different data types, such as music [35], images [15], 3D voxels [38] and point clouds [1, 23]. An example of a simple generative model is shown in Figure 4.

Figure 4: An example of a generative model.

Here a low-dimensional latent space $Z$ of the original object space $X$ is learned in accordance with the underlying probability distribution: a generator $G$ maps latent codes to objects,

$$\hat{x} = G(z), \quad z \sim P(Z). \qquad (1)$$

A discriminator $D$ is usually used to help distinguish a generated object $\hat{x}$ from a real object $x$. Once the discriminator is fooled, the generator can create objects that look very similar to the original ones [15]. However, these generative models only learn to generate examples from the "same" distribution as the given training data, instead of learning to generate "creative" objects [13, 27].
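To make the setup of Figure 4 and Eq. (1) concrete, here is a minimal adversarial training step in PyTorch. The architectures, sizes, and flattened point cloud representation are illustrative assumptions, not the models used in this work.

```python
import torch
import torch.nn as nn

latent_dim, n_points = 128, 1000
G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, n_points * 3))   # z -> flattened point cloud
D = nn.Sequential(nn.Linear(n_points * 3, 512), nn.ReLU(),
                  nn.Linear(512, 1))              # point cloud -> real/fake logit
bce = nn.BCEWithLogitsLoss()

def gan_step(real, opt_g, opt_d):
    """One update each for D (separate real from G(z)) and G (fool D)."""
    z = torch.randn(real.size(0), latent_dim)
    fake = G(z)
    ones, zeros = torch.ones(real.size(0), 1), torch.zeros(real.size(0), 1)
    # Discriminator update: real clouds should score 1, generated ones 0.
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: it is rewarded when D labels its output "real".
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```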

Figure 5: Encoding and decoding in latent space

One alternative is to decode the convex combination of the latent codes of two objects in an autoencoder to get a final object $\hat{x}$. Specifically, an autoencoder is a kind of feedforward neural network used for dimensionality reduction, whose hidden units can be viewed as latent codes that capture the most important aspects of the object [29]. A representation of the process of encoding to and decoding from latent space can be seen in Figure 5. Given the latent codes $z_1$ and $z_2$ of two objects, we decode

$$\hat{x} = \mathrm{dec}\big(\lambda z_1 + (1-\lambda) z_2\big),$$

where $0 \le \lambda \le 1$. Empirical evidence shows that decoding mixed codes usually produces semantically meaningful objects with features from the corresponding objects. This approach has also been applied to image creation [7]. One could adopt encoding–decoding algorithms for point clouds [40, 17, 23] based on the same idea. The sampling and interpolation results based on Li et al. [23] are shown in Figure 6.
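A minimal sketch of this interpolation, assuming generic `encoder` and `decoder` callables standing in for any point cloud autoencoder [40, 17, 23]:

```python
import torch

def interpolate(encoder, decoder, x1, x2, lam: float = 0.5):
    """Decode the convex combination z = lam*z1 + (1-lam)*z2, 0 <= lam <= 1."""
    with torch.no_grad():
        z1, z2 = encoder(x1), encoder(x2)
        z = lam * z1 + (1.0 - lam) * z2   # mix the two latent codes
        return decoder(z)                 # object with features of both inputs
```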

(a)–(c) Sampling; (d)–(f) Interpolation
Figure 6: Results of Li et al. [23].

Inspiration: DeepDream for Point Clouds

Figure 7: 2D DeepDream visualized

In contrast to simple generative models and interpolations, DeepDream leverages trained deep neural networks by enhancing patterns to create psychedelic and surreal images. For example, in Figure 7 the neural network detects features in the input images similar to the object classes of bird, fish and dog, and then exaggerates these underlying features. Given a trained neural network $f$ and an input image $x$, DeepDream aims to modify $x$ to maximize (amplify) $f_\ell(x)$, where $f_\ell$ is an activation of $f$ at layer $\ell$. After this process, $x$ is expected to display some of the features that are captured by $f_\ell$. Algorithmically, DeepDream iteratively modifies $x$ via a gradient update with a certain learning rate $\gamma$:

$$x_t = x_{t-1} + \gamma \nabla_x f_\ell(x_{t-1}). \qquad (2)$$
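The update (2) is plain gradient ascent on the input. A minimal sketch, with `model` standing in as a proxy for the amplified activation $f_\ell$:

```python
import torch

def deep_dream(model, x, steps: int = 100, lr: float = 1.0):
    """Iteratively modify x to amplify model(x), as in Eq. (2)."""
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        score = model(x).sum()   # scalar activation f_l(x) to maximize
        score.backward()
        with torch.no_grad():
            x += lr * x.grad     # x_t = x_{t-1} + gamma * grad f_l(x_{t-1})
            x.grad.zero_()
    return x.detach()
```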
Figure 8: Naive DeepDream with undesirable sparse surface.

An extension of DeepDream is to use a classification network as $f$ and replace $f_\ell$ with the outputs of the final layer corresponding to certain labels. This is related to adversarial attacks [32] and unrestricted adversarial examples [6, 24].

Unfortunately, directly extending the idea of DeepDream to point clouds with neural networks for sets [28, 41] results in undesirable point clouds suffering from both local and global sparsity, as shown in Figure 8. Specifically, iteratively applying the gradient update without any restriction creates local holes in the surface of the object. Moreover, the number of input points per object is limited by the classification model, so the generated object is not globally dense enough to be transformed into a mesh structure, let alone realized in the physical world.

To avoid local sparsity in the generated point cloud, one compromise is to run the naive 3D point cloud DeepDream for fewer iterations or with a smaller learning rate, but this results in limited difference from the input point cloud. Another approach is post-processing that adds points to sparse areas; however, without prior knowledge of the geometry of the object, producing smooth post-processed results is non-trivial [19]. For these reasons, we take only the gradient-update strategy from DeepDream as inspiration and develop our novel algorithms around it.

Amalgamated DeepDream (ADD)

(a) Union of airplane and bottle
(b) Union of cone, vase and lamp
Figure 9: Amalgamated input for ADD.

As opposed to images, mixing two point clouds is a surprisingly simple task: because point clouds are permutation invariant, it can be done by taking the union of the two point sets. For example, unions of two and of three objects are shown in Figure 9. This observation inspires us to use a set-union operation, which we call amalgamation, in the creation process to address the drawbacks of the naive extension.

input : trained network $f$, target label $y$, and input point cloud $x$
output : generated object $\hat{x}$
randomly partition $x$ into subsets $x_s$ of the size required by $f$
for each subset $x_s$ do
       for $t = 1, \dots, T$ do
              update $x_s$ by the gradient step (2) toward target $y$
              if $t$ is divisible by 10 then
                     amalgamate: $x_s \leftarrow x_s \cup x$
                     down-sample $x_s$ to fix the number of points
              end if
       end for
end for
amalgamate the transformed subsets: $\hat{x} \leftarrow \bigcup_s x_s$
Algorithm 1 Amalgamated DeepDream (ADD)
(a) Input
(b) Iteration 10
(c) Iteration 100
Figure 10: Transforming a bottle into a cone via ADD.

For an object with any number of points, we first randomly partition it into several subsets with the number of points required by the classification model, and then modify each subset through the gradient update (2). In the end, we amalgamate all the transformed subsets into a final object. In this way we can generate objects as dense as the input object allows. To address local sparsity, while running the gradient update (2) on each subset we amalgamate the transformed point cloud with the input point cloud after every 10 iterations. To avoid an exploding number of points, we also down-sample the object after each amalgamation. We call the proposed algorithm Amalgamated DeepDream (ADD); it is summarized in Algorithm 1.
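Below is our illustrative reading of Algorithm 1 in PyTorch, not the authors' released code. `model` is assumed to be a set network (e.g. DeepSets) that classifies batches of (n_model, 3) point clouds, with the target-class logit playing the role of $f_\ell$ in the update (2).

```python
import torch

def add_dream(model, x, target: int, n_model: int = 1000,
              iters: int = 100, lr: float = 1.0):
    """Amalgamated DeepDream: dream each subset of x, amalgamating with x every 10 steps."""
    perm = torch.randperm(x.size(0))
    subsets = [x[perm[i:i + n_model]]                       # full-size subsets only
               for i in range(0, x.size(0) - n_model + 1, n_model)]
    dreamed = []
    for s in subsets:
        s = s.clone().requires_grad_(True)
        for t in range(1, iters + 1):
            logit = model(s.unsqueeze(0))[0, target]        # final-layer output for target class
            grad = torch.autograd.grad(logit, s)[0]
            s = (s + lr * grad).detach()                    # gradient update (2)
            if t % 10 == 0:
                union = torch.cat([s, x.detach()], dim=0)   # amalgamate with the input...
                keep = torch.randperm(union.size(0))[:n_model]
                s = union[keep]                             # ...and down-sample to n_model points
            s.requires_grad_(True)
        dreamed.append(s.detach())
    return torch.cat(dreamed, dim=0)                        # amalgamate the transformed subsets
```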

Experiments and results

To test our methods, we use DeepSets [41] with a capacity of 1000 points as our basic 3D deep neural network model. We randomly sample 10000 points from each CAD model and feed them into ADD for 100 iterations with the learning rate set to 1. A running example of ADD transforming a bottle into a cone from ModelNet40 [39], using the same input as Figure 8, is shown in Figure 10. During the transformation, ADD enjoys DeepDream's advantage of deforming objects via gradient updates with respect to a trained neural network, without creating obviously sparse local areas. With the amalgamation operation, it can also better preserve the features of the input point cloud while new features are created by the DeepDream updates. In addition, amalgamating several transformed subsets of the original object allows us to generate denser objects that are realizable in the physical world. More point clouds created from the objects in Figure 2 are shown in Figure 11.

(a) Input: bowl; Target: chair
(b) Input: keyboard; Target: car
Figure 11: ADD with single object input.
(a) Input: airplane, bottle; Target: toilet
(b) Input: cone, vase, lamp; Target: person
Figure 12: ADD with dual and triple object input.

ADD with amalgamated inputs

In addition to using the union during the transformation process, we can push this idea further by using amalgamated point clouds as input instead of a single object. We keep the experimental setting the same; note that the input may now contain a multiple of 10000 points. The results of ADD on the union objects from Figure 9 are shown in Figure 12. Compared with Figure 11, ADD with multiple objects as input produces objects with more geometric detail, benefiting from a more versatile input space.

Partitioned DeepDream (PDD)

As the example in Figure 7 shows, applying DeepDream to images produces more than one surreal pattern in each image [34]. In contrast, the nature of ADD is to deform the whole object simultaneously toward a certain shape. Extending the basic algorithm to allow multiple, separate transformations on a single object is therefore essential. We propose Partitioned DeepDream (PDD), shown in Algorithm 2, which can be implemented on top of various point cloud segmentation algorithms. Compared with ADD, PDD provides a more controllable method for artists to explore and effectively increases the diversity of the creations.

input : trained network $f$, number of segments $k$, input point cloud $x$, and target labels $y_1, \dots, y_k$
output : generated object $\hat{x}$
$\{x_1, \dots, x_k\} \leftarrow$ PCSegmentation$(x, k)$
for $i = 1, \dots, k$ do
       standardize $x_i$ (center and rescale it so it is treated as one distinct object)
       $x_i \leftarrow$ ADD$(f, x_i, y_i)$
       recover $x_i$ by inverting the standardization
end for
reunite the segments: $\hat{x} \leftarrow \bigcup_i x_i$
Algorithm 2 Partitioned DeepDream (PDD)
Figure 13: Segmentation results of an airplane with 4 clusters.
(a) Input: airplane
(b) Input: chair
Figure 14: PDD targeting random classes

In contrast to the simplicity of mixing two point clouds, segmenting a point cloud is relatively hard and has been studied in depth [16, 26, 33]. Mainstream segmentation algorithms can be categorized into edge-based segmentation [30, 37], region-growing segmentation [4], model-fitting segmentation [2, 14], hybrid segmentation [36] and machine learning segmentation [25]. In our work, manual segmentation was found to obtain high-quality results but is extremely tedious and delegates too much responsibility to the artist. Instead, we explore machine learning methods which can automatically divide the object into several meaningful parts. For example, the segmentation of an airplane into 4 clusters with $k$-means is displayed in Figure 13. Note that this method requires the number of segments as input.

In our experiments we extend ADD, described in Algorithm 1, with the $k$-means point cloud segmentation algorithm. First, the input object is automatically divided into several segments using $k$-means. Each segment is standardized before being fed into ADD, so that it is treated as one distinct object. With this preprocessing step, ADD can produce the desired features on different parts independently. In the end, we invert the standardization and reunite all segments into one object.
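A sketch of this pipeline following Algorithm 2, assuming scikit-learn's KMeans as PCSegmentation and reusing the `add_dream` sketch above; the particular standardization (centering and scaling to the unit ball) is our assumption of one reasonable choice.

```python
import torch
from sklearn.cluster import KMeans

def pdd_dream(model, x, targets, k: int):
    """Partitioned DeepDream: segment with k-means, dream each segment, reunite."""
    labels = torch.as_tensor(KMeans(n_clusters=k, n_init=10).fit_predict(x.numpy()))
    parts = []
    for i in range(k):
        seg = x[labels == i]                          # assumes each segment has >= n_model points
        mean = seg.mean(dim=0)
        scale = (seg - mean).norm(dim=1).max()        # standardize: center, scale to unit ball
        dreamed = add_dream(model, (seg - mean) / scale, targets[i])
        parts.append(dreamed * scale + mean)          # invert the standardization
    return torch.cat(parts, dim=0)                    # reunite all segments
```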

(a) Target: airplane
(b) Target: person
(c) Target: bowl
Figure 15: PDD with guitar body only
(a) Input: bottle; Upper target: person; Lower: toilet
(b) Input: ashcan; Upper target: cone; Lower: night_stand
Figure 16: PDD on uniformly segmented objects

Experiments and results

The same data preprocessing methods and ADD model configurations are used here as in the ADD experiments above. As for the partition strategies, one can randomly select target categories for the different parts; PDD results are displayed in Figure 14. This process is fully automatic with little human involvement, although the random factors can lead to some undesired scattered points. Another, more deliberate method is to create novel objects with more human participation, deforming the separate segments by design. As shown in Figure 15, we create point clouds from the guitar shown in Figure 3 by hallucinating only on the body part while keeping the neck intact. As the results show, the created objects are only locally modified while the overall layout remains that of a guitar.

PDD with manual segmentation strategies

In addition to automatic segmentation algorithms, we tested two manual segmentation strategies. First, for highly symmetric objects it is reasonable to divide them uniformly into several parts. As shown in Figure 16, we manually partition a bottle and an ashcan through their middles and run PDD with selected targets on the two segments separately.

Another intuitive way to segment a point cloud is to uniformly divide the whole space into cubic blocks and treat the points in each block separately, as sketched below. Results based on this strategy are presented in Figures 17 and 18. We partition the space containing the airplane into blocks along the three axes and generate features on each block with either a random class or the cone class using PDD. In comparison to ADD, PDD can generate more delicate objects since it manipulates the different units of a single object separately. This also indicates a more expansive space for generating creative 3D objects.
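One way to realize the block segmentation, as a hedged sketch: normalize the coordinates, quantize them onto a coarse grid, and group points by grid cell; the resolution of two blocks per axis is an illustrative choice.

```python
import torch

def block_segments(x, blocks_per_axis: int = 2):
    """Split a point cloud (N, 3) into cubic blocks along the three axes."""
    lo, hi = x.min(dim=0).values, x.max(dim=0).values
    cell = (x - lo) / (hi - lo + 1e-8) * blocks_per_axis
    ids = cell.long().clamp(max=blocks_per_axis - 1)       # (N, 3) per-axis block index
    flat = ids[:, 0] * blocks_per_axis**2 + ids[:, 1] * blocks_per_axis + ids[:, 2]
    return [x[flat == b] for b in flat.unique()]           # one point set per occupied block
```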

Figure 17: PDD with block segmented objects targeting random categories.
Figure 18: PDD with block segmented objects targeting cone.

Realization of Sculptural Object

From point clouds to mesh

Given a novel point cloud, our task becomes generating a mesh – a description of an object using vertices, edges, and faces – in order to eliminate the spatial ambiguity of a point cloud representation. While mesh generation, also known as surface reconstruction, is readily provided in the free software package MeshLab [10], each possible algorithm has its own use case, so this task involves experimenting with each available algorithm to find what generates the best reconstruction for a given point cloud.

Our experiments focused on the following reconstruction algorithms: Delaunay Triangulation [8], Screened Poisson [21], Alpha Complex [18], and Ball-Pivoting [3]. While each of these algorithms has its uses, ADD produces point clouds with greater variance in sparsity than typical point clouds captured from real-world objects.

(a) Delaunay Triangulation
(b) Screened Poisson
(c) Alpha Complex
(d) Ball-Pivoting
Figure 19: Reconstruction techniques

For all experiments, we limited the number of points to 10,000. If a generated point cloud had more points, we reduced it using Poisson Disk Sub-sampling [12]. This sub-sampling method is more intelligent than simple uniform resampling and produces points that balance sparsity against finer definition.

An illustration of our findings can be found in Figure 19. Typically, Delaunay Triangulation produces a mesh that eliminates all detail of the point cloud; it is therefore only appropriate for exceptionally sparse point clouds, which would contain minimal detail regardless of the reconstruction technique. Screened Poisson extrapolates from the original point cloud, producing interesting shapes that are unfortunately misleading about the underlying structure. Alpha Complex and Ball-Pivoting are the two most consistent reconstruction methods, and both produce similar meshes. One important difference is that Alpha Complex produces non-manifold edges, all but making faithful 3D printing impossible.

As a result of these trials, we settled on Ball-Pivoting after Poisson Disk Sub-sampling to produce our final mesh.
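In the paper this pipeline was run interactively in MeshLab; as one scriptable approximation, the sketch below performs the same two steps with the Open3D library, using farthest-point down-sampling as a stand-in for Poisson Disk Sub-sampling. The library choice, filter substitutions, and radii are our assumptions and would need tuning per object.

```python
import numpy as np
import open3d as o3d

def reconstruct(points: np.ndarray, n_points: int = 10_000) -> o3d.geometry.TriangleMesh:
    """Down-sample an (N, 3) point cloud, then mesh it by ball pivoting."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    if len(pcd.points) > n_points:
        pcd = pcd.farthest_point_down_sample(n_points)  # stand-in for Poisson disk sub-sampling [12]
    pcd.estimate_normals()                              # ball pivoting requires point normals
    radii = o3d.utility.DoubleVector([0.02, 0.04, 0.08])
    return o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
```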

Figure 20: Screen capture from the Cura-Lulzbot software for 3D printing
Figure 21: Created sculpture from ADD.

From mesh to realizable 3D sculptures

Our final goal is to create 3D sculptures in the real world. We use the standard software Meshmixer [31] to solidify the surface-reconstructed point cloud created by ADD. The solidified version of Figure 19(d) can be seen in Figure 1. We then use a Lulzbot TAZ 3D printer with a dual extruder, transparent PLA, and dissolvable support material to print the reconstructed mesh as a sculpture. The printed sculptures of the point cloud in Figure 12(b) are shown in Figure 21.

Figure 22: Sketch of the Aural Fauna: Illuminato art installation, and a photo from the touch-sound interaction test with a tablet computer interface

Conclusion

This paper describes our development of two ML algorithms for point cloud generation, ADD and PDD, which can generate unique sculptural artwork distinct from the dataset used to train the underlying model. These approaches allow more flexibility in deforming the underlying object, as well as control over the density of points required for meshing and 3D printing the sculptures.

Through our development of ADD and PDD for sculptural object generation, we show AI's potential for creative endeavors in art, especially in the traditional fields of sculpture and installation art. One challenge in the development of creative AI is the tension between the ML literature's need for an objective metric to evaluate the quality of generated objects and the art community's desire for creative and expressive forms. Reconciling these goals will require a mathematical definition of creativity that can guide the AI's generative process in each specific case. We will undertake these deep computational and philosophical questions in future work.

In the meantime, the realized 3D objects are an integral part of our ongoing project, an interactive art installation. This project, Aural Fauna: Illuminato, presents an unknown form of organism, aural fauna, that reacts to visitors' sound and touch. The creatures' sounds are generated using machine learning, and their bodies are the sculptural objects generated by ADD and PDD. The sculptural objects can be installed on a wall or hung from the ceiling like a swarm, and participants may interact with them through a touch-screen interface and voice input, as seen in Figure 22. This project aims to encapsulate the joint efforts of human artists, Creative AI, and the viewer as partners in the endeavor.

References

  • [1] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas (2017) Learning representations and generative models for 3D point clouds. arXiv preprint arXiv:1707.02392.
  • [2] D. H. Ballard (1981) Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognition 13 (2), pp. 111–122.
  • [3] F. Bernardini, J. Mittleman, H. Rushmeier, C. Silva, and G. Taubin (1999) The ball-pivoting algorithm for surface reconstruction. IEEE Transactions on Visualization and Computer Graphics 5 (4), pp. 349–359.
  • [4] P. J. Besl and R. C. Jain (1988) Segmentation through variable-order surface fitting. IEEE Transactions on Pattern Analysis and Machine Intelligence 10 (2), pp. 167–192.
  • [5] A. Bidgoli and P. Veloso (2018) DeepCloud: the application of a data-driven, generative model in design.
  • [6] T. B. Brown, N. Carlini, C. Zhang, C. Olsson, P. Christiano, and I. Goodfellow (2018) Unrestricted adversarial examples. arXiv preprint arXiv:1809.08352.
  • [7] S. Carter and M. Nielsen (2017) Using artificial intelligence to augment human intelligence. Distill.
  • [8] F. Cazals and J. Giesen (2006) Delaunay triangulation based surface reconstruction. In Effective Computational Geometry for Curves and Surfaces, pp. 231–276.
  • [9] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. (2015) ShapeNet: an information-rich 3D model repository. arXiv preprint arXiv:1512.03012.
  • [10] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia (2008) MeshLab: an open-source mesh processing tool. In Eurographics Italian Chapter Conference.
  • [11] E. Kraft. Content Aware Studies. http://egorkraft.art/#cas
  • [12] M. Corsini, P. Cignoni, and R. Scopigno (2012) Efficient and flexible sampling with blue noise properties of triangular meshes. IEEE Transactions on Visualization and Computer Graphics 18 (6), pp. 914–924.
  • [13] A. Elgammal, B. Liu, M. Elhoseiny, and M. Mazzone (2017) CAN: creative adversarial networks, generating "art" by learning about styles and deviating from style norms. arXiv preprint arXiv:1706.07068.
  • [14] M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6), pp. 381–395.
  • [15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NIPS.
  • [16] E. Grilli, F. Menna, and F. Remondino (2017) A review of point clouds segmentation and classification algorithms. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 42, pp. 339.
  • [17] T. Groueix, M. Fisher, V. G. Kim, B. C. Russell, and M. Aubry (2018) AtlasNet: a papier-mâché approach to learning 3D surface generation. arXiv preprint arXiv:1802.05384.
  • [18] B. Guo, J. Menon, and B. Willette (1997) Surface reconstruction using alpha shapes. In Computer Graphics Forum, Vol. 16, pp. 177–190.
  • [19] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle (1992) Surface reconstruction from unorganized points.
  • [20] H. Kato, Y. Ushiku, and T. Harada (2018) Neural 3D mesh renderer. In CVPR.
  • [21] M. Kazhdan and H. Hoppe (2013) Screened Poisson surface reconstruction. ACM Transactions on Graphics 32 (3), pp. 29.
  • [22] J. Lehman, S. Risi, and J. Clune (2016) Creative generation of 3D objects with deep learning and innovation engines. In Proceedings of the 7th International Conference on Computational Creativity.
  • [23] C. Li, M. Zaheer, Y. Zhang, B. Poczos, and R. Salakhutdinov (2018) Point cloud GAN. arXiv preprint arXiv:1810.05795.
  • [24] H. D. Liu, M. Tao, C. Li, D. Nowrouzezahrai, and A. Jacobson (2018) Adversarial geometry and lighting using a differentiable renderer. arXiv preprint arXiv:1808.02651.
  • [25] X. Lu, J. Yao, J. Tu, K. Li, L. Li, and Y. Liu (2016) Pairwise linkage for point cloud segmentation. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences 3 (3).
  • [26] A. Nguyen and B. Le (2013) 3D point cloud segmentation: a survey. In RAM, pp. 225–230.
  • [27] D. Nima, S. Luca, and M. Mauro (2018) Vox2Net: from 3D shapes to network sculptures. In NIPS.
  • [28] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017) PointNet: deep learning on point sets for 3D classification and segmentation. In CVPR.
  • [29] C. Robert (2014) Machine Learning, a Probabilistic Perspective. Taylor & Francis.
  • [30] A. D. Sappa and M. Devy (2001) Fast range image segmentation by an edge detection strategy. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pp. 292–299.
  • [31] R. Schmidt and K. Singh (2010) Meshmixer: an interface for rapid mesh composition. In ACM SIGGRAPH 2010 Talks, pp. 6.
  • [32] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
  • [33] A. J. Trevor, S. Gedikli, R. B. Rusu, and H. I. Christensen (2013) Efficient organized point cloud segmentation with connected components. Semantic Perception Mapping and Exploration (SPME).
  • [34] M. Tyka (2015) Inceptionism: going deeper into neural networks. https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
  • [35] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu (2016) WaveNet: a generative model for raw audio. In SSW.
  • [36] M. Vieira and K. Shimada (2005) Surface mesh segmentation and smooth surface extraction through region growing. Computer Aided Geometric Design 22 (8), pp. 771–792.
  • [37] M. A. Wani and H. R. Arabnia (2003) Parallel edge-region-based segmentation algorithm targeted at reconfigurable multiring network. The Journal of Supercomputing 25 (1), pp. 43–62.
  • [38] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum (2016) Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In NIPS.
  • [39] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao (2015) 3D ShapeNets: a deep representation for volumetric shapes. In CVPR.
  • [40] Y. Yang, C. Feng, Y. Shen, and D. Tian (2018) FoldingNet: point cloud auto-encoder via deep grid deformation. In CVPR.
  • [41] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola (2017) Deep sets. In NIPS.

Authors Biographies

Songwei Ge is a Masters student in the Computational Biology Department at Carnegie Mellon University.

Austin Dill is a Masters student in the Machine Learning Department at Carnegie Mellon University.

Chun-Liang Li is a PhD candidate in the Machine Learning Department at Carnegie Mellon University. He received the IBM Ph.D. Fellowship in 2018 and was the Best Student Paper runner-up at the International Joint Conference on Artificial Intelligence (IJCAI) in 2017. His research interest is in deep generative models, from theory to practical applications.

Dr. Eunsu Kang is a Korean media artist who creates interactive audiovisual installations and AI artworks. Her current research focuses on creative AI and artistic expression generated by Machine Learning algorithms. Creating interdisciplinary projects, her signature has been the seamless integration of art disciplines and innovative techniques. Her work has been invited to numerous venues around the world, including Korea, Japan, China, Switzerland, Sweden, France, Germany, and the US. All ten of her solo shows, consisting of individual or collaborative projects, were invited or awarded. She has won the Korean National Grant for Arts three times. Her research has been presented at prestigious conferences including ACM, ICMC, ISEA, and NeurIPS. Kang earned her Ph.D. in Digital Arts and Experimental Media from DXARTS at the University of Washington. She received an MA in Media Arts and Technology from UCSB and an MFA from Ewha Womans University. She was a tenured art professor at the University of Akron for nine years and is currently a Visiting Professor with an emphasis on Art and Machine Learning at the School of Computer Science, Carnegie Mellon University.

Lingyao Zhang earned her Master’s degree in the Machine Learning Department at Carnegie Mellon University.

Manzil Zaheer earned his Ph.D. in Machine Learning from the School of Computer Science at Carnegie Mellon University under the guidance of Prof. Barnabas Poczos, Prof. Ruslan Salakhutdinov, and Prof. Alexander Smola. He won the Oracle Fellowship in 2015. His research interests broadly lie in representation learning. He is interested in developing large-scale inference algorithms for representation learning, both discrete with graphical models and continuous with deep networks, for all kinds of data. He enjoys learning and implementing complicated statistical inference, data parallelism, and algorithms in a simple way.

Dr. Barnabás Póczos is an associate professor in the Machine Learning Department at the School of Computer Science, Carnegie Mellon University. His research interests lie in the theoretical questions of statistics and their applications to machine learning. Currently he is developing machine learning methods for advancing automated discovery and efficient data processing in applied sciences including the health sciences, neuroscience, bioinformatics, cosmology, agriculture, robotics, civil engineering, and materials science. His results have been published in top machine learning journals and conference proceedings, and he is the co-author of 100+ peer-reviewed papers. He has been a PI or co-investigator on 15+ federal and non-federal grants. Dr. Poczos is a member of the Auton Lab in the School of Computer Science and a recipient of the Yahoo! ACE award. In 2001 he earned his M.Sc. in applied mathematics at Eotvos Lorand University in Budapest, Hungary, and in 2007 he obtained his Ph.D. in computer science from the same university. From 2007 to 2010 he was a postdoctoral fellow in the RLAI group at the University of Alberta; he then moved to Pittsburgh, where he was a postdoctoral fellow in the Auton Lab at Carnegie Mellon from 2010 to 2012.