Mix&Match: Towards Omitting Modelling Through In-Situ Alteration and Remixing of Model Repository Artifacts in Mixed Reality

03/20/2020 ∙ by Evgeny Stemasov, et al. ∙ Association for Computing Machinery

The accessibility of tools to model artifacts is one of the core driving factors for the adoption of Personal Fabrication. Subsequently, model repositories like Thingiverse became important tools in (novice) makers’ processes. They allow them to shorten or even omit the design process, offloading a majority of the effort to other parties. However, steps like the measurement of surrounding constraints (e.g., clearance), which exist only inside the users’ environment, cannot be similarly outsourced. We propose Mix&Match, a mixed-reality-based system which allows users to browse model repositories, preview the models in-situ, and adapt them to their environment in a simple and immediate fashion. Mix&Match aims to provide users with CSG operations which can be based on both virtual and real geometry. We present interaction patterns and scenarios for Mix&Match, arguing for the combination of mixed reality and model repositories. This enables almost modelling-free personal fabrication for both novices and expert makers.




1 Introduction and Motivation

† Now at Télécom Paris/IP-Paris.

Personal fabrication continues to spread across various usage contexts, ranging from low-volume prototyping and makerspaces to users’ homes. The cost of required hardware (e.g., 3D printers or CNC-mills) is continuously decreasing [43], while the variety of available devices is ever-increasing. This allows experienced users to create various artifacts specifically tailored to their needs. While the artifacts’ quality may not always match industry-grade production processes, they may still fulfill functional requirements. Personal fabrication therefore sees use beyond toys and trinkets, and instead enables practical changes in households [10, 43]. Users may repair broken objects, and likewise create entirely new artifacts, for instance for home improvement [10] or enhanced accessibility [7, 29, 18]. All such use-cases empower users to alter their physical environments and democratize the process of fabrication.

However, the successful usage of such devices (e.g., 3D printers, CNC-mills) often requires some degree of knowledge of complex tools [4, 26]. CAD/CAM software was initially transferred from industrial usage, and only later experienced simplification aimed at novices [4]. Alternatively, it is possible to replace most, if not all, modelling with the usage of a model repository, where other users make their designs freely available to the public [2, 7, 39]. Users then omit modelling, and instead browse the repository for artifacts (i.e., solutions) to fabricate. While open model repositories provide users with ready-to-print artifacts, they exhibit conceptual limitations. The design effort is offloaded to other parties, but the knowledge and understanding of problem specifics (e.g., clearances or proportions) remains with the respective users and their unique requirements. More importantly, the entire physical context remains with the user and has to be mapped and measured [28]. For objects that are not explicitly standardized or hard to gauge, this quickly becomes an issue for novices and potentially time-consuming for more experienced makers [21]. Subsequently, these measurements and specified constraints can be mis-measured [21] or missed by the user, requiring additional iterations [44]. Our work aims to bridge this disconnect between the physical space of the end-user and the space of the model repository. We argue for an easy in-situ ”pick-and-choose” fabrication paradigm largely omitting the need for more complex modelling tools and operations.

We propose Mix&Match, a Mixed-Reality-based tool which aims to leverage outsourced design effort through model repositories, while retaining relevant and easy in-situ adaptations. Mix&Match was implemented using a Magic Leap ML1 augmented reality (AR) head-mounted display (HMD) and the MyMiniFactory repository. We provide a visual interface to the model repository, allowing users to search for artifacts and browse through results in place (Figure 1b-c). The models can be compared in-situ and altered with modifications like scaling to ensure both aesthetic and functional fit to the environment (Figure 1a-d). As an AR-headset already provides depth-sensing, it can incorporate the physical environment into the selection (Figure 1) and alteration process (Figure 1c). To allow for simple adaptations to the environment of the user, Mix&Match provides Constructive Solid Geometry (CSG) operations that can be based on digital artifacts retrieved from the model repository. These Boolean operations are also applicable to real artifacts found in the users’ immediate vicinity, provided they have been acquired appropriately. This allows the user to subtract the geometry of a shelf from another part to ensure a friction fit, or to make a digital copy of a physical artifact, thereby treating the physical environment as a repository of its own (Figure 1). All features aim at increased ease of use through outsourced effort and the omission of modelling, easing access for novices and accelerating the process for more experienced makers.

Instead of limiting users of a model repository to pre-defined approaches, Mix&Match encourages the practice of in-situ remixing of artifacts, while treating the users’ physical environment in a similar fashion to a digital one. We propose and argue for an in-situ ”pick-and-choose”-based personal fabrication paradigm. Model repositories like Thingiverse or MyMiniFactory already provide readily available, free designs. However, with in-situ ”pick-and-choose”, we want to compensate for one of the inherent disadvantages their approach of outsourced design entails: a disconnect between the physical environment of the user and the repositories’ functionality. The contributions of this work are:

  • Proof-of-concept implementation of Mix&Match, a Mixed-Reality-based tool that allows users to preview and alter model repository artifacts in-situ.

  • The notion of an in-situ ”pick-and-choose”-based personal fabrication paradigm, along with a set of application scenarios and interaction flows for Mix&Match and comparable systems.

Ultimately, a system like Mix&Match allows users to outsource many, if not all, parts of personal fabrication that do not have to be inherently personal. Design/modelling effort is outsourced to a crowd of experienced makers. Measurement is offloaded to a hardware system (e.g., the depth cameras of an AR-headset). Fabrication of the artifact itself can likewise be offloaded to an external service. With these components delegated to other, often more competent parties, novices and experienced makers alike may achieve fitting results with fewer interaction cycles.

2 Related Work

Mix&Match builds upon multiple directions of research: fabrication or design with mixed reality, personal fabrication for novices and personal fabrication that interacts with its physical counterparts, along with research concerned with the use and improvement of model repositories.

2.1 Fabrication in or with Mixed Reality

Mixed or augmented reality, along with all related technologies, has been shown to be a promising tool for personal fabrication activities. As such, it is able to provide previews of models or enable easier in-situ modelling of artifacts. Milette and McGuffin presented DualCAD, which combined a smartphone device and an HMD for 3D-modelling [30]. Mixed reality also allows users to interactively influence and guide a fabrication process, as for instance shown by Peng et al. with RoMA, where the authors combined augmented reality and a robotic arm [34]. Yamaoka and Kakehi presented MiragePrinter, where an aerial imaging plate combined the fabricated result of a 3D printer with output from modelling software [47]. This allowed users to rely on physical artifacts as guides and interactively control the fabrication process [47].

Mixed reality can also serve as guidance for manual tasks done by the user. Yue et al. presented WireDraw, which supports users in the task of drawing in mid-air with a 3D-pen [48]. For subtractive manufacturing, Hattab and Taubin presented a method to support users carving an object with information projected onto the workpiece [14]. ExoSkin by Gannon et al. aided users with projected toolpaths to fabricate intricate shapes on the body – a complex but relevant feature for personal fabrication [13]. Jeong et al. applied this concept to the design process of linkage mechanisms in Mechanism Perfboard [19], while Müller et al. aimed to improve the ease of use of CNC mills with augmented reality [31]. Weichel et al. presented MixFab, which used mixed reality to provide users with a tool that actively includes scanned real-world artifacts and gesture-based modelling in the process [45].

The aforementioned works have in common that they situate modelling work in a spatial context, ideally co-locating it with relevant real-world features and improving processes of measurement and understanding. Mix&Match differs from them primarily in two ways: 1) it is not meant to be confined to a static setup; 2) it is not meant to be a ”pure” design tool that essentially makes users ”start from scratch”. Instead, Mix&Match relies on outsourced design effort, as provided by model repositories, to allow users to omit modelling as such.

2.2 Fabrication for Novices

It is important to consider that tools used for personal fabrication did not start out as explicitly novice-friendly. Research has therefore focused on the accessibility of the modelling process itself. Drill Sergeant aimed to equip novices with a set of tools able to provide feedback and generally support the fabrication process [42], while CopyCAD by Follmer et al. allowed users to copy features from arbitrary objects to reference in a CNC-milling setup [11]. Makers’ Marks by Savage et al. allowed users without technical knowledge to design functional artifacts by sculpting a shape and annotating it with the desired features [41]. Turning coarse sketch input into viable designs has also been explored: SketchChair was a tool to let novices design and verify chairs [40]. Lau et al. later generalized this concept, aiming at arbitrary objects to be personalised [24]. Yung et al. presented Printy3D, which combined two paradigms to ease the process: the design happens in-situ and the interface employs tangibility [49].

With Mix&Match, we similarly aim to simplify the process of personal fabrication, but without the goal to simplify modelling tools. Instead, we aim to omit modelling (in its established sense) completely, while retaining relevant abilities to configure and alter artifacts.

2.3 Model Repositories and Remixes

Prior research has also focused on the usage and extension of model repositories. Alcock et al. categorized issues that novices or other users may have when it comes to the usage and adaptation of model repository artifacts, identifying missing information, customization and customizability as issues present on Thingiverse [2]. Novices to 3D-printing and associated processes were the topic of Hudson et al., who identified common challenges like missing domain knowledge or the inability to customize existing designs [16]. ”Parameterized Abstractions of Reusable Things” were introduced as a framework by Hofmann et al. to counteract a disconnect between designed artifacts and their intended functionality [15]. Kim et al. aimed to improve on the error-prone process of measuring artifacts to be referenced in 3D-printing by introducing adjustable inserts or replaceable parts [21].

The concept of remixing model repository artifacts is an important process in online 3D-printing communities [33]. Roumen et al. presented Grafter, a tool to aid in the process of remixing machines [39], while Follmer et al. presented tools to do so for toys [12] and other physical artifacts [11]. Lindlbauer and Wilson, in contrast, presented Remixed Reality, where mediated reality served as a tool to alter one’s own physical context [27] from and in a digital environment.

Mix&Match aims to provide a novel, situated interface to model repositories, bridging the gap between outsourced designs and the users’ physical context, allowing in-situ previewing and remixing.

2.4 Fabrication for and with Real-world Artifacts

Personal fabrication may yield various artifacts: decorative figures, household items, replicas of existing objects, props, tools etc. No result is going to exist ”in a vacuum” – every artifact interacts with its environment. This concept was specified by Ashbrook et al. as augmented fabrication [3, 28], and was also a prominent part of prior research [11, 45, 41]. Yamada et al. presented ReFabricator, a tool to actively integrate real-world objects as material in fabricated artifacts [46]. In contrast to that, FusePrint by Zhu et al. incorporated real-world objects as references in a stereolithography printing process [50], while Huo et al. leveraged real-world features as an input for 3D design with Window-Shaping [17]. Lau et al. relied on a photograph to create fitting objects [25]. Ramakers et al. presented RetroFab, which allowed users to retroactively alter and enhance physical interfaces like desk lamps or toasters [36]. ThisAbles by Ikea presents 3D-printable improvements to furniture to accommodate users’ special needs [18]. Chen et al. presented a set of tools to combine real-world artifacts with 3D-printed ones: Reprise focused on customizable adaptations for everyday tools and objects [10], Encore dealt with attachments and their fabrication [9], while Medley treated everyday objects as materials to augment 3D-printed objects [8]. The previously mentioned MixFab by Weichel et al. likewise incorporates real-world artifacts as support for operations [45]. In contrast to these approaches, AutoConnect by Koyama et al. mostly automates the process of modelling 3D-printable connectors for various objects [22].

Mix&Match embraces the procedures presented here, extending them by leveraging model repository artifacts, while simultaneously providing ways to embed the physical context into the process by allowing for in-situ previews and alterations like CSG referencing the users’ physical context.

3 Concept and Interaction Space

Figure 2: Steps which the concept behind Mix&Match emphasizes during the design process. While each step’s results feed in to the following ones, users always have the possibility to circle back to prior steps, for instance to refine their search terms, or gather more alternative solutions to use or remix.

Mix&Match aims to allow users to omit the process of ”hand-crafting” (e.g., 3D-modelling) while still retaining meaningful alterations and customizations with respect to the users’ physical context. This is what we specify as in-situ ”pick-and-choose”. Two aspects of interaction are important here: functional interaction and aesthetic interaction. Functional interaction describes the actual practical tasks a fabricated artifact may fulfill. For example, whether a mount for a phone is actually able to hold it in a desired position, or whether a hook is mounted low enough to be reached and simultaneously high enough so that the clothing suspended from it does not touch the floor. These constraints and requirements are often found only in the specific user’s physical context and emerge from its spatial configuration. While some constraints may be reduced to standardized components, they can rarely provide a complete picture of the environment an artifact will reside and function in. Aesthetic interaction describes the visual level of interaction between a newly fabricated artifact and its future environment. This can be based on personal judgment of design, design consistency, and general visual appeal. For instance, a newly acquired decorative planter may or may not fit the remaining objects on the countertop it is meant to be placed on. To ensure both appropriate aesthetic and functional interaction, users can take different directions. They may rely either on measuring or on coarse visual judgement. They may also either accept the first adequate solution, or iterate further, either out of pure desire to do so, or if the first iteration does not fit its purpose (this excludes failures during the fabrication process, which are still a relevant factor [16]).
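The hook example above can be phrased as two explicit checks. The following sketch is purely illustrative; the function name, parameters and default clearance are assumptions, not part of Mix&Match:

```python
# Illustrative only: encoding the hook example's functional constraints.
# All names and default values are hypothetical, not from Mix&Match.

def hook_position_ok(hook_height_cm: float,
                     garment_length_cm: float,
                     max_reach_cm: float,
                     floor_clearance_cm: float = 2.0) -> bool:
    """A hook position is viable if the user can reach it and the
    garment hanging from it keeps some clearance above the floor."""
    reachable = hook_height_cm <= max_reach_cm
    clears_floor = (hook_height_cm - garment_length_cm) >= floor_clearance_cm
    return reachable and clears_floor
```

Constraints of this kind live entirely in the user’s environment, which is precisely why they resist outsourcing.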

The interaction space surrounding Mix&Match and the in-situ ”pick-and-choose” paradigm consists of three fundamental principles:

  1. Outsourced design effort, relying on existing designs

    1. Existing designs are found in the real world.

    2. Existing designs are found in the virtual world.

  2. In-situ adaptation effort and remixing, referencing the physical context

  3. Variable degrees of effort to reach one’s goals to accommodate for different users and requirements

With Mix&Match, we aim to (mostly) omit modelling from the process of personal fabrication, while retaining the potential benefits of a modelled artifact: the prospect of an ideally tailored solution. This is in line with the notion of personal design, in contrast to personal fabrication [6, 5], which abstracts from the specifics of manufacturing and focuses on user-centered design processes. However, we argue that neither design nor fabrication have to be local (i.e., happen at the location of and be carried out by the user) to provide a successful and tailored artifact that fulfills the users’ requirements. Merely the successful configuration and tailoring of a solution, likely to exist in the diverse model repositories that have emerged, is a relevant and inherently personal part of personal fabrication. For instance, a user will likely find a design for a broom holder online, and would merely have to configure its diameter – if deemed necessary – for it to be an adequate solution. It is then not relevant who designed it or who will fabricate it; merely the tailoring to the user’s requirements is crucial.
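The broom-holder example can be read as a tiny parametric design in which only one measurement is inherently personal. A minimal sketch, assuming hypothetical parameter names and default values:

```python
# Illustrative sketch of "configuring instead of modelling": a parametric
# broom-holder description where only the measured handle diameter is
# supplied by the user. All names and defaults are assumptions.
from dataclasses import dataclass

@dataclass
class BroomHolder:
    handle_diameter_mm: float        # the one personal measurement
    wall_thickness_mm: float = 3.0   # defaults inherited from the shared design
    grip_tolerance_mm: float = 0.4

    @property
    def opening_diameter_mm(self) -> float:
        # slightly larger than the handle so it snaps in with a friction fit
        return self.handle_diameter_mm + self.grip_tolerance_mm

    @property
    def outer_diameter_mm(self) -> float:
        return self.opening_diameter_mm + 2 * self.wall_thickness_mm

holder = BroomHolder(handle_diameter_mm=24.0)
```

Everything except `handle_diameter_mm` could come from the repository unchanged; the user merely configures.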

Figure 2 describes the conceptual process we propose for the in-situ ”pick-and-choose” paradigm behind Mix&Match. While a similar notion already exists when one considers model repositories, we emphasize the unification of remote model repositories and the users’ physical context as sources for artifacts at the location where they are meant to be employed. This is depicted in Table 1, in combination with two distinct patterns of (re-)use: ”as intended” in contrast to ”remixed / misused”. Artifacts can be copied from the digital repository, or from existing objects in the users’ vicinity, and either be used according to their original specification (with simple alterations like scaling), or be creatively misused (e.g., repurposing a decorative figure to serve as a phone mount).

Table 1: Model origins (model repository, physical environment) combined with two distinct usage patterns: largely unaltered use (i.e., ”as intended”) and remixed use or misuse. Procedure per cell: retrieval, in-situ preview, fabricated result.

Starting with a specific goal, requirement or desire, the users initiate a search in the repository, or browse it without a clear search term. The users then may start gathering fitting alternatives for the task at hand. Up until this point, the interaction with Mix&Match is comparable to one with a model repository. The influence of the physical context is indirect, as it defines the prior requirements. In addition, Mix&Match treats the physical environment as a model repository in its own right. The users then may start to compare their alternatives (e.g., a set of headphone stands). With Mix&Match, this happens in-situ – right at the location where the artifact will interact with its environment. This allows both visual (i.e., aesthetic) and, to a degree, functional judgement. Afterwards, the users may start to alter the artifact or remix it with the help of features found in their physical environment (e.g., the thickness of a shelf, or the diameter of a pot). After verifying the design’s functionality visually (e.g., by checking clearances or diameters), the users then may hand off the design to be fabricated. Whether this happens in their own homes (e.g., with their own 3D-printer) or is outsourced (e.g., to a printing service) is less relevant, as the fabrication of the artifact is not an inherently personal part of the process.

Mix&Match emphasizes outsourced design effort by leveraging model repositories, while allowing users to preview and adapt the artifacts retrieved. Ideally, this allows users to omit the modelling process entirely. Omitting modelling consistently is, however, a naïve ambition. It may be valid if the user chooses to fabricate a fully standardized component (e.g., an M2 screw with 2 cm length). Few problems that are addressable with the means of personal fabrication are truly unique; most may have already been solved by someone else. However, the constraints and specifics imposed by the users and their physical context are not as easy to outsource. Therefore, while modelling from scratch might not always be needed, configuring may suffice. This is offered by customizer tools, for instance by Thingiverse (www.thingiverse.com/apps/customizer, accessed 2.9.19) or MyMiniFactory (www.myminifactory.com/customize, accessed 14.9.19), where dimensions of explicitly parametrized designs can be freely altered. Personal fabrication’s outlook is that each and every user is able to create custom-made, tailored artifacts for their very personal use case and context. In contrast to store-bought solutions, solutions that emerge from personal design and fabrication may achieve a high degree of fit and tailoring with respect to the users and their requirements. This does not necessarily mean that the design or the fabrication process needs to happen ”from scratch” and be done by the user. Ideally, only relevant effort has to be spent by the user (while still being free to invest more time into it). With Mix&Match, we want to extend the notion of a model repository to any user’s physical context, outsourcing any effort not inherently vital to address a requirement.

4 Prototype Implementation

Our prototype system is implemented using Unity 2019.2 and the Magic Leap ML1 (www.magicleap.com/magic-leap-one, accessed 14.9.19) head-mounted display (HMD). Mix&Match aims to provide as much functionality as possible within a single system – ideally replacing software like CAD, a slicer and a printer interface. The following sections describe the implementation of the system.

4.1 Architecture

The architecture of the system is centered around the Magic Leap HMD, along with the REST (REpresentational State Transfer) API to a model repository. As a data source, we chose MyMiniFactory instead of Thingiverse, primarily because the former provides vetted and moderated results. Furthermore, MyMiniFactory emphasizes quality and printability of the provided models. The downside of this is a less abundant choice of models. Moreover, our search functionality explicitly filters out any results that do not permit remixing. An alteration for personal use only would likely comply with most licenses used in model repositories. It is nevertheless reasonable to feed the remixed models back into the ecosystem, if the users deem it appropriate. This is likely the case for adaptations that generalize to a degree, like addition of standardized mounts/fixtures or remixes that resulted from combination of multiple artifacts from the repository [33].
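The license filter described above can be sketched as a simple predicate over search results. The response shape and license tags below are assumptions; the actual MyMiniFactory API may use different fields:

```python
# Hedged sketch of filtering out results that do not permit remixing.
# Field names and license tags are illustrative assumptions.

REMIX_FRIENDLY = {"CC-BY", "CC-BY-SA", "CC0"}  # hypothetical license tags

def filter_remixable(results: list) -> list:
    """Keep only search results whose license permits remixing."""
    return [r for r in results if r.get("license") in REMIX_FRIENDLY]

sample = [
    {"name": "headphone stand", "license": "CC-BY"},
    {"name": "phone mount", "license": "CC-BY-ND"},  # no derivatives: dropped
]
```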

4.2 Interaction

The interaction with the system is meant to provide the most relevant functions of a model repository interface, while combining them with the scene understanding and spatial visualization a mixed reality headset provides. This is meant to support the ”pick-and-choose” paradigm, by largely omitting modelling while retaining the adaptivity of outsourced artifacts. This primarily includes searching the repository, choosing fitting models and previewing them. Figure 2 describes the process users may follow with the in-situ ”pick-and-choose” paradigm. The following paragraphs describe the implementation of each step for Mix&Match. While they are described as a sequence, the users always have the option to return to prior steps to reevaluate their choices and the process (as seen in Figure 2). The following figures were captured either with the help of the ”capture service” of the Magic Leap HMD, or via ”Magic Leap Device Bridge” (MLDB). All exhibit an offset between the augmented content and the physical environment. To the user of the HMD, the imagery is properly aligned with the environment and exhibits proper occlusion by the user’s hands.

4.2.1 Searching and Gathering

Figure 3: Initial interface to perform search requests. Users can enter their search terms (a), scroll through a set of previews of the results (b) and can minimize this UI when needed.

Each design process with Mix&Match begins with the search interface presented to the users (Figure 3). There, they are able to enter arbitrary search terms, similarly to the well-known web interfaces of Thingiverse or MyMiniFactory. The application then relays the search via REST to the API of MyMiniFactory, which returns a JSON response used to populate the list of results. After a successful search, the users may scroll through the results, each shown with its title and a thumbnail image. Selecting a result enqueues it to be downloaded and added to the preview carousel described next.
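The search step can be sketched as follows. The endpoint path and JSON field names are assumptions for illustration, not the documented MyMiniFactory API:

```python
# Hedged sketch of the search step: build the query for a REST endpoint and
# turn the JSON response into (title, thumbnail) entries for the result list.
# Base URL, path and field names are assumptions.
from urllib.parse import urlencode

API_BASE = "https://www.myminifactory.com/api/v2"  # assumed base URL

def build_search_url(term: str, page: int = 1, per_page: int = 20) -> str:
    query = urlencode({"q": term, "page": page, "per_page": per_page})
    return f"{API_BASE}/search?{query}"

def parse_results(payload: dict) -> list:
    """Extract (title, thumbnail URL) pairs to populate the result list."""
    return [(item["name"], item["thumbnail"])
            for item in payload.get("items", [])]
```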

4.2.2 Comparison and Previewing

Figure 4: The placement carousel, which gathers all previously downloaded models (a). Users can cycle through the objects and place them in their environment (b).

Having collected an initial set of artifacts, the users may start to compare them in more detail. Each result is available through a carousel, arranged around and affixed to the controller, and is cycled through via the touchpad (Figure 4, a). In contrast to the interface in the searching step, the users now gain insight into the spatial aspects of the model they have downloaded. They are now able to examine the entire geometry to judge the functionality or the appeal of the artifact. By holding the carousel where the artifact is meant to be employed and cycling through the options, users may directly compare the available alternatives. Artifacts that do not meet their requirements can be removed from the list of options. The models can be placed and affixed into the space around the user (Figure 4, b), which allows the user to interact with them further, as described in the next section. Each of the aforementioned actions is supported by haptic and visual feedback.

4.2.3 Alteration and Remixing

Figure 5: Users can grab and move or rotate objects they have retrieved and placed. Selected objects are highlighted with an outline (b). To scale them, users grab the object with the controller, and perform a pinch gesture (a).

Having placed any number of models of their choosing, users can now interact with and alter them in greater detail. The possible alterations include moving, rotating and scaling the object (Figure 5). To move or rotate an object, users grab it with their controller and directly manipulate it while holding the trigger button. Scaling also requires ”grabbing” with the controller – additionally, users have to perform a pinch gesture with their other hand while moving their hands apart.
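The scaling gesture boils down to mapping the change in hand separation to a uniform scale factor. A minimal sketch; the clamping bounds and function name are illustrative assumptions, not values from the implementation:

```python
# Hypothetical sketch of the pinch-scaling mapping: hand separation relative
# to the separation at gesture start yields a scale factor.

def pinch_scale(start_dist_m: float, current_dist_m: float,
                base_scale: float = 1.0,
                min_scale: float = 0.05, max_scale: float = 20.0) -> float:
    """Scale the grabbed object proportionally to how far the hands moved apart."""
    if start_dist_m <= 0:
        return base_scale  # degenerate start pose: leave the scale unchanged
    factor = current_dist_m / start_dist_m
    return max(min_scale, min(max_scale, base_scale * factor))
```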

Figure 6: Interface for CSG operations and their reversal (undo). As the subtract operation is not commutative, the selection is color-coded (a). Results of all 3 operation types (b).

Beyond these basic operations, Mix&Match provides Boolean operations (CSG, constructive solid geometry [37, 23]) for the placed models. This allows users to combine (union) models, subtract them from one another (difference) and intersect them. After selecting two models, users are presented with an interface to choose one of the aforementioned operations (Figure 6). Union also serves as a simple grouping feature, known from other applications. These operations are considered destructive, and can therefore be undone (Figure 6, a). To allow users to alter the models they download, Mix&Match also provides access to four default primitives (cube, sphere, pyramid, cylinder) that can be interacted with similarly to other models. They also serve as easy-to-use features for CSG operations where no suitable counterpart can be found in the users’ physical environment or the model repository (Figure 7).
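The three Boolean operations follow ordinary set logic. Mix&Match applies them to triangle meshes; as a toy illustration only, here is the same logic on boolean occupancy grids:

```python
# Toy voxel illustration of the three CSG operations. Mix&Match operates on
# triangle meshes, not voxel grids; the set semantics are the same.
import numpy as np

def csg(a: np.ndarray, b: np.ndarray, op: str) -> np.ndarray:
    """Apply a Boolean CSG operation to two boolean occupancy grids."""
    if op == "union":
        return a | b
    if op == "difference":        # not commutative: a minus b
        return a & ~b
    if op == "intersection":
        return a & b
    raise ValueError(f"unknown operation: {op}")

# Two overlapping 1D "solids" for illustration
a = np.array([1, 1, 1, 0], dtype=bool)
b = np.array([0, 1, 1, 1], dtype=bool)
```

The non-commutativity of the difference operation is exactly why the selection is color-coded in the interface.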

Figure 7: CSG operations can likewise be based on real-world geometry, if available. Selecting the shelf and the cylinder while they intersect (a), allows the user to subtract the shelf from the cylinder for a friction-fit (b).

Our initial approach relied on the reconstructed environment mesh provided by the ML1 HMD. However, the resolution of the available mesh was too coarse to allow for precise geometric interaction between artifacts and the environment. Likewise, treating the device like a 3D-scanner does not yet yield appropriate results. As most modern HMDs provide some degree of depth sensing and world reconstruction, we argue that with sufficient maturity of the devices, a detailed environment mesh can be made available to users. It then serves as additional geometry to reference in the process of customization. In its current state, Mix&Match relies on marker tracking and thereby reproduces environment features at an appropriate fidelity. All ”copy and paste” or CSG operations based on real-world artifacts or geometry are subsequently based on previously scanned or otherwise acquired 3D geometry.

4.3 Preprocessing, Postprocessing and Output

Multiple stages of processing happen without user intervention. After the download of a model, the mesh is pre-processed, prior to being handed to the user to be altered. Depending on the amount of detail a mesh has, a simplification/decimation step is executed. This is particularly relevant for highly detailed models, like 3D-scanned sculptures. An example can be seen in Figure 8, where a quality factor of 30% is applied to the model, reducing the polygon count from approximately 699k triangles down to approximately 210k. For low-detail meshes, this step is skipped, to preserve all features of the design. Afterwards, inconsistencies in terms of bounds and normal alignment are corrected.
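The simplification policy above can be summarized as a small decision rule. The detail threshold below is an assumption; the 30% quality factor matches the example in the text:

```python
# Back-of-the-envelope sketch of the decimation policy: apply the quality
# factor only above a detail threshold. The threshold value is assumed.

def target_triangle_count(triangles: int,
                          quality: float = 0.30,
                          detail_threshold: int = 100_000) -> int:
    """Return the decimation target; low-detail meshes are left untouched
    to preserve all features of the design."""
    if triangles <= detail_threshold:
        return triangles
    return round(triangles * quality)

# The sculpture example from the text: ~699k triangles down to ~210k
```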

Figure 8: A detailed model before (a) and after (b) the applied simplification as exported by Mix&Match. The loss in quality is almost negligible.

After completing all necessary operations, the user may start to finalize the design. This is triggered by the save button on the interface, which initiates the output process. The design is then saved in .stl format to the local storage of the device. As an additional step, Mix&Match can generate .gcode files directly on-device. Using the gsSlicer library (www.github.com/gradientspace/gsSlicer, accessed 12.9.19), a machine-readable description for the fabrication process is generated. The results (exported mesh and the .gcode generated from it), including the necessary support structures for 3D-printing, can be seen in Figure 9.
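Mix&Match delegates slicing to gsSlicer; purely to illustrate what ”machine-readable output” means here, the following toy emitter produces G-code moves for a single-layer square perimeter (feed rate, layer height and coordinates are arbitrary):

```python
# Not the gsSlicer pipeline: a toy G-code emitter for one square perimeter,
# only to show the shape of the generated output. All values are illustrative.

def square_perimeter_gcode(size_mm: float, z_mm: float = 0.2,
                           feed: int = 1200) -> list:
    """Emit G1 linear moves tracing one square at a fixed layer height."""
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = [f"G1 Z{z_mm:.2f} F{feed}"]  # move to layer height first
    lines += [f"G1 X{x:.2f} Y{y:.2f} F{feed}" for x, y in corners]
    return lines
```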

Figure 9: Exported mesh of a user-selected model (a). G-code generated on-device, based on this mesh, as visualized by Pronterface999www.pronterface.com/, Accessed: 10.9.19  (b). Printed result, with supports removed (c).

5 Usage and Application Scenarios

The following paragraphs provide brief walkthroughs for two tasks users may tackle with Mix&Match. They highlight that, despite providing only rudimentary modelling capabilities, Mix&Match allows multiple, equally viable paths to a solution that fulfills the users' functional and aesthetic requirements alike. Each path emphasizes either the users' physical environment or the outsourced designs to a greater degree, and each requires a varying amount of effort to reach a satisfying solution. With these example scenarios, we want to emphasize the appeal of an in-situ "pick-and-choose" procedure in the users' own physical context.

5.1 Walkthrough 1: Finding a Fitting Pot for a Houseplant

A user has recently acquired a small houseplant. He intends to replace the original planter with a more intricate one. The target artifact has to fulfill both aesthetic requirements (i.e., fit the theme of his desktop) and functional requirements (i.e., fit the inner pot's diameter). Mix&Match aims to support the user in fulfilling both requirements, offering variable degrees of required effort.

5.1.1 Path 1 - Adapting a Fitting Design

Figure 10: Adaptation of a fitting planter design to the existing plant. Browsing (1), comparing alternatives (2), customizing / scaling to fit (3).

First, the user searches for "pot" and initially selects a set of alternatives based on their thumbnails. He then cycles through the downloaded planters, removing the ones that do not appeal to him. Having decided on one design, he scales the virtual pot until it fits the diameter and depth of the real plant (Figure 10). If the scaled variant of the planter loses its visual appeal, the user may circle back to an earlier step, either searching for and gathering more alternatives, or choosing a different one from the initially downloaded set.

5.1.2 Path 2 - Repurposing/Misusing/Remixing a Design

Figure 11: In-Situ remixing of a figure to create a planter. Searching for a base design (1), placing the components (figure and cylinder) and applying subtraction (2), visually verifying proportions (3).

Instead of searching for a planter, the user aims to recreate a design where a plant's leaves represent a figure's "hairstyle" (Figure 11). He downloads the figure, places it on his table, and scales it to coarsely enclose the planter. Afterwards, he creates a cylinder primitive from the provided interface and moves it to intersect the figure. Applying the subtract CSG operation yields a hole for the planter to fit in. Alternatively, the subtractive part of the CSG operation may be the pot itself, if it is scanned in sufficient detail.
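The semantics of the subtract, union, and intersect operations can be illustrated compactly on a voxel occupancy grid. This is purely conceptual: mesh-based CSG, as referenced by Mix&Match [23, 37], operates directly on boundary representations, and the helpers `voxel_box` and `csg` are illustrative only:

```python
def voxel_box(lo, hi):
    """All integer voxel cells inside the axis-aligned box [lo, hi)."""
    (x0, y0, z0), (x1, y1, z1) = lo, hi
    return {(x, y, z)
            for x in range(x0, x1)
            for y in range(y0, y1)
            for z in range(z0, z1)}

def csg(a, b, op):
    """Boolean CSG on two voxel occupancy sets."""
    if op == "union":
        return a | b
    if op == "subtract":
        return a - b
    if op == "intersect":
        return a & b
    raise ValueError(f"unknown CSG operation: {op}")
```

Subtracting a cylinder-like volume from the figure's voxels removes exactly the cells where the planter will sit, while the surrounding material remains untouched.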

5.1.3 Path 3 - Replicating an Existing Artifact

Figure 12: Replicating an existing physical artifact (planter). From left to right: Copying the mesh (1), previewing the result in terms of size (2), correcting the scale and proportions (3).

It is also possible to employ a real-world "copy-and-paste"-like procedure. The user may already have a planter in use that is both visually appealing and fulfills its function (Figure 12). Consequently, there is no immediate need to browse for other designs. It suffices to select the existing planter, duplicate it, and proceed with further alterations if the need arises. If the mesh of the planter is not fully separated from the environment mesh, the user may place a cube primitive at its location, covering the object to be selected. The intersect CSG operation would then provide the user with a separate mesh.

5.2 Walkthrough 2: Creating a Shelf-mounted Cloth Hook

For the second walkthrough, we present the task of finding and creating a cloth or coat hook, which is meant to be affixed to a shelf. The user is initially not sure whether she wants to emphasize looks or functionality, and starts browsing the repository without a clearly defined path. As the repository presents a large number of diverse artifacts, the user may feel compelled to repurpose or remix objects.

5.2.1 Path 1 - Adapting a Fitting Design

Figure 13: Choosing and altering an existing design. Browsing through hook designs (1), scaling of a fitting one (2), in-situ verification that the hook would be mounted high enough (3).

The simplest path is seen in Figure 13, where the user initiates a search for "hook". This yields not only cloth hooks, but also hooks for headphones or wires. She then selects one that was originally meant for headphones, but which seems robust enough to hold a coat or a bag. Finally, she takes a real bag to verify that the still-virtual hook is mounted high enough for the bag to hang above table level.

5.2.2 Path 2 - Repurposing/Misusing/Remixing a Design

Figure 14: Converting an animal pendant to serve as a cloth hook. Cylinder primitive and the pendant as base elements (1), subtraction of the shelf from the base cylinder (2), union of the mounting cylinder and the pendant (3).

The user may likewise remix entirely different designs to achieve her goals (Figure 14). She sees a pendant depicting an animal head that is meant to be worn as jewellery. As it is too thin to be directly mounted to the shelf, the user instantiates a primitive cylinder, as provided by Mix&Match. This cylinder serves as the core mounting material for the shelf. Afterwards, she applies the subtract CSG operation to cut out a portion of the shelf from the cylinder. Lastly, she uses the union operation to combine the mounting cylinder and the pendant into a novel coat hook. As before, she can verify the functionality (i.e., the height the objects will hang at) visually, prior to printing.

5.2.3 Path 3 - Replicating and Altering an Existing Artifact

Figure 15: Replicating an existing hook. Selection of an existing hook (1), duplication and scaling of the hook (2) to achieve a fitting result (3).

Lastly, the user may leverage her own physical environment by copying and pasting an existing real-world artifact. An existing hook has already proven its function and provides a reference for a viable mounting height. While it may not be shelf-mounted, the user can convert it in the same fashion as in the previous path, by either scaling it down or adding padding so the mount fits the thinner shelf.

6 Future Work

Apart from conducting detailed usability evaluations of the presented system, expanding the scope of this concept to subtractive manufacturing is a conceivable next step. While processes like CNC milling likewise start with a 3D model, their foundational element is the material stock. One could either omit the concept of a material stock, or treat objects in the users' physical context as stock for subtractive manufacturing. The alteration of artifacts, either from a model repository or from the users' environment, would proceed in the same fashion. Likewise, processes that venture beyond shape remixing and instead involve more complex remixing procedures (e.g., remixing of machines [39]) are intriguing to consider in-situ.

A reasonable extension of Mix&Match would be the introduction of a "snapping" feature to support users with object alignment [32]. Beyond that, any feature that supports users with aspects like scaling, orienting, or coloring [20] artifacts, or with any other type of remix procedure, is a viable extension of the "pick-and-choose" paradigm. Likewise, additional error tolerance could be introduced through constructs like springs [38] or the automated generation of connectors [22]. Coloring in particular is a relevant feature, as it depends on the available fabrication process, yet heavily influences the aesthetic interaction between the physical context and the artifact. Mix&Match aimed to provide an interface to the repository, but abstracted away the specifics of search: filters and different ordering options were removed for clarity. On a more conceptual level, one could consider a more refined search feature, where specific features of artifacts (e.g., tooth counts of cogs) could be searched for. Physna is one such concept for a "geometric search engine", but it is targeted at industrial users [35]. Mix&Match did not incorporate user-centered ways to 3D-scan objects with an HMD. For an ideal scan, users would have to be guided to move around the object (i.e., be the sensor) or to rotate the object themselves (i.e., be the turntable).
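At its core, such a snapping feature projects a manipulated object's position onto nearby environment geometry once it comes close enough. A minimal sketch of this idea follows, assuming environment planes are available as (unit normal, offset) pairs; `snap_to_planes` is a hypothetical helper, not part of Mix&Match:

```python
def snap_to_planes(pos, planes, threshold=0.02):
    """Snap a 3D position onto the closest environment plane within `threshold`.

    pos:    (x, y, z) tuple
    planes: list of (normal, offset) pairs describing planes n . x = offset,
            with unit-length normals
    Returns the snapped position, or `pos` unchanged if no plane is close enough.
    """
    best_pos, best_dist = pos, threshold
    for normal, offset in planes:
        # Signed distance from pos to the plane (unit normal assumed).
        dist = sum(p * n for p, n in zip(pos, normal)) - offset
        if abs(dist) <= best_dist:
            best_dist = abs(dist)
            # Project the point onto the plane along its normal.
            best_pos = tuple(p - dist * n for p, n in zip(pos, normal))
    return best_pos
```

The same projection generalizes to snapping against edges or feature lines, as proposed by SnapToReality [32].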

7 Conclusion

We presented Mix&Match, a tool that allows users to remix artifacts retrieved from model repositories and from the physical context in-situ. It supports the proposed notion of an in-situ "pick-and-choose" paradigm. Mix&Match bridges the disconnect between the users' physical context and the artifacts found both in digital model repositories and in the users' real environment.

Model repositories are an incredibly valuable resource for both novices and experienced makers. By delegating the design effort for various artifacts, the maker may focus on customizing and personalizing them: refining, remixing, and tailoring them. As such, makers will rarely face problems that have never been solved before. However, the intricate specifics of functional and aesthetic fit are often unique enough to warrant either the adaptation of existing artifacts or the design of entirely new ones. Both paths require an investment of time from novices and experienced users alike: learning tools, measuring the environment, and adapting or creating designs. Mix&Match is a mixed-reality-based tool that allows users to alter and remix artifacts retrieved from model repositories in-situ. It draws not only on the remote, digital repository as a source for artifacts and features, but also on the users' physical context. This bridges the disconnect between the users' unique physical context and the versatile offerings of model repositories, making it easier to omit the process of modelling while retaining predictable and appropriate results.

8 Acknowledgements

We thank Ali Askari and Jan Rixen for their feedback and thoughtful discussions.


  • [2] Celena Alcock, Nathaniel Hudson, and Parmit K. Chilana. 2016. Barriers to Using, Customizing, and Printing 3D Designs on Thingiverse. In Proceedings of the 19th International Conference on Supporting Group Work (GROUP ’16). ACM, New York, NY, USA, 195–199. DOI:http://dx.doi.org/10.1145/2957276.2957301 
  • [3] Daniel Ashbrook, Shitao Stan Guo, and Alan Lambie. 2016. Towards Augmented Fabrication: Combining Fabricated and Existing Objects. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16). ACM, New York, NY, USA, 1510–1518. DOI:http://dx.doi.org/10.1145/2851581.2892509 
  • [4] Patrick Baudisch and Stefanie Mueller. 2017. Personal Fabrication. Foundations and Trends® in Human–Computer Interaction 10, 3–4 (May 2017), 165–293. DOI:http://dx.doi.org/10.1561/1100000055 
  • [5] Jan Borchers. 2013. An Internet of Custom-Made Things: From 3D Printing and Personal Fabrication to Personal Design of Interactive Devices. In Web Engineering (Lecture Notes in Computer Science), Florian Daniel, Peter Dolog, and Qing Li (Eds.). Springer Berlin Heidelberg, Berlin Heidelberg, 6–6.
  • [6] Jan Borchers and René Bohne. 2013. A Personal Design Manifesto. Fab @ CHI Workshop (2013), 4.
  • [7] Erin Buehler, Stacy Branham, Abdullah Ali, Jeremy J. Chang, Megan Kelly Hofmann, Amy Hurst, and Shaun K. Kane. 2015. Sharing Is Caring: Assistive Technology Designs on Thingiverse. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI ’15. ACM Press, Seoul, Republic of Korea, 525–534. DOI:http://dx.doi.org/10.1145/2702123.2702525 
  • [8] Xiang ’Anthony’ Chen, Stelian Coros, and Scott E. Hudson. 2018. Medley: A Library of Embeddables to Explore Rich Material Properties for 3D Printed Objects. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, 162:1–162:12. DOI:http://dx.doi.org/10.1145/3173574.3173736 
  • [9] Xiang ’Anthony’ Chen, Stelian Coros, Jennifer Mankoff, and Scott E. Hudson. 2015. Encore: 3D Printed Augmentation of Everyday Objects with Printed-Over, Affixed and Interlocked Attachments. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology - UIST ’15. ACM Press, Daegu, Kyungpook, Republic of Korea, 73–82. DOI:http://dx.doi.org/10.1145/2807442.2807498 
  • [10] Xiang ’Anthony’ Chen, Jeeeun Kim, Jennifer Mankoff, Tovi Grossman, Stelian Coros, and Scott E. Hudson. 2016. Reprise: A Design Tool for Specifying, Generating, and Customizing 3D Printable Adaptations on Everyday Objects. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST ’16). ACM, New York, NY, USA, 29–39. DOI:http://dx.doi.org/10.1145/2984511.2984512 
  • [11] Sean Follmer, David Carr, Emily Lovell, and Hiroshi Ishii. 2010. CopyCAD: Remixing Physical Objects with Copy and Paste from the Real World. In Adjunct Proceedings of the 23Nd Annual ACM Symposium on User Interface Software and Technology (UIST ’10). ACM, New York, NY, USA, 381–382. DOI:http://dx.doi.org/10.1145/1866218.1866230 
  • [12] Sean Follmer and Hiroshi Ishii. 2012. KidCAD: Digitally Remixing Toys through Tangible Tools. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 2401–2410. DOI:http://dx.doi.org/10.1145/2207676.2208403 
  • [13] Madeline Gannon, Tovi Grossman, and George Fitzmaurice. 2016. ExoSkin: On-Body Fabrication. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 5996–6007. DOI:http://dx.doi.org/10.1145/2858036.2858576 
  • [14] Ammar Hattab and Gabriel Taubin. 2019. Rough Carving of 3D Models with Spatial Augmented Reality. In Proceedings of the ACM Symposium on Computational Fabrication (SCF ’19). ACM, New York, NY, USA, 4:1–4:10. DOI:http://dx.doi.org/10.1145/3328939.3328998 
  • [15] Megan Hofmann, Gabriella Hann, Scott E. Hudson, and Jennifer Mankoff. 2018. Greater Than the Sum of Its PARTs: Expressing and Reusing Design Intent in 3D Models. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, 301:1–301:12. DOI:http://dx.doi.org/10.1145/3173574.3173875 
  • [16] Nathaniel Hudson, Celena Alcock, and Parmit K. Chilana. 2016. Understanding Newcomers to 3D Printing: Motivations, Workflows, and Barriers of Casual Makers. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 384–396. DOI:http://dx.doi.org/10.1145/2858036.2858266 
  • [17] Ke Huo, Vinayak, and Karthik Ramani. 2017. Window-Shaping: 3D Design Ideation by Creating on, Borrowing from, and Looking at the Physical World. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’17). ACM, New York, NY, USA, 37–45. DOI:http://dx.doi.org/10.1145/3024969.3024995 
  • [18] Inter IKEA Systems B.V., Access Israel, and Milbat NGO. 2019. Ikea This Ables. https://thisables.com/en/. (2019). (Accessed 01.09.2019).
  • [19] Yunwoo Jeong, Han-Jong Kim, and Tek-Jin Nam. 2018. Mechanism Perfboard: An Augmented Reality Environment for Linkage Mechanism Design and Fabrication. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, 411. DOI:http://dx.doi.org/10.1145/3173574.3173985 
  • [20] Yuhua Jin, Isabel Qamar, Michael Wessely, Aradhana Adhikari, Katarina Bulovic, Parinya Punpongsanon, and Stefanie Mueller. 2019. Photo-Chromeleon: Re-Programmable Multi-Color Textures Using Photochromic Dyes. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST ’19). ACM, New York, NY, USA, 701–712. DOI:http://dx.doi.org/10.1145/3332165.3347905 
  • [21] Jeeeun Kim, Anhong Guo, Tom Yeh, Scott E. Hudson, and Jennifer Mankoff. 2017. Understanding Uncertainty in Measurement and Accommodating Its Impact in 3D Modeling and Printing. In Proceedings of the 2017 Conference on Designing Interactive Systems (DIS ’17). ACM, New York, NY, USA, 1067–1078. DOI:http://dx.doi.org/10.1145/3064663.3064690 
  • [22] Yuki Koyama, Shinjiro Sueda, Emma Steinhardt, Takeo Igarashi, Ariel Shamir, and Wojciech Matusik. 2015. AutoConnect: Computational Design of 3D-Printable Connectors. ACM Trans. Graph. 34, 6 (Oct. 2015), 231:1–231:11. DOI:http://dx.doi.org/10.1145/2816795.2818060 
  • [23] David H. Laidlaw, W. Benjamin Trumbore, and John F. Hughes. 1986. Constructive Solid Geometry for Polyhedral Objects. In ACM SIGGRAPH Computer Graphics, Vol. 20. ACM, New York, NY, 161–170.
  • [24] Manfred Lau, Jun Mitani, and Takeo Igarashi. 2012. Sketching and Prototyping Personalised Objects: From Teapot Lids to Furniture to Jewellery. National Conference on Rapid Design, Prototyping & Manufacture (2012), 8.
  • [25] Manfred Lau, Greg Saul, Jun Mitani, and Takeo Igarashi. 2010. Modeling-in-Context: User Design of Complementary Objects with a Single Photo. In Proceedings of the Seventh Sketch-Based Interfaces and Modeling Symposium (SBIM ’10). Eurographics Association, Aire-la-Ville, Switzerland, Switzerland, 17–24.
  • [26] Ghang Lee, Charles M. Eastman, Tarang Taunk, and Chun-Heng Ho. 2010. Usability Principles and Best Practices for the User Interface Design of Complex 3D Architectural Design and Engineering Tools. International Journal of Human-Computer Studies 68, 1 (Jan. 2010), 90–104. DOI:http://dx.doi.org/10.1016/j.ijhcs.2009.10.001 
  • [27] David Lindlbauer and Andy D. Wilson. 2018. Remixed Reality: Manipulating Space and Time in Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18. ACM Press, Montreal QC, Canada, 1–13. DOI:http://dx.doi.org/10.1145/3173574.3173703 
  • [28] Chandan Mahapatra, Jonas Kjeldmand Jensen, Michael McQuaid, and Daniel Ashbrook. 2019. Barriers to End-User Designers of Augmented Fabrication. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, 383:1–383:15. DOI:http://dx.doi.org/10.1145/3290605.3300613 
  • [29] Samantha McDonald, Niara Comrie, Erin Buehler, Nicholas Carter, Braxton Dubin, Karen Gordes, Sandy McCombe-Waller, and Amy Hurst. 2016. Uncovering Challenges and Opportunities for 3D Printing Assistive Technology with Physical Therapists. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’16). ACM, New York, NY, USA, 131–139. DOI:http://dx.doi.org/10.1145/2982142.2982162 
  • [30] A. Millette and M. J. McGuffin. 2016. DualCAD: Integrating Augmented Reality with a Desktop GUI and Smartphone Interaction. In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct) (ISMAR ’16). IEEE Computer Society, Merida, Mexico, 21–26. DOI:http://dx.doi.org/10.1109/ISMAR-Adjunct.2016.0030 
  • [31] Florian Müller, Maximilian Barnikol, Markus Funk, Martin Schmitz, and Max Mühlhäuser. 2018. CaMea: Camera-Supported Workpiece Measurement for CNC Milling Machines. In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference on - PETRA ’18. ACM Press, Corfu, Greece, 345–350. DOI:http://dx.doi.org/10.1145/3197768.3201569 
  • [32] Benjamin Nuernberger, Eyal Ofek, Hrvoje Benko, and Andrew D. Wilson. 2016. SnapToReality: Aligning Augmented Reality to the Real World. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 1233–1244. DOI:http://dx.doi.org/10.1145/2858036.2858250 
  • [33] Lora Oehlberg, Wesley Willett, and Wendy E. Mackay. 2015. Patterns of Physical Design Remixing in Online Maker Communities. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 639–648. DOI:http://dx.doi.org/10.1145/2702123.2702175 
  • [34] Huaishu Peng, Jimmy Briggs, Cheng-Yao Wang, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, and François Guimbretière. 2018. RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, 579:1–579:12. DOI:http://dx.doi.org/10.1145/3173574.3174153 
  • [35] Physna Inc. 2019. Shape Search | Physna. https://www.physna.com. (2019). (Accessed 16.09.2019).
  • [36] Raf Ramakers, Fraser Anderson, Tovi Grossman, and George Fitzmaurice. 2016. RetroFab: A Design Tool for Retrofitting Physical Interfaces Using Actuators, Sensors and 3D Printing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 409–419. DOI:http://dx.doi.org/10.1145/2858036.2858485 
  • [37] Aristides AG Requicha and Herbert B. Voelcker. 1977. Constructive Solid Geometry. Technical Memorandum 25. University of Rochester, Rochester, N.Y. 46 pages.
  • [38] Thijs Roumen, Jotaro Shigeyama, Julius Cosmo Romeo Rudolph, Felix Grzelka, and Patrick Baudisch. 2019. SpringFit: Joints and Mounts That Fabricate on Any Laser Cutter. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology. ACM, New Orleans, LA, USA, 12. DOI:http://dx.doi.org/10.1145/3332165.3347930 
  • [39] Thijs Jan Roumen, Willi Müller, and Patrick Baudisch. 2018. Grafter: Remixing 3D-Printed Machines. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, 63:1–63:12. DOI:http://dx.doi.org/10.1145/3173574.3173637 
  • [40] Greg Saul, Manfred Lau, Jun Mitani, and Takeo Igarashi. 2011. SketchChair: An All-in-One Chair Design System for End Users. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’11). ACM, New York, NY, USA, 73–80. DOI:http://dx.doi.org/10.1145/1935701.1935717 
  • [41] Valkyrie Savage, Sean Follmer, Jingyi Li, and Björn Hartmann. 2015. Makers’ Marks: Physical Markup for Designing and Fabricating Functional Objects. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, 103–108. DOI:http://dx.doi.org/10.1145/2807442.2807508 
  • [42] Eldon Schoop, Michelle Nguyen, Daniel Lim, Valkyrie Savage, Sean Follmer, and Björn Hartmann. 2016. Drill Sergeant: Supporting Physical Construction Projects Through an Ecosystem of Augmented Tools. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16). ACM, New York, NY, USA, 1607–1614. DOI:http://dx.doi.org/10.1145/2851581.2892429 
  • [43] Rita Shewbridge, Amy Hurst, and Shaun K. Kane. 2014. Everyday Making: Identifying Future Uses for 3D Printing in the Home. In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS ’14). ACM, New York, NY, USA, 815–824. DOI:http://dx.doi.org/10.1145/2598510.2598544 
  • [44] Alexander Teibrich, Stefanie Mueller, François Guimbretière, Robert Kovacs, Stefan Neubert, and Patrick Baudisch. 2015. Patching Physical Objects. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15). ACM, New York, NY, USA, 83–91. DOI:http://dx.doi.org/10.1145/2807442.2807467 
  • [45] Christian Weichel, Manfred Lau, David Kim, Nicolas Villar, and Hans W. Gellersen. 2014. MixFab: A Mixed-Reality Environment for Personal Fabrication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). ACM, New York, NY, USA, 3855–3864. DOI:http://dx.doi.org/10.1145/2556288.2557090 
  • [46] Suguru Yamada, Hironao Morishige, Hiroki Nozaki, Masaki Ogawa, Takuro Yonezawa, and Hideyuki Tokuda. 2016. ReFabricator: Integrating Everyday Objects for Digital Fabrication. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16). ACM, New York, NY, USA, 3804–3807. DOI:http://dx.doi.org/10.1145/2851581.2890237 
  • [47] Junichi Yamaoka and Yasuaki Kakehi. 2016. MiragePrinter: Interactive Fabrication on a 3D Printer with a Mid-Air Display. In ACM SIGGRAPH 2016 Studio (SIGGRAPH ’16). ACM, New York, NY, USA, 6:1–6:2. DOI:http://dx.doi.org/10.1145/2929484.2929489 
  • [48] Ya-Ting Yue, Xiaolong Zhang, Yongliang Yang, Gang Ren, Yi-King Choi, and Wenping Wang. 2017. WireDraw: 3D Wire Sculpturing Guided with Mixed Reality. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 3693–3704. DOI:http://dx.doi.org/10.1145/3025453.3025792 
  • [49] Amanda K. Yung, Zhiyuan Li, and Daniel Ashbrook. 2018. Printy3D: In-Situ Tangible Three-Dimensional Design for Augmented Fabrication. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC ’18). ACM, New York, NY, USA, 181–194. DOI:http://dx.doi.org/10.1145/3202185.3202751 
  • [50] Kening Zhu, Alexandru Dancu, and Shengdong (Shen) Zhao. 2016. FusePrint: A DIY 2.5D Printing Technique Embracing Everyday Artifacts. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS ’16). ACM, New York, NY, USA, 146–157. DOI:http://dx.doi.org/10.1145/2901790.2901792