
Learning to Accelerate Decomposition for Multi-Directional 3D Printing

Multi-directional 3D printing has the capability of decreasing or eliminating the need for support structures. Recent work proposed a beam-guided search algorithm to find an optimized sequence of plane-clipping, which gives a volume decomposition of a given 3D model. Different printing directions are employed in different regions so that a model can be fabricated with tremendously less support (or even no support in many cases). To obtain an optimized decomposition, a large beam width needs to be used in the search algorithm, which leads to a very time-consuming computation. In this paper, we propose a learning framework that can accelerate the beam-guided search by using a much smaller beam width to obtain results with similar quality. Specifically, we use the results of beam-guided search with a large beam width to train a scoring function for candidate clipping planes based on six newly proposed feature metrics. With the help of these feature metrics, both the current and the sequence-dependent information are captured by the neural network to score the candidates of clipping. As a result, we achieve around a 3× computational speed-up. We test and demonstrate our accelerated decomposition on a large dataset of models for 3D printing.


I Introduction

3D printing makes the rapid fabrication of complex objects possible. Its capability has been demonstrated in many scenarios – from the micro-scale fabrication of bio-structures to the in-situ construction of architecture. However, Fused Deposition Modeling (FDM) using planar layers with a fixed 3D printing direction suffers from the need for support structures (called supports for short in the following), which are used to prevent the collapse of material in overhang regions due to gravity. Supports bring many problems, including difficulty of removal, surface damage, and material waste, as summarized in [1].

Fig. 1: A 5-DOF multi-directional 3D printing system that can deposit material along different directions: (left) the printer head can move along the X, Y, and Z axes; (right) the working table can rotate around two axes (see the arrows for the illustration of the A-axis and C-axis).

To avoid using a large number of supports, our previous work [2] proposes an algorithm to decompose 3D models into a sequence of sub-components, which can be printed one by one along different directions. Candidate clipping planes are used as a set of samples to define the search space for determining an optimized sequence of decomposition. Different criteria are defined to ensure feasibility and manufacturability (e.g., collision-free fabrication, no floating region, etc.). The most important part of the work presented in [2] is a beam-guided search algorithm with progressive relaxation. The benefit of the beam search algorithm is that it can avoid being stuck in a local minimum – a common problem of greedy search. The beam width is chosen empirically to balance the trade-off between computational efficiency and search effectiveness. Even with a parallel implementation running on a computer with an Intel(R) Core(TM) i7 CPU (4 cores), the method still needs an average computing time of 6 minutes. On the other hand, the chosen beam width is a compromise between performance and efficiency. Using a larger beam width would give better results since the search space expands linearly as the beam width increases – see Fig. 2 for an example.

One question is, can we learn from the results generated by a large beam width so that even a search using a small beam width can produce comparable results? Our answer is yes. To achieve this goal, we propose to learn a scoring function for candidate clipping planes by using six feature metrics. With the help of these feature metrics, both the current and the sequence-dependent information are captured by the neural network to score candidates for clipping. The learning is conducted on the results of beam-guided search with a large beam width running on a large dataset of models for 3D printing, Thingi10k, recently published by [3]. As a result, we can achieve around 3× acceleration while keeping similar quality in the results of volume decomposition.

Fig. 2: An example of using different beam widths in the beam search, given on the frog model (ID: 81368 from the Thingi10k dataset [3]). A large number of supports is needed by conventional 3D printing (left). Multi-directional 3D printing can significantly reduce the need for supports, and the regions needing additional supports shrink further (from middle to right) as the beam width increases. Regions to be printed along different directions are displayed in different colors to represent the results of volume decomposition, and supporting structures are represented by red struts.

In summary, we make the following contributions:

  • A learning-to-accelerate framework that ranks a set of candidate clipping planes to best fit the optimal results sampled from a large dataset, which significantly accelerates the beam search algorithm without sacrificing performance.

  • A method to convert the trajectories generated during the beam-guided search into listwise ranking orders at distinct stages for training.

The proposed method is computationally much more efficient than our previous work [2] while keeping the quality of the search results at a similar level. The implementation of the learning-based acceleration presented in this paper, together with the solid decomposition approach presented in [2], is available on GitHub: https://github.com/chenming-wu/pymdp/.

II Related Work

The problems caused by supports have motivated a lot of research efforts to reduce the need for them. There are three significant threads of research towards this goal: 1) proposing better support patterns so that the amount of support is smaller than that generated by standard support generators (ref. [4, 5]); 2) segmenting digital models into several pieces, each of which can be built in a support-free or support-effective manner; 3) using high degree-of-freedom (DOF) robotic systems to automatically change the build direction so that overhanging regions can be fabricated safely without supports. Here we mainly review the prior work in the last two threads, which are the most relevant to ours.

II-A Segmentation-based Methods

A digital model can first be segmented into different components for fabrication and then assembled back to form the original model. Several methods have explored using segmentation to reduce the need for supports. Hu et al. [6] invented an algorithm to automatically decompose a 3D model into parts with approximately pyramidal shapes that can be printed without support. Herholz et al. [7] proposed another algorithm to solve a similar problem by allowing slight deformations during decomposition, where each component is in the shape of a height-field. RevoMaker [8] fabricates digital models by 3D printing on top of an existing cubic component, which can rotate itself to fabricate height-field shapes. Wei et al. [9] partitioned a shell model into a small number of support-free parts using a skeleton-based algorithm. Muntoni et al. [10] also tackled the problem of decomposing a 3D model into a small set of non-overlapping height-field blocks, which can be fabricated by either molding or AM. These methods are mostly algorithmic systems that can easily be incorporated into off-the-shelf manufacturing devices. However, the capability of the manufacturing hardware has not been considered in the design of these algorithms.

II-B Multi-directional and Multi-axis Fabrication

The recent development of robotic systems enables researchers to think about more flexible AM routines [11]. Adding more DOFs to the process of 3D printing seems promising and has gained much attention. Keating and Oxman [12] proposed to use a 6-DOF manufacturing platform driven by a robotic arm to fabricate a model in either an additive or a subtractive manner. Pan et al. [13] rethought the process of Computer Numerical Control (CNC) machining and proposed a 5-axis motion system to accumulate materials. The On-the-Fly Print system proposed by Peng et al. [14] is a fast, interactive printing system modified from an off-the-shelf delta printing device with two additional DOFs. Based on the same system, Wu et al. [15] proposed an algorithm that can plan collision-free printing orders of edges for wireframe models.

Industrial robotic arms have been widely used in AM. For example, Huang et al. [16] built a robotic system for 3D printing wireframe models on a 6-DOF KUKA robotic arm. Dai et al. [17] developed a voxel-growing algorithm for support-free printing of digital models using a 6-DOF UR robotic arm. Shembekar et al. [18] proposed a method to fabricate conformal surfaces by collision-free 3D printing trajectories on a 6-DOF robotic arm. To reduce the expense of hardware, a 3+2-axis additive manufacturing approach was also proposed recently [19]. It adopts a flooding algorithm to plan collision-free and support-free paths. However, this approach can only be applied to tree-like 3D models with simple topology. Volume decomposition-based algorithms have been proposed in our prior work (ref. [20, 2]).

Fig. 3: A sequence of multi-directional 3D printing can be determined by computing a sequence of planar clipping (left), where the inverse order of clipping gives the sequence of multi-directional 3D printing (right). Details can be found in [2].

II-C Learning to Accelerate Search

Efficiently searching for a feasible solution is a common problem in computer science, where most problems have a huge search space and are thus challenging to tackle. Recent research advances the state-of-the-art by incorporating machine learning techniques. For example, optimizing a program using different predefined operators is a combinatorial problem that is difficult to solve. Chen et al. [21] learned domain-specific models with statistical costs to guide the search of tensor implementations over many possible choices for efficient deep-learning deployments. Recently, Adams et al. [22] improved a beam search algorithm for Halide program optimization. They proposed to learn a cost model that predicts running time from derived input features. We aim at searching for optimal sequences of operations, i.e., applying different cuts at different stages. Learning a scoring function is similar to the problem solved in [22].

Directly establishing a mapping from features to a score by supervised learning is difficult. Instead, we adopt the learning-to-rank (LTR) technique [23] to solve our problem. LTR is a traditional topic in information retrieval, which aims at ranking a set of documents given a query (e.g., a keyword) issued by a user. LTR learns a scoring model from query-document features, and the predicted scores are then used to order (rank) the documents. There are three types of LTR approaches: pointwise, pairwise, and listwise. The pointwise LTR approach learns a direct mapping from a single feature vector to an exact score [23]. The pairwise LTR approach learns pairwise preferences between two feature vectors and converts the pairwise relationships into a ranking [24, 25]. The listwise LTR approach treats a permutation of feature vectors as a basic unit and learns the best permutation [26, 27, 28]. Our work is motivated by the idea of ranking query-document features using listwise LTR. A scoring function with our model-plane features as input is learned to accelerate the beam search procedure.

III Preliminaries and Notation

This section briefly introduces the idea of the beam-guided algorithm previously proposed in [2].

III-A Problem Formulation

Whether fabricating a model $M$ layer by layer needs additional supports can be determined by whether risky faces exist on the surface of $M$. A commonly used definition of a risky face is

$\mathbf{n}_f \cdot \mathbf{d}_p < -\sin(\alpha_{\max})$    (1)

where $\mathbf{d}_p$ (the normal of the base plane $\mathcal{P}$) gives the printing direction, $\mathbf{n}_f$ is the unit normal of a face $f$, and $\alpha_{\max}$ is the maximal self-supporting angle (ref. [1]). A face $f$ is risky if it satisfies Eq. (1) and is otherwise called safe.
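
To make the test of Eq. (1) concrete, the following Python sketch (our own illustration, assuming a triangle mesh given as vertex and face arrays and the dot-product form of Eq. (1) with an illustrative 45° default angle) marks the risky faces of a part for a given printing direction:

```python
import numpy as np

def face_normals_and_areas(V, F):
    """Per-face unit normals and areas of a triangle mesh (V: #Vx3, F: #Fx3)."""
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    cross = np.cross(e1, e2)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / (2.0 * areas[:, None] + 1e-12)
    return normals, areas

def risky_faces(V, F, d_p, alpha_max_deg=45.0):
    """Boolean mask of faces that are risky w.r.t. printing direction d_p (Eq. 1)."""
    n_f, _ = face_normals_and_areas(V, F)
    d_p = np.asarray(d_p, dtype=float)
    d_p = d_p / np.linalg.norm(d_p)
    # A face is risky when it faces 'downward' beyond the self-supporting angle.
    return n_f @ d_p < -np.sin(np.radians(alpha_max_deg))
```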

In [2], a multi-directional 3D printer is employed to fabricate a sequence of parts $\{M_1, M_2, \dots, M_n\}$ decomposed from a model $M$, where:

  • the components decomposed from $M$ satisfy

    $\bigcup_{i=1}^{n} M_i = M$,    (2)

    with $\cup$ denoting the union operator;

  • $\{M_1, \dots, M_n\}$ is an ordered sequence that can be collision-freely fabricated with

    $M_j \cap \mathcal{P}_i^{+} = \emptyset, \;\; \forall j < i$,    (3)

    $\mathcal{P}_i$ being the base plane of $M_i$ and $\mathcal{P}_i^{+}$ the open half-space above it, where $\cap$ denotes the intersection operator;

  • the first base plane $\mathcal{P}_1$ coincides with the working platform of the 3D printer;

  • all faces on a sub-region $M_i$ are safe according to Eq. (1) with the printing direction determined by $\mathcal{P}_i$.

To achieve a decomposition satisfying all the above requirements, we use planes to cut $M$. If every clipped sub-region satisfies the manufacturability criteria, we can use the inverse order of clipping as the sequence of printing for the multi-directional 3D printer (see Fig. 3 for an illustration). The printing direction of a sub-part is determined by the normal of its clipping plane.

We formulate the problem of reducing the area of risky faces on $M$ as minimizing

$F = \sum_{f \in \mathcal{R}} A(f)$    (4)

where $\mathcal{R}$ is the set of risky faces remaining after decomposition and $A(f)$ is the area of a face $f$. As we are handling models represented by triangle meshes, the computation of $A(f)$ is straightforward. The metric $F$ is employed to measure the quality of different sequences of volume decomposition. While minimizing the objective function defined in Eq. (4), we need to ensure the manufacturability of each component.
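
As another small sketch (again our own illustration with hypothetical helper names), the objective of Eq. (4) can be evaluated by summing the areas of the faces that remain risky in each sub-part with respect to its own printing direction:

```python
import numpy as np

def risky_area(normals, areas, d_p, alpha_max_deg=45.0):
    """Total area of the faces of one part that are risky (Eq. (1)) when the
    part is printed along direction d_p; normals/areas are per-face arrays."""
    d_p = np.asarray(d_p, dtype=float)
    d_p = d_p / np.linalg.norm(d_p)
    risky = np.asarray(normals) @ d_p < -np.sin(np.radians(alpha_max_deg))
    return float(np.asarray(areas)[risky].sum())

def decomposition_objective(parts):
    """Eq. (4): sum of residual risky areas over all decomposed sub-parts.

    `parts` is a list of (normals, areas, d_p) tuples, one per sub-part with
    its printing direction -- a simplified stand-in for the full pipeline."""
    return sum(risky_area(n, a, d) for (n, a, d) in parts)
```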

Fig. 4: An example of the beam-guided searching trajectories generated on one model. The trajectory shown in dark color is the best trajectory, giving the lowest value of the objective in Eq. (4); the trajectories shown in light colors are the other trajectories kept in the beam, whose objective values are not as good as that of the best one.
Fig. 5: An example from the Thingi10k dataset (ID: 109926). Our learning-based method outperforms the beam-guided search algorithm with a small beam width. Here, the percentage indicates the reduction of risky area when using multi-directional 3D printing – the higher the better. From left to right: the results of the conventional beam search [2] with increasing beam widths. It can be observed that better results with less risky area are obtained when using a larger beam width. With the help of the scoring function learned in this paper, we can use a very small beam width to obtain the same result as that obtained with a large beam width in the conventional beam search – see the result shown in the right-most column. Support structures generated for multi-directional 3D printing by the method presented in [2] are given in the bottom-right corner of each column, where less risky area results in a smaller amount of support.

III-B Beam-guided Search

The beam-guided search optimizes Eq. (4). Considering the manufacturing constraints as well as the search efficiency, we define four criteria for the beam-guided search.

Criterion I: All faces on the sub-volume above the clipping plane should be self-supported.

Criterion II: The remaining model obtained after every clipping should be connected to the printing platform.

Criterion III: The physical platform of the printer is always below the clipping plane.

Criterion IV: It is always preferred to have a large solid obtained above a clipping plane so that a large volume of solid can be fabricated along one fixed direction.

A beam-guided search algorithm [29] is employed to guide the search. It builds a search tree that explores the search space by expanding the most promising nodes (their number being the beam width) instead of greedily expanding only the best one (see Fig. 4). It integrates the restrictive Criterion I (and its weak form) as an objective function to ensure that the beam search is broad enough to include both the local optimum and configurations that may lead to a global optimum. The other three criteria must also be satisfied during the beam-guided search. Define the residual risky area of a model $M$ according to a clipping plane $\mathcal{P}$ as

$A_r(M, \mathcal{P}) = \sum_{f \in \mathcal{R}(M_{\mathcal{P}}^{+})} A(f)$    (5)

where $\mathcal{P}$ separates $M$ into the part $M_{\mathcal{P}}^{+}$ above and the part $M_{\mathcal{P}}^{-}$ below $\mathcal{P}$, and $\mathcal{R}(M_{\mathcal{P}}^{+})$ is the set of faces on $M_{\mathcal{P}}^{+}$ that remain risky with respect to the printing direction of $\mathcal{P}$. The proposed beam-guided search algorithm starts from an empty beam with the most restrictive requirement $A_r(M, \mathcal{P}) \le \varepsilon$, where $\varepsilon$ is a threshold progressively increased from a tiny number (e.g., 0.0001). Candidate clipping planes that satisfy this requirement and remove larger areas of risky faces have a higher priority to fill the beam. If there are still empty slots in the beam after the first 'round' of filling, we relax the requirement by increasing $\varepsilon$ until all slots are filled. The detailed algorithm can be found in [2].
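
The following sketch abstracts this beam-filling logic with progressive relaxation; the candidate data layout, the relaxation schedule (multiplying ε by 10), and the bounds are illustrative assumptions rather than the exact implementation of [2]:

```python
def fill_beam(candidates, beam_width, eps0=1e-4, relax_factor=10.0, eps_max=1.0):
    """One stage of beam-guided search with progressive relaxation (sketch).

    Each candidate is a tuple (residual, reduced, state):
      residual -- normalized residual risky area of Eq. (5) for this cut,
      reduced  -- risky area removed by this cut (larger is better),
      state    -- the resulting configuration after the cut.
    Candidates whose residual is below the current threshold eps are admitted,
    larger `reduced` first; eps is relaxed until the beam is full.
    """
    beam, eps = [], eps0
    while len(beam) < beam_width and eps <= eps_max:
        feasible = sorted((c for c in candidates
                           if c[0] <= eps and c[2] not in beam),
                          key=lambda c: -c[1])
        for _, _, state in feasible:
            beam.append(state)
            if len(beam) == beam_width:
                break
        eps *= relax_factor   # progressive relaxation of Criterion I
    return beam
```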

IV Learning to Accelerate Decomposition

IV-A Methodology

The beam-guided algorithm [2] constrains the search space by imposing the manufacturing constraints (Criteria II & III) and the volume heuristic (Criterion IV) while progressively relaxing the selection of 'best' candidates (Criterion I). A larger beam width keeps more less-optimal candidates and therefore has a better chance of reaching a globally optimal solution. We conducted an experiment on the Thingi10k dataset to compare different choices of beam width, and it turns out that the average performance with a large beam width is noticeably better than that with a small beam width, at the cost of substantially more computing time. One example is given on the right of Fig. 5. These experimental results encourage us to explore the feasibility of learning from the search experience produced with a large beam width, and utilizing the learned policy to guide a more effective search that keeps only a much smaller beam width during the search procedure.

Specifically, given the configurations kept as nodes in the beam, we can obtain thousands of candidates for the next cut. The original method presented in [2] is employed to select the 'best' and the relaxed 'best' candidates. Here we do not keep all these candidates in the beam. Instead, only a small number of candidates are selected, where the selection is conducted with the help of a scoring function that takes six enriched feature metrics of each candidate clipping as input. An illustration of this selection procedure can be found in Fig. 6. The scoring function is constructed as a neural network, which is trained using samples from beam-guided searches [2] conducted with a large beam width on Thingi10k – a large dataset of 3D printing models.

In the rest of this section, we will first provide the enriched feature metrics. Then, we present the details of the accelerated search algorithm and the method to generate training samples. Lastly, the learning model of the scoring function is introduced.

IV-B Featurization of Candidate Clipping

We featurize each candidate cut into a vector consisting of six metrics. The metrics are carefully designed according to the criteria given in Sec. III, and they consider both the current and the sequence-dependent information for the configuration of a planar clipping. Note that it is crucial to have metrics covering the sequence-dependent information (i.e., the two accumulated metrics below). Otherwise, there is little chance of learning the strategy of beam-guided search with a large beam width, which avoids being stuck at a local optimum.

Ratio of reduced risky area: The reduced risky area is essentially the decrease of the value of Eq. (4). This metric is defined as the ratio of the risky area removed by a candidate clipping plane over the total risky area of the input model.

Accumulated ratio of reduced risky area: Different stages have different values of the above ratio, which only reflects a local configuration. We define the sum of the ratios over all previous stages as the accumulated ratio of reduced risky area, which describes the situation of the whole sequence of planning.

Processed volume: The volume of the region removed by a clipping plane directly determines the efficiency of a cutting plan – a larger volume removed per cut leads to fewer clippings. We normalize this volume by the volume of the given model $M$.

Distance to platform: To reflect the requirement that the working platform always stays below a clipping plane, we define a metric as the minimal distance between the platform and the plane, normalized by the radius of $M$'s bounding sphere.

Distance to fragile regions: To avoid cutting through fragile regions during volume decomposition, we define the minimal distance between a clipping plane and all the fragile regions, which are thin 'fins' or 'bridges'. These regions can be detected by geometric analysis of local curvature and feature size [30]. Again, this distance is normalized by the radius of $M$'s bounding sphere.

Accumulated residual risky area: None of the above metrics considers the area that cannot be made support-free even after decomposition – i.e., the residual risky area. Here we add a metric for the accumulated residual risky area, which is also normalized by the total risky area of the input model.

Without loss of generality, a candidate clipping at any stage of the planning process can use the vector formed by the above six metrics to describe its configuration. As illustrated in Fig. 4, each candidate clipping is represented as a node during the beam-guided search, and each node is associated with the six metrics. In the following sub-section, we introduce the method to select the nodes kept in the beam-guided search by using the values of these metrics.
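
As a minimal illustration (the argument names are ours, not notation from the paper), the six metrics of one candidate cut can be packed into a fixed-order feature vector that the scoring function consumes:

```python
import numpy as np

def featurize_candidate(reduced_ratio, accum_reduced_ratio, processed_volume,
                        dist_to_platform, dist_to_fragile, accum_residual_ratio):
    """Assemble the six normalized metrics of one candidate cut (Sec. IV-B)
    into a feature vector; the argument order fixes the column order of the
    feature matrix fed to the scoring function."""
    return np.array([reduced_ratio, accum_reduced_ratio, processed_volume,
                     dist_to_platform, dist_to_fragile, accum_residual_ratio],
                    dtype=np.float32)
```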

IV-C Accelerated Search Algorithm

Using the beam-guided search algorithm, we can obtain a list of candidate cuts with feature vectors evaluated by the six metrics. The beam-guided search algorithm always keeps up to a fixed number of promising nodes at each stage. We observe that a node may come from different parent nodes in the previous stage, and may result in different offspring nodes in the next stage. This essentially constructs a set of trajectories starting from the input model and leading to the globally optimal solution of decomposition (see Fig. 4 for an example).

When working on an input mesh $M$, we can obtain many possible trajectories by running the beam-guided search algorithm. Each trajectory $T$ has a corresponding cost $F(T)$ given by Eq. (4). Comparing two nodes $n_i$ and $n_j$ at the same stage that belong to different trajectories $T_i$ and $T_j$, we prefer to keep node $n_i$ in the beam rather than $n_j$ when $F(T_i) < F(T_j)$, as the trajectory $T_i$ is more optimal. This is denoted as $n_i \succ n_j$. Therefore, at any stage, we can always obtain a ranked order according to these relative relationships between the nodes.

Selecting the top-ranked nodes from this order has a high chance of keeping nodes that belong to the trajectories with smaller values of $F$. In our algorithm, we learn a scoring function from the ranked orders at different stages on different models. With the help of the scoring function learned from searches with a large beam width, the search with a smaller beam width is expected to generate results of similar quality. See also the illustration of our scoring-function-based ranking step for selecting nodes out of the candidates given in Fig. 6.
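
The selection step itself is small: score all candidates with the learned function, arg-sort the scores, and keep the top b. The sketch below assumes a generic `score_fn` standing in for the trained network:

```python
import numpy as np

def select_top_b(score_fn, features, candidates, b):
    """Rank candidate cuts by the learned scoring function and keep the top b.

    features   -- (N, 6) matrix, one row of metrics per candidate (Sec. IV-B)
    candidates -- list of N candidate cuts in the same order
    score_fn   -- maps the feature matrix to a column of N scores
    """
    scores = np.asarray(score_fn(features)).reshape(-1)
    order = np.argsort(-scores)          # higher score = better candidate
    return [candidates[i] for i in order[:b]]
```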

Fig. 6: An example showing the pipeline of our learning to accelerate decomposition. At each stage, we use a relatively large candidate set to generate candidate cuts and their metrics. Then we use the trained scoring function to predict scores for the cuts. After that, we convert the predicted scores into a ranked order by arg-sorting. Lastly, we select the first few cuts from the selection vector for the next round of the decomposition search. Note that the input of the scoring function is the six metrics of all the candidate cuts assembled as a matrix, and the output is a column of scores for these candidates.

IV-D Listwise Learning

For an input mesh $M$, we can obtain a collection of resultant trajectories by running the beam-guided search algorithm, each with a corresponding cost $F(T)$. Here we propose a method to convert the trajectories into listwise samples used for learning the scoring function. Specifically, trajectories are sampled from beam-guided searches with a large beam width on a large dataset of 3D printing models.

Our learning method consists of four major steps.

  • First, we featurize each clipping candidate to distinguish it from the other candidates. Here the six metrics introduced in Sec. IV-B are used.

  • Second, we build a dataset of these features by running the beam-guided algorithm with a large beam width. This step is very time-consuming because the large beam width costs more computational resources.

  • Third, we convert the trajectories into listwise samples at every stage of the beam-guided search, which describe the ranking of the clipping candidates. Specifically, we traverse the collection of trajectories in order of their costs $F(T)$ and use the selected nodes to construct a set of ranked lists. If a node is not contained in any trajectory, it is regarded as worse than all nodes that are contained in some trajectory. If a node was already used to construct a ranked list from one trajectory, it is not used again, to prevent introducing ambiguity. The top-ranked nodes in each list are assigned decreasing positive scores and the scores of the other nodes are set to zero (a sketch of this conversion is given after this list). The training samples are collected from all stages of the beam-guided search.

  • Finally, we use the listwise data to train the scoring function by learning-to-rank.
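
A rough sketch of converting one search stage into a listwise sample is given below; the data layout and the exact label values are our own simplifications of the rule described above (nodes from better trajectories receive higher scores, all remaining nodes receive zero):

```python
def stage_to_listwise_sample(nodes, trajectories, traj_cost, k):
    """Build (feature, relevance) lists for one stage of the beam-guided search.

    nodes        -- list of (node_id, feature_vector) kept at this stage
    trajectories -- list of trajectories; each is a set of node_ids it visits
    traj_cost    -- list of costs F(T), one per trajectory (lower is better)
    k            -- number of nodes receiving a positive relevance label
    """
    order = sorted(range(len(trajectories)), key=lambda i: traj_cost[i])
    relevance, used = {}, set()
    label = k
    for i in order:                              # best trajectory first
        if label == 0:
            break
        for node_id, _ in nodes:
            if node_id in trajectories[i] and node_id not in used:
                relevance[node_id] = label       # decreasing positive scores
                used.add(node_id)
                label -= 1
                break                            # take one node per trajectory
    X = [feat for _, feat in nodes]
    y = [relevance.get(node_id, 0) for node_id, _ in nodes]
    return X, y
```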

The resultant scoring function will be used to evaluate every candidate of clipping in our algorithm.

Now we have a dataset consisting of listwise rankings for training. Our goal is to train a scoring function on this listwise dataset to score and rank candidate cuts at each stage of the beam-guided search. Once the scoring function is trained, it can be utilized to replace the original selection strategy used in the beam-guided algorithm.

We use uRank [31] to train the scoring function, which formulates the task of ordering the nodes as selecting the most relevant ones step by step. At each step it selects the node with the highest score from the candidate set. To address the cross-entropy issue raised by the softmax over ratings in ListNet [26], it adopts multiple softmax operations, each of which targets a single node from the set of nodes matching the ground truth at the corresponding step. This method restricts each positive label to appearing once in the candidate sets, so only one node needs to be selected at each step.

The architecture of uRank consists of a neural network with two hidden layers. Specifically, we have three trainable matrices $W_1$, $W_2$, and $W_3$. Let $\sigma(\cdot)$ be the activation function; the closed form of the scoring function is $S(X) = \sigma(\sigma(X W_1) W_2) W_3$, with $X$ being the input matrix formed by the six metrics of the candidate nodes. The loss function is defined as follows:

$\mathcal{L} = -\sum_{t} \log p_t$    (6)

where $p_t$ is the likelihood of selecting the ground-truth node at step $t$. The network architecture of uRank and the selection procedure are shown in Fig. 6.
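
For concreteness, a NumPy sketch of a two-hidden-layer scoring network and a stepwise softmax loss in the spirit of uRank [31] is given below; the activation choice and the loss bookkeeping are simplified assumptions rather than the reference implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def score(X, W1, W2, W3):
    """Two-hidden-layer scoring network: (N, 6) feature matrix -> (N,) scores."""
    return (relu(relu(X @ W1) @ W2) @ W3).reshape(-1)

def stepwise_listwise_loss(scores, relevance):
    """Sketch of a uRank-style loss: at each step a softmax over the nodes not
    yet selected targets the node with the next-highest ground-truth label;
    the negative log-likelihoods of these selections are summed (Eq. (6))."""
    order = np.argsort(-np.asarray(relevance, dtype=float))
    remaining = list(range(len(scores)))
    loss = 0.0
    for idx in order:
        if relevance[idx] <= 0:        # zero-labeled nodes do not define steps
            break
        s = scores[remaining]
        m = s.max()
        log_z = m + np.log(np.exp(s - m).sum())   # log of the softmax denominator
        loss -= scores[idx] - log_z               # -log p(select idx at this step)
        remaining.remove(idx)
    return loss
```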

V Training and Evaluation

V-A Dataset Preparation and Training

We implemented the proposed pipeline using C++ and Python, and trained the uRank network [31] using TensorFlow [32]. The trained model and the source code are publicly accessible. The dataset collection phase is conducted on a high-performance server equipped with two Intel E5-2698 v3 CPUs and 128 GB RAM. All other tests are performed on a PC equipped with an Intel Core i7 4790 CPU, an NVIDIA GeForce RTX 2060 GPU, and 24 GB RAM. We use directions sampled on the Gaussian sphere to evaluate the metrics. The maximal self-supporting angle $\alpha_{\max}$ is kept fixed in all experiments.

We trained our model on the Thingi10k dataset [3] repaired by Hu et al. [33]. Instead of training and evaluating on the whole dataset, we extract a subset (with 2,099 models) to ensure that every selected model has risky faces that can be processed by our plane-based cutting algorithm. The training dataset for our scoring function is built by running the beam-guided search algorithm with a large beam width. By the aforementioned sampling method, we obtain a dataset with 7,961 listwise samples. We split the dataset into 60% of the samples for training, 15% for validation, and 25% for testing. In our experiments, the network is trained with a fixed maximal number of epochs and a fixed learning rate, and early stopping is invoked when no improvement is found for several consecutive epochs.

V-B Evaluation on Accelerated Search

The beam-guided search algorithm’s computing time is significantly influenced by the chosen value of beam width . According to out experiments, the average computing time on test models by the conventional beam search with beam width is at of the average time on the beam search with .

Fig. 7: We use the results generated by the original beam-guided algorithm with a given beam width as the baseline for comparison, where the vertical axis indicates the average percentage reduction of the objective (i.e., Eq. (4)) over all test examples. The blue bars indicate the results of the conventional beam search, which are compared with the results of our learning-based method displayed in yellow.
TABLE I: Statistics on ranking performance

                NDCG@1   NDCG@2   NDCG@3   NDCG@4   NDCG@5
Ours             0.423    0.455    0.483    0.510    0.532
RankNet          0.270    0.303    0.335    0.362    0.384
LambdaRank       0.262    0.297    0.326    0.354    0.378

After the training phase is finalized, we use the trained scoring function to rank the sets of features evaluated for the candidate planes in our beam-guided search, and use small beam widths (up to 5) for evaluation. To make the search procedure insensitive to minor overfitting bias, we always check whether the best result ranked by the simple sort-and-rank module is in the selected beam. We run both the algorithm with the trained model and the original algorithm with different choices of beam width on the testing dataset (524 models). The statistical result in terms of improvement on the average of the objective is given in Fig. 7, which shows that we can use a relatively small beam width with the trained model to achieve performance similar to that generated by a larger beam width. In other words, the search can be accelerated while the results remain comparable to those generated using longer computing time. Meanwhile, we can also improve the quality of the results produced by the original algorithm when using the trained model to select cuts.

V-C Evaluation on Ranking Performance

We compare our method with other classic ranking algorithms used in information retrieval, including another listwise approach – LambdaRank [34] – and the pairwise approach RankNet [24]. We use the implementations provided in XGBoost (https://github.com/dmlc/xgboost/tree/master/demo/rank) with the parameters {max_depth=8, number_of_boosting=500}. We use NDCG (Normalized Discounted Cumulative Gain) [35] to evaluate the different methods. All experimental results are reported using the NDCG metric at positions 1, 2, 3, 4, and 5 respectively in Table I. The results show that our method achieves the best performance among all the compared approaches.
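
For reference, NDCG@k can be computed as in the following sketch (a standard formulation with exponential gains; this is our own helper, not the exact evaluation script used here):

```python
import numpy as np

def dcg_at_k(relevance, k):
    """Discounted cumulative gain of the first k items of a relevance list."""
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(((2.0 ** rel - 1.0) / discounts).sum())

def ndcg_at_k(predicted_scores, true_relevance, k):
    """NDCG@k: DCG of the list ordered by predicted scores, normalized by the
    DCG of the ideal (ground-truth) ordering."""
    order = np.argsort(-np.asarray(predicted_scores))
    ranked_rel = np.asarray(true_relevance)[order]
    ideal = np.sort(true_relevance)[::-1]
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_rel, k) / idcg if idcg > 0 else 0.0
```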

V-D Feature Analysis

V-D1 Feature importance

Our learning-based decomposition method extracts six features to train a neural network that can score a list of nodes. In this section, we further investigate the learned model by analyzing the features proposed in Sec. IV-B. We use permutation importance [36] to analyze feature importance after our model is trained. It is a widely used estimator of feature relevance in machine learning, which randomly permutes each feature column in the testing data to measure the importance of that feature. Randomization of different features has different effects on the performance of the trained model. For any feature $x_j$, we compute its importance as

$I_j = E - \frac{1}{R} \sum_{r=1}^{R} E_{j,r}$    (7)

where $E$ is the evaluation metric (the NDCG metric used in Sec. V-C) computed on the original testing data, $E_{j,r}$ is the same metric computed after randomly permuting the column of feature $x_j$ in the $r$-th repetition, and $R$ is the number of repetitions. Here we use NDCG@5 as the metric to analyze feature importance. The experimental results are shown in Fig. 8(a), which reports the relative importance of the six features.
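
A compact sketch of the permutation-importance computation of Eq. (7) is given below, where `metric_fn` is assumed to evaluate the trained model (e.g., NDCG@5) on a given copy of the test features:

```python
import numpy as np

def permutation_importance(metric_fn, X_test, y_test, n_repeats=10, seed=0):
    """Eq. (7): the importance of feature j is the baseline metric minus the
    mean metric obtained after randomly permuting column j of the test data."""
    rng = np.random.default_rng(seed)
    baseline = metric_fn(X_test, y_test)          # e.g. NDCG@5 of the trained model
    importances = np.zeros(X_test.shape[1])
    for j in range(X_test.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X_test.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])   # break feature j only
            scores.append(metric_fn(X_perm, y_test))
        importances[j] = baseline - np.mean(scores)
    return importances
```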

Fig. 8: Feature analysis: (a) feature importance generated by the permutation importance method [36], and (b) correlation analysis of the six features proposed in Sec. IV-B.

V-D2 Feature correlation

Correlation is a statistical measure that indicates the relationship between two or more variables. In machine learning, the Pearson correlation [37] is widely used to measure the degree of the linear relationship between variables. Given two variables $X$ and $Y$, their Pearson correlation is defined as

$\rho_{X,Y} = \mathrm{cov}(X, Y) / \sqrt{\mathrm{var}(X)\,\mathrm{var}(Y)}$    (8)

where $\mathrm{var}$ denotes the variance and $\mathrm{cov}$ denotes the covariance. We use the Pearson correlation to build a correlation matrix on the sampled dataset. The heatmap visualization is shown in Fig. 8(b). It shows that the different features used in our approach are not strongly correlated; only one feature pair shows a relatively high correlation.
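
Computing the feature correlation matrix of Eq. (8) over the sampled feature vectors is a one-liner with NumPy (our own sketch); the resulting 6×6 matrix can be visualized as the heatmap in Fig. 8(b):

```python
import numpy as np

def feature_correlation_matrix(X):
    """Pearson correlation (Eq. (8)) between every pair of the six feature
    columns of the sample matrix X with shape (num_samples, 6)."""
    return np.corrcoef(X, rowvar=False)   # cov(X_i, X_j) / sqrt(var_i * var_j)
```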

VI Conclusion and Future Work

This paper presents an accelerated decomposition algorithm for multi-directional 3D printing that reduces the need for support structures. The proposed method utilizes learning-to-rank techniques to train a neural network that scores the candidates of clipping. We use the trained scoring function to replace the simple sort-and-rank module in the beam-guided search algorithm. The computing time is reduced to around one third while keeping results of similar quality. The experimental results demonstrate the effectiveness of the proposed method. We provide an easy-to-use Python package and make the source code publicly accessible. In the future, we plan to investigate the capability of handling more complex objects such as periodic lattice structures. Our current method can work with models containing simple lattices, such as the cubic lattice structure shown in Fig. 9, but the boundary of applicability of our multi-directional method is still unclear. Moreover, it is worth studying the problem of determining whether a given structure is suitable for multi-directional printing in the future.

Fig. 9: An example of decomposing a lattice structure for multi-directional printing. (a) the cubic lattice structure with many supports. (b) the decomposed results for multi-directional printing and the generated supports.

References

  • [1] K. Hu, S. Jin, and C. C. L. Wang, “Support slimming for single material based additive manufacturing,” Computer-Aided Design, vol. 65, pp. 1–10, 2015.
  • [2] C. Wu, C. Dai, G. Fang, Y.-J. Liu, and C. C. Wang, “General support-effective decomposition for multi-directional 3-d printing,” IEEE Transactions on Automation Science and Engineering, 2019.
  • [3] Q. Zhou and A. Jacobson, “Thingi10k: A dataset of 10,000 3d-printing models,” arXiv preprint arXiv:1605.04797, 2016.
  • [4] J. Vanek, J. A. Galicia, and B. Benes, “Clever support: Efficient support structure generation for digital fabrication,” in Computer Graphics Forum, vol. 33, no. 5.   Wiley Online Library, 2014, pp. 117–125.
  • [5] J. Dumas, J. Hergel, and S. Lefebvre, “Bridging the gap: Automated steady scaffoldings for 3d printing,” ACM Trans. Graph., vol. 33, no. 4, pp. 98:1–98:10, July 2014.
  • [6] R. Hu, H. Li, H. Zhang, and D. Cohen-Or, “Approximate pyramidal shape decomposition,” ACM Trans. Graph., vol. 33, no. 6, pp. 213:1–213:12, 2014.
  • [7] P. Herholz, W. Matusik, and M. Alexa, “Approximating free-form geometry with height fields for manufacturing,” Computer Graphics Forum, vol. 34, no. 2, pp. 239–251, 2015.
  • [8] W. Gao, Y. Zhang, D. C. Nazzetta, K. Ramani, and R. J. Cipra, “RevoMaker: Enabling multi-directional and functionally-embedded 3D printing using a rotational cuboidal platform,” in Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology, 2015, pp. 437–446.
  • [9] X. Wei, S. Qiu, L. Zhu, R. Feng, Y. Tian, J. Xi, and Y. Zheng, “Toward support-free 3D printing: A skeletal approach for partitioning models,” IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 10, pp. 2799–2812, Oct 2018.
  • [10] A. Muntoni, M. Livesu, R. Scateni, A. Sheffer, and D. Panozzo, “Axis-aligned height-field block decomposition of 3D shapes,” ACM Trans. Graph., 2018.
  • [11] P. Urhal, A. Weightman, C. Diver, and P. Bartolo, “Robot assisted additive manufacturing: A review,” Robotics and Computer-Integrated Manufacturing, vol. 59, pp. 335–345, 2019.
  • [12] S. Keating and N. Oxman, “Compound fabrication: A multi-functional robotic platform for digital design and fabrication,” Robotics and Computer-Integrated Manufacturing, vol. 29, no. 6, pp. 439–448, 2013.
  • [13] Y. Pan, C. Zhou, Y. Chen, and J. Partanen, “Multitool and multi-axis computer numerically controlled accumulation for fabricating conformal features on curved surfaces,” ASME Journal of Manufacturing Science and Engineering, vol. 136, no. 3, 2014.
  • [14] H. Peng, R. Wu, S. Marschner, and F. Guimbretière, “On-the-fly print: Incremental printing while modelling,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems.   ACM, 2016, pp. 887–896.
  • [15] R. Wu, H. Peng, F. Guimbretière, and S. Marschner, “Printing arbitrary meshes with a 5dof wireframe printer,” ACM Trans. Graph., vol. 35, no. 4, p. 101, 2016.
  • [16] Y. Huang, J. Zhang, X. Hu, G. Song, Z. Liu, L. Yu, and L. Liu, “Framefab: robotic fabrication of frame shapes,” ACM Trans. Graph., vol. 35, no. 6, p. 224, 2016.
  • [17] C. Dai, C. C. L. Wang, C. Wu, S. Lefebvre, G. Fang, and Y.-J. Liu, “Support-free volume printing by multi-axis motion,” ACM Trans. Graph., vol. 37, no. 4, pp. 134:1–134:14, July 2018.
  • [18] A. V. Shembekar, Y. J. Yoon, A. Kanyuck, and S. K. Gupta, “Generating robot trajectories for conformal three-dimensional printing using nonplanar layers,” Journal of Computing and Information Science in Engineering, vol. 19, no. 3, p. 031011, 2019.
  • [19] K. Xu, L. Chen, and K. Tang, “Support-free layered process planning toward 3 + 2-axis additive manufacturing,” IEEE Transactions on Automation Science and Engineering, vol. 16, no. 2, pp. 838–850, April 2019.
  • [20] C. Wu, C. Dai, G. Fang, Y. J. Liu, and C. C. L. Wang, “RoboFDM: A robotic system for support-free fabrication using FDM,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), May 2017, pp. 1175–1180.
  • [21] T. Chen, L. Zheng, E. Yan, Z. Jiang, T. Moreau, L. Ceze, C. Guestrin, and A. Krishnamurthy, “Learning to optimize tensor programs,” in Advances in Neural Information Processing Systems, 2018, pp. 3389–3400.
  • [22] A. Adams, K. Ma, L. Anderson, R. Baghdadi, T.-M. Li, M. Gharbi, B. Steiner, S. Johnson, K. Fatahalian, F. Durand, et al., “Learning to optimize halide with tree search and random programs,” ACM Transactions on Graphics (TOG), vol. 38, no. 4, pp. 1–12, 2019.
  • [23] T.-Y. Liu, “Learning to rank for information retrieval,” Foundations and trends in information retrieval, vol. 3, no. 3, pp. 225–331, 2009.
  • [24] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender, “Learning to rank using gradient descent,” in Proceedings of the 22nd international conference on Machine learning, 2005, pp. 89–96.
  • [25] C. J. Burges, “From ranknet to lambdarank to lambdamart: An overview,” Learning, vol. 11, no. 23-581, p. 81, 2010.
  • [26] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li, “Learning to rank: from pairwise approach to listwise approach,” in Proceedings of the 24th international conference on Machine learning, 2007, pp. 129–136.
  • [27] J. Guiver and E. Snelson, "Bayesian inference for Plackett-Luce ranking models," in Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 377–384.
  • [28] F. Xia, T.-Y. Liu, J. Wang, W. Zhang, and H. Li, “Listwise approach to learning to rank: theory and algorithm,” in Proceedings of the 25th international conference on Machine learning, 2008, pp. 1192–1199.
  • [29] B. T. Lowerre, "The Harpy speech recognition system," Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA, USA, 1976, AAI7619331.
  • [30] L. Luo, I. Baran, S. Rusinkiewicz, and W. Matusik, “Chopper: Partitioning models into 3D-printable parts,” ACM Trans. Graph., vol. 31, no. 6, pp. 129:1–129:9, Nov. 2012.
  • [31] X. Zhu and D. Klabjan, “Listwise learning to rank by exploring unique ratings,” in Proceedings of the 13th International Conference on Web Search and Data Mining, ser. WSDM ’20.   New York, NY, USA: Association for Computing Machinery, 2020, p. 798–806.
  • [32] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: A system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265–283.
  • [33] Y. Hu, Q. Zhou, X. Gao, A. Jacobson, D. Zorin, and D. Panozzo, “Tetrahedral meshing in the wild.” ACM Trans. Graph., vol. 37, no. 4, pp. 60–1, 2018.
  • [34] C. J. Burges, R. Ragno, and Q. V. Le, “Learning to rank with nonsmooth cost functions,” in Advances in neural information processing systems, 2007, pp. 193–200.
  • [35] K. Järvelin and J. Kekäläinen, “Cumulated gain-based evaluation of ir techniques,” ACM Transactions on Information Systems (TOIS), vol. 20, no. 4, pp. 422–446, 2002.
  • [36] A. Altmann, L. Toloşi, O. Sander, and T. Lengauer, “Permutation importance: a corrected feature importance measure,” Bioinformatics, vol. 26, no. 10, pp. 1340–1347, 2010.
  • [37] D. Freedman, R. Pisani, and R. Purves, Statistics (International Student Edition), 4th ed. W. W. Norton & Company, New York, 2007.