Co-Optimization of Design and Fabrication Plans for Carpentry: Supplemental Material

07/30/2021 · Haisen Zhao et al. · MIT · University of Washington

Past work on optimizing fabrication plans given a carpentry design can provide Pareto-optimal plans trading off between material waste, fabrication time, precision, and other considerations. However, when developing fabrication plans, experts rarely restrict themselves to a single design, instead considering families of design variations, sometimes adjusting designs to simplify fabrication. Jointly exploring the design and fabrication plan spaces for each design is intractable using current techniques. We present a new approach to jointly optimize design and fabrication plans for carpentered objects. To make this bi-level optimization tractable, we adapt recent work from program synthesis based on equality graphs (e-graphs), which encode sets of equivalent programs. Our insight is that subproblems within our bi-level problem share significant substructures. By representing both designs and fabrication plans in a new bag of parts (BOP) e-graph, we amortize the cost of optimizing design components shared among multiple candidates. Even using BOP e-graphs, the optimization space grows quickly in practice. Hence, we also show how a feedback-guided search strategy dubbed Iterative Contraction and Expansion on E-graphs (ICEE) can keep the size of the e-graph manageable and direct the search toward promising candidates. We illustrate the advantages of our pipeline through examples from the carpentry domain.


1. Technique details

This section provides additional details of our ICEE algorithm for extracting a Pareto front where each solution represents a (design, fabrication) pair.

1.1. Generating Design Variants

Design variants can be generated manually by a user or automatically. In our paper, we implement an automatic method, which includes three steps: (1) detecting all the assembly connectors between neighboring parts; (2) enumerating all candidate connector variants for each connector; (3) generating design variants by selecting a different connector variant for each connector, as shown in Figure 4 of the main paper.
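As an illustration of step (3), the minimal sketch below enumerates design variants as the Cartesian product of per-connector variant choices. The connector names, variant labels, and the `limit` cap are hypothetical and only stand in for our actual implementation.

```python
from itertools import product

def enumerate_design_variants(connectors, variants_per_connector, limit=None):
    """Build design variants by choosing one candidate variant for each
    detected connector (step 3): the Cartesian product of per-connector options."""
    option_lists = [variants_per_connector[c] for c in connectors]
    variants = []
    for choice in product(*option_lists):
        variants.append(dict(zip(connectors, choice)))
        if limit is not None and len(variants) >= limit:
            break  # the full product can explode combinatorially
    return variants

# Example: two connectors with two candidate joints each -> four design variants.
connectors = ["top-left", "top-right"]
options = {"top-left": ["butt", "miter"], "top-right": ["butt", "miter"]}
print(enumerate_design_variants(connectors, options))
```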

1.2. Fabrication arrangement generation

Given a specific design variation, we use a fabrication arrangement generation algorithm to update the BOP e-graph so that it encodes more fabrication arrangements. This algorithm uses the two heuristics described in Section 4.3.3 of the main paper.

For an input design variation consisting of a set of parts, this algorithm first groups the library of stock lumber by dimension, so that stock pieces with the same dimensions fall into the same group. For the parts assigned to each group, the algorithm starts by packing all of the parts onto the largest stock piece of the current group. The packing process terminates when (1) the current stock piece is maximally packed or (2) all parts are packed. In the first case, the packing process continues by switching to another stock piece (again the largest one) until the second termination condition is reached. Such a layout process is called a full Traversal, which traverses all of the parts in a specific order. Prior work [wu2019carpentry] used a similar approach but used the number of Traversals as the termination criterion, which prevents it from controlling the number of fabrication arrangements generated with this heuristic-driven method.
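The following minimal sketch illustrates the greedy Traversal heuristic in a simplified 1D setting (part lengths only, a single stock group). The kerf allowance and function names are assumptions for illustration, not our actual packing code.

```python
def traversal_1d(part_lengths, stock_lengths, kerf=0.125):
    """Greedy 1D sketch of one full Traversal: pack parts, in the given order,
    onto copies of the largest available stock until every part is placed."""
    stock_len = max(stock_lengths)          # always start from the largest stock
    layouts, current, used = [], [], 0.0
    for length in part_lengths:             # a Traversal visits parts in a fixed order
        if length > stock_len:
            raise ValueError("part longer than any stock")
        if used + length + (kerf if current else 0.0) > stock_len:
            layouts.append(current)         # current stock is maximally packed
            current, used = [], 0.0         # switch to a fresh (largest) stock
        used += length + (kerf if current else 0.0)
        current.append(length)
    if current:
        layouts.append(current)
    return layouts                          # one list of part lengths per stock piece

# Example: parts of length 30", 25", 40", and 20" packed onto 96" lumber.
print(traversal_1d([30, 25, 40, 20], [96, 48]))
```

Different part orders yield different packings, which is exactly what the Traversal heuristic exploits to generate multiple fabrication arrangements.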

1.3. Cost Metrics

This section describes how we compute the three costs: material usage, cutting precision, and fabrication time. Our formulae are updated versions of the ones used by Wu et al. [wu2019carpentry]. Our key improvements are: (1) we use a quantitative evaluation method in which each cost metric is associated with a meaningful unit: material usage is measured in dollars, cutting precision in inches, and fabrication time in minutes; (2) we account for stock load and unload time, which makes the resulting fabrication times much more realistic.

Material Cost

We compute material cost as:

$$C_{\text{material}} = \sum_{i=1}^{N} p_i,$$

where $p_i$, the price of the $i$-th piece of stock, is estimated based on costs from standard US vendors [mcmaster] as shown in Table 1, and $N$ is the total number of pieces of lumber used.

Stock Dimension Material Cost ($)
3.0
5.5
10.0
3.0
5.5
10.0
7.5
13.75
25.0
7.5
13.75
25.0
5.5
10.0
30.0
3/4 7.0
3/4 12.0
3/4 32.0
Table 1. Prices of stocks
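To make the material-cost formula above concrete, here is a minimal sketch that sums vendor prices over the stock pieces a plan uses. The dictionary keys (stock dimensions) are hypothetical placeholders; only the dollar values echo entries of Table 1.

```python
# Hypothetical stock-dimension keys; the dollar values echo entries of Table 1.
STOCK_PRICES = {
    "2x4 x 8ft": 5.5,
    "2x4 x 16ft": 10.0,
    "3/4in plywood 4x8ft": 32.0,
}

def material_cost(used_stock):
    """Material cost = sum of the vendor prices of every piece of stock used."""
    return sum(STOCK_PRICES[piece] for piece in used_stock)

print(material_cost(["2x4 x 8ft", "2x4 x 8ft", "3/4in plywood 4x8ft"]))  # 43.0
```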
Tool       Full setup, lumber (s)   Full setup, plywood (s)   Partial setup (s)   Op Time
Chopsaw    60     60     15     1 s per cut
Bandsaw    20     90     N/A    1 inch/s
Jigsaw     30     60     N/A    1 inch/s
Tracksaw   180    180    75     4.5 inch/s
Drill      20     20     N/A    0.1 inch (depth)/s
Table 2. Fabrication times for different tools. All times are converted to minutes in our results.
Tool       Operation Error
Chopsaw    of an inch
Bandsaw    of an inch
Jigsaw     of an inch
Tracksaw   of an inch
Drill      of an inch
Table 3. Error per cut for each tool.
Stock Dimension   Full Load (s)   Partial Load (s)   Full Unload (s)   Partial Unload (s)
10 1 5 1
20 2 8 2
40 3 15 2
10 1 5 1
20 2 8 2
40 4 15 2
15 2 5 1
30 4 10 2
60 6 20 3
15 2 5 1
30 4 10 2
60 6 20 3
30 3 10 2
50 5 15 2
100 10 20 2
3/4 30 3 10 2
3/4 50 5 15 2
3/4 100 10 20 2
Table 4. Loading and unloading time of stocks. All times are converted to minutes in our results.

Time

We asked an expert carpenter to assign a fabrication time to each tool (Table 2) and to estimate stock loading and unloading times for each piece of wood stock (Table 4). They reported the time taken for (1) a full setup of the tool, (2) a partial setup when applicable (a partial setup is one where only some parameters are changed), (3) a full stock load (unload), i.e., setting up a piece of wood stock in the tool workspace, (4) a partial stock load (unload), i.e., setting up a piece of wood stock in the tool workspace when stocks are stacked, and (5) performing a single operation (e.g., a cut). Of all the tools used, the tracksaw has the most elaborate setup process, reflected in its long setup times (both full and partial). The bandsaw and jigsaw have different full setup times for cutting lumber and for cutting plywood sheet. Only the chopsaw and tracksaw allow partial setups; the remaining tools do not. The operation time for a chopsaw is constant: every cut takes one second. For all other tools, the operation time is based on the length of the cut. For example, the operation time for a tracksaw cut of length $\ell$ inches is $\ell / 4.5$ seconds.

We refer to a cut as "partial" if it requires only a partial setup. For example, a partial cut on a chopsaw takes 15 s for the partial setup and 1 s for the cutting operation, so setup and cutting together take 16 s. The fabrication time, $T$, is therefore computed as:

$$T = \sum_{i=1}^{n} \left( t_i^{\text{setup}} + t_i^{\text{load}} + t_i^{\text{op}} \right),$$

where $t_i^{\text{setup}}$, $t_i^{\text{load}}$, and $t_i^{\text{op}}$ are the setup time, stock load and unload time, and operation time for the $i$-th cut respectively, and $n$ is the total number of cuts. $t_i^{\text{setup}}$ and $t_i^{\text{op}}$ are computed based on Table 2. $t_i^{\text{load}}$ is computed based on Table 4. For a cut with stacking, $t_i^{\text{load}}$ is measured as the sum of a full load and unload time for the first piece of stock and partial load and unload times for each subsequent piece.
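The sketch below illustrates this per-cut accounting. It transcribes only the chopsaw and tracksaw rows of Table 2, ignores the lumber/plywood setup distinction, and takes the load/unload seconds as given inputs, so it is an approximation rather than our exact implementation.

```python
# Tool parameters in the spirit of Table 2 (seconds); op_rate is inches per
# second for length-based tools, or None for the chopsaw's constant 1 s cut.
TOOLS = {
    "chopsaw":  {"full_setup": 60,  "partial_setup": 15, "op_rate": None},
    "tracksaw": {"full_setup": 180, "partial_setup": 75, "op_rate": 4.5},
}

def cut_time(tool, length, partial_setup, load_unload):
    """Time for one cut = setup + stock load/unload + operation time (seconds)."""
    t = TOOLS[tool]
    setup = t["partial_setup"] if partial_setup else t["full_setup"]
    op = 1.0 if t["op_rate"] is None else length / t["op_rate"]
    return setup + load_unload + op

def fabrication_time(cuts):
    """Total fabrication time: sum over all cuts of the plan."""
    return sum(cut_time(**c) for c in cuts)

plan = [
    {"tool": "chopsaw", "length": 3.5, "partial_setup": False, "load_unload": 15},
    {"tool": "chopsaw", "length": 3.5, "partial_setup": True,  "load_unload": 6},
]
print(fabrication_time(plan) / 60.0, "minutes")
```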

Precision

To measure the precision of a fabrication plan, we compute (1) a measurement error, $E_m$, and (2) an operation error, $E_o$. We use $\epsilon$ to denote the minimum measurement that can be made with any of the currently supported tools. The measurement error for a length $\ell$ is computed as:

$$E_m(\ell) = \ell \bmod \epsilon,$$

which, intuitively, is the residual length that cannot be measured by our tools. For the operation error, we again asked an expert to estimate the error per operation for each tool (Table 3). The error for the $i$-th cut of length $\ell_i$ is therefore $e_i = E_m(\ell_i) + E_o(\text{tool}_i)$. The precision cost $P$ is then computed as:

$$P = \sum_{i=1}^{n} e_i,$$

where $n$ is the total number of cuts. Lower values of $P$ indicate higher precision.
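A minimal sketch of this precision metric follows. The minimum measurable length and the per-tool operation errors here are placeholder values; Table 3 lists the real fractions.

```python
# Hypothetical per-tool operation errors (inches), standing in for Table 3.
OP_ERROR = {"chopsaw": 1/64, "bandsaw": 1/32, "jigsaw": 1/16, "tracksaw": 1/64}
EPSILON = 1/64  # assumed minimum measurable length of our tools (inches)

def measurement_error(length, epsilon=EPSILON):
    """Residual length that cannot be measured with resolution epsilon."""
    return length % epsilon

def precision_cost(cuts):
    """Sum of measurement error and tool operation error over all cuts;
    lower values indicate higher precision."""
    return sum(measurement_error(c["length"]) + OP_ERROR[c["tool"]] for c in cuts)

plan = [{"tool": "chopsaw", "length": 24.01}, {"tool": "jigsaw", "length": 10.0}]
print(round(precision_cost(plan), 4))
```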

1.3.1. Mixed-material implementation

When introducing a new material, we must account for the cost of using that material in our metrics. Based on an expert's input, we set the material price of metal to 20 times that of wood. All of the same tools can be used to cut metal sheets, using a dedicated blade. Setup times remain the same, but execution times are set to be 10 times slower, and stock loading and unloading times are 5 times slower. Finally, the jigsaw's precision error on metal is set to 2 times its error on wood. Aside from these cost differences, we apply the same process to evaluate the fabrication cost.
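The sketch below shows one way these multipliers could be layered on top of the wood cost model. The multiplier values come from the text above, but the function and key names are illustrative assumptions.

```python
# Multipliers for metal relative to wood, taken from the text above; how they
# are wired into the full cost model is an assumption of this sketch.
METAL = {
    "material_price": 20.0,   # 20x the wood price
    "op_time": 10.0,          # execution is 10x slower
    "load_unload": 5.0,       # stock handling is 5x slower
    "jigsaw_error": 2.0,      # jigsaw precision error doubles
}

def adjust_for_material(cost, kind, material):
    """Scale a wood-based cost term for a metal part; wood costs are unchanged."""
    return cost * METAL[kind] if material == "metal" else cost

print(adjust_for_material(5.5, "material_price", "metal"))   # 110.0
print(adjust_for_material(2.0, "op_time", "wood"))           # 2.0
```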

1.4. Pareto front extraction

In Section 4.3.4 of the main paper, we propose two methods to speed up the Pareto front extraction. This section provides more details of the two methods. For an atomic e-node which has not been previously optimized, we simply try up to a fixed maximum number of different cut orders (each cutting order is evaluated using the cost metrics in Section 1.3), then select the cutting order with minimum precision error and the cutting order with minimum fabrication time.
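A sketch of this sampling step, assuming a caller-supplied `evaluate` function that stands in for the cost metrics of Section 1.3:

```python
import random

def sample_cut_orders(cuts, evaluate, max_orders=20, seed=0):
    """Try up to max_orders random cut orders for an atomic e-node; keep the
    order with minimum precision error and the order with minimum time.
    evaluate(order) -> (precision_error, fab_time)."""
    rng = random.Random(seed)
    best_prec = best_time = None          # (cost, order) pairs
    for _ in range(max_orders):
        order = cuts[:]
        rng.shuffle(order)
        prec, time = evaluate(order)
        if best_prec is None or prec < best_prec[0]:
            best_prec = (prec, order)
        if best_time is None or time < best_time[0]:
            best_time = (time, order)
    return best_prec, best_time

# Toy evaluator: here precision prefers "c1" early and time prefers "c3" early.
print(sample_cut_orders(["c1", "c2", "c3"],
                        evaluate=lambda order: (order.index("c1"), order.index("c3"))))
```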

To apply the branch-and-bound technique, we need to define lower and upper bounds for the precision error and the time cost. A term's cutting order can be initialized by taking the cutting orders used in its constituent atomic e-nodes. We first select the cutting order with minimum precision error for each atomic e-node and then evaluate the term's precision error; this defines the precision upper bound of the term. The time upper bound is set analogously, by taking the cutting order with minimum time for each atomic e-node. The precision (time) lower bound is defined as the evaluated precision (time) cost when the dependency relationships between cuts belonging to the same stock are ignored. A dependency relationship indicates the order of a cut with respect to other cuts; in other words, the time lower bound is computed under the assumption that each cut is independent of the others.

If the precision (or time) lower bound is not dominated by the Pareto front of all previously computed solutions, we run an optimization that uses the upper bound as a starting point. For this optimization, we randomly flip the order of some cuts for a fixed number of iterations, which we set to 20 in our experiments.
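The following sketch illustrates the pruning and refinement just described: a term is refined only if its lower bound is not dominated by the current front, and refinement randomly flips pairs of cuts starting from the upper-bound order. The cost representation and the acceptance rule are assumptions for illustration.

```python
import random

def dominated(point, front):
    """True if some solution on the front is at least as good in every objective
    and strictly better in at least one (all objectives are minimized)."""
    return any(
        all(f <= p for f, p in zip(fp, point)) and any(f < p for f, p in zip(fp, point))
        for fp in front
    )

def refine_cut_order(order, evaluate, iterations=20, seed=0):
    """Starting from the upper-bound cutting order, randomly flip two cuts per
    iteration and keep the change whenever the evaluated cost does not worsen."""
    rng = random.Random(seed)
    best, best_cost = order[:], evaluate(order)
    for _ in range(iterations):
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        cost = evaluate(cand)
        if cost <= best_cost:
            best, best_cost = cand, cost
    return best, best_cost

# Refine a term only when its (precision, time) lower bound is not already dominated.
front = [(10.0, 5.0), (8.0, 7.0)]
lower_bound = (9.0, 6.0)
if not dominated(lower_bound, front):
    # Toy cost: number of adjacent out-of-order pairs in an integer cut order.
    cuts = [3, 1, 2, 0]
    print(refine_cut_order(cuts,
                           evaluate=lambda o: sum(o[k] > o[k + 1] for k in range(len(o) - 1))))
```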

2. Results and Discussion

In this section, we provide additional results of our ICEE pipeline for exploring the space of design variations and fabrication plans of the models in Figure 7. Table 5 shows a detailed comparison between our pipeline and the no-design-exploration pipeline. At the end of this supplemental material, for each model, we show representative design variations and fabrication plans extracted with our ICEE pipeline and with the baseline method.

Model Ref. Point
Frame 881570 902017 (100.0,100.0) 10.00 N/A 1.98 8.50 N/A 1.37
L-Frame 796637 797655 (100.0,100.0) 18.50 N/A 2.15 18.50 N/A 2.05
A-bookcase 698414 702256 (100.0,100.0,100.0) 23.00 0.16 8.82 23.00 0.13 8.53
S-Chair 683887 698016 (100.0,100.0) 25.50 N/A 7.92 25.50 N/A 5.92
Table 833919 849705 (100.0,100.0) 20.00 N/A 7.18 20.00 N/A 5.78
F-Cube 817648 825935 (100.0,100.0) 15.50 N/A 3.08 15.50 N/A 2.12
Window 754863 777711 (100.0,100.0) 20.00 N/A 4.93 20.00 N/A 2.38
Bench 355824 369260 (100.0,100.0) 55.50 N/A 17.03 53.00 N/A 20.38
A-Chair 596346 578014 (100.0,100.0) 35.50 N/A 6.25 35.50 N/A 9.52
F-Pot 24155184 24264553 (300.0,300.0,300.0) 13.00 0.29 19.13 13.00 0.26 17.91
Z-Table 19716012 20386788 (300.0,300.0,300.0) 55.50 0.35 30.69 53.00 0.28 24.28
Loom 20469248 21040283 (300.0,300.0,300.0) 27.50 1.47 48.26 25.50 0.58 43.86
J-Gym 13925991 15657249 (300.0,300.0,300.0) 96.00 1.82 72.02 89.00 0.62 51.38
D-Chair 543771 539241 (100.0,100.0) 35.50 N/A 15.00 35.50 N/A 16.20
Bookcase 20266354 21957242 (300.0,300.0,300.0) 40.00 0.86 37.53 30.00 0.27 24.71
Dresser 21237296 22866170 (300.0,300.0,300.0) 30.00 0.61 36.48 30.00 0.14 15.42
Table 5. Performance comparison between our method and the baseline method (the no-design-exploration pipeline). In this table, we first report the hypervolume values of our method and of the baseline method. The reference points used for the hypervolume computation are also listed. We also report the minimal material usage, minimal cutting precision, and minimal fabrication time of the Pareto fronts of the baseline method and of our pipeline. The cutting precision of some models is not reported because precision cost was not included in the objective metrics for those models. We use bold font to indicate the hypervolume values and fabrication costs for which we are better than the baseline method.
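For reference, the hypervolume values reported in Tables 5 and 6 can be understood through the following minimal two-objective sketch (minimization, with a reference point worse than every front point). This is a generic illustration, not the exact implementation used in our experiments.

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2D Pareto front (minimization) with respect to a
    reference point; assumes every front point is <= ref in both objectives."""
    pts = sorted(set(front))            # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                  # skip points dominated within the front
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Example with a toy two-objective front and a (100, 100)-style reference point.
print(hypervolume_2d([(10.0, 40.0), (20.0, 20.0), (60.0, 15.0)], ref=(100.0, 100.0)))
```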

2.1. Carpentry Compiler Parameters

For the comparison between our pipeline and the Carpentry Compiler pipeline [wu2019carpentry], we use the default parameter settings from their experiments: the number of Traversals, the number of top e-nodes, the maximal number of different cut orders, the population size during e-graph extraction, and the probabilities of crossover and mutation.

2.2. Running Time Analysis

On average, our pipeline spends 4.2% of its runtime on design extraction, 5.8% on generating packings, 64.5% on assigning cutting orders, and 21.6% on applying genetic algorithms to extract the Pareto front of terms. This translates to 10.0% of the runtime in the expansion and contraction phase, 86.1% in the extraction phase, and the remaining 3.9% elsewhere.

2.3. Comparison with baseline

Figure 1. Pareto fronts generated from our pipeline and the baseline method.
Model Ref. Point
Frame 901997 902017 (100.0,100.0)
J-Gym 15036794 15657249 (300.0,300.0,300.0)
L-Frame 797573 797655 (100.0,100.0)
Table 751206 849705 (100.0,100.0)
Window 778158 777711 (100.0,100.0)
Table 6. Hypervolume results of the performance validation experiment. In this table, we report the hypervolume of the Pareto fronts computed with our method and with the "baseline" method. Here, "baseline" means extracting the Pareto-front fabrication plans for each design variation explored by our method independently with the Carpentry Compiler pipeline [wu2019carpentry]. The reference points used for the hypervolume computation are also reported.

As shown in Figure 1, the two methods do not produce exactly the same fronts, due to convergence and stochasticity. To make this comparison, we tuned the parameters to be as close as possible; even so, the hypervolume values in Table 6 do not match exactly. However, hypervolume is unintuitive to compare, so we believe presenting the plots side by side is a far more compelling comparison.

Figure 2. Pareto fronts of the A-Chair model with different values of alpha: (a) shows the result of a small alpha, (b) shows the result of the default alpha, and (c) shows the result of a larger alpha. The Pareto fronts generated by the no-design-exploration pipeline are the same in (a), (b), and (c). Pareto fronts of the J-Gym model under parameter tuning of the no-design-exploration pipeline: (d) shows the results with the default parameters, and (e) shows the Pareto fronts with tuned parameters of the no-design-exploration pipeline. The Pareto fronts generated by our pipeline are the same in (d) and (e).

2.4. Convergence

Due to the combinatorial nature of the problem, we cannot guarantee that we discover the true Pareto front. In what follows we discuss how this limitation can affect the results of our evaluation.

2.4.1. Increasing the Design Space

An implication of the intractable search is that we have a finite amount of resources to spend searching the space. Realistically, this means we can only explore a subset of the design space. Consider the design of the "Window" in Figure 7. If we permit this model to have angled joints instead of only 90-degree joints, the space of design variants becomes much larger. Ideally, the result for this new window variant should be at least as good as the result for the original window. However, our approach was able to explore only a portion of this much larger design space, compared to the simpler window model, and found a worse landscape of solutions (Figure 3). For models with a large design space, if the initial design variants picked by our algorithm are too far from the optimal ones, our algorithm may never reach those optimal points.

Figure 3. Example showing a limitation of our approach: the star dots indicate the Pareto fronts when the Window model is allowed to have arbitrarily angled connectors; the colored dots indicate the fronts without arbitrarily angled connectors. Both results are generated with the default parameters of our ICEE pipeline.

To avoid similar potential pitfalls, we might be able to allocate our resources to more effectively direct the search. In particular, one thing we can do to navigate this space more precisely is to expose a parameter, alpha.

Alpha roughly defines the tradeoff between breadth (exploring many designs) and depth (exploring variations within a small number of designs). For our experiments, we set alpha to a default value. However, in cases where the initial design is believed to be optimal or close to optimal, the user may increase alpha to encourage deeper exploration of the initial and similar designs. In cases where our baseline of optimizing the fabrication for a single design outperforms exploring many designs in the default configuration, we have found that increasing alpha makes the performance at least as good again.

This note about parameter tuning reveals interesting implications. In particular, our new algorithm does not always fully dominate fabrication-plan-only exploration with default parameters. There are models where the input design is already fairly optimized and does not offer much opportunity for improvement: models that are simple or composed of many instances of the same shape. The Adirondack chair is such an example. An optimal packing arrangement for it minimizes material, cuts, and error by stacking cuts; design variations would only deviate from the simplicity of the fabrication plan, and no improvements are possible. The baseline method has the advantage on these models, because it spends its search time deeply exploring fabrication plans, while ICEE must also spend time searching across design variations. Tuning the parameter alpha can help us perform as well as the baseline method, as seen in Figure 2 (a)-(c).

From these examples, we see that comparing approaches under parameter tuning is tricky and subtle.

2.4.2. Comparisons with [wu2019carpentry]

Notably, comparisons with the approaches not employing design exploration are also susceptible to the imprecision in convergence. In previous sections, we ran each tool with its default parameters. Tuning the parameters for each approach would change the results and make the difference in Pareto fronts difficult to assess. Neither approach discovers the true Pareto front, so an absolute comparison depends on the input parameters. Fortunately, we avoid the brunt of this pitfall: the selling point of our design space exploration approach is not about eking out marginal improvements over the baseline, but about exploring a completely different space.

Though we may not search any one design as thoroughly when using default parameters, our approach is able to do something the baseline cannot: discover optimizations to the input design when they exist. Consider elaborate models where there are many ways to pack and fabricate any design, and due to the complexity and number of parts, the designer has diminishing intuition about how to achieve some desired point in the space of tradeoffs.

An example that illustrates this complexity is the Jungle Gym model. The model uses a mix of 1D and 2D wood stock and has a large number of components. From Figure 2 (d) and (e), we see that the initial design was suboptimal; just by exploring several different design variations, we find a number of plans that save greatly on all three objectives. Even the expert plan is completely dominated by the Pareto front, indicating that good designs are not always readily apparent.

Our tool takes much longer to finish running, as it explores the design space jointly with the fabrication space. Because this is a synthesis problem, one might ask whether allowing the baseline to run for longer would result in better fabrication plans for the initial design. We chose not to attempt a runtime-controlled comparison of the two algorithms for two reasons. First, although it is possible that allowing the fabrication-space-only algorithm to run longer would help it find lower-cost plans, there is no parameter that directly controls how long that algorithm runs. Second, we are proposing a new approach that explores a different solution space and targets a different problem, so controlling for running time would still not create an apples-to-apples comparison.

Percentage improvement (%)
Model \ Carpenter price:   0   10   20   40   80   160   240   400
Frame 15 19 21 20 15 10 12 16
L-Frame 0 1 2 2 1 3 1 0
A-bookcase 0 1 2 6 7 6 6 5
S-Chair 0 1 2 2 6 9 11 14
Table 0 1 2 3 4 7 10 12
F-Cube 0 4 5 3 3 5 6 8
Window 0 2 4 12 20 26 28 31
Bench 5 4 4 4 4 5 1 -4
A-Chair 0 -2 -4 -4 -3 -4 -9 -16
F-Pot 0 3 4 5 7 9 9 8
Z-Table 5 5 6 8 11 11 11 12
Loom 7 3 1 0 3 3 4 5
J-Gym 7 7 7 8 11 17 20 23
D-Chair 0 1 1 2 3 2 0 -2
Bookcase 25 16 10 7 8 12 14 18
Dresser 0 2 4 6 8 25 33 42
Table 7. The percent improvement of the minimum-cost (after scalarization) plans found with design space exploration over the minimum-cost plans of the baseline, when scalarized at different carpenter prices.

2.5. Scalarization Results

Table 7 shows the scalarization of some of the tradeoffs found on the Pareto front with our method. Note that these results are all for the wood models with our default cost metric. When time is not worth much, plans with low material cost dominate. When time is worth much more than materials, low-time plans become the cheapest. We observe a variety of fabrication plans being effective at different points.
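A minimal sketch of this scalarization follows. Treating the carpenter price as dollars per hour (hence the division by 60) is an assumption of the sketch, and the toy front values are illustrative only.

```python
def scalarize(material_dollars, time_minutes, price_per_hour):
    """Single-number cost: material dollars plus the dollar value of the time."""
    return material_dollars + price_per_hour * time_minutes / 60.0

def min_scalarized(front, price):
    """Cheapest plan on a Pareto front of (material, time) pairs at this price."""
    return min(scalarize(m, t, price) for m, t in front)

def improvement(front_ours, front_baseline, price):
    """Percent improvement of our cheapest scalarized plan over the baseline's."""
    ours = min_scalarized(front_ours, price)
    base = min_scalarized(front_baseline, price)
    return 100.0 * (base - ours) / base

# Toy (material $, time min) fronts; at price 0 only material matters.
print(round(improvement([(8.5, 2.0)], [(10.0, 2.0)], price=0), 1))  # 15.0
```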

Figure 4. Design variants and fabrication plans extracted with our ICEE algorithm (Frame)
Figure 5. Design variants and fabrication plans extracted with our ICEE algorithm (A-Chair)
Figure 6. Design variants and fabrication plans extracted with our ICEE algorithm (Table)
Figure 7. Design variants and fabrication plans extracted with our ICEE algorithm (Window)
Figure 8. Design variants and fabrication plans extracted with our ICEE algorithm (L-Frame)
Figure 9. Design variants and fabrication plans extracted with our ICEE algorithm (D-Chair)
Figure 10. Design variants and fabrication plans extracted with our ICEE algorithm (Bench)
Figure 11. Design variants and fabrication plans extracted with our ICEE algorithm (F-Cube)
Figure 12. Design variants and fabrication plans extracted with our ICEE algorithm (J-Gym)
Figure 13. Design variants and fabrication plans extracted with our ICEE algorithm (Bookcase)
Figure 14. Design variants and fabrication plans extracted with our ICEE algorithm (A-Bookcase)
Figure 15. Design variants and fabrication plans extracted with our ICEE algorithm (Dresser)
Figure 16. Design variants and fabrication plans extracted with our ICEE algorithm (F-Pot)
Figure 17. Design variants and fabrication plans extracted with our ICEE algorithm (Z-Table)
Figure 18. Design variants and fabrication plans extracted with our ICEE algorithm (Loom)

References