Certifiable Robot Design Optimization using Differentiable Programming

by   Charles Dawson, et al.

There is a growing need for computational tools to automatically design and verify autonomous systems, especially complex robotic systems involving perception, planning, control, and hardware in the autonomy stack. Differentiable programming has recently emerged as a powerful tool for modeling and optimization. However, few studies have examined how differentiable programming can be used for robust, certifiable end-to-end design optimization. In this paper, we fill this gap by combining differentiable programming for robot design optimization with a novel statistical framework for certifying the robustness of optimized designs. Our framework can conduct end-to-end optimization and robustness certification for robotics systems, enabling simultaneous optimization of navigation, perception, planning, control, and hardware subsystems. Using simulation and hardware experiments, we show how our tool can be used to solve practical problems in robotics. First, we optimize sensor placements for robot navigation (a design with 5 subsystems and 6 tunable parameters) in under 5 minutes to achieve an 8.4x performance improvement compared to the initial design. Second, we solve a multi-agent collaborative manipulation task (3 subsystems and 454 parameters) in under an hour to achieve a 44% improvement over the initial design. We find that differentiable programming enables much faster optimization (32% and 20x faster, respectively) than approximate gradient methods. We certify the robustness of each design and successfully deploy the optimized designs in hardware. An open-source implementation is available at https://github.com/MIT-REALM/architect.





I Introduction

To design complex systems, engineers in many fields use computer-aided tools to boost their productivity. Mechanical engineers can use a suite of 3D CAD (computer-aided design) and FEA (finite-element analysis) tools to design structures and understand their performance. Likewise, electrical engineers use electronic design automation tools, including hardware description languages like Verilog, to design and analyze large-scale, reliable, and yet highly complex integrated circuits. Sadly, when it comes to designing autonomous systems and robots, engineers often take an ad-hoc approach, relying heavily on experience and tedious parameter tuning.

Fig. 1: An overview of our framework for robot design optimization and certification. Differentiable programming allows the user to flexibly specify a robot design problem, which can be efficiently optimized using exact gradients and verified using an extreme value statistical analysis.

Two factors have made it difficult to develop automated design tools for robotics. The first is complexity: most robots are composed of many interacting subsystems. Although some tools may aid in designing certain subsystems (e.g. Simulink for controllers, SolidWorks or CATIA for hardware, custom software for training perception systems), these tools cover only a small part of the overall robotics design problem, which includes sensing, actuation, perception, navigation, control, and decision-making subsystems. In addition to being interconnected, these subsystems often have a large number of parameters that require tuning to achieve good performance (neural network-based perception is an extreme example of this trend). Moreover, since few robotic systems are exactly alike, an effective design tool must allow the user to select an appropriate level of abstraction for the problem at hand. As a result, there is a need for flexible computational tools that can help designers optimize complex robotic systems.

The second difficulty is uncertainty. Robots operate in dynamic environments that cannot be fully specified a priori, and nonlinear interactions between the robot and its environment can make this uncertainty difficult to quantify. Nevertheless, we must account for this uncertainty during the design process and ensure that our designs perform robustly. The nature of this uncertainty can vary from problem to problem, reiterating the requirement that an automated design tool must be flexible enough to adapt to different robot design problems.

To be successful, an automated robot design tool must address these two challenges (complexity and uncertainty). In addition, just as mechanical and electrical engineers use automated tools to both design and verify their designs, a robot design tool must enable its user to both design autonomous systems and certify the robustness of those designs. In this paper, we address these challenges by combining differentiable programming for design optimization with a novel statistical approach to design certification. In short:

  1. We present a robot design optimization framework that is flexible (using differentiable programming to model complex systems) and robust (avoiding “brittle” optima).

  2. We develop a novel statistical approach to certifying a design’s robustness to environmental uncertainty.

  3. We validate our approach with experiments in simulation and hardware to show how our methods can be used to solve practical robot design problems.

Our goal is to develop a general-purpose robot design optimization tool that can be applied to a range of robot design problems with multiple subsystems. This goal is in contrast with other approaches that are restricted either to specific applications [25, 13, 8, 12, 18, 33] or subsystems [32]. To accomplish this goal, we make two novel contributions. The first is algorithmic: our approach builds on recent developments in programming languages (i.e. automatic differentiation) to provide the flexibility to model complex systems while still allowing fast gradient-based optimization. The second concerns certification: to ensure that our optimized designs are robust in the face of uncertainty, we pair design optimization with a novel statistical approach to robustness analysis.

Our experiments show that our methods can (in our first case study) optimize a robotic system with five subsystems and six design variables in under five minutes, achieving an 8.4x performance improvement over the initial design. In our second case study, we optimize a system with three subsystems and 454 design variables in under an hour, achieving a 44% performance improvement over the initial design. Our use of differentiable programming allows us to complete these optimizations 32% and 20x faster, respectively, than approximate gradient methods. Both of these designs are certified using a statistical robustness analysis and successfully deployed in hardware. An open-source implementation of our framework, including repeatable code examples, is available at https://github.com/MIT-REALM/architect. Our hope is that this prototype implementation will provide the foundation for a fully-featured, easy-to-use design tool for practicing robotics engineers.

II Related Work

II-1 Design optimization for robotics

Most existing works on design optimization for robotics focus on a particular application, such as simple walking robots [25], quadrotors [13], and soft robots [8, 12, 18]. Other works employ optimization to design specific subsystems, such as controllers [32] or motion plans [24]. In contrast, the purpose of this work is to develop a general-purpose robot design optimization tool that can be applied not only to a range of robot design problems but also to optimize the design of multiple subsystems simultaneously. This goal is related to that of a large family of multi-disciplinary design optimization (MDO) methods in aerospace engineering [19]. As discussed above, our approach differs from MDO in its use of differentiable programming as a flexible modeling tool and our novel statistical approach to robustness analysis. We review the related work for automatic differentiation and robustness analysis in the next two sections.

II-2 Programming languages for design optimization

When it comes to managing complexity in a general-purpose design framework, programming languages are a natural tool. They allow users (i.e. programmers) to define precisely which abstractions are appropriate for any given application (e.g. by defining appropriate class hierarchies and function interfaces) without sacrificing generality. To take advantage of this expressivity, we can view engineering designs as programs that define the behavior of the system given suitable choices for design structure and parameters. We can then use automatic differentiation to derive gradients connecting these parameters to the system’s behavior and optimize accordingly. This view is inspired by recent work in 3D design optimization [6], aircraft design [26], and machine learning [21, 5].

In recent years, the robotics community has also developed special-purpose differentiable simulators for robotic systems, particularly those involving rigid body contact dynamics [15, 11, 29, 28]. These simulators have been used to solve system identification and controller design tasks, but they do not represent a general-purpose framework, as gradients are often derived by hand and the simulators are not expressive enough to model full-stack robotic systems (e.g. with perception and navigation capabilities). We take inspiration from these methods, implementing a simple differentiable contact simulator for our second case study.

II-3 Formal methods for robustness analysis

Safety and robustness are critical concerns for any robotic system. When it comes to low-level control, there is a rich history of reachability [1] and stability [7, 10] analysis tools that can be used to answer questions of safety and robustness for the control subsystem. Other works apply reachability analysis at the system level using black-box tools [14]. This work builds on this history by incorporating formal analysis into a design-optimize-analyze loop to provide rapid feedback on robustness as part of the design process. In particular, we develop a novel statistical method for quantifying the worst-case performance and sensitivity of an optimized design to external perturbations.

III Preliminaries and Assumptions

Key to the design of robotic systems is the tension between the factors a designer can control and those she cannot. For instance, a designer might be able to choose the locations of sensors and tune controller gains, but she cannot choose the sensor noise or disturbances (e.g. wind) encountered during operation. Robot design is therefore the process of choosing feasible values for the controllable factors (here referred to as design parameters) that achieve good performance despite the influence of uncontrollable factors (exogenous parameters).

Of course, this is a deliberately narrow view of engineering design, since it focuses on parameter optimization and ignores important steps like problem formulation and system architecture selection. Our focus on parameter optimization is intentional, as it allows the designer to focus her creative abilities and engineering judgment on the architecture problem, using computational aids as interactive tools in a larger design process [26, 6]. This focus is common in design optimization (e.g. aircraft design in [26] and 3D CAD optimization in [6]).

To formalize the design optimization problem, we take a high-level view of the robot design problem (shown in Fig. 2), where a design problem has five components:

III-1 Design parameters

The system designer has the ability to tune certain continuous parameters θ; e.g., control gains or the positions of nodes in a sensor network.

III-2 Exogenous parameters

Some factors are beyond the designer’s control, such as wind speeds or sensor noise. We model these effects as a random variable φ with some distribution D supported on a subset of Euclidean space. We assume no knowledge of D other than the ability to draw i.i.d. samples.

III-3 Simulator

Given particular choices for the design parameters θ and exogenous parameters φ, the system’s state evolves in discrete time according to a known simulator S. This simulator describes the system’s behavior over a finite horizon as a trace of states S(θ, φ) = (x_0, …, x_T). S should be deterministic; randomness must be “imported” via the exogenous parameters.
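As an illustration of this convention, the following is a minimal sketch (not the paper's implementation) of a deterministic JAX simulator for a hypothetical one-dimensional robot. All randomness enters through a pre-sampled exogenous noise argument, so the simulator itself remains a pure, differentiable function; the dynamics, gain, and horizon are invented for the example.

```python
import jax
import jax.numpy as jnp

def simulate(design_params, exogenous, dt=0.1):
    """Toy deterministic simulator: a scalar robot state is driven toward
    zero by a proportional controller with gain `design_params`, while
    pre-sampled actuation noise (the exogenous parameters) perturbs each
    step. Randomness is "imported" through `exogenous`; the function
    itself is pure and differentiable."""
    def step(x, w):
        x_next = x + dt * (-design_params * x + w)
        return x_next, x_next

    _, trace = jax.lax.scan(step, 1.0, exogenous)
    return trace  # trace of states x_1, ..., x_T

# Draw the exogenous noise once, outside the simulator.
key = jax.random.PRNGKey(0)
noise = 0.01 * jax.random.normal(key, (50,))
trace = simulate(2.0, noise)
```

Because the trace depends on the design parameters through ordinary JAX operations, gradients of any downstream cost flow back through the rollout automatically.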

III-4 Cost

We assume access to a cost function J mapping system behaviors (i.e. a trace of states) to a scalar performance metric that we seek to minimize.

III-5 Constraints

The choice of design parameters is governed by a set of constraints c_i(θ) ≤ 0 with index set I. Design parameters are feasible if c_i(θ) ≤ 0 for all i ∈ I. Here, we consider constraints as functions of θ only; we leave the extension to robust constraints involving φ to future work.

Fig. 2: A glass-box model of a generic robotic system. Design optimization involves finding a set of design parameters so that the simulated cost is minimized, while robustness analysis involves quantifying how changes in the exogenous parameters affect the simulated cost.

We can make this discussion concrete with an example: consider the autonomous ground vehicle (AGV) design problem illustrated in Fig. 3. In this problem, our goal is to design a localization and navigation system that will allow the AGV to safely navigate between two obstacles. The AGV can estimate its position using an extended Kalman filter (EKF) with noisy measurements of its range from two nearby beacons and its heading from an IMU. The robot uses this estimate with a navigation function [22] and feedback controller to track a collision-free path between the obstacles.

In this problem, the design parameters include the locations of the two range beacons and the feedback controller gains. The exogenous parameters are the actuation and sensor noises at each timestep, drawn i.i.d. from Gaussian distributions, as well as the initial state (also Gaussian). The simulator integrates the AGV’s dynamics using a fixed timestep, updating the EKF and evaluating the navigation controller at each step. The cost function assigns a penalty to collisions with the environment, estimation errors, and deviations from the goal location. We will return to this example in more detail in Section VI-A; first, we discuss our approach to design optimization and robustness analysis in Sections IV and V, respectively.

Fig. 3: A design optimization problem for an AGV localization and navigation system. The goal is to find placements for two range sensors along with parameters for the navigation system that allow the robot to safely pass through the narrow doorway.

IV Design Optimization

Given the notation from Section III, we can formally pose the robot design optimization problem. In formulating the optimization objective, it is important to consider the variance introduced by the exogenous parameters φ. Simply minimizing the expected value of the cost E_φ[J∘S(θ, φ)] (where ∘ denotes composition) can lead to myopic behavior where exceptional performance for some values of φ compensates for poor performance on other values; this is related to the phenomenon of “reward hacking” in reinforcement learning.
Ideally, we would like our designs to be robust to variations in exogenous parameters: changing φ should not cause the performance to change much. We can include this requirement as a heuristic by penalizing the variance of J∘S. Intuitively, this heuristic “smooths” the cost function with respect to the exogenous parameters: regions of high variance (containing sharp local minima) are penalized, while regions of low variance are rewarded. We return to justify this connection to robustness in Section V-C. This heuristic leads us to the variance-regularized robust design optimization problem (with regularization weight λ ≥ 0):

    min_θ  E_φ[J∘S(θ, φ)] + λ Var_φ[J∘S(θ, φ)]    (1a)
    s.t.   c_i(θ) ≤ 0  for all i ∈ I              (1b)

Practically, we replace the expectation and variance with unbiased estimates over N samples φ_1, …, φ_N drawn i.i.d. from D:

    min_θ  (1/N) Σ_j J∘S(θ, φ_j) + (λ/(N−1)) Σ_j (J∘S(θ, φ_j) − J̄)²    (2a)
    s.t.   c_i(θ) ≤ 0  for all i ∈ I                                     (2b)

where J̄ denotes the sample mean of J∘S(θ, φ_j).

Of course, these Monte-Carlo estimators will require multiple evaluations of J∘S to evaluate (2a). Since the simulator might itself be expensive to evaluate, approximating the gradients of (2a) and (2b) using finite differences would impose a large computational cost (one additional evaluation of J∘S per design parameter at each step). Instead, we can turn to automatic differentiation (AD) to directly compute these gradients with respect to θ, which we can use with any off-the-shelf gradient-based optimization engine. The precise choice of optimization algorithm is driven by the constraints and is not central to our framework. If the constraints are hyper-rectangle bounds on θ, then algorithms like L-BFGS-B may be used; if the constraints are more complex, then sequential quadratic programming or interior-point methods may be used. Our implementation provides an interface to a range of optimization back-ends through SciPy [30], and we plan to add support for hybrid methods combining local gradient descent with gradient-free population methods in future work.

In this framework, the user need only implement the simulator and cost function for their specific problem using a differentiable programming framework like the JAX library for Python [5], and this implementation can be used automatically for efficient gradient-based optimization. By implementing a library of additional building blocks in this AD paradigm (e.g. estimation algorithms like the EKF), we can provide an AD-based design optimization tool that strikes a productive balance between flexibility and ease of use. In the supplementary materials, we provide a prototype implementation of this tool, containing some of these AD building blocks. In future work, we hope to further expand this library to include more common robotics algorithms.

V Design Certification via Robustness Analysis

Once we have found an optimal choice of design parameters, we need to verify that the design will be robust to uncertainty in the exogenous parameters. Similarly to 3D CAD and FEA packages for mechanical engineers, a successful design tool not only helps an engineer refine her design (i.e. using the design optimization framework in Section IV) but also helps her analyze and predict its performance. To certify the performance of an optimized design, we are interested in two distinct questions. First, what is the maximum cost we can expect given variation in the exogenous parameters? Second, how sensitive is the cost to external disturbances: by how much can a change in the exogenous parameters increase the cost?

Answering these questions is difficult because we must extrapolate from a finite number of simulations to predict worst-case performance. To address this difficulty, we develop a probabilistic approach based on extreme value theory in statistics [27, 31, 9]. We begin by stating a relevant result:

Theorem V.1 (Extremal Types Theorem; 3.1.1 in [9]).

Let C_1, …, C_m be random variables drawn i.i.d. from an unknown distribution and M_m = max(C_1, …, C_m) be the sample maximum. If there exist sequences of normalizing constants a_m > 0 and b_m such that the limiting distribution of (M_m − b_m)/a_m as m → ∞ is non-degenerate, then

    P((M_m − b_m)/a_m ≤ z) → G(z),

where G is a Generalized Extreme Value Distribution (GEVD) with location μ, scale σ > 0, and shape ξ,

    G(z) = exp{ −[1 + ξ(z − μ)/σ]^(−1/ξ) },

supported on {z : 1 + ξ(z − μ)/σ > 0}.

In the special case ξ = 0, this distribution has a slightly different form (known as a Gumbel distribution), but the result holds. In practice, a_m and b_m are not estimated directly (absorbing them merely changes the fit values of μ and σ), and the GEVD is fit directly to samples of M_m by either maximizing the log likelihood [9] or estimating the posterior distribution of (μ, σ, ξ) using Markov Chain Monte Carlo sampling [23]. A useful feature of the GEVD is that if our data suggest that ξ < 0, then the support of G is bounded above and we can estimate an upper bound μ − σ/ξ on the maximum. If ξ ≥ 0, then we cannot estimate a strict upper bound, but we can provide a confidence interval for the maximum instead. In the following sections, we apply this theorem to analyze the robustness of an optimized design.

V-A Estimating the worst-case performance

Our first robustness question concerns the worst-case performance of our design: given variation in the exogenous parameters φ, what is the maximum cost we can expect for our choice of design parameters θ? (Any function of the simulation trace can be substituted for the cost without changing the framework.) Our insight is that the variation in φ induces an (unknown) distribution over the cost C = J∘S(θ, φ), so C is a random variable to which the extremal types theorem applies. Algorithm 1 provides a means for estimating the maximum of C by fitting a GEVD to observed block maxima. Generally speaking, the block size m and sample size N should be chosen to be as large as computationally feasible to reduce the variance of the GEVD estimate [9].

Algorithm 1: Estimating the parameters of a GEVD governing the expected maximum cost
Input: block size m and sample size N
1: for j = 1, …, N: sample φ_{j,1}, …, φ_{j,m} i.i.d. from D and set C_j = max_{i=1,…,m} J∘S(θ, φ_{j,i})
2: return the posterior GEVD estimate (μ, σ, ξ) given C_1, …, C_N

In practice, we use the automatic parallelization features of JAX to efficiently compute the block maxima C_1, …, C_N and obtain the posterior distribution of μ, σ, and ξ using Markov Chain Monte Carlo sampling with the PyMC3 library [23]. From this posterior distribution, we take the 97% confidence level for each parameter. If the 97% confidence level for ξ is negative, we have 97% confidence that the corresponding GEVD has bounded support on the right and estimate the maximum cost μ − σ/ξ. Otherwise, we can estimate the 97% confidence level for C using the fitted GEVD.
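As a lightweight alternative to the MCMC fit described above, the GEVD parameters can also be estimated by maximum likelihood. The sketch below uses SciPy's genextreme on block maxima of a toy exponential cost distribution (a stand-in for J∘S, not the paper's cost); note that SciPy's shape parameter c is the negative of the shape ξ used here.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Stand-in for the cost J(S(theta, phi)) with phi ~ D: an exponential
# toy distribution, whose block maxima follow a Gumbel law (xi = 0).
m, N = 100, 200                      # block size and sample size
costs = rng.exponential(size=(N, m))
block_maxima = costs.max(axis=1)     # C_1, ..., C_N

# Maximum-likelihood GEVD fit; SciPy's shape c equals -xi.
c, loc, scale = genextreme.fit(block_maxima)
xi = -c

# 97% level of the fitted maximum-cost distribution.
q97 = genextreme.ppf(0.97, c, loc=loc, scale=scale)
```

For exponential costs the true shape is ξ = 0, so the fitted shape should be near zero and q97 gives a high-confidence level for the block maximum rather than a strict upper bound.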

V-B Estimating sensitivity

In addition to the expected worst-case performance, it is also useful to know the sensitivity of that performance. That is, if the design performs well in one situation (i.e. for some value of φ), then how much can we expect its performance to degrade if φ changes? Formally, we define the sensitivity as the least constant L such that for any two φ_1, φ_2,

    |J∘S(θ, φ_1) − J∘S(θ, φ_2)| ≤ L ||φ_1 − φ_2||.

If J∘S is Lipschitz then L will be finite and equal the Lipschitz constant of J∘S, but we do not require this assumption; if J∘S is not Lipschitz, then we can estimate a high-confidence upper bound on the observed slopes instead.

In both cases, we can exploit the fact that L is an extreme value of the slope and apply the extremal types theorem. Let Δ = |J∘S(θ, φ_1) − J∘S(θ, φ_2)| / ||φ_1 − φ_2|| be a random variable with φ_1, φ_2 drawn i.i.d. from D. The distribution of Δ is unknown, but the extremal types theorem lets us characterize the sample maximum of Δ using a GEVD. Algorithm 2 provides our method for fitting this distribution, and a concrete Python implementation is provided in the supplementary materials. This approach is similar to that in [27, 16] but removes the assumption that Δ is bounded by fitting a GEVD instead of a reverse Weibull distribution, allowing our approach to apply when J∘S is not Lipschitz.

Algorithm 2: Estimating the parameters of a GEVD governing the sensitivity of J∘S
Input: block size m and sample size N
1: for j = 1, …, N: sample pairs φ_{j,i}, φ'_{j,i} i.i.d. from D for i = 1, …, m and set Δ_j = max_{i=1,…,m} |J∘S(θ, φ_{j,i}) − J∘S(θ, φ'_{j,i})| / ||φ_{j,i} − φ'_{j,i}||
2: return the posterior GEVD estimate (μ, σ, ξ) given Δ_1, …, Δ_N

Algorithm 2 is similar to Algorithm 1, but the interpretation of the results differs in that the fit parameters from Algorithm 2 allow us to understand the sensitivity of a design. In particular, if the 97% confidence level for the shape parameter ξ is negative, then J∘S is likely Lipschitz continuous with Lipschitz constant at most μ − σ/ξ. If ξ ≥ 0, then J∘S is likely not Lipschitz, but we can estimate the 97% confidence level for Δ. As a result, this statistical approach allows us to avoid making prior assumptions about the continuity of our system.
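The slope-sampling idea behind Algorithm 2 can be sketched as follows, with an invented Lipschitz test function standing in for J∘S and a maximum-likelihood fit in place of the paper's MCMC posterior.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

def J(phi):
    # Stand-in for J(S(theta, phi)); this toy map is Lipschitz in phi
    # with constant sqrt(5) (its gradient norm is at most sqrt(4 + 1)).
    return np.sin(2.0 * phi[..., 0]) + phi[..., 1]

m, N = 200, 100                        # block size and sample size
phi_a = rng.normal(size=(N, m, 2))     # paired exogenous draws
phi_b = rng.normal(size=(N, m, 2))
slopes = (np.abs(J(phi_a) - J(phi_b))
          / np.linalg.norm(phi_a - phi_b, axis=-1))
delta = slopes.max(axis=1)             # block maxima Delta_1, ..., Delta_N

c, loc, scale = genextreme.fit(delta)  # SciPy's shape c equals -xi
xi = -c
# If xi < 0, the fitted GEVD has a finite right endpoint loc - scale/xi,
# which serves as a high-confidence estimate of the Lipschitz constant.
L_hat = loc - scale / xi if xi < 0 else np.inf
```

Every sampled slope is bounded by the true Lipschitz constant, so the fitted right endpoint (when it exists) estimates that constant from below-threshold observations.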

V-C Connections to Design Optimization

Here, we will attempt to justify the variance regularization heuristic introduced in Section IV with reference to the worst-case performance and sensitivity computed by Algorithms 1 and 2. First, let us examine the connection with the expected worst-case performance C_max. If we take the probability of observing a cost C within ε of C_max (i.e. C ≥ C_max − ε) and apply Cantelli’s inequality [4], we see that

    P(C ≥ C_max − ε) ≤ Var(C) / (Var(C) + (C_max − ε − E[C])²).

Minimizing Var(C) in addition to E[C] will correlate with decreasing this upper bound. As a result, we expect variance regularization to correlate with a decreased probability of encountering near-worst-case performance.
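Cantelli's one-sided inequality used above, P(C ≥ E[C] + a) ≤ Var(C)/(Var(C) + a²), is straightforward to sanity-check numerically; the exponential cost samples below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
costs = rng.exponential(size=100_000)  # any cost samples w/ finite variance
mu, var = costs.mean(), costs.var()

for a in (0.5, 1.0, 2.0):
    empirical = np.mean(costs >= mu + a)  # P(C >= E[C] + a), estimated
    cantelli = var / (var + a ** 2)       # one-sided Chebyshev bound
    assert empirical <= cantelli          # bound holds (often loosely)
```

The bound is loose for light-tailed costs, which is consistent with treating variance regularization as a heuristic rather than a guarantee.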

We can also justify the connection between variance regularization and reduced sensitivity by looking at the special case where J∘S is Lipschitz with constant L and the elements of φ are independent. The Bobkov-Houdré variance bound for Lipschitz functions [3] holds that Var(J∘S) ≤ L² σ_s², where σ_s² is the variance of the sum of the elements of φ. This bound does not explicitly show that minimizing Var(J∘S) decreases L, but it suggests a correlation that we hope to revisit in future work.

VI Experimental Results

So far, we have developed the theoretical and algorithmic basis for our robot design framework. It remains for us to empirically answer two questions: first, is our framework useful for solving practical robot design problems? Second, is our statistical method for robustness analysis sound?

In this section, we answer these questions through the lens of two case studies. The first involves finding optimal sensor placements for robot navigation, and the second involves optimizing a pushing strategy for multi-agent manipulation. We demonstrate the success of our optimization and robustness analysis framework on each example, and we provide results from hardware testing in both cases. Next, we include an ablation study justifying our use of automatic differentiation and variance regularization. We conclude by verifying the soundness of our statistical robustness analysis.

VI-A Case study: optimal sensor placement for navigation

First, we return to the AGV localization and navigation example introduced in Fig. 3. This design problem requires finding an optimal placement for two ranging beacons to minimize estimation error and allow the robot to safely navigate between two obstacles. Range measurements from these beacons are integrated with IMU data via an EKF, and the resulting state estimate is used as input to a navigation function and tracking feedback controller to guide the robot to its goal. This design problem has two important features. First, it involves interactions between multiple subsystems: the output from the EKF is used by the navigation function, which feeds input to the controller, which in turn influences future EKF predictions. Second, the effect of uncertainty on the robot’s performance is relatively strong.

The design parameters are the locations of two range beacons and two feedback controller gains (6 total design parameters). The exogenous parameters include uncertainty in the robot’s initial state along with actuation and sensing noise at each timestep. The cost function has three components: one penalizing large estimation errors, one penalizing deviations from the goal, and one penalizing collisions with the environment. A formal definition of the design and exogenous parameters, simulator, cost, and constraints is given in the appendix. We also include code in the supplementary materials for defining this design problem in our framework and running our design optimization and sensitivity analysis methods. The simulator and cost functions are implemented in Python using the JAX framework for automatic differentiation.

Fig. 4 compares simulated trajectories for the initial and optimized beacon placements and feedback gains, clearly showing the impact of design optimization. Initially, poor beacon placement causes the robot to accumulate estimation error and drift away from its goal. The optimized design moves the beacons off to the side to eliminate this drift. Optimization (using the L-BFGS-B back-end) took 3 minutes on a laptop computer with an 8-core processor.

We tested the initial and optimized design in hardware using the Turtlebot 3 platform. To emulate range beacon measurements in our lab, odometry and laser scan data were fused into a full state estimate from which range measurements were derived (the full state estimate was hidden from the robot, which only received the emulated range measurements). The control frequency was increased between simulation and hardware, and the obstacles were recreated in our laboratory. The hardware results, shown in Figs. 5 and 6, confirm our simulation results: the initial design suffers from drift and ends noticeably far from its target position, while the optimized design does not drift and ends close to the goal. This difference can be seen most clearly in the posterior error covariance from the EKF; Fig. 6 shows how the optimized design greatly reduces uncertainty in the state estimate compared to the initial design. No parameter estimation or tuning was required.

Fig. 4: Simulated trajectories for the initial (top) and optimized (bottom) AGV designs. Color indicates the value of the navigation function. Beacon positions are bounded within the area shown.
Fig. 5: Hardware performance of initial (left) and optimized (right) AGV designs. Square (green) shows the goal; triangles (red) show beacon locations. The optimized design eliminates drift relative to goal.
Fig. 6: Hardware results for EKF state estimates and posterior error covariance ellipse for initial and optimized designs.

Finally, we apply the robustness analysis from Section V to certify the maximum absolute estimation error of the optimized design (in meters, projected into the horizontal plane). Note that this error is different from the cost used during optimization, but we can still apply Algorithm 1 simply by changing the cost function for the duration of the analysis. We fit a GEVD using Algorithm 1 to the maximum estimation error for both the initial and optimized designs. These distributions are shown in Fig. 7; the optimized design significantly reduces the expected maximum estimation error. We observe that the 97% confidence level for the shape parameter ξ is positive, so we cannot conclude that the worst-case estimation error is bounded, but we can still derive a high-confidence bound on the maximum estimation error for our optimized design.

Fig. 7: GEVD CDF fit using Algorithm 1 for the maximum absolute estimation error in the horizontal plane for both the initial and optimized designs, with 97% confidence levels.

VI-B Case study: collaborative multi-robot manipulation

Fig. 8: Multi-agent manipulation design optimization problem. The goal is to find parameters for robot controllers and a neural network planner that push the box from an initial position (solid) to a desired position (striped).

Our second example involves finding a control strategy for multi-agent collaborative manipulation. In this setting, two ground robots must collaborate to push a box from its current location to a target pose (as in Fig. 8). Given the desired box pose and the current location of each robot, a neural network plans a trajectory for each robot, which the robots then track using a feedback controller (the design parameters include both the neural network weights and the tracking controller gains, for a total of 454 design parameters). The exogenous parameters include the coefficient of friction for each contact pair, the mass of the box, the desired pose of the box, and the initial pose of each robot (a total of 13 exogenous parameters; we vary the desired box pose and initial robot poses to prevent over-fitting during optimization). The cost function is simply the squared error between the desired box pose (including position and orientation) and its true final pose after a simulation. A full definition of this design problem and the contact dynamics model is included in the appendix. We implement the contact dynamics simulator, trajectory planning neural network, and path tracking controller in Python using JAX.

Fig. 9: GEVD CDF fit using Algorithm 2 for the maximum sensitivity of the optimized collaborative manipulation strategy to variation in friction coefficient. has units of meters per unit change in friction coefficient.
Fig. 10: Left: Initial (top) and optimized (bottom) manipulation strategies in simulation (light/dark colors indicate initial/final positions, stripes indicate desired position). Right: Optimized manipulation strategy deployed in hardware (video included in the supplementary materials). (a) The robots first move to positions around the box. (b) Using the optimized neural network, the robots plan a cubic spline trajectory pushing the box to its desired location. (c-d) The robots execute the plan by tracking that trajectory.

Compared to the design problem in our first case study, this system has a simpler architecture (fewer subsystems) but more complicated dynamics and a much higher-dimensional design space. This example also showcases a different interpretation of the exogenous parameters: instead of representing true sources of randomness, these parameters represent quantities that are simply unknown at design-time. For example, the target position for the box is not random in the same way as sensor noise in the previous example, but since we cannot choose this value at design-time it must be included in . As a result, minimizing the expected cost with respect to variation in yields a solution that achieves good performance for many different target poses, enabling the user to select one at run-time and be confident that the design will perform well.

To solve this design problem, the neural network parameters are initialized i.i.d. according to a Gaussian distribution, and the tracking controller gains are set to nominal values. We then optimize the parameters using , , and the L-BFGS-B back-end. This optimization took 45 minutes on a laptop computer ( of RAM and an 8-core processor). Fig. 10 shows a comparison between the initial and optimized strategies, and Fig. LABEL:fig:mam_more in the appendix shows additional examples of the optimized behavior. The target pose is drawn uniformly , and the optimized design achieves a mean squared error of .
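Feeding exact JAX gradients into SciPy's L-BFGS-B back-end can be sketched as follows; `objective` here is a hypothetical quadratic stand-in for the Monte Carlo design cost, and the 454-dimensional parameter vector mirrors the case study's design space:

```python
import numpy as np
import jax
import jax.numpy as jnp
from scipy.optimize import minimize

def objective(theta):
    """Stand-in design cost; the real objective rolls out the
    manipulation simulator over sampled exogenous parameters."""
    return jnp.sum((theta - jnp.linspace(0.0, 1.0, theta.shape[0])) ** 2)

# value_and_grad gives the optimizer both the cost and its exact AD gradient
val_and_grad = jax.jit(jax.value_and_grad(objective))

def scipy_obj(x):
    v, g = val_and_grad(jnp.asarray(x))
    return float(v), np.asarray(g, dtype=np.float64)

theta0 = np.zeros(454)  # matches the case study's parameter count
result = minimize(scipy_obj, theta0, jac=True, method="L-BFGS-B")
```

Passing `jac=True` tells SciPy that `scipy_obj` returns a (cost, gradient) pair, so each optimizer step needs only one JIT-compiled evaluation of the differentiable program.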

We tested the optimized design in hardware, again using the Turtlebot 3 platform. An overhead camera and AprilTag [20] markers were used to obtain the location of the box and each robot. At execution, each robot first moves to a designated starting location near the box, plans a trajectory using the neural network policy, and tracks that trajectory at until the box reaches its desired location or a time limit is reached. Results from this hardware experiment are shown in Fig. 10, and a video is included in the supplementary materials. Again, no parameter tuning or estimation was needed.

After successfully testing the optimized design in the laboratory, it is natural to ask how its performance might change as conditions (particularly the coefficients of friction) change. Using blocks of size each, we use Algorithm 2 to fit a GEVD for the sensitivity constant with respect to the coefficients of friction between each contact pair. We do this by allowing these coefficients to vary and freezing other elements of at nominal values (box mass and target pose ). The fit distribution is shown in Fig. 9. The 97% confidence level for the shape parameter is , so we cannot conclude that the performance of our design is Lipschitz with respect to the friction coefficients, but we can estimate the 97% confidence level for as .
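The sensitivity-sampling step behind Algorithm 2 can be sketched as follows. A hypothetical smooth `cost` function stands in for the full contact simulator; each sample is the slope between a pair of friction-coefficient draws, and a GEVD is fit to the block maxima of these slopes:

```python
import numpy as np
from scipy.stats import genextreme

def cost(mu):
    """Stand-in for final box-pose error as a function of the friction
    coefficients mu (the paper evaluates the full simulator here)."""
    return 0.05 * np.sin(3.0 * mu).sum()

def sensitivity_block_maxima(n_blocks, block_size, rng):
    """Block maxima of empirical sensitivity samples |dJ| / |d mu|."""
    maxima = []
    for _ in range(n_blocks):
        slopes = []
        for _ in range(block_size):
            mu_a, mu_b = rng.uniform(0.3, 0.9, size=(2, 3))  # paired friction draws
            slopes.append(abs(cost(mu_a) - cost(mu_b)) / np.linalg.norm(mu_a - mu_b))
        maxima.append(max(slopes))
    return np.array(maxima)

rng = np.random.default_rng(1)
maxima = sensitivity_block_maxima(n_blocks=40, block_size=50, rng=rng)
c, loc, scale = genextreme.fit(maxima)  # xi = -c; xi < 0 suggests bounded sensitivity
```

As in the paper, a negative fitted shape parameter would support a Lipschitz-style bound on the design's sensitivity, while a positive one (as observed here for the manipulation design) does not.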

Vi-C Design optimization ablation study

(a) AD vs. FD; sensor placement
(b) AD vs. FD; manipulation
(c) Effect of VR; sensor placement
(d) Effect of VR; manipulation
Fig. 11: (a)-(b) Improvement of automatic differentiation (AD) over finite differences (FD) in both case studies. (c)-(d) Effect of variance regularization (VR) in both case studies.

Our case studies in Sections VI-A and VI-B help demonstrate the utility of our framework for solving realistic robotics problems. However, it remains to justify the choices we made in designing this framework. For instance, how does automatic differentiation compare with other methods for estimating the gradient (e.g. finite differences)? What benefit does variance regularization in problem (2) bring? We answer these questions here using an ablation study where we attempt to isolate the impact of each of these features.

First, why use automatic differentiation? On the one hand, AD allows us to estimate the gradient with only a single evaluation of the objective function, while other methods (such as finite differences, or FD) require multiple evaluations. On the other hand, AD necessarily incurs some overhead at runtime, making each AD function call more expensive than those used in an FD scheme. Additionally, some arguments [28] suggest that exact gradients may be less useful than finite-difference or stochastic approximations when the objective is stiff or discontinuous. We compare AD with a 3-point finite-difference method by re-solving problem (2) for both case studies, keeping all parameters constant (, , same random seed) and replacing the gradients obtained using AD with those computed using finite differences. Fig. 11 shows the results of this comparison. In the sensor placement example, AD achieves a lower expected cost and cost variance, and it runs in 32% less time. In the collaborative manipulation example, both methods achieve similar expected cost and variance, but the AD version runs nearly 19x faster. These results lead us to conclude that AD enables more effective optimization than finite differences and is an appropriate choice for our framework. An exciting extension of our framework involves combining AD with stochastic population methods, but we leave this to future work.
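The evaluation-count argument can be made concrete: central (3-point) finite differences require two objective evaluations per parameter, while reverse-mode AD yields the full gradient from one augmented evaluation. A minimal sketch with a stand-in smooth objective (not the case-study simulators):

```python
import jax
jax.config.update("jax_enable_x64", True)  # double precision for a fair FD comparison
import jax.numpy as jnp
import numpy as np

def f(x):
    """Stand-in smooth objective; the case studies use simulator rollouts."""
    return jnp.sum(jnp.sin(x) * x ** 2)

def central_fd(fun, x, eps=1e-6):
    """3-point (central) finite differences: 2n evaluations for n parameters."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (fun(x + e) - fun(x - e)) / (2.0 * eps)
    return g

x = np.linspace(0.0, 1.0, 16)
g_ad = np.asarray(jax.grad(f)(jnp.asarray(x)))        # one augmented evaluation
g_fd = central_fd(lambda z: float(f(jnp.asarray(z))), x)  # 32 evaluations here
```

For a smooth objective the two gradients agree closely, but the FD cost grows linearly with the number of design parameters, which is why the gap widens so dramatically in the 454-parameter manipulation example.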

The next question is whether variance regularization brings any benefit to the design optimization problem. To answer this question, we compare the results of re-solving both case studies with variance weight and . These results are shown in Fig. 11; surprisingly, in the sensor placement example we see that the variance-regularized problem results in a lower expected cost, contrary to the intuition that regularization requires a trade-off of increased expected cost. We suspect that this lower expected cost results from the regularization term smoothing the objective with respect to the exogenous parameters. However, these benefits are less pronounced than the benefits from automatic differentiation, and we do not see a distinct benefit in our second case study.
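The variance-regularized objective of problem (2) amounts to a Monte Carlo estimate of the mean cost plus a weighted variance over exogenous samples. A minimal JAX sketch, with a hypothetical per-scenario cost standing in for a simulator rollout:

```python
import jax
import jax.numpy as jnp

def design_cost(theta, phi):
    """Stand-in per-scenario cost J(theta, phi); phi plays the role of
    the exogenous parameters sampled at each optimizer step."""
    return jnp.sum((theta - phi) ** 2)

def regularized_objective(theta, phis, weight):
    # Monte Carlo estimate of E[J] + weight * Var[J] over exogenous samples
    costs = jax.vmap(lambda phi: design_cost(theta, phi))(phis)
    return jnp.mean(costs) + weight * jnp.var(costs)

theta = jnp.zeros(3)
phis = jnp.array([[0.0, 0.1, -0.1], [0.2, 0.0, 0.1],
                  [-0.1, 0.1, 0.0], [0.1, -0.2, 0.2]])
grad = jax.grad(regularized_objective)(theta, phis, 0.1)
```

Because `jax.vmap` vectorizes the per-scenario rollouts and the variance term is itself differentiable, the regularized objective adds essentially no gradient-computation overhead.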

Vi-D Accuracy of robustness analysis

To verify the soundness of our statistical robustness analysis methods, we need to determine whether the fit GEVD is likely to either under- or overestimate the worst-case performance of a design. Put simply, is our approach falsely optimistic (underestimating the worst-case) or conservative (overestimating)?

To answer these questions, we compare the cumulative distribution function (CDF) of the fit GEVD with an empirical CDF observed from data. Algorithms 1 and 2 both estimate a posterior distribution for the GEVD location, scale, and shape parameters, allowing us to construct upper- and lower-bound GEVDs using the 97% and 3% confidence-level parameter estimates. Using these distributions, we can measure false optimism and conservatism using a one-sided Kolmogorov-Smirnov (KS) test [17].
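This check can be sketched with SciPy's one-sided KS test. The bound distributions and all parameter values below are hypothetical, standing in for the 3% and 97% posterior estimates from Algorithms 1 and 2:

```python
import numpy as np
from scipy.stats import genextreme, kstest

# Synthetic worst-case samples standing in for observed block maxima
maxima = genextreme.rvs(c=-0.1, loc=1.0, scale=0.2, size=200, random_state=0)

# Hypothetical 3% / 97% posterior parameter estimates bracketing the fit
lower = genextreme(c=-0.05, loc=0.95, scale=0.18)
upper = genextreme(c=-0.15, loc=1.05, scale=0.22)

# One-sided KS comparisons of the empirical CDF against each bound
# (direction conventions follow scipy's kstest `alternative` argument)
optimism = kstest(maxima, upper.cdf, alternative="less")
conservatism = kstest(maxima, lower.cdf, alternative="greater")
```

A large one-sided KS statistic against the upper bound would indicate false optimism (empirical worst cases exceeding the fit), while a large statistic against the lower bound would indicate conservatism.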

Fig. 12 compares the estimated GEVDs and empirical data for worst-case performance in the sensor placement example (fit using Algorithm 1) and sensitivity in the manipulation example (fit using Algorithm 2). In the former case, we see that the empirical CDF lies between the upper- and lower-confidence limits for the fit distribution, indicating that the fit is neither falsely optimistic at the 97% level nor conservative at the 3% level (these conclusions are confirmed by the KS statistics provided in Table LABEL:tab:ks_test_agv in the appendix). In the latter case, even though the empirical CDF extends slightly beyond the estimated bounds in some regions, the statistical analysis in Table LABEL:tab:ks_test_mam indicates that the estimated GEVD is neither falsely optimistic at the 97% level nor conservative at the 3% level. In addition, we see that the gap between the 3% and 97% distributions is relatively small in both examples in Fig. 12.

Vii Discussion and Conclusion

In this paper, we develop an automated design tool to improve the productivity of robot designers by a) enabling efficient optimization of robot designs and b) allowing users to certify the robustness of those designs. In developing this framework, we make two main algorithmic and theoretical contributions. First, we use differentiable programming for end-to-end optimization of robotic systems, creating a flexible software framework for design optimization. Second, we develop a novel statistical framework for certifying the worst-case performance and sensitivity of optimized designs.

To validate this framework and demonstrate the usefulness of our contributions, we present two case studies to highlight how our framework can be used for design optimization in practical robotics problems. Moreover, we show that our optimized designs are robust enough to deploy in hardware, and data from these hardware experiments validate our optimization approach. Finally, we provide an ablation study to justify the architecture of our optimization framework and a statistical analysis showing the soundness of our robustness analysis techniques. We hope that by combining flexible design optimization with robustness certification in our framework we can increase the productivity of robotics engineers, shorten the design cycle, and help bring more complex robotic systems to life.

There are a number of interesting directions for future work. First, since our approach relies on sampling from without any further information, it will require a large number of samples to accurately capture rare events. We can close this gap when more information about is available, perhaps using adversarial testing or importance sampling. Second, our framework is currently focused on tuning continuous parameters; we hope to incorporate stochastic search over discrete parameters in future work. Finally, we hope to expand the software implementation of our framework to include a richer library of autonomy building blocks and demonstrate a wider range of applications in designing autonomous systems, including robotic arms, autonomous air and spacecraft, and networked autonomous systems.

Fig. 12: Comparison of fit GEVD CDFs and empirical CDF for worst-case estimation error in the sensor placement example (top) and sensitivity in the manipulation example (bottom).


  • [1] M. Althoff, G. Frehse, and A. Girard (2021) Set propagation techniques for reachability analysis. Annual Review of Control, Robotics, and Autonomous Systems 4 (1), pp. 369–395. External Links: Document, Link, https://doi.org/10.1146/annurev-control-071420-081941 Cited by: §II-3.
  • [2] D. Amodei, C. Olah, J. Steinhardt, P. F. Christiano, J. Schulman, and D. Mané (2016) Concrete problems in ai safety. ArXiv abs/1606.06565. Cited by: §IV.
  • [3] S. G. Bobkov and C. Houdré (1996) Variance of lipschitz functions and an isoperimetric problem for a class of product measures. Bernoulli 2 (3), pp. 249–255. External Links: ISSN 13507265, Link Cited by: §V-C.
  • [4] S. Boucheron, G. Lugosi, and P. Massart (2016) Concentration inequalities: a nonasymptotic theory of independence. Oxford University Press. Cited by: §V-C.
  • [5] JAX: composable transformations of Python+NumPy programs External Links: Link Cited by: §II-2, §IV.
  • [6] D. Cascaval, M. Shalah, P. Quinn, R. Bodik, M. Agrawala, and A. Schulz (2021) Differentiable 3d cad programs for bidirectional editing. arXiv abs/2110.01182. Cited by: §II-2, §III.
  • [7] Y. Chang, N. Roohi, and S. Gao (2019) Neural lyapunov control. In NeurIPS, Cited by: §II-3.
  • [8] F. Chen and M. Y. Wang (2020) Design optimization of soft robots: a review of the state of the art. IEEE Robotics Automation Magazine 27 (4), pp. 27–43. External Links: Document Cited by: §I, §II-1.
  • [9] S. Coles (2001) An introduction to statistical modeling of extreme values. Springer. Cited by: §V-A, Theorem V.1, §V, §V.
  • [10] C. Dawson, Z. Qin, S. Gao, and C. Fan (2021) Safe nonlinear control using robust neural lyapunov-barrier functions. In 5th Annual Conference on Robot Learning, External Links: Link Cited by: §II-3.
  • [11] F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter (2018) End-to-end differentiable physics for learning and control. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), Vol. 31, pp. . Cited by: §II-2.
  • [12] T. Du, J. Hughes, S. Wah, W. Matusik, and D. Rus (2021) Underwater soft robot modeling and control with differentiable simulation. IEEE Robotics and Automation Letters. Cited by: §I, §II-1.
  • [13] T. Du, A. Schulz, B. Zhu, B. Bickel, and W. Matusik (2016) Computational multicopter design. ACM Transactions on Graphics (TOG) 35 (6), pp. 227. Cited by: §I, §II-1.
  • [14] C. Fan, B. Qi, S. Mitra, and M. Viswanathan (2017) DryVR: data-driven verification and compositional reasoning for automotive systems. In Computer Aided Verification, R. Majumdar and V. Kuncak (Eds.), Cham, pp. 441–461. External Links: ISBN 978-3-319-63387-9 Cited by: §II-3.
  • [15] E. Heiden, D. Millard, E. Coumans, Y. Sheng, and G. S. Sukhatme (2021) NeuralSim: augmenting differentiable simulators with neural networks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), External Links: Link Cited by: §II-2.
  • [16] C. Knuth, G. Chou, N. Ozay, and D. Berenson (2021) Planning with learned dynamics: probabilistic guarantees on safety and reachability via lipschitz constants. IEEE Robotics and Automation Letters 6 (3), pp. 5129–5136. External Links: Document Cited by: §V-B.
  • [17] Kolmogorov-smirnov goodness-of-fit test. NIST. External Links: Link Cited by: §VI-D.
  • [18] P. Ma, T. Du, J. Z. Zhang, K. Wu, A. Spielberg, R. K. Katzschmann, and W. Matusik (2021) DiffAqua: a differentiable computational design pipeline for soft underwater swimmers with shape interpolation. ACM Transactions on Graphics (TOG) 40 (4), pp. 132. Cited by: §I, §II-1.
  • [19] J. R. R. A. Martins and A. B. Lambe (2013-09) Multidisciplinary design optimization: a survey of architectures. AIAA Journal 51 (9), pp. 2049–2075. External Links: Document Cited by: §II-1.
  • [20] E. Olson (2011-05) AprilTag: a robust and flexible visual fiducial system. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3400–3407. Cited by: §VI-B.
  • [21] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8024–8035. Cited by: §II-2.
  • [22] S. J. Russell and P. Norvig (2002-12) Artificial intelligence: a modern approach (2nd edition). External Links: ISBN 0137903952, Link Cited by: §III-5.
  • [23] J. Salvatier, T. V. Wiecki, and C. Fonnesbeck (2016) Probabilistic programming in python using pymc3. PeerJ Computer Science 2. External Links: Document Cited by: §V-A, §V.
  • [24] J. Schulman, Y. Duan, J. Ho, A. Lee, I. Awwal, H. Bradlow, J. Pan, S. Patil, K. Goldberg, and P. Abbeel (2014-08) Motion planning with sequential convex optimization and convex collision checking. The International Journal of Robotics Research 33 (9), pp. 1251–1270 (English). External Links: ISSN 0278-3649, Document Cited by: §II-1.
  • [25] A. Schulz, C. Sung, A. Spielberg, W. Zhao, R. Cheng, E. Grinspun, D. Rus, and W. Matusik (2017) Interactive robogami: an end-to-end system for design of robots with ground locomotion. The International Journal of Robotics Research 36 (10), pp. 1131–1147. Cited by: §I, §II-1.
  • [26] P. D. Sharpe (2021) AeroSandbox: a differentiable framework for aircraft design optimization. Master’s Thesis, MIT. Cited by: §II-2, §III.
  • [27] K. Sridhar, O. Sokolsky, I. Lee, and J. Weimer (2021) Improving neural network robustness via persistency of excitation. arXiv. External Links: Link, 2106.02078 Cited by: §V-B, §V.
  • [28] H. Suh, T. Pang, and R. Tedrake (2021) Bundled gradients through contact via randomized smoothing. ArXiv abs/2109.05143. Cited by: §II-2, §VI-C.
  • [29] R. Tedrake and the Drake Development Team (2019) Drake: model-based design and verification for robotics. External Links: Link Cited by: §II-2.
  • [30] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors (2020) SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17, pp. 261–272. External Links: Document Cited by: §IV.
  • [31] G.R. Wood and B.P. Zhang (1996) Estimation of the lipschitz constant of a function. Journal of Global Optimization 8 (1). External Links: Document Cited by: §V.
  • [32] J. Xu, T. Du, M. Foshey, B. Li, B. Zhu, A. Schulz, and W. Matusik (2019-07) Learning to fly: computational controller design for hybrid uavs with reinforcement learning. ACM Trans. Graph. 38 (4). External Links: ISSN 0730-0301, Link, Document Cited by: §I, §II-1.
  • [33] J. Zhang, J. Liu, C. Wang, Y. Song, and B. Li (2017) Study on multidisciplinary design optimization of a 2-degree-of-freedom robot based on sensitivity analysis and structural analysis. Advances in Mechanical Engineering 9 (4), pp. 1687814017696656. External Links: Document, Link Cited by: §I.

Sensor Placement Design Problem Statement

We model the robot with discrete-time Dubins dynamics with three state variables (), two control inputs for linear and angular velocity (), and noisy transition model