Physics engines are important for planning and control in robotics. To plan for a task, a robot may use a physics engine to simulate the effects of different actions on the environment and then select a sequence of them to reach a desired goal state. The utility of the resulting action sequence depends on the accuracy of the physics engine’s predictions, so a high-fidelity physics engine is an important component in robot planning. Most physics engines used in robotics (such as Mujoco and Bullet ) use approximate contact models, and recent studies [3, 4, 5] have demonstrated discrepancies between their predictions and real-world data. These mismatches make contact-rich tasks hard to solve with such physics engines.
One way to increase the robustness of controllers and policies derived from physics engines is to add perturbations to parameters that are difficult to estimate accurately (e.g., frictional variation as a function of position). This approach leads to an ensemble of simulated predictions that covers a range of possible outcomes. Using the ensemble allows the controller to take more conservative actions and increases robustness, but does not address the limitations of the approximate models themselves [6, 7].
To correct for model errors due to approximations, we learn a residual model between real-world measurements and a physics engine’s predictions. Combining the physics engine and residual model yields a data-augmented physics engine. This strategy is effective because learning a residual error of a reasonable approximation (here from a physics engine) is easier and more sample efficient than learning from scratch. This approach has been shown to be more data efficient, have better generalization capabilities, and outperform its purely analytical or data-driven counterparts [8, 9, 10, 11].
Most residual-based approaches assume a fixed number of objects in the world states. This means they cannot be applied to states with a varied number of objects or generalize what they learn for one object to other similar ones. This problem has been addressed by approaches that use graph-structured network models, such as interaction networks  and neural physics engines . These methods are effective at generalizing over objects, modeling interactions, and handling variable numbers of objects. However, as they are purely data-driven, in practice they require a large number of training examples to arrive at a good model.
In this paper, we propose simulator-augmented interaction networks (SAIN), incorporating interaction networks into a physical simulator for complex, real-world control problems. Specifically, we show:
Sample-efficient residual learning and improved prediction accuracy relative to the physics engine,
Accurate predictions for the dynamics and interactions of novel arrangements and numbers of objects, and
The utility of the learned residual model for control in highly underactuated planar pushing tasks.
We demonstrate SAIN’s performance on the experimental setup depicted in Fig. 1. Here, the robot’s objective is to guide the second disk to a goal by pushing on the first. This task is challenging due to the presence of multiple complex frictional interactions and underactuation . We demonstrate the step-by-step deployment of SAIN, from training in simulation to augmentation with real-world data, and finally control.
II Related Work
II-A Learning Contact Dynamics
In the field of contact dynamics, researchers have looked towards data-driven techniques to complement analytical models and/or directly learn dynamics. For example, Byravan and Fox  designed neural nets to predict rigid-body motions for planar pushing. Their approach does not exploit explicit physical knowledge. Kloss et al.  used neural net predictions as input to an analytical model; the output of the analytical model is used as the prediction. Here, the neural network learns to maximize the analytical model’s performance. Fazeli et al.  also studied learning a residual model for predicting planar impacts. Zhou et al.  employed a data-efficient algorithm to capture the frictional interaction between an object and a support surface. They later extended it for simulating parametric variability in planar pushing and grasping .
The paper closest to ours is that of Ajay et al. , who used the analytical model as an approximation to the push outcomes and learned a residual neural model that corrects its output. In contrast, our paper makes two key innovations: first, instead of using a feedforward network to model the dynamics of a single object, we employ an object-based network to learn residuals. Object-based networks build upon explicit object representations and learn how objects interact; this enables capturing multi-object interactions. Second, we demonstrate that such a hybrid dynamics model can be used for control tasks both in simulation and on a real robot.
II-B Differentiable Physical Simulators
There has been an increasing interest in building differentiable physics simulators . For example, Degrave et al.  proposed to directly solve differentiable equations. Such systems have been deployed for manipulation and planning for tool use . Battaglia et al.  and Chang et al.  have both studied learning object-based, differentiable neural simulators. Their systems explicitly model the state of each object and learn to predict future states based on object interactions. In this paper, we combine such a learned object-based simulator with a physics engine for better prediction and for controlling real-world objects.
II-C Control with a Learned Simulator
Recent papers have explored model-predictive control with deep networks [21, 22, 23, 24, 25]. These approaches learn an abstract-state transition function rather than an explicit model of the environment [26, 27], and then apply the learned value function or model to guide policy-network training. In contrast, we employ an object-based physical simulator that takes raw object states (e.g., velocity, position) as input. Hogan et al.  also learned a residual model alongside an analytical model for model-predictive control, but their learned model is a task-specific Gaussian process, while ours can generalize to new object shapes and materials.
A few papers have exploited the power of interaction networks for planning and control, mostly using them to help train policy networks via imagination, i.e., rolling out approximate predictions [29, 30, 31]. In contrast, we use interaction networks as a learned dynamics simulator, combine them with a physics engine, and directly search for actions in real-world control problems. Recently, Sanchez-Gonzalez et al.  also used interaction networks for control, though their model does not incorporate explicit physical knowledge, and its performance is demonstrated only in simulation.
In this section, we describe SAIN’s formulation and components. We also present our Model Predictive Controller (MPC) which uses SAIN to perform the pushing task.
Let $\mathcal{S}$ be the state space and $\mathcal{A}$ be the action space. A dynamics model $f$ is a function that predicts the next state given the current action and state: $f: \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$.
There are two general types of dynamics models: analytical (Fig. 2a) and data-driven (Fig. 2b). Our goal is to learn a hybrid dynamics model that combines the two (Fig. 2c). Here, conditioned on the state-action pair, the data-driven model learns the discrepancy between analytical model predictions and real-world data (i.e., the residual). Specifically, let $f_h$ represent the hybrid dynamics model, $f_p$ the physics engine, and $f_r$ the residual component. We have $f_h(s, a) = f_r(f_p(s, a), s, a)$. Intuitively, the residual model refines the physics engine’s guess using the current state and action.
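This composition can be sketched in a few lines. The sketch below is purely illustrative: `physics_engine` is a toy integrator standing in for a real engine step, and `residual_model` is a fixed linear correction standing in for the trained network.

```python
import numpy as np

def physics_engine(state, action):
    """Stand-in for the analytical model f_p (e.g., one engine step).
    Toy point-mass integrator over state = [x, y, vx, vy]."""
    pos, vel = state[:2], state[2:]
    vel = vel + action              # treat the action as a velocity impulse
    pos = pos + vel
    return np.concatenate([pos, vel])

def residual_model(physics_pred, state, action):
    """Stand-in for the learned residual f_r; a real system would use a
    trained network conditioned on (physics_pred, state, action)."""
    return physics_pred + 0.01 * np.concatenate([action, action])

def hybrid_model(state, action):
    """f_h(s, a) = f_r(f_p(s, a), s, a): the residual refines the
    physics engine's guess using the current state and action."""
    return residual_model(physics_engine(state, action), state, action)

s0 = np.zeros(4)
s1 = hybrid_model(s0, np.array([1.0, 0.0]))
```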
For long-term prediction, let $\bar{f}_h$ represent the recurrent hybrid dynamics model (Fig. 2d). If $s_0$ is the initial state, $a_t$ the action at time $t$, $\hat{s}^p_{t+1}$ the prediction by the physics engine at time $t+1$, and $\hat{s}_{t+1}$ the prediction at time $t+1$, then

$$\hat{s}^p_{t+1} = f_p(\hat{s}_t, a_t), \qquad \hat{s}_{t+1} = f_r(\hat{s}^p_{t+1}, \hat{s}_t, a_t), \qquad \hat{s}_0 = s_0.$$
For training, we collect observational data $\{(s_t, a_t, s_{t+1})\}$ and then solve the following optimization problem:

$$\min_{\theta} \; \sum_{t} \|s_{t+1} - \hat{s}_{t+1}\|_2^2 + \lambda \|\theta\|_2^2,$$

where $\theta$ parameterizes the residual model and $\lambda$ is the weight for the regularization term.
In this study, we choose a recurrent parametric model over a non-recurrent representation for two reasons. First, non-recurrent models are trained on observation data to make single-step predictions; consequently, errors in prediction compound over a sequence of steps. Second, since these models recursively consume their own predictions, the inputs seen at simulation time have a different distribution from the inputs seen during training. This creates a data distribution mismatch between the training and test phases.
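The recurrent training scheme above can be sketched as follows: the rollout feeds each prediction back as the next input, exactly as at test time, and the loss is computed over the whole rollout. `model` here is any single-step dynamics function; both helpers are illustrative, not the paper's implementation.

```python
import numpy as np

def rollout(model, s0, actions):
    """Recurrent multi-step prediction: each step consumes the model's
    OWN previous prediction, matching how it is used at test time."""
    preds, s = [], s0
    for a in actions:
        s = model(s, a)
        preds.append(s)
    return np.stack(preds)

def multi_step_loss(model, s0, actions, true_states):
    """Training on whole rollouts penalizes compounding error directly,
    unlike single-step training on ground-truth inputs."""
    preds = rollout(model, s0, actions)
    return float(np.mean(np.sum((preds - true_states) ** 2, axis=-1)))
```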
III-B Interaction Networks
We use interaction networks  as the data-driven model for multi-object interaction. An interaction network consists of two neural nets: a relation network $f_R$ and an object network $f_O$. The relation network $f_R$ calculates pairwise effects (forces) between objects, and the object network $f_O$ calculates the next state of an object, based on the states of the objects it is interacting with and the nature of the interactions.
The original version of interaction networks was trained to make single-step predictions; for improved accuracy, we extend them to make multi-step predictions. Let $s_t = \{s^i_t\}_{i=1}^{N}$ be the state at time $t$, where $s^i_t$ is the state of object $i$ at time $t$. Similarly, let $\hat{s}_t = \{\hat{s}^i_t\}_{i=1}^{N}$ be the predicted state at time $t$, where $\hat{s}^i_t$ is the predicted state of object $i$ at time $t$. In our work, $s^i_t = [p^i_t, v^i_t, m^i, r^i]$, where $p^i_t$ is the pose of object $i$ at time step $t$, $v^i_t$ its velocity, $m^i$ its mass, and $r^i$ its radius. Similarly, $\hat{s}^i_t = [\hat{p}^i_t, \hat{v}^i_t, m^i, r^i]$, where $\hat{p}^i_t$ is the predicted pose and $\hat{v}^i_t$ the predicted velocity of object $i$ at time step $t$. Note that we do not predict any changes to static object properties such as mass and radius. Also note that while $s_t$ is a set of objects, the state of any individual object, $s^i_t$, is a vector. Now, let $a^i_t$ be the action applied to object $i$ at time $t$. The equations for the interaction network are:

$$\hat{s}^i_{t+1} = f_O\Big(s^i_t, \; a^i_t, \; \sum_{j \neq i} f_R(s^i_t, s^j_t)\Big).$$
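A minimal toy version of this update, with small hand-written stand-ins for the relation net $f_R$ and object net $f_O$ and a simplified 4-dimensional state [x, y, vx, vy] (the trained networks and full object state are not reproduced here):

```python
import numpy as np

def f_R(s_i, s_j):
    """Toy relation net: pairwise effect of object j on object i
    (here just the relative position, standing in for a learned force)."""
    return s_j[:2] - s_i[:2]

def f_O(s_i, a_i, effect):
    """Toy object net: next state of object i from its state, its action,
    and the aggregated effects of all other objects."""
    nxt = s_i.copy()
    nxt[2:4] = s_i[2:4] + a_i + 0.1 * effect   # update velocity
    nxt[:2] = s_i[:2] + nxt[2:4]               # update position
    return nxt

def interaction_network_step(states, actions):
    """s_hat[i] = f_O(s[i], a[i], sum_{j != i} f_R(s[i], s[j]))."""
    next_states = []
    for i, (s_i, a_i) in enumerate(zip(states, actions)):
        effect = sum(f_R(s_i, s_j) for j, s_j in enumerate(states) if j != i)
        next_states.append(f_O(s_i, a_i, effect))
    return next_states
```

Because the update sums pairwise effects over whichever objects are present, the same networks apply to scenes with any number of objects.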
III-C Simulator-Augmented Interaction Networks (SAIN)
A simulator-augmented interaction network extends an interaction network: $f_R$ and $f_O$ now also take in the prediction of a physics engine, and we learn the residual between the physics engine and the real world. Let $s_t$ be the state at time $t$ and $\tilde{s}^i_{t+1}$ be the state of object $i$ at time $t+1$ predicted by the physics engine. The equations for SAIN are

$$\hat{s}^i_{t+1} = f_O\Big(s^i_t, \; \tilde{s}^i_{t+1}, \; a^i_t, \; \sum_{j \neq i} f_R(s^i_t, s^j_t, \tilde{s}^j_{t+1})\Big).$$
These equations describe a single-step prediction. For multi-step prediction, we apply the same equations recursively, providing the true state at the initial time step and the model's own predicted state at each subsequent step as input.
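A toy SAIN step, mirroring the interaction-network sketch but with both networks also consuming the engine's per-object guess `st` ($\tilde{s}$) and the object net correcting that guess. The stand-in functions are illustrative, not the trained networks.

```python
import numpy as np

def f_R(s_i, s_j, st_j):
    """Toy relation net with access to the engine's guess for object j."""
    return (s_j[:2] - s_i[:2]) + 0.5 * (st_j[:2] - s_j[:2])

def f_O(s_i, st_i, a_i, effect):
    """Toy object net: a small correction around the engine's guess st_i,
    so the network only has to learn the residual."""
    return st_i + 0.05 * np.concatenate([effect, a_i])

def sain_step(states, engine_preds, actions):
    """s_hat[i] = f_O(s[i], st[i], a[i], sum_{j!=i} f_R(s[i], s[j], st[j]))."""
    out = []
    for i in range(len(states)):
        effect = sum(f_R(states[i], states[j], engine_preds[j])
                     for j in range(len(states)) if j != i)
        out.append(f_O(states[i], engine_preds[i], actions[i], effect))
    return out
```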
III-D Control Algorithm
Our action space has two free parameters: the point where the robot contacts the first disk and the direction of the push. In our experiments, a successful execution requires searching for a trajectory of about 50 actions. Due to the size of the search space, we use an approximate receding-horizon control algorithm with our dynamics model. The search algorithm maintains a priority queue of action sequences, ranked by the heuristic below. For each expansion, let $s_t$ be the current state and $\hat{s}_{t+H}$ the predicted state after $H$ steps with actions $a_t, \ldots, a_{t+H-1}$. Let $s_g$ be the goal state. We choose the control strategy that minimizes the cost function and insert the new action sequence into the queue.
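The queue-based search can be sketched as a best-first expansion over action sequences. This is a generic sketch of the idea, not the paper's exact controller: `model`, `actions`, and `cost` are placeholders for the dynamics model, the discretized action set, and the heuristic cost.

```python
import heapq

def receding_horizon_search(model, s0, goal, actions, horizon, cost):
    """Best-first search over action sequences using a priority queue
    keyed by the heuristic cost of the predicted state."""
    # queue entries: (cost, tie-breaker, predicted state, action sequence)
    queue = [(cost(s0, goal), 0, s0, [])]
    tie = 1
    while queue:
        _, _, s, seq = heapq.heappop(queue)
        if len(seq) == horizon:
            return seq                   # lowest-cost full-length sequence
        for a in actions:
            s_next = model(s, a)
            heapq.heappush(queue, (cost(s_next, goal), tie, s_next, seq + [a]))
            tie += 1
    return []
```

The integer tie-breaker keeps the heap from ever comparing states directly when two sequences have equal cost.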
We demonstrate SAIN on a challenging planar manipulation task both in simulation and in the real world. We further evaluate how our model generalizes to control tasks that involve objects of new materials and shapes.
In this manipulation task, we are given two disks with different masses and radii. Our goal is to guide the second disk to a target location, but we are constrained to push only the first disk. Here, a point in the state space is factored into a set of two object states, $s = \{s^1, s^2\}$, where each $s^i$ is an element of the object state space $\mathcal{S}_o$. The object state includes the mass, 2D position, rotation, velocity, and radius of the disk.
Target locations are generated at random and divided into two categories: easy and hard. A target location is produced by first sampling an angle $\alpha$ from an interval, then choosing the goal location to be at a distance of three times the radius of the second disk and at an angle of $\alpha$ with respect to the second disk. In easy pushes, the interval is narrower than in hard pushes. A push is considered a success if the distance between the goal location and the center of mass of the second disk is within the radius of the second disk.
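The goal-sampling procedure and success criterion can be sketched directly. The angle interval passed in is a placeholder (the paper's exact easy/hard intervals are not reproduced here).

```python
import numpy as np

def sample_target(disk2_pos, disk2_radius, angle_interval, rng):
    """Sample a goal at distance 3 * r from the second disk, at an angle
    drawn from the given interval (narrow = easy, wide = hard)."""
    alpha = rng.uniform(*angle_interval)
    offset = 3 * disk2_radius * np.array([np.cos(alpha), np.sin(alpha)])
    return disk2_pos + offset

def is_success(disk2_pos, goal, disk2_radius):
    """A push succeeds if the disk's center is within one radius of the goal."""
    return np.linalg.norm(disk2_pos - goal) <= disk2_radius
```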
IV-B Simulation Setup
We use the Bullet physics engine  for simulation. For each trajectory, we vary the coefficient of friction between the surface and the disks, the mass of the disks, and their radii. The coefficient of friction and the radii are sampled at random, and the mass is sampled from Uniform(0.85 kg, 1.15 kg). We always fix the initial position of the first disk to the origin. The other disks are placed in front of the first disk at a randomly sampled angle, just touching it; we ensure that the disks do not overlap. The pusher is placed behind the first disk at a randomly sampled angle, just touching it. Then the pusher makes a straight-line push at a randomly sampled angle for 2 s, covering a distance of about 1 cm. We experiment with two simulation setups: (1) a direct-force setup, in which we control the pusher with an external force, and (2) a robot-control setup, in which we control the pusher using position control. We use the first setup to show the benefits of SAIN over other models. In our real-world setup, however, the pusher is position-controlled, so we designed the second simulation setup to match the real robot and use it to collect pre-training data.
For the direct-force simulation setup, we collect pushes with 2 disks for our training set, pushes with 2 disks and pushes with 3 disks for our test set. For the robot control simulation setup, we collect pushes with 2 disks for our training set and pushes with 2 disks for our test set.
IV-C Model and Implementation Details
[Table I: forward prediction errors on Object 1/Object 2 — trans (%), pos (mm), rot (deg).]
We compare two models for simulation and control: the original interaction networks (IN) and our simulator-augmented interaction networks (SAIN). They share the same architecture: each consists of two separate neural networks, $f_R$ and $f_O$. Both have four linear layers with hidden sizes of 128, 64, 32, and 16, respectively; the first three linear layers are each followed by a ReLU activation.
Training interaction networks in simulation is straightforward. It is more involved for SAIN, which learns a correction over the Bullet physics engine, so the problem of training “in simulation” is ill-posed. To address this, we fix the physics engine inside SAIN, setting the mass and radius of the disks to those of the real-world disks and the coefficient of friction to an estimate of the real surface’s mean coefficient of friction across space. The training data instead contain varied masses and radii for both disks and a varied coefficient of friction between the disks and the surface, and the model is trained to learn the residual.
We use ADAM  for optimization with a starting learning rate of 0.001, decayed by 0.5 every 2,500 iterations. We train these models for 10,000 iterations with a batch size of 100. Let the predicted 2D position, rotation, and velocity of disk $i$ at time $t$ be $\hat{p}^i_t$, $\hat{r}^i_t$, and $\hat{v}^i_t$, respectively, and the corresponding true values be $p^i_t$, $r^i_t$, and $v^i_t$. Let $T$ be the length of all trajectories. The training loss function for a single trajectory is

$$\frac{1}{T} \sum_{i=1}^{2} \sum_{t=0}^{T-1} \|p^i_t - \hat{p}^i_t\|_2^2 + \|v^i_t - \hat{v}^i_t\|_2^2 + \|\sin r^i_t - \sin \hat{r}^i_t\|_2^2 + \|\cos r^i_t - \cos \hat{r}^i_t\|_2^2.$$

During training, we take the mean of this loss over a batch of trajectories. We also use $L_2$ regularization on the network weights.
In practice, we ask the models to predict the change in object states (relative values) rather than the absolute values. This enables them to generalize to arbitrary starting positions without overfitting.
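Predicting relative values can be expressed as a thin wrapper around any network; this sketch is illustrative, with `net` standing in for the trained residual network.

```python
import numpy as np

def delta_wrapper(net):
    """Wrap a network that outputs state CHANGES (relative values): the
    change is added back to the input state, so the model never has to
    represent absolute positions and generalizes to arbitrary starts."""
    def model(state, action):
        return state + net(state, action)
    return model
```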
IV-D Search Algorithm
[Table II: forward prediction errors on Object 1/2/3 — trans (%), pos (mm), rot (deg).]
[Table III: forward prediction errors on Object 1/2 — trans (%), pos (mm), rot (deg).]
As mentioned in Sec. III-D, an action is defined by the initial position of the pusher and the angle of the push, $\theta$, with respect to the first disk. After these parameters have been selected, the pusher starts at the initial position and moves at an angle of $\theta$ with respect to the first disk for a fixed duration. We discretize our action space as follows. For selecting $\theta$, we divide its interval into six bins and choose their midpoints. For selecting the initial position of the pusher, we choose an angle $\phi$ and place the pusher at the edge of the first disk at angle $\phi$, so that the pusher touches the first disk; we divide the interval of $\phi$ into 12 bins and choose the midpoint of one of them. Therefore, our action space consists of 72 discretized actions at each time step. We maintain a priority queue of action sequences ranked by the heuristic $h = \|\hat{p}^2 - p_g\|_2 + d_{\cos}(\hat{p}^2 - \hat{p}^1, p_g - \hat{p}^1)$, where $\hat{p}^i$ is the predicted 2D position of disk $i$, $p_g$ is the 2D position of the goal, and $d_{\cos}$ is the cosine distance. The cosine distance serves as a regularization cost that encourages the centers of both disks and the goal to stay on a straight line. To prevent the priority queue from blowing up, we do receding-horizon greedy search with an initial horizon of 2, increasing it to 3 when the distance between the second disk and the goal falls below a threshold.
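Enumerating the 6 x 12 = 72 discrete actions is a simple cross product of bin midpoints. The interval bounds below are placeholders, not the paper's exact ranges.

```python
import numpy as np

def bin_midpoints(lo, hi, n):
    """Midpoints of n equal-width bins over [lo, hi]."""
    edges = np.linspace(lo, hi, n + 1)
    return (edges[:-1] + edges[1:]) / 2

def discretize_actions(push_interval=(-np.pi / 6, np.pi / 6),
                       place_interval=(-np.pi / 3, np.pi / 3)):
    """Enumerate 6 push-angle midpoints x 12 placement-angle midpoints
    = 72 discrete actions per time step (interval bounds are placeholders)."""
    pushes = bin_midpoints(*push_interval, 6)
    places = bin_midpoints(*place_interval, 12)
    return [(place, push) for place in places for push in pushes]
```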
[Table IV: forward prediction errors on Object 1/2 with and without fine-tuning — trans (%), pos (mm), rot (deg).]
IV-E Prediction and Control Results in Simulation
The forward multi-step prediction errors of both interaction networks and SAIN for the direct-force simulation setup with 2 and 3 disks are reported in Table I and Table II. Note that errors on different objects are separated by a slash in all the tables. The training data for this setup consist of pushes with only 2 disks. The forward multi-step prediction errors for the robot-control simulation setup are reported in Table III. Given an initial state and a sequence of actions, the models perform forward prediction for the next 200 time steps, where each time step is 1/240 s. SAIN outperforms interaction networks in both setups. We also list the results of the fixed physics engine used for training SAIN for reference.
We have also evaluated IN and SAIN on control tasks in simulation. We test each model on 25 easy and 25 hard pushes. For these pushes, we set the masses of the two disks to 0.9 kg and 1 kg and their radii to 54 mm and 59 mm, making them differ from those used in the internal physics engine of SAIN. This mimics real-world environments, where objects’ sizes and shapes cannot be precisely estimated, and ensures SAIN cannot cheat by simply querying the internal simulator. Fig. 3 shows that SAIN performs better than IN. This suggests that learning the residual not only improves forward prediction but also benefits control.
IV-F Real-World Robot Setup
We now test our models on a real robot. The setup used for the real experiments is based on the system from the MIT Push dataset . The pusher is a cylinder of radius 4.8 mm attached to the last joint of an ABB IRB 120 robot. The position of the pusher is recorded using the robot’s kinematics. The two disks being pushed are made of stainless steel, have radii of 52.5 mm and 58 mm, and weigh 0.896 kg and 1.1 kg. During the experiments, the smaller disk is the one pushed directly by the pusher. The positions of both disks are tracked using a Vicon system with four cameras, so the disks’ positions are highly accurate. Finally, the surface on which the objects lie is made of ABS (acrylonitrile butadiene styrene), whose coefficient of friction is around 0.15. Each push is performed at 50 mm/s and spans 10 mm. We collect 1,500 pushes, of which 1,200 are used for training and 300 for testing.
We evaluate two versions of interaction networks and SAIN. The first is an off-the-shelf version purely trained on synthetic data; the second is one trained on simulated data and later fine-tuned on real data. This helps us understand whether these models can exploit real data to adapt to new environments.
IV-G Results on Real-World Data
Results of forward simulation are shown in Table IV. SAIN outperforms IN on real data. While both models benefit from fine-tuning, SAIN achieves the best performance. This suggests that residual learning also generalizes well to real data. All models achieve a lower error on real data than in simulation; this is because the simulated data contain a significant amount of added noise to make the problem more challenging.
We then evaluate SAIN (both with and without fine-tuning) for control, on 25 easy and 25 hard pushes. The results are shown in Fig. 4. The model without fine-tuning achieves 100% success rate on easy pushes and 68% on hard pushes. As shown in the rightmost columns of Fig. 3(a), it sometimes pushes the object too far and gets stuck in a local minimum. After fine-tuning, the model works well on both easy pushes (100%) and hard pushes (96%) (Fig. 3(b)).
While objects of different shapes and materials have different dynamics, the gaps between their dynamics in simulation and in the real world may share similar patterns. This is the intuition behind the observation that residual learning generalizes more easily to novel scenarios. Ajay et al.  validated this for forward prediction. Here, we evaluate how our fine-tuned SAIN generalizes for control. We test our model on 25 hard pushes on a different surface (plywood, where the original surface is ABS), using the original disks. Our framework succeeds in 92% of the pushes; Fig. 4(a) shows qualitative results. We have also evaluated our model on another 25 hard pushes, in which it pushes the large disk (58 mm) to direct the small one (52.5 mm). Our framework succeeds in 88% of the pushes; Fig. 4(b) shows qualitative results. These results suggest that SAIN can generalize to control tasks with new object shapes and materials.
We have proposed a hybrid dynamics model, simulator-augmented interaction networks (SAIN), combining a physical simulator with a learned, object-centered neural network. Our underlying philosophy is to use analytical models to capture real-world processes as far as possible, and to learn the remaining residuals. Learned residual models are specific to the real-world scenario for which data are collected, and adapt the model accordingly. The combined physics engine and residual model requires little domain-specific knowledge or hand-crafting and generalizes well to unseen situations. We have demonstrated SAIN’s efficacy on a challenging control problem in both simulation and the real world. Our model also generalizes to setups where object shape and material vary, and has potential applications in control tasks that involve complex contact dynamics.
Acknowledgments. This work is supported by NSF #1420316, #1523767, and #1723381, AFOSR grant FA9550-17-1-0165, ONR MURI N00014-16-1-2007, Honda Research, Facebook, and Draper Laboratory.
-  E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine for model-based control,” in IROS. IEEE, 2012, pp. 5026–5033.
-  E. Coumans, “Bullet physics simulation,” in SIGGRAPH, 2015.
-  R. Kolbert, N. Chavan Dafle, and A. Rodriguez, “Experimental Validation of Contact Dynamics for In-Hand Manipulation,” in ISER, 2016.
-  K.-T. Yu, M. Bauza, N. Fazeli, and A. Rodriguez, “More than a million ways to be pushed. a high-fidelity experimental dataset of planar pushing,” in IROS. IEEE, 2016, pp. 30–37.
-  N. Fazeli, S. Zapolsky, E. Drumwright, and A. Rodriguez, “Fundamental limitations in performance and interpretability of common planar rigid-body contact models,” in ISRR, 2017.
-  I. Mordatch, K. Lowrey, and E. Todorov, “Ensemble-cio: Full-body dynamic motion planning that transfers to physical humanoids,” in IROS, 2015.
-  A. Becker and T. Bretl, “Approximate steering of a unicycle under bounded model perturbation using ensemble control,” IEEE TRO, vol. 28, no. 3, pp. 580–591, 2012.
-  N. Fazeli, S. Zapolsky, E. Drumwright, and A. Rodriguez, “Learning data-efficient rigid-body contact models: Case study of planar impact,” in CoRL, 2017, pp. 388–397.
-  A. Ajay, J. Wu, N. Fazeli, M. Bauza, L. P. Kaelbling, J. B. Tenenbaum, and A. Rodriguez, “Augmenting physical simulators with stochastic neural networks: Case study of planar pushing and bouncing,” in IROS, 2018.
-  K. Chatzilygeroudis and J.-B. Mouret, “Using parameterized black-box priors to scale up model-based policy search for robotics,” in ICRA, 2018.
-  A. Kloss, S. Schaal, and J. Bohg, “Combining learned and analytical models for predicting action effects,” arXiv:1710.04102, 2017.
-  P. W. Battaglia, R. Pascanu, M. Lai, D. Rezende, and K. Kavukcuoglu, “Interaction networks for learning about objects, relations and physics,” in NeurIPS, 2016.
-  M. B. Chang, T. Ullman, A. Torralba, and J. B. Tenenbaum, “A compositional object-based approach to learning physical dynamics,” in ICLR, 2017.
-  F. R. Hogan and A. Rodriguez, “Feedback control of the pusher-slider system: A story of hybrid and underactuated contact dynamics,” in WAFR, 2016.
-  A. Byravan and D. Fox, “Se3-nets: Learning rigid body motion using deep neural networks,” in ICRA, 2017.
-  J. Zhou, R. Paolini, A. Bagnell, and M. T. Mason, “A convex polynomial force-motion model for planar sliding: Identification and application,” in ICRA, 2016, pp. 372–377.
-  J. Zhou, A. Bagnell, and M. T. Mason, “A fast stochastic contact model for planar pushing and grasping: Theory and experimental validation,” in RSS, 2017.
-  S. Ehrhardt, A. Monszpart, N. Mitra, and A. Vedaldi, “Taking visual motion prediction to new heightfields,” arXiv:1712.09448, 2017.
-  J. Degrave, M. Hermans, and J. Dambre, “A differentiable physics engine for deep learning in robotics,” in ICLR Workshop, 2016.
-  M. Toussaint, K. Allen, K. Smith, and J. Tenenbaum, “Differentiable physics and stable modes for tool-use and manipulation planning,” in RSS, 2018.
-  I. Lenz, R. A. Knepper, and A. Saxena, “Deepmpc: Learning deep latent features for model predictive control,” in RSS, 2015.
-  S. Gu, T. Lillicrap, I. Sutskever, and S. Levine, “Continuous deep q-learning with model-based acceleration,” in ICML, 2016.
-  A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine, “Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning,” in ICRA, 2018.
-  G. Farquhar, T. Rocktäschel, M. Igl, and S. Whiteson, “Treeqn and atreec: Differentiable tree planning for deep reinforcement learning,” in ICLR, 2018.
-  A. Srinivas, A. Jabri, P. Abbeel, S. Levine, and C. Finn, “Universal planning networks,” in ICML, 2018.
-  D. Silver, H. van Hasselt, M. Hessel, T. Schaul, A. Guez, T. Harley, G. Dulac-Arnold, D. Reichert, N. Rabinowitz, A. Barreto, and T. Degris, “The predictron: End-to-end learning and planning,” in ICML, 2017.
-  J. Oh, S. Singh, and H. Lee, “Value prediction network,” in NeurIPS, 2017.
-  M. Bauza, F. R. Hogan, and A. Rodriguez, “A data-efficient approach to precise and controlled pushing,” in CoRL, 2018.
-  S. Racanière, T. Weber, D. Reichert, L. Buesing, A. Guez, D. J. Rezende, A. P. Badia, O. Vinyals, N. Heess, Y. Li, R. Pascanu, P. Battaglia, D. Silver, and D. Wierstra, “Imagination-augmented agents for deep reinforcement learning,” in NeurIPS, 2017.
-  J. B. Hamrick, A. J. Ballard, R. Pascanu, O. Vinyals, N. Heess, and P. W. Battaglia, “Metacontrol for adaptive imagination-based optimization,” in ICLR, 2017.
-  R. Pascanu, Y. Li, O. Vinyals, N. Heess, L. Buesing, S. Racanière, D. Reichert, T. Weber, D. Wierstra, and P. Battaglia, “Learning model-based planning from scratch,” arXiv:1707.06170, 2017.
-  A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, and P. Battaglia, “Graph networks as learnable physics engines for inference and control,” in ICML, 2018.
-  E. Coumans, “Bullet physics engine,” Open Source Software: http://bulletphysics.org, 2010.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in ICLR, 2015.