1 Introduction
A key ingredient to achieving intelligent behavior is physical understanding. Under the umbrella of intuitive physics, specialized models, such as interaction and graph neural networks, have been proposed to learn dynamics from data to predict the motion of objects over long time horizons. By labelling the training data with physical quantities, such models can condition their behavior on actual physical parameters, such as masses or friction coefficients, allowing for plausible estimation of physical properties and improved generalizability.
In this work, we introduce Interactive Differentiable Simulation (IDS), a differentiable physical simulator for rigid body dynamics. Instead of learning every aspect of such dynamics from data, our engine constrains the learning problem to the prediction of a small number of physical parameters that influence the motion and interaction of bodies.
A differentiable physics engine provides many advantages when used as part of a learning process. Physically accurate simulation obeys the dynamical laws of real systems, including conservation of energy and momentum. Furthermore, joint constraints are enforced exactly, leaving no room for error outside of the model. The parameters of a physics engine are well defined and correspond to properties of real systems, including multi-body geometries, masses, and inertia matrices. Learning these parameters yields a highly interpretable parameter space and can benefit classical control and estimation algorithms. Further, due to the high inductive bias, model parameters need not be retrained jointly for differing degrees of freedom or a reconfigured dynamics environment.
2 Differentiable Rigid Body Dynamics
In this work, we introduce a physical simulator for rigid-body dynamics. The motion of kinematic chains of multi-body systems can be described using the Newton-Euler equations:
$$M(q)\,\ddot{q} + C(q, \dot{q}) = \tau$$

Here, $q$, $\dot{q}$ and $\ddot{q}$ are vectors of generalized¹ position, velocity and acceleration coordinates, and $\tau$ is a vector of generalized forces. $M(q)$ is the generalized inertia matrix, which depends on $q$. Coriolis forces, centrifugal forces, gravity and other forces acting on the system are accounted for by the bias force $C(q, \dot{q})$, which depends on $q$ and $\dot{q}$.

¹ “Generalized coordinates” sparsely encode only particular degrees of freedom in the kinematic chain such that bodies connected by joints are guaranteed to remain connected.

Since all bodies are connected via joints, their positions and orientations in world coordinates are computed by the forward kinematics function (Fig. 2) using the joint angles and the bodies’ relative transforms to the joints through which they are attached to their parent body. Free-floating bodies connect to a static world body via special joints with seven degrees of freedom (DOF), i.e. 3D position and orientation in quaternion coordinates.

Forward dynamics (cf. Fig. 2) is the mapping from positions, velocities and forces to accelerations. We efficiently compute the forward dynamics using the Articulated Body Algorithm (ABA) Featherstone (2007). Given a descriptive model consisting of joints, bodies, and predecessor/successor relationships, we build a kinematic chain that specifies the dynamics of the system. In our simulator, bodies comprise physical entities with mass, inertia, and attached rendering and collision geometries. Joints describe constraints on the relative motion of bodies in a model. Equipped with such a graph of bodies connected via joints with forces acting on them, ABA computes the joint accelerations in $O(n)$ operations. Following the calculation of the accelerations, we apply semi-implicit Euler integration (cf. Fig. 2) to compute the velocities and positions of the joints and bodies at the next instant given time step $\Delta t$:

$$\dot{q}_{t+1} = \dot{q}_t + \ddot{q}_t\,\Delta t, \qquad q_{t+1} = q_t + \dot{q}_{t+1}\,\Delta t$$
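The velocity-then-position update above can be sketched as follows (a minimal sketch; the single-pendulum dynamics and all parameter values are illustrative, not taken from the paper):

```python
import math

def semi_implicit_euler(q, qd, qdd, dt):
    """One semi-implicit Euler step: update velocities first, then positions."""
    qd_next = [v + a * dt for v, a in zip(qd, qdd)]
    q_next = [p + v * dt for p, v in zip(q, qd_next)]
    return q_next, qd_next

# Usage: a single pendulum (hypothetical parameters), qdd = -(g/l) * sin(q).
g, l, dt = 9.81, 1.0, 0.01
q, qd = [0.5], [0.0]
for _ in range(100):
    qdd = [-(g / l) * math.sin(q[0])]
    q, qd = semi_implicit_euler(q, qd, qdd, dt)
```

Updating the velocity before the position is what distinguishes this symplectic scheme from explicit Euler; it keeps the energy of oscillatory systems approximately bounded over long rollouts.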
In control scenarios, external forces are applied to the kinematic tree and propagated through the joints and bodies of the physical model. This propagation is efficiently calculated using the Recursive Newton-Euler Algorithm (RNEA) Featherstone (2007). For body $i$, let $\lambda(i)$ denote the predecessor body and $\mu(i)$ denote the set of successor bodies. We denote the allowable motion subspace matrix of joint $i$ by $S_i$, and the spatial inertia matrix of body $i$ by $I_i$. Given the velocity $v_{\lambda(i)}$ and acceleration $a_{\lambda(i)}$ of the predecessor body, we may compute

$$v_i = v_{\lambda(i)} + S_i \dot{q}_i, \qquad a_i = a_{\lambda(i)} + S_i \ddot{q}_i + \dot{S}_i \dot{q}_i$$
Denoting the net force on body $i$ as $f_i^B$, we use the physical equation of motion to relate this force to the body acceleration:

$$f_i^B = I_i a_i + v_i \times^* I_i v_i$$
We can separate this force into $f_i$, the force transmitted from body $\lambda(i)$ across joint $i$, and $f_i^x$, the external force acting on body $i$ (such as gravity). Then

$$f_i^B = f_i + f_i^x - \sum_{j \in \mu(i)} f_j$$
which lets us easily calculate $f_i$, the force transmitted across each joint, as

$$f_i = f_i^B - f_i^x + \sum_{j \in \mu(i)} f_j$$
Finally, we may calculate the generalized force vector at joint $i$ as

$$\tau_i = S_i^T f_i$$
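The forward/backward recursions above can be illustrated on a deliberately simplified model: a chain of prismatic joints sliding along a single axis, where $S_i = 1$, the spatial inertia reduces to the scalar mass, and the velocity-product terms vanish (a sketch; the function name and setup are ours, not the engine's API):

```python
def rnea_1d(masses, parent, qdd, f_ext=None):
    """Recursive Newton-Euler for a chain of prismatic joints on a line.

    masses[i]: mass of body i; parent[i]: index of the predecessor (-1 = world,
    and bodies are numbered root to leaf); qdd[i]: joint acceleration;
    f_ext[i]: external force on body i. Returns the generalized force at
    each joint (tau_i = S_i^T f_i = f_i in this scalar setting).
    """
    n = len(masses)
    f_ext = f_ext or [0.0] * n
    # Forward pass: propagate accelerations root -> leaves (S_i = 1).
    a = [0.0] * n
    for i in range(n):
        a_parent = 0.0 if parent[i] == -1 else a[parent[i]]
        a[i] = a_parent + qdd[i]
    # Net force on each body from its equation of motion (no bias term in 1D).
    fB = [masses[i] * a[i] for i in range(n)]
    # Backward pass: f_i = fB_i - f_ext_i + sum of successor joint forces.
    f = [0.0] * n
    for i in reversed(range(n)):
        f[i] += fB[i] - f_ext[i]
        if parent[i] != -1:
            f[parent[i]] += f[i]
    return f
```

For two bodies in series, the root joint must supply the force to accelerate both bodies, which the backward summation over successors captures.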
While the analytical gradients of the rigid-body dynamics algorithms can be derived manually Carpentier and Mansard (2018), we choose to implement the entire physics engine in the reverse-mode automatic differentiation framework Stan Math Carpenter et al. (2015). Automatic differentiation allows us to compute gradients of any quantity involved in the simulation of complex systems, opening avenues to state estimation, optimal control and system design. Enabled by low-level optimization, our C++ implementation is designed to lay the foundations for real-time performance on physical robotic systems in the future.
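As an illustration of what is being differentiated (not of the Stan Math mechanics themselves), the following probes the sensitivity of a rolled-out state to a physical parameter with central finite differences; reverse-mode automatic differentiation computes the same derivative analytically, without the extra rollouts or truncation error. All constants here are illustrative:

```python
import math

def simulate_final_angle(length, steps=250, dt=0.01, q0=0.3, g=9.81):
    """Roll out a single pendulum with semi-implicit Euler; return its final angle."""
    q, qd = q0, 0.0
    for _ in range(steps):
        qdd = -(g / length) * math.sin(q)
        qd += qdd * dt
        q += qd * dt
    return q

# Sensitivity of the final state with respect to the link length.
eps = 1e-6
grad = (simulate_final_angle(1.0 + eps) - simulate_final_angle(1.0 - eps)) / (2 * eps)
```

Every quantity inside the rollout (positions, velocities, forces) admits such derivatives, which is what enables the estimation, design, and control applications in the following sections.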
3 Experiments
3.1 Inferring Physical Properties from Vision
To act autonomously, intelligent agents need to understand the world through high-dimensional sensor inputs, such as cameras. We demonstrate that our approach is able to infer the relevant physical parameters of the environment dynamics from these types of high-dimensional observations. We optimize the weights of an autoencoder network trained to predict the future visual state of a dynamical system, with our physics layer serving as the bottleneck layer. In this exemplar scenario, given an image of a three-link compound pendulum simulated in the MuJoCo physics simulator Todorov et al. (2012) at time $t$, the model is tasked to predict the future rendering of this pendulum $N$ time steps ahead. Compound pendula are known to exhibit chaotic behavior: given slightly different initial conditions (such as link lengths, starting angles, etc.), the trajectories drift apart significantly. Therefore, IDS must recover the true physical parameters accurately in order to generate valid motions that match the training data well into the future.

We model the encoder $enc$ and the decoder $dec$ as neural networks consisting of two 256-unit hidden layers, mapping from grayscale images to a six-dimensional vector of joint positions $q$ and velocities $\dot{q}$, and vice versa. Inserted between both networks, we place an IDS layer (Fig. 2) to forward-simulate the given joint coordinates from time $t$ to time $t+N$, where $N$ is the number of time steps of the prediction horizon. Given the time step $\Delta t$ of the input data, the goal is thus to predict the state of the pendulum $N \Delta t$ seconds into the future. While the linear layers of $enc$ and $dec$ are parameterized by weights and biases, the IDS layer, referred to as $f_{IDS}$, is conditioned on physical parameters which, in the case of our compound pendulum, are the lengths of the three links $l_1, l_2, l_3$. We choose arbitrary values to initialize these parameters.
Given a dataset of ground-truth pairs of images $x_t, x_{t+N}$ and ground-truth joint coordinates $q_t$, we optimize a triplet loss using the Adam optimizer that jointly trains the individual components of the autoencoder, where $enc$, $dec$ and $f_{IDS}$ denote the encoder, the decoder and the physics layer, respectively:

$$\mathcal{L} = \sum_t \|enc(x_t) - q_t\|^2 + \|dec(q_t) - x_t\|^2 + \|dec(f_{IDS}(enc(x_t))) - x_{t+N}\|^2 \qquad (1)$$
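One plausible form of the three terms, supervising the latent state, reconstructing the image, and predicting the future frame through the physics layer, can be sketched as follows (our reading of the triplet; `enc`, `dec` and `f_ids` are placeholders for the trained networks and the physics layer):

```python
import numpy as np

def triplet_loss(enc, dec, f_ids, x_t, x_future, q_t):
    """Sum of latent-state supervision, image reconstruction, and
    physics-based future prediction (enc/dec/f_ids are callables)."""
    z = enc(x_t)                                      # image -> joint state
    l_state = np.sum((z - q_t) ** 2)                  # supervise the latent state
    l_recon = np.sum((dec(q_t) - x_t) ** 2)           # reconstruct the image
    l_pred = np.sum((dec(f_ids(z)) - x_future) ** 2)  # predict the future frame
    return l_state + l_recon + l_pred
```

Because the physics layer sits in the middle term, gradients of the prediction error flow through it into the link lengths, which is what drives them toward the true values.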
We note that the physical parameters converge to the true parameters of the dynamical system, as shown in Fig. 4.
As a baseline from the intuitive physics literature, we train a graph neural network model based on Sanchez-Gonzalez et al. (2018) on the first 800 frames of a 3-link pendulum motion. When we let the graph network predict 20 time steps into the future from a point after these 800 training samples, it produces implausible predictions in which the pendulum continues to swing up, even though such motion would violate Newton’s laws. Such behavior is typical for fully learned models, which mostly achieve accurate predictions only within the domain of the training examples. By contrast, IDS imposes a strong inductive bias, which allows the estimator to make accurate predictions far into the future (Fig. 4).
3.2 Automatic Robot Design
Industrial robotic applications often require a robot to follow a given tool path. In general, robotic arms with 6 or more degrees of freedom provide large workspaces and redundant configurations to reach any possible point within the workspace. However, motors are expensive to produce, maintain, and calibrate. Designing arms that contain a minimal number of motors required for a task provides economic and reliability benefits, but imposes constraints on the kinematic design of the arm.
One standard for specifying the kinematic configuration of a serial robot arm is the Denavit-Hartenberg (DH) parameterization. For each joint $i$, the DH parameters are $(\theta_i, d_i, a_i, \alpha_i)$. The preceding motor axis is denoted by $z_{i-1}$ and the current motor axis is denoted by $z_i$. $d_i$ describes the distance to the joint projected onto $z_{i-1}$ and $\theta_i$ specifies the angle of rotation about $z_{i-1}$. $a_i$ specifies the distance to joint $i$ in the direction orthogonal to $z_{i-1}$ and $\alpha_i$ describes the angle between $z_{i-1}$ and $z_i$, rotated about the $x$-axis of the preceding motor coordinate frame. We are primarily interested in arms with motorized revolute joints, and thus $\theta_i$ becomes the parameter of our joint state. We can thus fully specify the relevant kinematic properties of a serial robot arm with $n$ degrees of freedom (DOF) as $D = (d_i, a_i, \alpha_i)_{i=1}^{n}$.
We specify a task-space trajectory for $t = 1, \dots, T$ as the world coordinates $x_t^*$ of the end-effector of the robot. Given a joint-space trajectory $q_{1:T}$, we seek to find the best $n$-DOF robot arm design, parameterized by DH vector $D$, that most closely matches the specified end-effector trajectory:

$$D^* = \arg\min_D \sum_{t=1}^{T} \left\| FK(q_t; D) - x_t^* \right\|^2$$

where the forward kinematics function $FK$ maps from joint space to Cartesian tool positions conditioned on DH parameters $D$. Since we compute $FK$ using our engine, we may compute derivatives with respect to arbitrary inputs of this function (cf. Fig. 2), and use gradient-based optimization through L-BFGS to converge to arm designs that accurately perform the trajectory-following task, as shown in Fig. 5.
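The DH transform, forward kinematics, and the trajectory-matching objective can be sketched as follows (a NumPy sketch with function names of our choosing; in the paper, $FK$ and its gradients come from the differentiable engine itself):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive DH frames for (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(q, dh_rows):
    """End-effector world position for joint angles q and fixed (d, a, alpha) rows."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

def design_objective(dh_rows, q_traj, x_star):
    """Summed squared end-effector error over the joint-space trajectory."""
    return sum(np.sum((forward_kinematics(q, dh_rows) - x) ** 2)
               for q, x in zip(q_traj, x_star))
```

Minimizing `design_objective` over the `(d, a, alpha)` rows, with gradients supplied by automatic differentiation, is the design search described above.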
3.3 Adaptive MPC
Besides parameter estimation and design, a key benefit of differentiable physics is its applicability to optimal control algorithms. In order to control a system within our simulator, we specify the control space $\mathcal{U}$, which is typically a subset of the system’s generalized forces $\tau$, and the state space $\mathcal{S}$. Given a twice-differentiable cost function $c(s_t, u_t)$, we can quadratically approximate the cost and linearize the dynamics at every time step, allowing efficient gradient-based optimal control techniques to be employed. Iterative Linear Quadratic Control (iLQR) Li and Todorov (2004) is a direct trajectory optimization algorithm that uses a dynamic programming scheme on the linearized dynamics to derive the control inputs that successively move the trajectory of states and controls closer to the optimum of the cost function.
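For a fixed linearization, iLQR's backward pass reduces to the finite-horizon LQR recursion on the linearized dynamics, which can be sketched as follows (a sketch, not the paper's implementation; the double-integrator matrices in the usage are illustrative):

```python
import numpy as np

def lqr_backward(A, B, Q, R, horizon):
    """Finite-horizon discrete LQR gains via the Riccati recursion, the
    core of iLQR's backward pass once the dynamics are linearized."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains ordered from the first time step onward

# Usage: stabilize a double integrator (dt = 0.1; weights are illustrative).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
gains = lqr_backward(A, B, np.eye(2), np.eye(1), horizon=50)
x = np.array([1.0, 0.0])
for K in gains:            # apply time-varying feedback u_t = -K_t x_t
    x = A @ x + B @ (-K @ x)
```

iLQR repeats this backward pass around successively improved nominal trajectories, relinearizing the (differentiable) dynamics each iteration.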
Throughout our control experiments, we optimize a trajectory for an $n$-link cartpole to swing up from an arbitrary initial configuration of the joint angles. In the case of the double cartpole, i.e. a double inverted pendulum on a cart, the state space is defined as $s = [x\ \dot{x}\ \theta_1\ \theta_2\ \dot{\theta}_1\ \dot{\theta}_2\ \ddot{\theta}_1\ \ddot{\theta}_2]$, where $x$ and $\dot{x}$ refer to the cart’s position and velocity, $\theta_1, \theta_2$ to the joint angles, and $\dot{\theta}_1, \dot{\theta}_2$ and $\ddot{\theta}_1, \ddot{\theta}_2$ to the velocities and accelerations of the revolute joints of the poles, respectively. For a single cartpole the state space is represented analogously, excluding the second revolute joint coordinates. The cost is defined as the norm of the control plus the Euclidean distance between the cartpole’s current state and the goal state, at which the pole is upright at zero angular velocity and acceleration and the cart is centered at the origin with zero velocity.
Trajectory optimization assumes that the dynamics model is accurate w.r.t. the real world and generates sequences of actions that achieve optimal behavior towards a given goal state, leading to open-loop control. Model-predictive control (MPC) leverages trajectory optimization in a feedback loop in which the next action is chosen as the first control computed by trajectory optimization over a shorter time horizon with the internal dynamics model. After some actions are executed in the real world and subsequent state samples are observed, adaptive MPC (Algorithm 1) fits the dynamics model to these samples to align it more closely with the real-world dynamics. In this experiment, we investigate how differentiable physics can help overcome the domain shift that poses an essential challenge for model-based control algorithms deployed in a different environment. To this end, we incorporate IDS as the dynamics model in such a receding-horizon control algorithm to achieve swing-up motions of a single and double cartpole in the DeepMind Control Suite Tassa et al. (2018) environments, which are based on the MuJoCo simulator Todorov et al. (2012).
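The receding-horizon loop of Algorithm 1 can be sketched as follows (names and signatures are ours; `plan` stands in for the iLQR solve and `fit` for the model-fitting step, and the sketch fits once per episode rather than on the paper's exact schedule):

```python
def adaptive_mpc(env_reset, env_step, plan, fit, params, episodes, horizon, steps):
    """Receding-horizon control with periodic model fitting.

    plan(state, params, horizon) returns a control sequence from trajectory
    optimization; fit(transitions, params) re-estimates the simulator
    parameters from the observed data. Only the first planned action is
    executed at each step.
    """
    transitions = []
    for _ in range(episodes):
        s = env_reset()
        for _ in range(steps):
            u = plan(s, params, horizon)[0]   # execute only the first action
            s_next = env_step(u)
            transitions.append((s, u, s_next))
            s = s_next
        params = fit(transitions, params)     # align the model with real dynamics
    return params
```

The differentiable simulator enters twice: inside `plan`, which linearizes it for iLQR, and inside `fit`, which differentiates the prediction error with respect to the physical parameters.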
We fit the model parameters $\theta$ of the system by minimizing the state-action prediction loss:

$$\mathcal{L}(\theta) = \sum_t \left\| f_\theta(s_t, u_t) - s_{t+1} \right\|^2 \qquad (2)$$

where $f_\theta(s_t, u_t)$ is the simulator’s one-step prediction and $(s_t, u_t, s_{t+1})$ are the observed state-action transitions.
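The fitting objective of Eq. (2) can be sketched as follows (a sketch; `step` is a placeholder for the simulator's one-step prediction $f_\theta$):

```python
import numpy as np

def prediction_loss(theta, transitions, step):
    """Squared error between one-step simulator predictions and observed states.

    transitions: iterable of (s, u, s_next) tuples observed in the real system;
    step(s, u, theta): the simulator's differentiable one-step prediction.
    """
    return sum(np.sum((step(s, u, theta) - s_next) ** 2)
               for s, u, s_next in transitions)
```

Because the simulator is differentiable in $\theta$, this loss can be handed directly to a quasi-Newton optimizer such as L-BFGS, as described next.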
Thanks to the low dimensionality of the model parameter vector (for a double cartpole there are 14 parameters, cf. Fig. 6), efficient optimizers such as the quasi-Newton optimizer L-BFGS are applicable, leading to fast convergence of the fitting phase, typically within 10 optimization steps. The length of one episode is 140 time steps. During the first episode we fit the dynamics model more often, i.e. every 50 time steps, to warm-start the receding-horizon control scheme. Given a horizon size of 20 and 40 time steps, MPC is able to find the optimal swing-up trajectory for the single and double cartpole, respectively.
Within a handful of training episodes, adaptive MPC infers the correct model parameters involved in the dynamics of the double cartpole (Fig. 6). As shown in Fig. 1, the models we start from in IDS do not match their counterparts from the DeepMind Control Suite. For example, the poles in the Control Suite are represented by capsules whose mass is distributed across these elongated geometries, whereas initially in our IDS model the center of mass of each pole is at its end, so that the poles have different inertia parameters. We initialize the masses, the lengths of the links, and the 3D coordinates of the centers of mass to a value of 2, and, using a few steps of the optimizer and fewer than 100 transition samples, converge to a much more accurate model of the true dynamics in the MuJoCo environment. Using the example of a cartpole, Fig. 8 visualizes the predicted and actual dynamics for each state dimension after the first (left) and third (right) episode.
Having a dynamics model fitted to the true system dynamics, adaptive MPC significantly outperforms two model-free reinforcement learning baselines, Deep Deterministic Policy Gradient (DDPG) Lillicrap et al. (2015) and Soft Actor-Critic (SAC) Haarnoja et al. (2018), in sample efficiency. Both baseline algorithms operate on the same state space as adaptive MPC, while receiving a dense reward that matches the negative of our cost function. Although DDPG and SAC are eventually able to attain higher average rewards than adaptive MPC on the single cartpole swing-up task (Fig. 7), we note that the iLQR trajectory optimization constrains the force applied to the cartpole within a fixed interval, which caps the overall achievable reward since the swing-up takes more time with less forceful movements.
4 Related Work
Degrave et al. (2019) implemented a differentiable physics engine in the automatic differentiation framework Theano. IDS is implemented in C++ using Stan Math
Carpenter et al. (2015), which enables reverse-mode automatic differentiation to efficiently compute gradients, even in cases where the code branches significantly. Analytical gradients of rigid-body dynamics algorithms have been implemented efficiently in the Pinocchio library Carpentier and Mansard (2018) to facilitate optimal control and inverse kinematics. These are less general than our approach, since they can only be used to optimize for a number of hand-engineered quantities. Simulating non-penetrative multi-point contacts between rigid bodies requires solving a linear complementarity problem (LCP), through which de Avila Belbute-Peres et al. (2018) differentiate using the differentiable quadratic program solver OptNet Amos and Kolter (2017). While our proposed model does not yet incorporate contact dynamics, we are able to demonstrate the scalability of our approach on versatile applications of differentiable physics to common 3D control domains.

Learning dynamics models has a long tradition in the fields of robotics and control theory. Early works on forward models Moore (1992) and locally weighted regression Atkeson et al. (1997) yielded control algorithms that learn from previous experiences. More recently, a variety of novel deep learning architectures have been proposed to learn intuitive physics models. Inductive bias has been introduced through graph neural networks Sanchez-Gonzalez et al. (2018); Li et al. (2019); Liu et al. (2019), particularly interaction networks Battaglia et al. (2016); Schenck and Fox (2018); Mrowca et al. (2018), which are able to learn rigid and soft body dynamics. By incorporating more structure into the learning problem, Deep Lagrangian Networks Lutter et al. (2019)
represent functions in the Lagrangian mechanics framework using deep neural networks. Besides novel architectures, vision-based machine learning approaches to predict the future outcomes of the state of the world have been proposed
Wu et al. (2015, 2017); Finn et al. (2016).

The approach of adapting the simulator to real-world dynamics, which we demonstrate through our adaptive MPC algorithm in Sec. 3.3, has been less explored. While many previous works have adapted simulators to the real world using system identification and state estimation Kolev and Todorov (2015); Zhu et al. (2018), few have shown adaptive model-based control schemes that actively close the feedback loop between the real and the simulated system Reichenbach (2009); Farchy et al. (2013); Chebotar et al. (2018). Instead of using a simulator, model-based reinforcement learning is a broader field Polydoros and Nalpantidis (2017) in which the system dynamics, and state-action transitions in particular, are learned to achieve higher sample efficiency compared to model-free methods. Within this framework, predominantly Gaussian processes Ko et al. (2007); Deisenroth and Rasmussen (2011); Boedecker et al. (2014) and neural networks Williams et al. (2016); Yamaguchi and Atkeson (2016) have been proposed to learn the dynamics and optimize policies.
5 Future Work
We plan to extend this work in several ways. IDS can currently model only limited dynamics due to its lack of a contact model. Modeling collision and contact in a physically plausible, differentiable way is an exciting topic that will greatly expand the number of environments that can be modeled.
We are interested in exploring the loss surfaces of redundant physical parameters in IDS, where different models may have equivalent predictive power over the given task horizon. Resolving couplings between physical parameters can give rise to exploration strategies that expose properties of the physical system which allow our model to systematically calibrate itself. By examining the generalizability of models on these manifolds, we hope to establish guarantees of performance and prediction for specific tasks.
6 Conclusion
We introduced Interactive Differentiable Simulation (IDS), a novel differentiable layer in the deep learning toolbox that allows for inference of physical parameters, optimal control and system design. Being constrained to the laws of physics, such as conservation of energy and momentum, our proposed model is interpretable in that its parameters have physical meaning. Combined with established learning algorithms from computer vision and receding-horizon planning, we have shown how such a physics model can lead to significant improvements in sample efficiency and generalizability. Within a handful of trials in the test environment, our gradient-based representation of rigid-body dynamics allows an adaptive MPC scheme to infer the model parameters of the system, thereby allowing it to make predictions and plan actions many time steps ahead.
References
- Amos and Kolter [2017] Brandon Amos and J. Zico Kolter. OptNet: Differentiable optimization as a layer in neural networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 136–145. PMLR, 2017.
- Atkeson et al. [1997] Christopher G. Atkeson, Andrew W. Moore, and Stefan Schaal. Locally weighted learning for control. Artificial Intelligence Review, 11(1):75–113, Feb 1997. ISSN 1573-7462. doi: 10.1023/A:1006511328852. URL https://doi.org/10.1023/A:1006511328852.
- Battaglia et al. [2016] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems, pages 4502–4510, 2016.
- Boedecker et al. [2014] J. Boedecker, J. T. Springenberg, J. Wülfing, and M. Riedmiller. Approximate real-time optimal control based on sparse gaussian process models. In 2014 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pages 1–8, Dec 2014. doi: 10.1109/ADPRL.2014.7010608.
- Carpenter et al. [2015] Bob Carpenter, Matthew D. Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, and Michael Betancourt. The stan math library: Reverse-mode automatic differentiation in C++. CoRR, abs/1509.07164, 2015. URL http://arxiv.org/abs/1509.07164.
- Carpentier and Mansard [2018] Justin Carpentier and Nicolas Mansard. Analytical derivatives of rigid body dynamics algorithms. In Robotics: Science and Systems, 2018.
- Chebotar et al. [2018] Yevgen Chebotar, Ankur Handa, Viktor Makoviychuk, Miles Macklin, Jan Issac, Nathan D. Ratliff, and Dieter Fox. Closing the sim-to-real loop: Adapting simulation randomization with real world experience. CoRR, abs/1810.05687, 2018. URL http://arxiv.org/abs/1810.05687.
- de Avila Belbute-Peres et al. [2018] Filipe de Avila Belbute-Peres, Kevin Smith, Kelsey Allen, Josh Tenenbaum, and J. Zico Kolter. End-to-end differentiable physics for learning and control. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7178–7189. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7948-end-to-end-differentiable-physics-for-learning-and-control.pdf.
- Degrave et al. [2019] Jonas Degrave, Michiel Hermans, Joni Dambre, and Francis wyffels. A differentiable physics engine for deep learning in robotics. Frontiers in Neurorobotics, 13:6, 2019. ISSN 1662-5218. doi: 10.3389/fnbot.2019.00006. URL https://www.frontiersin.org/article/10.3389/fnbot.2019.00006.
- Deisenroth and Rasmussen [2011] Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pages 465–472, 2011.
- Farchy et al. [2013] Alon Farchy, Samuel Barrett, Patrick MacAlpine, and Peter Stone. Humanoid robots learning to walk faster: From the real world to simulation and back. In Proc. of 12th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS), May 2013. URL http://www.cs.utexas.edu/users/ai-lab/?AAMAS13-Farchy.
- Featherstone [2007] Roy Featherstone. Rigid Body Dynamics Algorithms. Springer-Verlag, Berlin, Heidelberg, 2007. ISBN 0387743146.
- Finn et al. [2016] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pages 64–72, 2016.
- Haarnoja et al. [2018] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
- Ko et al. [2007] J. Ko, D. J. Klein, D. Fox, and D. Haehnel. Gaussian processes and reinforcement learning for identification and control of an autonomous blimp. In Proceedings 2007 IEEE International Conference on Robotics and Automation, pages 742–747, April 2007. doi: 10.1109/ROBOT.2007.363075.
- Kolev and Todorov [2015] Svetoslav Kolev and Emanuel Todorov. Physically consistent state estimation and system identification for contacts. 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pages 1036–1043, 2015.
- Li and Todorov [2004] Weiwei Li and Emanuel Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In International Conference on Informatics in Control, Automation and Robotics, 2004.
- Li et al. [2019] Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B. Tenenbaum, and Antonio Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJgbSn09Ym.
- Lillicrap et al. [2015] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
- Liu et al. [2019] Zhijian Liu, Jiajun Wu, Zhenjia Xu, Chen Sun, Kevin Murphy, William T. Freeman, and Joshua B. Tenenbaum. Modeling parts, structure, and system dynamics via predictive learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJe10iC5K7.
- Lutter et al. [2019] Michael Lutter, Christian Ritter, and Jan Peters. Deep lagrangian networks: Using physics as model prior for deep learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BklHpjCqKm.
- Moore [1992] Andrew W. Moore. Fast, robust adaptive control by learning only forward models. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 571–578. Morgan-Kaufmann, 1992. URL http://papers.nips.cc/paper/585-fast-robust-adaptive-control-by-learning-only-forward-models.pdf.
- Mrowca et al. [2018] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B Tenenbaum, and Daniel LK Yamins. Flexible neural representation for physics prediction. In Advances in Neural Information Processing Systems, 2018.
- Polydoros and Nalpantidis [2017] Athanasios S. Polydoros and Lazaros Nalpantidis. Survey of model-based reinforcement learning: Applications on robotics. Journal of Intelligent & Robotic Systems, 86(2):153–173, May 2017. ISSN 1573-0409. doi: 10.1007/s10846-017-0468-y. URL https://doi.org/10.1007/s10846-017-0468-y.
- Reichenbach [2009] Tomislav Reichenbach. A dynamic simulator for humanoid robots. Artificial Life and Robotics, 13(2):561–565, Mar 2009. ISSN 1614-7456. doi: 10.1007/s10015-008-0508-6. URL https://doi.org/10.1007/s10015-008-0508-6.
- Sanchez-Gonzalez et al. [2018] Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4470–4479, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/sanchez-gonzalez18a.html.
- Schenck and Fox [2018] Connor Schenck and Dieter Fox. Spnets: Differentiable fluid dynamics for deep neural networks. CoRR, abs/1806.06094, 2018. URL http://arxiv.org/abs/1806.06094.
- Tassa et al. [2018] Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. Deepmind control suite. CoRR, abs/1801.00690, 2018. URL http://arxiv.org/abs/1801.00690.
- Todorov et al. [2012] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033, Oct 2012. doi: 10.1109/IROS.2012.6386109.
- Williams et al. [2016] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving with model predictive path integral control. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1433–1440, May 2016. doi: 10.1109/ICRA.2016.7487277.
- Wu et al. [2015] Jiajun Wu, Ilker Yildirim, Joseph J. Lim, William T. Freeman, and Joshua B. Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 127–135, Cambridge, MA, USA, 2015. MIT Press. URL http://dl.acm.org/citation.cfm?id=2969239.2969254.
- Wu et al. [2017] Jiajun Wu, Erika Lu, Pushmeet Kohli, Bill Freeman, and Josh Tenenbaum. Learning to see physics via visual de-animation. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 153–164. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/6620-learning-to-see-physics-via-visual-de-animation.pdf.
- Yamaguchi and Atkeson [2016] Akihiko Yamaguchi and Christopher G Atkeson. Neural networks and differential dynamic programming for reinforcement learning problems. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 5434–5441. IEEE, 2016.
- Zhu et al. [2018] Shaojun Zhu, Andrew Kimmel, Kostas E. Bekris, and Abdeslam Boularias. Fast model identification via physics engines for data-efficient policy search. In IJCAI, 2018.