Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation

10/24/2022
by   Linfeng Zhao, et al.

Differentiable planning promises end-to-end differentiability and adaptivity. However, one issue prevents these methods from scaling to larger problems: they must differentiate through forward iteration layers to compute gradients, which couples the forward computation with backpropagation and forces a trade-off between forward planner performance and the computational cost of the backward pass. To alleviate this issue, we propose to differentiate through the Bellman fixed-point equation to decouple the forward and backward passes for Value Iteration Networks (VINs) and their variants, which enables constant backward cost (in planning horizon) and a flexible forward budget, and helps scale up to large tasks. We study the convergence stability, scalability, and efficiency of the proposed implicit version of VIN and its variants, and demonstrate their advantages on a range of planning tasks: 2D navigation, visual navigation, and 2-DOF manipulation in configuration space and workspace.
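The core idea, solving the Bellman equation forward and then differentiating through its fixed point via the implicit function theorem rather than unrolling the iterations, can be sketched on a toy MDP. This is a minimal illustrative sketch, not the paper's implementation; all numbers (states, transitions, rewards) are made up for the example:

```python
import numpy as np

# Toy 3-state, 2-action MDP (illustrative numbers, not from the paper).
gamma = 0.9
P = np.zeros((2, 3, 3))               # P[a, s, s']: transition probabilities
P[0] = np.eye(3)                      # action 0: stay in place
P[1] = np.roll(np.eye(3), 1, axis=1)  # action 1: move to the next state
R = np.array([[0.0, 0.0, 1.0],        # R[a, s]: reward for action a in state s
              [0.0, 1.0, 0.0]])

def bellman(V):
    """Bellman optimality operator T(V)[s] = max_a R[a, s] + gamma * P[a, s, :] @ V."""
    return (R + gamma * P @ V).max(axis=0)

# Forward pass: iterate to the fixed point V* = T(V*). The iteration budget is
# flexible, because the backward pass below never unrolls these iterations.
V = np.zeros(3)
for _ in range(200):
    V = bellman(V)

# Backward pass via the implicit function theorem: at the fixed point,
# dV*/dtheta = (I - J)^{-1} dT/dtheta, where J = dT/dV evaluated at V*.
# Away from ties, the max is locally linear, so J = gamma * P[greedy action].
a_star = (R + gamma * P @ V).argmax(axis=0)
J = gamma * P[a_star, np.arange(3)]   # (3, 3) Jacobian of T at the fixed point
sens = np.linalg.inv(np.eye(3) - J)   # sens[s, t] = dV*[s] / dR[a_star[t], t]
```

The backward cost here is a single linear solve whose size depends only on the state space, not on how many forward iterations were run, which is the decoupling the abstract describes. (In practice one would solve the linear system rather than form the inverse, and the max operator's Jacobian is only defined away from action ties.)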


Related research

- Backward-Forward Search for Manipulation Planning (04/12/2016): In this paper we address planning problems in high-dimensional hybrid co...
- Efficient Training of Deep Equilibrium Models (04/23/2023): Deep equilibrium models (DEQs) have proven to be very powerful for learn...
- Integrating Symmetry into Differentiable Planning (06/08/2022): We study how group symmetry helps improve data efficiency and generaliza...
- Efficient differentiable quadratic programming layers: an ADMM approach (12/14/2021): Recent advances in neural-network architecture allow for seamless integr...
- Differentiable Particle Filtering without Modifying the Forward Pass (06/18/2021): In recent years particle filters have being used as components in system...
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation (07/02/2022): Iterative refinement – start with a random guess, then iteratively impro...
- PMaF: Deep Declarative Layers for Principal Matrix Features (06/26/2023): We explore two differentiable deep declarative layers, namely least squa...
