DiffTune: Auto-Tuning through Auto-Differentiation

by Sheng Cheng, et al.

The performance of a robot controller depends on the choice of its parameters, which require careful tuning. In this paper, we present DiffTune, a novel, gradient-based automatic tuning framework. Our method unrolls the dynamical system and controller as a computational graph and updates the controller parameters through gradient-based optimization. Unlike the commonly used back-propagation scheme, the gradient in DiffTune is obtained through sensitivity propagation, a forward-mode auto-differentiation technique that runs in parallel with the system's evolution. We validate the proposed auto-tuning approach on a Dubins car and a quadrotor in challenging simulation environments. Simulation experiments show that the approach is robust to uncertainties in the system dynamics and environment and generalizes well to trajectories unseen during tuning.
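To illustrate the sensitivity-propagation idea described above, here is a minimal sketch (not the authors' implementation) that tunes a proportional gain theta for the toy scalar system x_{k+1} = x_k + dt * u_k with u_k = -theta * (x_k - x_ref). The sensitivity S_k = dx_k/dtheta is propagated forward alongside the state, so the loss gradient is available as soon as the rollout finishes; the dynamics, loss, and learning rate are all illustrative assumptions.

```python
def rollout_with_sensitivity(theta, x0=0.0, x_ref=1.0, dt=0.1, steps=20):
    """Roll out the closed-loop system while propagating the
    sensitivity S_k = dx_k/dtheta forward in time (forward-mode AD).

    Returns the tracking loss sum_k (x_k - x_ref)^2 and its gradient
    with respect to the controller gain theta.
    """
    x, S = x0, 0.0          # state and its sensitivity w.r.t. theta
    loss, grad = 0.0, 0.0
    for _ in range(steps):
        e = x - x_ref
        loss += e ** 2          # accumulate tracking loss
        grad += 2.0 * e * S     # chain rule: dL/dtheta += (dL/dx_k) * S_k
        # Sensitivity update: S_{k+1} = (df/dx) * S_k + df/dtheta,
        # where f(x, theta) = x - dt * theta * (x - x_ref).
        S = (1.0 - dt * theta) * S - dt * e
        x = x - dt * theta * e  # closed-loop dynamics step
    return loss, grad

# Gradient-descent tuning of the gain, starting from a deliberately low value.
theta = 0.5
for _ in range(100):
    _, grad = rollout_with_sensitivity(theta)
    theta -= 0.05 * grad

tuned_loss, _ = rollout_with_sensitivity(theta)
```

Because the sensitivity is carried forward step by step, no rollout trajectory needs to be stored for a backward pass, which is the practical advantage of forward-mode differentiation in this setting.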


DiffTune^+: Hyperparameter-Free Auto-Tuning using Auto-Differentiation

Controller tuning is a vital step to ensure the controller delivers its ...

Structured Differential Learning for Automatic Threshold Setting

We introduce a technique that can automatically tune the parameters of a...

Correcting auto-differentiation in neural-ODE training

Does the use of auto-differentiation yield reasonable updates to deep ne...

Deluca – A Differentiable Control Library: Environments, Methods, and Benchmarking

We present an open-source library of natively differentiable physics and...

Inverse design of photonic crystals through automatic differentiation

Gradient-based inverse design in photonics has already achieved remarkab...

Performance-Driven Controller Tuning via Derivative-Free Reinforcement Learning

Choosing an appropriate parameter set for the designed controller is cri...
