Learning Observation Models with Incremental Non-Differentiable Graph Optimizers in the Loop for Robotics State Estimation

09/05/2023
by Mohamad Qadri, et al.

We consider the problem of learning observation models for robot state estimation with incremental, non-differentiable optimizers in the loop. Convergence to the correct belief over the robot state depends heavily on properly tuned observation models, which serve as input to the optimizer. We propose a gradient-based learning method that converges considerably faster than an existing state-of-the-art method to model estimates yielding higher-quality solutions, as measured by tracking accuracy on unseen robot test trajectories.
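To make the "optimizer in the loop" setup concrete, the sketch below treats a black-box state estimator as non-differentiable and tunes a single observation-noise parameter by descending the tracking error measured against ground-truth trajectories. The 1-D random-walk model, the finite-difference gradient, and all names here are assumptions made for illustration only; this is not the algorithm proposed in the paper, which works with an incremental factor-graph optimizer and its own learning rule.

```python
# Illustrative sketch (not the paper's method): learn an observation-noise
# parameter with a black-box, non-differentiable state estimator in the loop.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=100, q=0.05, r=0.5):
    """Ground-truth 1-D random walk and noisy direct observations of it."""
    x = np.cumsum(rng.normal(0.0, q, size=T))
    z = x + rng.normal(0.0, r, size=T)
    return x, z

def estimate(z, log_r, q=0.05):
    """Black-box MAP smoother for the random walk.

    We only query its output; we never differentiate through it.
    """
    T = len(z)
    r2, q2 = np.exp(2.0 * log_r), q ** 2
    A = np.zeros((T, T))          # tridiagonal normal equations of the
    b = z / r2                    # batch least-squares smoothing problem
    for t in range(T):
        A[t, t] += 1.0 / r2
        if t + 1 < T:
            A[t, t] += 1.0 / q2
            A[t + 1, t + 1] += 1.0 / q2
            A[t, t + 1] -= 1.0 / q2
            A[t + 1, t] -= 1.0 / q2
    return np.linalg.solve(A, b)

def tracking_loss(log_r, trajectories):
    """Mean squared tracking error of the estimator over training runs."""
    return np.mean([np.mean((estimate(z, log_r) - x) ** 2)
                    for x, z in trajectories])

# Learn log_r by gradient descent; the gradient is obtained by central
# finite differences because the estimator is treated as non-differentiable.
train = [simulate() for _ in range(5)]
log_r, lr, eps = np.log(2.0), 0.5, 1e-3
for step in range(50):
    g = (tracking_loss(log_r + eps, train)
         - tracking_loss(log_r - eps, train)) / (2.0 * eps)
    log_r -= lr * g
print("learned observation noise std:", np.exp(log_r))
```

The structural point the sketch is meant to convey is that the learning signal is computed only from the estimator's output on ground-truth trajectories, with the optimizer queried as a black box; how the paper actually obtains and applies gradients is not implied by this toy example.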
