Learning from demonstration using products of experts: applications to manipulation and task prioritization

10/07/2020
by Emmanuel Pignat, et al.

Probability distributions are key components of many learning from demonstration (LfD) approaches. While the configuration of a manipulator is defined by its joint angles, poses are often best explained within several task spaces. In many approaches, distributions within the relevant task spaces are learned independently and only combined at the control level. This simplification introduces several problems, which we address in this work. We show that the fusion of models in different task spaces can be expressed as a product of experts (PoE), where the probabilities of the models are multiplied and renormalized so that the result is a proper distribution over joint angles. Multiple experiments are presented to show that learning the different models jointly in the PoE framework significantly improves the quality of the model. The proposed approach particularly stands out when the robot has to learn competing or hierarchical objectives. Training the model jointly usually relies on contrastive divergence, which requires costly approximations that can affect performance. We propose an alternative strategy using variational inference and mixture model approximations. In particular, we show that the proposed approach can be extended to a PoE with nullspace structure (PoENS), where the model is able to recover tasks that are masked by the resolution of higher-level objectives.
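The fusion described in the abstract can be summarized compactly: with joint angles q and task-space maps f_k (e.g., forward kinematics), the experts p_k are fused as p(q) ∝ ∏_k p_k(f_k(q)), so the log-densities of the experts are summed and the normalizing constant over q is generally intractable. Below is a minimal sketch of this fusion for a hypothetical 2-link planar arm with Gaussian experts; the kinematics, expert parameters, and helper names are illustrative assumptions, not taken from the paper.

import numpy as np

# Hypothetical 2-link planar arm: forward kinematics maps joint angles q
# to an end-effector position, one possible task space.
def fk(q, lengths=(1.0, 1.0)):
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def gaussian_logpdf(x, mean, cov):
    d = x - mean
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (logdet + d @ np.linalg.solve(cov, d))

# Product of experts: expert densities are multiplied, i.e. their
# log-densities are summed; the result is unnormalized because the
# normalizer over q is intractable in general.
def poe_log_density(q, experts):
    return sum(logp(q) for logp in experts)

experts = [
    # joint-space expert: preferred posture (illustrative parameters)
    lambda q: gaussian_logpdf(q, np.array([0.3, 0.8]), 0.5 * np.eye(2)),
    # task-space expert: end-effector target (illustrative parameters)
    lambda q: gaussian_logpdf(fk(q), np.array([1.2, 0.9]), 0.05 * np.eye(2)),
]

print(poe_log_density(np.array([0.4, 0.7]), experts))  # unnormalized log p(q)

In the paper's setting, the expert parameters are learned jointly from demonstrations rather than fixed, and the intractable normalizer is the reason the authors turn to variational inference with mixture model approximations rather than contrastive divergence.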
