Learning State-Dependent Losses for Inverse Dynamics Learning
Being able to quickly adapt to changes in dynamics is paramount in model-based control for object manipulation tasks. To enable fast adaptation of the inverse dynamics model's parameters, data efficiency is crucial. Given observed data, a key determinant of how an optimizer updates model parameters is the loss function. In this work, we propose to apply meta-learning to learn structured, state-dependent loss functions during a meta-training phase. We then replace standard losses with our learned losses during online adaptation tasks. We evaluate the proposed approach on inverse dynamics learning tasks, both in simulation and on real hardware data. In both settings, the structured learned losses improve online adaptation speed compared to standard, state-independent loss functions.
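The abstract does not fix an architecture, but the core idea can be sketched as follows: a small network maps the state to positive per-dimension weights of a quadratic error, and online adaptation updates the inverse dynamics model by gradient descent on that learned loss instead of a plain MSE. This is a minimal sketch assuming PyTorch; `StateDependentLoss`, `weight_net`, `adapt`, and all dimensions are illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn

class StateDependentLoss(nn.Module):
    """Learned loss: maps the state to positive per-dimension weights
    for a weighted quadratic error on predicted torques (illustrative)."""
    def __init__(self, state_dim, torque_dim, hidden=64):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, torque_dim), nn.Softplus(),  # weights > 0
        )

    def forward(self, state, tau_pred, tau_true):
        w = self.weight_net(state)                       # (B, torque_dim)
        return (w * (tau_pred - tau_true) ** 2).mean()

def adapt(model, learned_loss, states, tau_true, steps=5, lr=1e-2):
    """Online adaptation: update the inverse dynamics model with the
    learned, state-dependent loss in place of a standard MSE."""
    # The learned loss is frozen here; its parameters would only be
    # updated during the meta-training phase.
    learned_loss.requires_grad_(False)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = learned_loss(states, model(states), tau_true)
        loss.backward()
        opt.step()
    return model

# Hypothetical usage with random data: a 2-joint arm, state = (q, qdot).
state_dim, torque_dim = 4, 2
model = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, torque_dim))  # inverse dynamics net
learned_loss = StateDependentLoss(state_dim, torque_dim)
states, tau = torch.randn(32, state_dim), torch.randn(32, torque_dim)
adapt(model, learned_loss, states, tau)
```

During meta-training, one would instead backpropagate a standard objective on held-out data through these adaptation steps to update `weight_net`, e.g., with higher-order gradients in the style of MAML; the sketch above covers only the online adaptation side described in the abstract.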