Meta-Learning via Learned Loss

06/12/2019
by Yevgen Chebotar, et al.

We present a meta-learning approach based on learning an adaptive, high-dimensional loss function that can generalize across multiple tasks and different model architectures. We develop a fully differentiable pipeline for learning a loss function, targeted at maximizing the performance of an optimizee trained with that loss. We observe that the loss landscape produced by our learned loss significantly improves upon the original task-specific loss. We evaluate our method on supervised and reinforcement learning tasks. Furthermore, we show that our pipeline can operate in sparse-reward and self-supervised reinforcement learning scenarios.
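
To make the idea concrete, here is a minimal sketch of the core mechanism: an inner update of the optimizee driven by a learned loss network, followed by an outer (meta) update of the loss network driven by the true task loss. The setup is illustrative, not the authors' code; LossNet, inner_lr, meta_lr, and the toy regression task are all assumptions made for this sketch.

```python
# Minimal sketch of learned-loss meta-learning (hypothetical PyTorch setup).
import torch
import torch.nn as nn

class LossNet(nn.Module):
    """Learned loss: maps (prediction, target) pairs to a scalar loss."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the loss non-negative
        )

    def forward(self, pred, target):
        return self.net(torch.cat([pred, target], dim=-1)).mean()

def inner_step(w, b, x, y, loss_net, inner_lr=0.1):
    """One differentiable optimizee update using the learned loss."""
    pred = x @ w + b
    learned_loss = loss_net(pred, y)
    # create_graph=True keeps the update differentiable w.r.t. loss_net,
    # which is what makes the whole pipeline end-to-end trainable.
    gw, gb = torch.autograd.grad(learned_loss, (w, b), create_graph=True)
    return w - inner_lr * gw, b - inner_lr * gb

# Toy 1-D regression task; the task loss (MSE) supervises the meta-update.
torch.manual_seed(0)
x = torch.randn(64, 1)
y = 3.0 * x + 0.5

loss_net = LossNet()
meta_opt = torch.optim.Adam(loss_net.parameters(), lr=1e-3)

for step in range(1000):
    # Fresh optimizee parameters for each meta-iteration.
    w = torch.zeros(1, 1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    w, b = inner_step(w, b, x, y, loss_net)
    # Meta-objective: task performance of the optimizee after the update.
    task_loss = ((x @ w + b - y) ** 2).mean()
    meta_opt.zero_grad()
    task_loss.backward()  # gradients flow through the inner step into LossNet
    meta_opt.step()
```

The design hinge in this sketch is create_graph=True: it retains the computation graph of the inner update, so the gradient of the task loss can propagate back through the optimizee's update into the loss network's parameters.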


Related research

05/15/2019 · Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment
In most machine learning training paradigms a fixed, often handcrafted, ...

10/08/2021 · Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning
In few-shot learning scenarios, the challenge is to generalize and perfo...

11/29/2019 · VIABLE: Fast Adaptation via Backpropagating Learned Loss
In few-shot learning, typically, the loss function which is applied at t...

03/15/2021 · Evolving parametrized Loss for Image Classification Learning on Small Datasets
This paper proposes a meta-learning approach to evolving a parametrized ...

08/08/2023 · Scope Loss for Imbalanced Classification and RL Exploration
We demonstrate equivalence between the reinforcement learning problem an...

11/17/2022 · MelHuBERT: A simplified HuBERT on Mel spectrogram
Self-supervised models have had great success in learning speech represe...

10/29/2020 · Learning to Actively Learn: A Robust Approach
This work proposes a procedure for designing algorithms for specific ada...
