Meta-learning the Learning Trends Shared Across Tasks

10/19/2020
by Jathushan Rajasegaran, et al.

Meta-learning refers to 'learning to learn': acquiring knowledge that generalizes to new tasks. Among meta-learning methods, gradient-based algorithms are a sub-class that excels at quick adaptation to new tasks with limited data, demonstrating an ability to acquire transferable knowledge, a capability central to human learning. However, existing meta-learning approaches depend only on current-task information during adaptation and do not exploit meta-knowledge about how similar tasks were adapted before. To address this gap, we propose a 'Path-aware' model-agnostic meta-learning approach. Specifically, our approach not only learns a good initialization for adaptation, but also learns an optimal way to adapt these parameters to a set of task-specific parameters, with learnable update directions, learning rates, and, most importantly, the way updates evolve over different time-steps. Compared to existing meta-learning methods, our approach offers: (a) the ability to learn gradient preconditioning at different time-steps of the inner loop, thereby modeling the dynamic learning behavior shared across tasks, and (b) the capability to aggregate the learning context through direct gradient-skip connections from old time-steps, thus avoiding overfitting and improving generalization. In essence, our approach not only learns a transferable initialization, but also models the optimal update directions, learning rates, and task-specific learning trends. In terms of learning trends, our approach determines how update directions evolve as task-specific learning progresses and how the previous update history informs the current update. Our approach is simple to implement and demonstrates faster convergence. We report significant performance improvements on a number of few-shot learning (FSL) datasets.
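To make the inner-loop mechanics concrete, below is a minimal PyTorch sketch of a path-aware adaptation loop, not the authors' implementation. It assumes a flat parameter vector and a diagonal preconditioner for simplicity; the names `path_aware_inner_loop`, `alpha` (per-step learning rates), `precond` (per-step diagonal preconditioners), and `skip_w` (gradient-skip weights from old time-steps) are all illustrative assumptions.

```python
# Hypothetical sketch of a "path-aware" inner loop: each time-step t has
# its own meta-learned learning rate and diagonal gradient preconditioner,
# and gradients from earlier steps are re-injected via skip connections.
import torch


def path_aware_inner_loop(theta, loss_fn, data, alpha, precond, skip_w):
    """Adapt `theta` for len(alpha) steps with per-step learned learning
    rates, diagonal preconditioners, and gradient-skip connections."""
    phi = theta.clone()
    grad_history = []  # gradients from old time-steps, for skip connections
    for t in range(len(alpha)):
        loss = loss_fn(phi, data)
        # create_graph=True keeps the graph so an outer loop could
        # back-propagate through the whole adaptation path.
        g = torch.autograd.grad(loss, phi, create_graph=True)[0]
        grad_history.append(g)
        # Precondition the current gradient, then aggregate learning
        # context from earlier steps via direct gradient-skip connections.
        update = precond[t] * g
        for s in range(t):
            update = update + skip_w[t, s] * grad_history[s]
        phi = phi - alpha[t] * update
    return phi


# Toy usage: adapt a 3-d parameter vector on a quadratic loss.
num_steps, dim = 4, 3
theta = torch.zeros(dim, requires_grad=True)                     # learned init
alpha = torch.full((num_steps,), 0.1, requires_grad=True)        # per-step LRs
precond = torch.ones(num_steps, dim, requires_grad=True)         # preconditioners
skip_w = torch.zeros(num_steps, num_steps, requires_grad=True)   # skip weights

target = torch.tensor([1.0, -2.0, 0.5])
phi = path_aware_inner_loop(
    theta, lambda p, d: ((p - d) ** 2).sum(), target, alpha, precond, skip_w
)
```

In a full meta-training setup, `theta`, `alpha`, `precond`, and `skip_w` would be meta-parameters updated in the outer loop by evaluating the adapted `phi` on held-out query data, in the usual MAML fashion; the sketch above only covers the task-specific adaptation path.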


