MAML is a Noisy Contrastive Learner

06/29/2021
by Chia-Hsiang Kao, et al.

Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success across a variety of learning problems. Yet, because of its nested design, in which inner-loop updates govern task-specific learning and outer-loop updates govern the meta-model, the underlying learning objective of MAML remains implicit, impeding a more direct understanding of the algorithm. In this paper, we provide a new perspective on the working mechanism of MAML: MAML is analogous to a meta-learner using a supervised contrastive objective, in which query features are pulled towards the support features of the same class and pushed away from those of different classes. We verify this contrastive behaviour experimentally through an analysis based on cosine similarity. Moreover, our analysis reveals that the vanilla MAML algorithm carries an undesirable interference term originating from random initialization and cross-task interaction. We therefore propose a simple but effective technique, the zeroing trick, to alleviate this interference. Extensive experiments on both miniImagenet and Omniglot demonstrate the consistent improvement brought by the zeroing trick, validating its effectiveness.
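To make the mechanism concrete, below is a minimal PyTorch sketch, not the authors' released code, of one MAML outer-loop step with the zeroing trick, together with a cosine-similarity probe of the contrastive behaviour. All names (encoder, head, inner_lr, meta_step, contrastiveness) are illustrative assumptions, as is the simplification of adapting only the final linear layer in the inner loop and meta-updating only the encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # shared feature extractor
head = nn.Linear(64, 5, bias=False)                    # final linear classifier
meta_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
inner_lr = 0.4

def meta_step(tasks, zeroing_trick=True):
    """One MAML outer-loop step; each task is (x_support, y_support, x_query, y_query)."""
    if zeroing_trick:
        with torch.no_grad():
            head.weight.zero_()        # zero the final layer before the inner loops
    meta_opt.zero_grad()
    meta_loss = 0.0
    for x_s, y_s, x_q, y_q in tasks:
        # Inner loop: one gradient step on the support set, adapting only the head.
        w = head.weight.detach().requires_grad_(True)
        inner_loss = F.cross_entropy(encoder(x_s) @ w.t(), y_s)
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w_adapted = w - inner_lr * g   # adapted weights, differentiable w.r.t. the encoder
        # Outer loop: evaluate the adapted head on this task's query set.
        meta_loss = meta_loss + F.cross_entropy(encoder(x_q) @ w_adapted.t(), y_q)
    meta_loss.backward()               # meta-gradients flow into the encoder
    meta_opt.step()
    return float(meta_loss)

def contrastiveness(x_s, y_s, x_q, y_q):
    # Mean cosine similarity between query and support features, split into
    # same-class vs. different-class pairs: the signal used to verify the
    # contrastive reading of MAML.
    f_s = F.normalize(encoder(x_s), dim=1)
    f_q = F.normalize(encoder(x_q), dim=1)
    sim = f_q @ f_s.t()                # pairwise cosine similarities
    same = y_q.unsqueeze(1) == y_s.unsqueeze(0)
    return sim[same].mean().item(), sim[~same].mean().item()

# Toy usage with random 5-way tasks of 25 support / 25 query examples.
tasks = [(torch.randn(25, 32), torch.randint(0, 5, (25,)),
          torch.randn(25, 32), torch.randint(0, 5, (25,)))
         for _ in range(4)]
print(meta_step(tasks))
print(contrastiveness(*tasks[0]))
```

Under this reading, zeroing the head removes the contribution of its random initialization to the inner-loop gradient, so the query-support similarities measured above reflect only the intended pull towards same-class and push away from different-class support features; this is a sketch under the stated assumptions, not the paper's exact experimental setup.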
