RotoGBML: Towards Out-of-Distribution Generalization for Gradient-Based Meta-Learning

03/12/2023
by Min Zhang, et al.

Gradient-based meta-learning (GBML) algorithms can adapt quickly to new tasks by transferring learned meta-knowledge, but they assume that all tasks come from the same distribution (in-distribution, ID). In the real world, however, they often face an out-of-distribution (OOD) generalization problem, where tasks come from different distributions. OOD settings exacerbate inconsistencies in the magnitudes and directions of task gradients, which makes it hard for GBML to optimize the meta-knowledge by minimizing the sum of task gradients in each minibatch. To address this problem, we propose RotoGBML, a novel approach that homogenizes OOD task gradients. RotoGBML uses reweighted vectors to dynamically balance diverse gradient magnitudes to a common scale and uses rotation matrices to rotate conflicting gradient directions toward one another. To reduce overhead, gradients are homogenized at the feature level rather than over the network parameters. On this basis, to avoid interference from non-causal features (e.g., backgrounds), we also propose an invariant self-information (ISI) module that extracts invariant causal features (e.g., object outlines). Task gradients are then homogenized based on these invariant causal features. Experiments show that RotoGBML outperforms state-of-the-art methods on various few-shot image classification benchmarks.
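The core idea of the abstract, rescaling task-gradient magnitudes to a common scale and rotating conflicting directions toward one another, can be illustrated with a toy sketch. This is not the paper's actual method: RotoGBML learns reweighting vectors and rotation matrices over invariant causal features, whereas the closed-form helpers below (`rotate_toward`, `homogenize_task_gradients`, and the `align` parameter) are hypothetical simplifications for intuition only.

```python
import numpy as np

def rotate_toward(g, direction, align):
    """Rotate g part of the way toward a unit `direction` while keeping
    its norm -- a cheap stand-in for applying a learned rotation matrix."""
    norm = np.linalg.norm(g)
    blended = (1 - align) * (g / (norm + 1e-12)) + align * direction
    blended /= np.linalg.norm(blended) + 1e-12
    return blended * norm

def homogenize_task_gradients(grads, align=0.5):
    """Toy gradient homogenization: rescale per-task gradients to a shared
    magnitude, then rotate their directions toward the mean direction."""
    grads = [np.asarray(g, dtype=float) for g in grads]
    norms = [np.linalg.norm(g) for g in grads]
    common_scale = float(np.mean(norms))

    # Balance diverse magnitudes to a common scale.
    rescaled = [g * (common_scale / (n + 1e-12)) for g, n in zip(grads, norms)]

    # Rotate conflicting directions closer to each other (here: toward the mean).
    mean_dir = np.sum(rescaled, axis=0)
    mean_dir = mean_dir / (np.linalg.norm(mean_dir) + 1e-12)
    return [rotate_toward(g, mean_dir, align) for g in rescaled]

# The meta-update then minimizes the sum of the homogenized task gradients.
task_grads = [np.array([1.0, 0.2]), np.array([-0.5, 3.0]), np.array([0.1, 0.9])]
meta_grad = np.sum(homogenize_task_gradients(task_grads), axis=0)
print(meta_grad)
```

In this simplified view, magnitude balancing stops any single task from dominating the minibatch sum, and the partial rotation reduces destructive cancellation between conflicting task gradients before they are summed.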


Related research

06/04/2021 · Meta-Learning with Fewer Tasks through Task Interpolation
Meta-learning enables algorithms to quickly learn a newly encountered ta...

08/22/2022 · Meta-Causal Feature Learning for Out-of-Distribution Generalization
Causal inference has become a powerful tool to handle the out-of-distrib...

07/16/2019 · Towards Understanding Generalization in Gradient-Based Meta-Learning
In this work we study generalization of neural networks in gradient-base...

12/18/2018 · Toward Multimodal Model-Agnostic Meta-Learning
Gradient-based meta-learners such as MAML are able to learn a meta-prior...

02/20/2023 · CMVAE: Causal Meta VAE for Unsupervised Meta-Learning
Unsupervised meta-learning aims to learn the meta knowledge from unlabel...

10/30/2019 · Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation
Model-agnostic meta-learners aim to acquire meta-learned parameters from...

06/08/2019 · Using learned optimizers to make models robust to input noise
State-of-the-art vision models can achieve superhuman performance on ima...
