
RotoGBML: Towards Out-of-Distribution Generalization for Gradient-Based Meta-Learning

by   Min Zhang, et al.
Nanjing University

Gradient-based meta-learning (GBML) algorithms adapt quickly to new tasks by transferring learned meta-knowledge, but they assume that all tasks come from the same distribution (in-distribution, ID). In the real world, they often face an out-of-distribution (OOD) generalization problem, where tasks come from different distributions. OOD exacerbates inconsistencies in the magnitudes and directions of task gradients, which makes it difficult for GBML to optimize the meta-knowledge by minimizing the sum of task gradients in each minibatch. To address this problem, we propose RotoGBML, a novel approach that homogenizes OOD task gradients. RotoGBML uses reweighting vectors to dynamically balance diverse gradient magnitudes to a common scale, and uses rotation matrices to rotate conflicting gradient directions toward each other. To reduce overhead, we homogenize gradients at the feature level rather than over the network parameters. On this basis, to avoid interference from non-causal features (e.g., backgrounds), we also propose an invariant self-information (ISI) module that extracts invariant causal features (e.g., the outlines of objects); task gradients are then homogenized based on these invariant causal features. Experiments show that RotoGBML outperforms other state-of-the-art methods on various few-shot image classification benchmarks.
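The two homogenization steps described above can be sketched in a simplified form: rescale each task gradient to a common norm, then rotate it partway toward the minibatch's mean gradient direction via a rotation in the plane spanned by the two vectors. This is only an illustrative sketch under assumed simplifications (a fixed rotation fraction `align_frac`, the mean norm as the common scale, and the mean direction as the rotation target); the paper's reweighting vectors and rotation matrices are learned, not fixed as here.

```python
import numpy as np

def homogenize_gradients(task_grads, align_frac=0.5):
    """Illustrative sketch: balance task-gradient magnitudes to a common
    scale and rotate conflicting directions toward the mean direction."""
    grads = [np.asarray(g, dtype=float) for g in task_grads]
    norms = np.array([np.linalg.norm(g) for g in grads])
    target_norm = norms.mean()              # common scale for all tasks
    mean_dir = sum(grads)
    mean_dir = mean_dir / np.linalg.norm(mean_dir)

    out = []
    for g, n in zip(grads, norms):
        u1 = g / n                          # unit task-gradient direction
        # component of the mean direction orthogonal to u1
        u2 = mean_dir - (mean_dir @ u1) * u1
        u2_norm = np.linalg.norm(u2)
        if u2_norm < 1e-12:                 # already aligned with the mean
            out.append(target_norm * u1)
            continue
        u2 = u2 / u2_norm
        # angle between the task gradient and the mean direction
        theta = np.arccos(np.clip(mean_dir @ u1, -1.0, 1.0))
        a = align_frac * theta              # rotate only part of the way
        # rotation acts in span{u1, u2}; g lies entirely in that plane
        rotated = np.cos(a) * u1 + np.sin(a) * u2
        out.append(target_norm * rotated)
    return out
```

After this transform every task gradient has the same norm and a strictly smaller angle to the shared direction, so the minibatch sum is no longer dominated by a few large or conflicting tasks.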



