Set-based Meta-Interpolation for Few-Task Meta-Learning

05/20/2022
by   Seanie Lee, et al.

Meta-learning approaches enable machine learning systems to adapt to new tasks from only a few examples by leveraging knowledge from related tasks. However, a large number of meta-training tasks is still required to generalize to unseen tasks during meta-testing, which is a critical bottleneck for real-world problems that come with only a few tasks, for reasons including the difficulty and cost of constructing tasks. Several task augmentation methods have recently been proposed to tackle this issue, using domain-specific knowledge to design augmentations that densify the meta-training task distribution. However, this reliance on domain-specific knowledge renders such methods inapplicable to other domains. While Manifold Mixup-based task augmentation methods are domain-agnostic, we empirically find them ineffective on non-image domains. To tackle these limitations, we propose Meta-Interpolation, a novel domain-agnostic task augmentation method that uses expressive neural set functions to densify the meta-training task distribution via bilevel optimization. We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning various domains, including image classification, molecule property prediction, text classification, and speech recognition. Experimentally, Meta-Interpolation consistently outperforms all relevant baselines. Theoretically, we prove that task interpolation with the set function regularizes the meta-learner and improves generalization.
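
The abstract does not spell out the paper's architecture, but the core idea of replacing Mixup's fixed convex combination with a learned, permutation-invariant set function can be sketched roughly as follows. The DeepSets-style layout, the weight shapes, and all names here are illustrative assumptions, not the authors' implementation; in the actual method the set-function parameters would be learned via bilevel optimization rather than sampled randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

def set_interpolate(h1, h2, W_phi, W_rho):
    """Permutation-invariant interpolation of two hidden features:
    rho(phi(h1) + phi(h2)). Sum pooling makes the output independent
    of argument order, unlike a fixed Mixup coefficient lambda.
    (Hypothetical sketch; weights stand in for learned parameters.)"""
    phi = np.tanh(np.stack([h1, h2]) @ W_phi)   # encode each set element
    pooled = phi.sum(axis=0)                    # order-invariant pooling
    return np.tanh(pooled @ W_rho)              # decode to a mixed feature

d, k = 8, 16                                    # feature / latent dims (assumed)
W_phi = rng.normal(scale=0.5, size=(d, k))
W_rho = rng.normal(scale=0.5, size=(k, d))

# Hidden features of two support examples drawn from different tasks.
h_a = rng.normal(size=d)
h_b = rng.normal(size=d)

mixed_ab = set_interpolate(h_a, h_b, W_phi, W_rho)
mixed_ba = set_interpolate(h_b, h_a, W_phi, W_rho)
assert np.allclose(mixed_ab, mixed_ba)          # invariant to element order
```

The point of the sketch is the contrast with Manifold Mixup, where the combination `lambda * h_a + (1 - lambda) * h_b` is linear and its coefficient is sampled rather than learned; a set function can express richer, nonlinear mixing while remaining well-defined on unordered pairs.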


Related research

- 06/04/2021, Meta-Learning with Fewer Tasks through Task Interpolation: "Meta-learning enables algorithms to quickly learn a newly encountered ta..."
- 06/18/2020, Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models: "Unsupervised meta-learning approaches rely on synthetic meta-tasks that ..."
- 06/13/2020, MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures: "Regularization and transfer learning are two popular techniques to enhan..."
- 09/29/2021, Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness: "Recognizing new objects by learning from a few labeled examples in an ev..."
- 11/24/2019, Invenio: Discovering Hidden Relationships Between Tasks/Domains Using Structured Meta Learning: "Exploiting known semantic relationships between fine-grained tasks is cr..."
- 03/28/2020, Efficient Domain Generalization via Common-Specific Low-Rank Decomposition: "Domain generalization refers to the task of training a model which gener..."
- 08/05/2021, Out-of-domain Generalization from a Single Source: A Uncertainty Quantification Approach: "We study a worst-case scenario in generalization: Out-of-domain generali..."
