On Data Efficiency of Meta-learning

by Maruan Al-Shedivat et al.

Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks. Motivated by use-cases in personalized federated learning, we study an often-overlooked aspect of modern meta-learning algorithms: their data efficiency. To shed more light on which methods are more efficient, we use techniques from algorithmic stability to derive bounds on the transfer risk that have important practical implications, indicating how much supervision is needed and how it must be allocated for each method to attain the desired level of generalization. Further, we introduce a simple new framework for evaluating meta-learning methods under a limit on the available supervision, conduct an empirical study of MAML, Reptile, and Protonets, and demonstrate the differences in the behavior of these methods on few-shot and federated learning benchmarks. Finally, we propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited-supervision regime.
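The learning-to-learn setup the abstract refers to can be illustrated with a minimal, self-contained sketch of first-order MAML on toy one-parameter regression tasks. The task family, learning rates, and function names below are illustrative assumptions for exposition, not the paper's actual experimental setup:

```python
import random

def grad(w, a, xs):
    """Gradient of mean squared error for the linear model y_hat = w * x
    on a task whose true relation is y = a * x."""
    return sum(2 * (w - a) * x * x for x in xs) / len(xs)

def loss(w, a, xs):
    """Mean squared error of the model w on task a over inputs xs."""
    return sum(((w - a) * x) ** 2 for x in xs) / len(xs)

def adapt(w, a, xs, inner_lr=0.5):
    """Inner loop: one gradient step of task-specific adaptation."""
    return w - inner_lr * grad(w, a, xs)

def fomaml(tasks, steps=200, outer_lr=0.1, seed=0):
    """Meta-train an initialization w with first-order MAML:
    adapt on a support set, then update w with the gradient
    of the adapted parameters on a held-out query set."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        a = rng.choice(tasks)                        # sample a task
        support = [rng.uniform(-1, 1) for _ in range(5)]
        query = [rng.uniform(-1, 1) for _ in range(5)]
        w_task = adapt(w, a, support)                # inner adaptation
        w -= outer_lr * grad(w_task, a, query)       # outer meta-update
    return w
```

The amount of supervision per task (here, five support points) is exactly the quantity whose allocation the paper's bounds and experiments concern; the meta-learned initialization is useful precisely because a small adapted update already reduces the loss on a new task.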






