Dynamic Regret Analysis for Online Meta-Learning

09/29/2021
by   Parvin Nazari, et al.

The online meta-learning framework has emerged as a powerful tool for the continual lifelong learning setting. The goal is for an agent to quickly learn new tasks by drawing on prior experience while it faces tasks one after another. This formulation involves two levels: an outer level, which learns meta-learners, and an inner level, which learns task-specific models using only a small amount of data from the current task. While existing methods provide static regret analysis for the online meta-learning framework, we establish performance guarantees in terms of dynamic regret, which handles changing environments from a global perspective. We also build on a generalized version of adaptive gradient methods that covers both ADAM and ADAGRAD to learn meta-learners at the outer level. We carry out our analysis in a stochastic setting and, in expectation, prove a logarithmic local dynamic regret bound that depends explicitly on the total number of iterations T and the parameters of the learner. In addition, we establish high-probability bounds on the convergence rates of the proposed algorithm under an appropriate selection of parameters, which have not been studied before.
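To make the outer-level update concrete: the abstract refers to a generalized adaptive gradient method that covers both ADAM and ADAGRAD as special cases. The sketch below is an illustrative, commonly used unification of the two (momentum parameter beta1 and a second-moment parameter beta2, where beta2 < 1 gives an Adam-style exponential moving average and beta2 = 1 gives AdaGrad-style accumulation); it is an assumption for illustration, not the authors' exact algorithm, and the function name and parameters are hypothetical.

```python
import numpy as np

def adaptive_gradient_step(x, grad, state, lr=0.01,
                           beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of a generic adaptive gradient update (illustrative sketch).

    beta1=0.9, beta2=0.999  -> Adam-style update (moving averages)
    beta1=0.0, beta2=1.0    -> AdaGrad-style update (gradient accumulation)
    """
    m, v, t = state
    t += 1
    # First-moment estimate (momentum); beta1 = 0 disables it.
    m = beta1 * m + (1.0 - beta1) * grad
    # Second-moment estimate: moving average (Adam) or sum (AdaGrad).
    if beta2 < 1.0:
        v = beta2 * v + (1.0 - beta2) * grad ** 2
    else:
        v = v + grad ** 2
    # Per-coordinate adaptive step.
    x_new = x - lr * m / (np.sqrt(v) + eps)
    return x_new, (m, v, t)

# Example: minimize f(x) = x^2 (gradient 2x) with the Adam-style setting.
x = np.array([1.0])
state = (np.zeros(1), np.zeros(1), 0)
for _ in range(200):
    x, state = adaptive_gradient_step(x, 2.0 * x, state, lr=0.1)
```

In the online meta-learning setting described above, `x` would play the role of the meta-learner's parameters and `grad` a stochastic gradient of the current task's loss at the adapted inner-level model.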


