Learning-to-learn non-convex piecewise-Lipschitz functions

08/19/2021
by Maria-Florina Balcan et al.

We analyze the meta-learning of the initialization and step-size of learning algorithms for piecewise-Lipschitz functions, a non-convex setting with applications to both machine learning and data-driven algorithm design. Starting from recent regret bounds for the exponential forecaster on losses with dispersed discontinuities, we generalize them to be initialization-dependent and then use this result to propose a practical meta-learning procedure that learns both the initialization and the step-size of the algorithm from multiple online learning tasks. Asymptotically, we guarantee that the average regret across tasks scales with a natural notion of task-similarity that measures the amount of overlap between near-optimal regions of different tasks. Finally, we instantiate the method and its guarantee in two important settings: robust meta-learning and multi-task data-driven algorithm design.
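
To make the two meta-learned quantities concrete, the sketch below implements the base learner the abstract refers to: an exponentially weighted forecaster over a discretized one-dimensional domain with full-information utility feedback. This is a minimal illustration, not the paper's exact algorithm: the function name run_exponential_forecaster, the prior w0 (the "initialization"), and the multiplicative step-size lam are illustrative names, and the meta-learning step that would tune w0 and lam across tasks is stood in for by a hand-picked prior concentrated on the near-optimal region.

```python
# Minimal sketch (assumed, not the paper's exact procedure) of an
# exponential forecaster over a discretized 1-D domain. The two
# quantities the paper meta-learns appear as arguments: the
# initialization w0 (a prior over the domain) and the step-size lam.
import numpy as np

def run_exponential_forecaster(utilities, w0, lam, rng=None):
    """Play T rounds of the exponentially weighted forecaster.

    utilities : (T, K) array -- utility u_t(x) of each of the K grid
                points at round t, revealed after the point is chosen
                (full-information feedback).
    w0        : (K,) array   -- initialization / prior weights.
    lam       : float        -- step-size of the multiplicative update.
    """
    rng = rng or np.random.default_rng(0)
    T, K = utilities.shape
    log_w = np.log(np.asarray(w0, dtype=float))  # log-space for stability
    total_utility = 0.0
    for t in range(T):
        # Sample x_t from p_t(x) proportional to the current weight w_t(x).
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        x = rng.choice(K, p=p)
        total_utility += utilities[t, x]
        # Multiplicative update: w_{t+1}(x) = w_t(x) * exp(lam * u_t(x)).
        log_w += lam * utilities[t]
    return total_utility

# Toy usage: piecewise-constant utilities on [0, 1]. If tasks share an
# overlapping near-optimal region, a prior concentrated there (a stand-in
# for a meta-learned w0) should earn more utility than a uniform one.
K, T = 200, 50
grid = np.linspace(0.0, 1.0, K)
utilities = np.tile((grid > 0.6).astype(float), (T, 1))  # one task
uniform_prior = np.ones(K) / K
focused_prior = np.where(grid > 0.5, 1.0, 1e-3)  # hypothetical meta-learned prior
focused_prior /= focused_prior.sum()
print(run_exponential_forecaster(utilities, uniform_prior, lam=0.5))
print(run_exponential_forecaster(utilities, focused_prior, lam=0.5))
```

The task-similarity notion in the abstract is visible in this toy setup: the more the high-utility regions of different tasks overlap, the more mass a single meta-learned w0 can place on points that are near-optimal for all of them.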

