A Simple Guard for Learned Optimizers

01/28/2022
by Isabeau Prémont-Schwarz, et al.

If the trend of learned components outperforming their hand-crafted counterparts continues, learned optimizers will eventually outperform hand-crafted optimizers like SGD or Adam. However, even if learned optimizers (L2Os) eventually outpace hand-crafted ones in practice, they are still not provably convergent and may fail out of distribution. These are the issues addressed here. Currently, learned optimizers frequently outperform generic hand-crafted optimizers (such as gradient descent) early in training, but they generally plateau after some time, while the generic algorithms continue to make progress and often overtake the learned algorithm, much as Aesop's tortoise overtakes the hare. L2Os also still have difficulty generalizing out of distribution. Heaton et al. (2020) proposed Safeguarded L2O (GL2O), which takes a learned optimizer and safeguards it with a generic learning algorithm so that, by conditionally switching between the two, the resulting algorithm is provably convergent. We propose a new class of Safeguarded L2O, called Loss-Guarded L2O (LGL2O), which is both conceptually simpler and computationally less expensive. The guarding mechanism decides solely based on the expected future loss value of both optimizers. Furthermore, we give a theoretical proof of LGL2O's convergence guarantee and empirical results comparing it to GL2O and other baselines, showing that it combines the best of both L2O and SGD and in practice converges much better than GL2O.
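To make the loss-guarding idea concrete, here is a minimal sketch of one interpretation of the switching mechanism: both the learned optimizer and an SGD fallback propose an update, and the guard keeps whichever proposal has the lower loss. The function names (`lgl2o_step`, `sgd_step`, `bad_learned`) and the exact switching rule are illustrative assumptions, not the paper's precise algorithm, which may use expected future loss estimates and additional conditions.

```python
import numpy as np

def sgd_step(params, grad, lr=0.1):
    """Hand-crafted fallback optimizer (plain gradient descent)."""
    return params - lr * grad

def lgl2o_step(params, grad, loss_fn, learned_step):
    """One loss-guarded update (simplified sketch, not the paper's exact rule).

    Both optimizers propose a new parameter vector; the guard keeps the
    proposal with the lower loss, so training can always fall back on the
    provably convergent SGD branch when the learned optimizer misbehaves.
    """
    candidate_l2o = learned_step(params, grad)  # learned proposal
    candidate_sgd = sgd_step(params, grad)      # fallback proposal
    if loss_fn(candidate_l2o) <= loss_fn(candidate_sgd):
        return candidate_l2o
    return candidate_sgd

# Toy usage: minimize f(x) = ||x||^2 with a deliberately bad "learned" step
# that moves uphill, forcing the guard to pick the SGD branch every time.
loss_fn = lambda x: float(np.sum(x ** 2))
bad_learned = lambda p, g: p + 0.5 * g
x = np.array([1.0, -2.0])
for _ in range(50):
    grad = 2 * x
    x = lgl2o_step(x, grad, loss_fn, bad_learned)
print(loss_fn(x))  # loss shrinks toward 0 despite the faulty learned optimizer
```

Note the design trade-off the abstract alludes to: this guard needs only loss evaluations of the two candidate updates, whereas GL2O's safeguard involves a more elaborate switching condition, which is why LGL2O is described as conceptually simpler and computationally cheaper.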
