DeepOBS: A Deep Learning Optimizer Benchmark Suite

03/13/2019
by Frank Schneider, et al.

Because the choice and tuning of the optimizer affect both the speed and the final performance of deep learning training, there is significant past and recent research in this area. Yet, perhaps surprisingly, there is no generally agreed-upon protocol for the quantitative and reproducible evaluation of optimization strategies for deep learning. We suggest routines and benchmarks for stochastic optimization, with a special focus on the unique aspects of deep learning, such as stochasticity, tunability, and generalization. As the primary contribution, we present DeepOBS, a Python package of deep learning optimization benchmarks. The package addresses key challenges in the quantitative assessment of stochastic optimizers and automates most steps of benchmarking. The library includes a wide and extensible set of ready-to-use, realistic optimization problems, such as training Residual Networks for image classification on ImageNet or character-level language prediction models, as well as popular classics like MNIST and CIFAR-10. The package also provides realistic baseline results for the most popular optimizers on these test problems, ensuring a fair comparison when benchmarking new optimizers, without having to re-run costly experiments. It comes with output back-ends that directly produce LaTeX code for inclusion in academic publications. It supports TensorFlow and is available open source.
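The core of the benchmarking protocol the abstract describes is to run an optimizer on a test problem over several random seeds and report aggregate statistics, so that stochasticity is measured rather than hidden. The following minimal sketch illustrates that idea on a toy noisy quadratic; all names are illustrative stand-ins, not DeepOBS's actual API.

```python
import random
import statistics

# Toy stand-in for a DeepOBS-style test problem: a noisy 1-D quadratic
# loss f(w) = (w - 2)^2 observed through stochastic gradients.
# Everything here is an illustration of the protocol, not DeepOBS code.

def noisy_grad(w, rng):
    """Stochastic gradient of (w - 2)^2 with additive Gaussian noise."""
    return 2.0 * (w - 2.0) + rng.gauss(0.0, 0.5)

def run_sgd(seed, lr=0.1, steps=200):
    """One full training run from a fixed seed; returns the final loss."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        w -= lr * noisy_grad(w, rng)
    return (w - 2.0) ** 2

# Benchmark protocol: repeat over multiple seeds and report mean +/- std
# of the final loss, rather than a single (possibly lucky) run.
losses = [run_sgd(seed) for seed in range(10)]
mean = statistics.mean(losses)
std = statistics.stdev(losses)
print(f"final loss over {len(losses)} seeds: {mean:.4f} +/- {std:.4f}")
```

In a real benchmark the same multi-seed loop would wrap a full training pipeline (data loading, model, train/test evaluation), and a fixed hyperparameter-tuning budget would be applied identically to every competing optimizer.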

