Learning Curves for Analysis of Deep Networks

by Derek Hoiem, et al.

A learning curve models a classifier's test error as a function of the number of training samples. Prior work shows that learning curves can be used to select model parameters and extrapolate performance. We investigate how to use learning curves to analyze the impact of design choices, such as pre-training, architecture, and data augmentation. We propose a method to robustly estimate learning curves, abstract their parameters into error and data-reliance, and evaluate the effectiveness of different parameterizations. We also provide several interesting observations based on learning curves for a variety of image classification models.
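The abstract does not spell out the curve model, but learning curves are commonly parameterized as a power law with an offset, err(n) ≈ alpha + eta · n^(-gamma), where alpha is the asymptotic error and the eta · n^(-gamma) term captures how strongly performance depends on added data (loosely, the "data-reliance" mentioned above). A minimal sketch of fitting such a curve, assuming this power-law form and using a grid search over gamma with linear least squares for (alpha, eta):

```python
import numpy as np

def fit_learning_curve(n, err, gammas=np.linspace(0.1, 1.0, 91)):
    """Fit err(n) ~ alpha + eta * n**(-gamma).

    For each candidate gamma, (alpha, eta) is a linear least-squares
    problem; we keep the gamma with the smallest squared residual.
    This is an illustrative sketch, not the paper's exact estimator.
    """
    n = np.asarray(n, dtype=float)
    err = np.asarray(err, dtype=float)
    best = None
    for g in gammas:
        # design matrix: [1, n^{-g}] -> coefficients (alpha, eta)
        X = np.column_stack([np.ones_like(n), n ** (-g)])
        coef, *_ = np.linalg.lstsq(X, err, rcond=None)
        resid = float(np.sum((X @ coef - err) ** 2))
        if best is None or resid < best[0]:
            best = (resid, coef[0], coef[1], g)
    _, alpha, eta, gamma = best
    return alpha, eta, gamma

# synthetic curve: asymptotic error 0.08, data term 2.0 * n^{-0.5}
n = np.array([50, 100, 200, 400, 800, 1600, 3200])
err = 0.08 + 2.0 * n ** (-0.5)
alpha, eta, gamma = fit_learning_curve(n, err)
```

A small alpha with a large eta indicates a model that would benefit substantially from more training data, which is the kind of design-choice comparison the paper uses learning curves for.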



