Adaptive Sequential Machine Learning

04/04/2019
by Craig Wilson, et al.

A framework previously introduced in [3] for solving a sequence of stochastic optimization problems with bounded changes in the minimizers is extended and applied to machine learning problems such as regression and classification. The stochastic optimization problems arising in these machine learning problems are solved using algorithms such as stochastic gradient descent (SGD). A method based on estimates of the change in the minimizers and on properties of the optimization algorithm is introduced for adaptively selecting the number of samples at each time step, so that the excess risk, i.e., the expected gap between the loss achieved by the approximate minimizer produced by the optimization algorithm and the loss at the exact minimizer, does not exceed a target level. A bound is developed showing that the estimate of the change in the minimizers is non-trivial provided that the excess risk is small enough. Extensions relevant to the machine learning setting are considered, including a cost-based approach for selecting the number of samples under a cost budget over a fixed horizon, and an approach to applying cross-validation for model selection. Finally, experiments with synthetic and real data are used to validate the algorithms.
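The core idea, adaptively choosing how many SGD samples to spend at each time step so that the excess risk stays below a target, can be illustrated with a minimal sketch. This is not the paper's algorithm: the drift model, the contraction rate, and the use of the true distance to the minimizer in place of the paper's data-driven estimate are all simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(w, theta, n_samples, lr=0.05, noise=0.1):
    """Run n_samples SGD steps on f_t(w) = 0.5 * E||w - (theta + eps)||^2."""
    for _ in range(n_samples):
        sample = theta + noise * rng.standard_normal(theta.shape)
        w = w - lr * (w - sample)  # stochastic gradient of 0.5*||w - sample||^2
    return w

d = 5
theta = rng.standard_normal(d)   # minimizer of the time-t problem
w = np.zeros(d)                  # iterate carried across time steps
target_risk = 0.01
lr = 0.05

for t in range(50):
    # Bounded drift in the minimizer between time steps
    theta = theta + 0.05 * rng.standard_normal(d)
    # Proxy for the excess risk: squared distance to the new minimizer.
    # (The paper estimates this change from data; using the true distance
    # here is a deliberate simplification.)
    gap = float(np.sum((w - theta) ** 2))
    # For this quadratic loss each SGD step contracts the expected squared
    # distance by roughly (1 - lr); choose the smallest sample count that
    # drives the predicted excess risk below the target.
    contraction = 1.0 - lr
    n = 1
    while gap * contraction ** n > target_risk and n < 500:
        n += 1
    w = sgd(w, theta, n, lr=lr)
```

The sample count `n` grows when the minimizer moves far between steps and shrinks when the iterate is already warm-started close to it, which is the qualitative behavior the adaptive scheme is designed to produce.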


