Computational Complexity of Sub-linear Convergent Algorithms

09/29/2022
by Hilal AlQuabeh, et al.

Optimizing the algorithms that machine learning uses to minimize an objective function has been of great interest. Several approaches to optimizing common algorithms, such as gradient descent and stochastic gradient descent, have been explored. One of these approaches reduces the variance of the gradient through adaptive sampling, in order to solve the empirical risk minimization (ERM) problems that arise in large-scale optimization. In this paper, we explore a scheme that starts with a small sample, geometrically increases the sample size, and uses the solution of the previous sample's ERM to warm-start the new ERM. This scheme solves ERM problems with first-order optimization algorithms of sublinear convergence, but at lower computational complexity. The paper begins with a theoretical analysis of the approach, followed by two experiments on different datasets: one comparing gradient descent with its adaptive-sampling variant, and one comparing ADAM with its adaptive-sampling variant.
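A minimal sketch of the geometric sample-growth idea described in the abstract, assuming a least-squares ERM objective and plain gradient descent as the sublinear first-order inner solver. The function and parameter names (`adaptive_sampling_gd`, `growth_factor`, `inner_steps`) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def erm_gradient(w, X, y):
    """Gradient of the least-squares empirical risk (1/2n)||Xw - y||^2."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def adaptive_sampling_gd(X, y, n0=64, growth_factor=2.0,
                         inner_steps=50, lr=0.1, seed=0):
    """Hypothetical sketch: solve ERM on geometrically growing subsamples,
    warm-starting each stage with the previous stage's solution."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    perm = rng.permutation(n)           # fix one random ordering of the data
    w = np.zeros(d)                     # initial point for the first stage
    m = n0
    while True:
        idx = perm[:min(m, n)]          # current subsample
        Xs, ys = X[idx], y[idx]
        for _ in range(inner_steps):    # first-order inner solver
            w -= lr * erm_gradient(w, Xs, ys)
        if m >= n:                      # full sample reached: done
            return w
        m = int(growth_factor * m)      # geometric sample growth

# usage: w_hat = adaptive_sampling_gd(X_train, y_train)
```

The intuition is that early stages run the solver on cheap, small subsamples whose solutions are already close to the full-sample minimizer, so the expensive full-sample stage needs only a few iterations from its warm start.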


