A Novel Sequential Coreset Method for Gradient Descent Algorithms

12/05/2021
by Jiawei Huang, et al.

A wide range of optimization problems arising in machine learning can be solved by gradient descent algorithms, and a central question in this area is how to efficiently compress a large-scale dataset so as to reduce the computational complexity. Coresets are a popular data compression technique that has been extensively studied. However, most existing coreset methods are problem-dependent and cannot be used as a general tool for a broader range of applications. A key obstacle is that they often rely on the pseudo-dimension and total sensitivity bounds, which can be very high or hard to obtain. In this paper, based on the "locality" property of gradient descent algorithms, we propose a new framework, termed the "sequential coreset", which effectively avoids these obstacles. Moreover, our method is particularly suitable for sparse optimization, where the coreset size can be further reduced to be only poly-logarithmically dependent on the dimension. In practice, the experimental results suggest that our method can save a large amount of running time compared with the baseline algorithms.
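
For intuition, below is a minimal, illustrative Python sketch of the generic coreset-for-gradient-descent recipe the abstract alludes to: subsample data points by importance, reweight them so the subsample gradient approximates the full gradient, and run gradient descent on the small weighted set, refreshing the sample around the current iterate as a crude stand-in for the "locality" idea. The residual-based sampling scores, the refresh schedule, and all function names are assumptions made for exposition; this is not the paper's sequential coreset construction or its guarantees.

```python
import numpy as np

def build_coreset(X, y, w, m, rng):
    # Assumed sensitivity proxy: sample points with probability proportional
    # to their current residual magnitude, then reweight by inverse probability
    # so the weighted subsample gradient is an unbiased estimate of the full one.
    scores = np.abs(X @ w - y) + 1e-12
    p = scores / scores.sum()
    idx = rng.choice(len(y), size=m, replace=True, p=p)
    weights = 1.0 / (len(y) * p[idx])
    return X[idx], y[idx], weights

def coreset_gd(X, y, m=256, epochs=20, steps_per_epoch=50, lr=0.1, seed=0):
    # Gradient descent for least squares on a small reweighted subsample,
    # refreshed around the current iterate each epoch (an illustrative
    # locality heuristic, not the paper's method).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        Xc, yc, wts = build_coreset(X, y, w, m, rng)
        for _ in range(steps_per_epoch):
            grad = Xc.T @ (wts * (Xc @ w - yc)) / len(yc)  # weighted LS gradient
            w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(10_000, 20))
    w_true = rng.normal(size=20)
    y = X @ w_true + 0.01 * rng.normal(size=10_000)
    w_hat = coreset_gd(X, y)
    print("parameter error:", np.linalg.norm(w_hat - w_true))
```

Each gradient step touches only m points instead of the full dataset, which is where the running-time savings reported in the paper come from; the paper's contribution lies in how the subsample is chosen and bounded, which this sketch does not attempt to reproduce.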

Related research:

- 09/29/2022 · Computational Complexity of Sub-linear Convergent Algorithms
  Optimizing machine learning algorithms that are used to solve the object...
- 06/29/2021 · Never Go Full Batch (in Stochastic Convex Optimization)
  We study the generalization performance of full-batch optimization algor...
- 08/26/2020 · Gravilon: Applications of a New Gradient Descent Method to Machine Learning
  Gradient descent algorithms have been used in countless applications sin...
- 05/10/2021 · Exact asymptotic characterisation of running time for approximate gradient descent on random graphs
  In this work we study the time complexity for the search of local minima...
- 10/16/2019 · A Double Residual Compression Algorithm for Efficient Distributed Learning
  Large-scale machine learning models are often trained by parallel stocha...
- 08/26/2022 · Improving the Efficiency of Gradient Descent Algorithms Applied to Optimization Problems with Dynamical Constraints
  We introduce two block coordinate descent algorithms for solving optimiz...
- 07/06/2014 · Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
  First-order methods play a central role in large-scale machine learning....
