A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple parallel processors to operate on randomly chosen subsets of blocks of the feature vector. We call the algorithm stochastic because processors choose training subsets uniformly at random. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors utilize the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is convex. Moreover, we present an accelerated version of RAPSA (ARAPSA) that incorporates curvature information of the objective function by premultiplying the descent direction by a Hessian approximation matrix. We further extend the results to asynchronous settings and show that even if the processors perform their updates without any coordination, the algorithms still converge to the optimal argument. RAPSA and its extensions are then numerically evaluated on a linear estimation problem and a binary image classification task using the MNIST handwritten digit dataset.
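To make the doubly stochastic update concrete, the sketch below simulates a RAPSA-style step in plain NumPy: each simulated processor independently draws a random block of coordinates and a random minibatch of training examples, then updates only that block using the corresponding components of the stochastic gradient. The function names, block partition, step size, and minibatch size are illustrative assumptions rather than the paper's implementation; the accelerated variant (ARAPSA) would additionally premultiply the block gradient by a Hessian approximation.

```python
# Minimal sketch of a doubly stochastic (RAPSA-style) update.
# Assumptions: grad_fn(x, idx) returns the gradient of the average loss over
# the samples indexed by idx; all parameters below are illustrative.
import numpy as np

def rapsa_sketch(grad_fn, x, n_samples, n_blocks, n_procs,
                 step=0.01, batch=32, iters=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for _ in range(iters):
        # Each (simulated) processor picks its own block and minibatch
        # independently and with minimal coordination.
        for _ in range(n_procs):
            b = blocks[rng.integers(n_blocks)]          # random block of features
            idx = rng.integers(n_samples, size=batch)   # random training subset
            g = grad_fn(x, idx)                         # stochastic gradient
            x[b] -= step * g[b]                         # update only block b
    return x
```

In a true parallel or asynchronous implementation the inner loop would run concurrently across processors, each reading a possibly stale copy of x; the serial loop here only illustrates the block and sample selection pattern.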