Stochastic gradient descent for linear least squares problems with partially observed data

07/09/2020
by Kui Du et al.

We propose a novel stochastic gradient descent method for solving linear least squares problems with partially observed data. At each step, the method selects a random pair of row and column index sets and uses only the corresponding submatrix to update the iterate. We provide convergence guarantees in the mean square sense and report numerical experiments that corroborate the theory.
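The abstract's submatrix-based update can be illustrated with a minimal sketch. The paper's exact estimator, step sizes, and bias correction for partial observation are not reproduced here; the version below is a simple randomized block scheme on a fully observed toy system, where each step samples a row index set `I` and a column index set `J` and updates only the coordinates in `J`. The step size `eta` and block sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy consistent least squares problem A @ x_star = b (fully observed here;
# the paper's setting restricts each step to the submatrix A[I, J]).
m, n = 60, 8
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
b = A @ x_star

x = np.zeros(n)
eta = 0.02               # step size (illustrative choice)
row_bs, col_bs = 10, 4   # sizes of the sampled row and column index sets

for _ in range(5000):
    I = rng.choice(m, size=row_bs, replace=False)  # random row index set
    J = rng.choice(n, size=col_bs, replace=False)  # random column index set
    # Residual over the sampled rows, then a gradient step restricted to the
    # sampled columns; A[np.ix_(I, J)] is the submatrix indexed by (I, J).
    r = A[I] @ x - b[I]
    x[J] -= eta * A[np.ix_(I, J)].T @ r

print(np.linalg.norm(x - x_star))
```

Note that this sketch still reads full rows `A[I]` to form the residual; the paper's method instead builds the update from the submatrix `A[I, J]` alone, with an estimator suited to partially observed data.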


Related research

- Understanding the unstable convergence of gradient descent (04/03/2022): Most existing analyses of (stochastic) gradient descent rely on the cond...
- On the Performance of Preconditioned Stochastic Gradient Descent (03/26/2018): This paper studies the performance of preconditioned stochastic gradient...
- On the Discrepancy Principle for Stochastic Gradient Descent (04/30/2020): Stochastic gradient descent (SGD) is a promising numerical method for so...
- On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems (10/21/2020): Stochastic gradient descent (SGD) is a promising method for solving larg...
- Block stochastic gradient descent for large-scale tomographic reconstruction in a parallel network (03/28/2019): Iterative algorithms have many advantages for linear tomographic image r...
- INFERNO: Inference-Aware Neural Optimisation (06/12/2018): Complex computer simulations are commonly required for accurate data mod...
- Seismic Tomography with Random Batch Gradient Reconstruction (10/13/2021): Seismic tomography solves high-dimensional optimization problems to imag...
