Statistical Learning and Inverse Problems: A Stochastic Gradient Approach

09/29/2022
by   Yuri R. Fonseca, et al.

Inverse problems are paramount in Science and Engineering. In this paper, we consider the Statistical Inverse Problem (SIP) setup and demonstrate how Stochastic Gradient Descent (SGD) algorithms can be used in the linear SIP setting. We provide consistency results and finite-sample bounds for the excess risk. We also propose a modification of the SGD algorithm in which we leverage machine learning methods to smooth the stochastic gradients and improve empirical performance. We illustrate the algorithm on a setting of great current interest: the Functional Linear Regression model. Here we consider a synthetic-data example as well as a real-data classification problem.
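To make the linear SIP setting concrete, the sketch below applies plain SGD to a hypothetical linear inverse problem: recover a parameter vector from noisy streaming observations y_i = ⟨a_i, x⟩ + ε_i, using one observation per gradient step. The problem sizes, step-size schedule, and noise level are illustrative assumptions, not the paper's exact algorithm (in particular, the gradient-smoothing modification is not shown).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear statistical inverse problem:
# observe y_i = <a_i, x_true> + eps_i, one pair at a time.
n, d = 2000, 20
x_true = rng.normal(size=d)
A = rng.normal(size=(n, d))
y = A @ x_true + 0.1 * rng.normal(size=n)

def sgd_linear_sip(A, y, n_passes=5, step0=0.5):
    """Plain SGD on the squared residual of one random observation per step."""
    d = A.shape[1]
    x = np.zeros(d)
    t = 0
    order_rng = np.random.default_rng(1)
    for _ in range(n_passes):
        for i in order_rng.permutation(len(y)):
            t += 1
            # Stochastic gradient of 0.5 * (<a_i, x> - y_i)^2
            grad = (A[i] @ x - y[i]) * A[i]
            # Decaying step size, a standard choice for consistency
            x -= (step0 / np.sqrt(t)) * grad
    return x

x_hat = sgd_linear_sip(A, y)
print(np.linalg.norm(x_hat - x_true))
```

With a well-conditioned random design and decaying steps, the recovery error shrinks as more observations are processed; the excess-risk bounds in the paper quantify this behavior in the general linear SIP setting.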

Related research

- 02/10/2023: On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
- 08/10/2021: An Analysis of Stochastic Variance Reduced Gradient for Linear Inverse Problems
- 03/16/2023: Stochastic gradient descent for linear inverse problems in variable exponent Lebesgue spaces
- 10/21/2020: On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems
- 07/01/2020: Online Robust Regression via SGD on the l1 loss
- 06/11/2018: Statistics on functional data and covariance operators in linear inverse problems
