Statistical Analysis of Fixed Mini-Batch Gradient Descent Estimator

04/13/2023
by Haobo Qi, et al.

We study a fixed mini-batch gradient descent (FMGD) algorithm for solving optimization problems with massive datasets. In FMGD, the whole sample is split into multiple non-overlapping partitions. Once the partitions are formed, they are fixed throughout the rest of the algorithm; for convenience, we refer to them as fixed mini-batches. In each iteration, the gradients are then calculated sequentially on each fixed mini-batch. Because the size of each fixed mini-batch is typically much smaller than the whole sample size, each gradient can be computed cheaply. This substantially reduces the cost per iteration and makes FMGD computationally efficient and practically feasible. To study the theoretical properties of FMGD, we start with a linear regression model and a constant learning rate, and analyze both numerical convergence and statistical efficiency. We find that a sufficiently small learning rate is required for both numerical convergence and statistical efficiency. Nevertheless, an extremely small learning rate can lead to painfully slow numerical convergence. To resolve this trade-off, a diminishing learning rate scheduling strategy can be used, which yields an FMGD estimator with faster numerical convergence and better statistical efficiency. Finally, FMGD algorithms with random shuffling and with a general loss function are also studied.
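To make the procedure concrete, the following is a minimal sketch of FMGD for linear regression in Python/NumPy, based only on the description above. The function name fmgd_linear_regression, the way the partitions are drawn, and the 1/(1 + epoch) diminishing schedule are illustrative assumptions, not the authors' exact implementation.

import numpy as np

def fmgd_linear_regression(X, y, n_batches=10, lr=0.1, n_epochs=100,
                           diminishing=False):
    # Sketch of fixed mini-batch gradient descent (FMGD) for least squares.
    n, p = X.shape
    rng = np.random.default_rng(0)

    # Form non-overlapping partitions once; they remain fixed afterwards.
    perm = rng.permutation(n)
    batches = np.array_split(perm, n_batches)

    beta = np.zeros(p)
    for epoch in range(n_epochs):
        # One possible diminishing learning-rate schedule (an assumption here).
        step = lr / (1 + epoch) if diminishing else lr
        # Sweep through the fixed mini-batches sequentially.
        for idx in batches:
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ beta - yb) / len(idx)  # least-squares gradient
            beta -= step * grad
    return beta

# Toy usage: recover the coefficients of a simulated linear model.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 5))
true_beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_beta + rng.normal(scale=0.1, size=5000)
beta_hat = fmgd_linear_regression(X, y, n_batches=20, diminishing=True)
print(np.round(beta_hat, 2))

Because each update touches only one fixed mini-batch, the per-iteration cost scales with the batch size rather than the full sample size, which is the computational advantage the abstract emphasizes.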


