Determinantal Point Processes for Mini-Batch Diversification

05/01/2017
by Cheng Zhang, et al.

We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and assigns low probabilities to mini-batches that contain redundant data, and higher probabilities to mini-batches with more diverse data. This simultaneously balances the data and leads to stochastic gradients with lower variance. We term this approach Diversified Mini-Batch SGD (DM-SGD). We show that regular SGD and a biased version of stratified sampling emerge as special cases. Furthermore, DM-SGD generalizes stratified sampling to cases where no discrete features exist to bin the data into groups. We show experimentally that our method results in more interpretable and diverse features in unsupervised setups, and in better classification accuracies in supervised setups.
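The core idea above is to draw mini-batches from a DPP defined over a similarity kernel, so that similar points rarely appear together. The sketch below is an illustrative approximation, not the authors' exact sampler: it builds an RBF similarity kernel and greedily selects a batch that maximizes the log-determinant of the selected kernel submatrix, a common MAP-style stand-in for exact k-DPP sampling. The function names, the `gamma` bandwidth, and the toy data are assumptions for demonstration only.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances turned into an RBF similarity matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def greedy_diverse_batch(L, k):
    """Greedily pick k indices (approximately) maximizing det(L[S, S]).

    Illustrative MAP-style stand-in for k-DPP sampling: at each step,
    add the point that most increases the log-determinant of the
    selected kernel submatrix, which favors mutually dissimilar points.
    """
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
# Toy data: two tight clusters plus one outlier. A diverse batch should
# spread across all three groups instead of picking redundant neighbors.
X = np.vstack([rng.normal(0, 0.05, (5, 2)),
               rng.normal(5, 0.05, (5, 2)),
               [[10.0, 10.0]]])
L = rbf_kernel(X, gamma=0.5)
batch = greedy_diverse_batch(L, k=3)
```

On this toy data the selected batch contains one point from each cluster and the outlier, illustrating how determinant-based selection suppresses redundant points. Exact (k-)DPP samplers instead draw batches stochastically via an eigendecomposition of the kernel, which is what makes the resulting stochastic gradients unbiased up to the DPP's induced reweighting.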


Related research

04/08/2018 · Active Mini-Batch Sampling using Repulsive Point Processes
The convergence speed of stochastic gradient descent (SGD) can be improv...

01/27/2019 · SGD: General Analysis and Improved Rates
We propose a general yet simple theorem describing the convergence of SG...

05/03/2020 · Adaptive Learning of the Optimal Mini-Batch Size of SGD
Recent advances in the theoretical understanding of SGD (Qian et al., 201...

07/10/2022 · NGAME: Negative Mining-aware Mini-batching for Extreme Classification
Extreme Classification (XC) seeks to tag data points with the most relev...

04/01/2023 · Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability
Random label noises (or observational noises) widely exist in practical ...

02/23/2020 · Improve SGD Training via Aligning Mini-batches
Deep neural networks (DNNs) for supervised learning can be viewed as a p...

11/23/2022 · Learning Compact Features via In-Training Representation Alignment
Deep neural networks (DNNs) for supervised learning can be viewed as a p...
