Scalable Data Augmentation for Deep Learning

03/22/2019
by Yuexi Wang, et al.

Scalable Data Augmentation (SDA) provides a framework for training deep learning models using auxiliary hidden layers. Scalable MCMC is available for network training and inference. SDA offers a number of computational advantages over traditional algorithms: it avoids backtracking and getting trapped in local modes, and it allows optimization with stochastic gradient descent (SGD) in TensorFlow. Standard deep neural networks with logit, ReLU, and SVM activation functions are straightforward to implement. To illustrate our architectures and methodology, we apply Pólya-Gamma logit data augmentation to a number of standard datasets. Finally, we conclude with directions for future research.
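The Pólya-Gamma logit augmentation mentioned above rests on the Polson–Scott–Windle identity, under which a logistic likelihood becomes conditionally Gaussian given latent variables ω ~ PG(1, ψ). As a rough, hypothetical sketch (not the authors' code), the snippet below applies that augmentation to plain Bayesian logistic regression, using a truncated sum-of-gammas approximation to the PG draw; the names sample_pg and gibbs_step, the truncation length, and the toy data are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_pg(b, c, n_terms=200):
        # Approximate draw from PG(b, c) via the truncated sum-of-gammas
        # representation of Polson, Scott & Windle (2013).
        c = np.atleast_1d(np.asarray(c, dtype=float))
        k = np.arange(1, n_terms + 1).reshape(-1, 1)               # (T, 1)
        g = rng.gamma(shape=b, scale=1.0, size=(n_terms, c.size))  # (T, n)
        denom = (k - 0.5) ** 2 + c.reshape(1, -1) ** 2 / (4 * np.pi ** 2)
        return (g / denom).sum(axis=0) / (2 * np.pi ** 2)

    def gibbs_step(beta, X, y, prior_prec):
        # One sweep of the two-block Gibbs sampler: draw the local PG
        # variables, then the conditionally Gaussian regression weights.
        psi = X @ beta
        omega = sample_pg(1.0, psi)
        kappa = y - 0.5
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + prior_prec)
        m = V @ (X.T @ kappa)
        return rng.multivariate_normal(m, V)

    # Toy example: recover weights from synthetic logistic-regression data.
    n, p = 200, 3
    X = rng.normal(size=(n, p))
    beta_true = np.array([1.0, -2.0, 0.5])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

    beta = np.zeros(p)
    prior_prec = np.eye(p)   # N(0, I) prior on the weights
    for _ in range(500):
        beta = gibbs_step(beta, X, y, prior_prec)
    print("posterior draw:", beta)

The same conditional-Gaussian structure is what makes the augmented model amenable to scalable MCMC and SGD-style training in the deep-network setting the abstract describes.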
