Deep Learning: Computational Aspects

08/26/2018
by Nicholas Polson, et al.

In this article we review computational aspects of Deep Learning (DL). Deep learning uses network architectures consisting of hierarchical layers of latent variables to construct predictors for high-dimensional input-output models. Training a deep learning architecture is computationally intensive, and efficient linear algebra libraries are key for both training and inference. Stochastic gradient descent (SGD) optimization and mini-batch sampling are used to learn from massive data sets.
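The training loop the abstract refers to can be illustrated with a minimal sketch of mini-batch SGD on a least-squares model; the data, learning rate, and batch size below are illustrative choices, not from the article.

```python
import numpy as np

# Synthetic regression data: y = X @ w_true + noise
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def sgd(X, y, lr=0.1, batch_size=32, epochs=50):
    """Mini-batch SGD for the least-squares loss (1/2n)||Xw - y||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)                 # reshuffle batches each epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            # Gradient of the loss restricted to the sampled batch
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad                        # descent step
    return w

w_hat = sgd(X, y)
```

Each step touches only a small batch, so the per-iteration cost is independent of the data-set size; the heavy lifting is the matrix products, which is why efficient linear algebra libraries dominate training cost.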

