Uncertainty Quantification in Deep Learning through Stochastic Maximum Principle

11/28/2020
by Richard Archibald, et al.

We develop a probabilistic machine learning method that formulates a class of stochastic neural networks as a stochastic optimal control problem. An efficient stochastic gradient descent algorithm is introduced within the stochastic maximum principle framework. We carry out a convergence analysis for the stochastic gradient descent optimization and numerical experiments on applications of stochastic neural networks, validating our methodology in both theory and performance.
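To make the setup concrete, the sketch below shows one common way to view a stochastic neural network as a controlled SDE: each layer is an Euler–Maruyama step X_{n+1} = X_n + f(X_n; W_n, b_n)·Δt + σ·ΔB_n, with the layer weights playing the role of the control, trained by stochastic gradient descent on a terminal loss. This is a hypothetical illustration under assumed choices (tanh drift, finite-difference gradients in place of the adjoint/backward-SDE computation that a stochastic-maximum-principle implementation would use, and a toy regression target); it is not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: 4 SDE "layers", 2 state dims, step size DT,
# diffusion coefficient SIGMA. All values are illustrative.
N_LAYERS, DIM, DT, SIGMA = 4, 2, 0.25, 0.05


def forward(x, params, noise):
    """One Euler-Maruyama pass: each layer adds drift + scaled noise."""
    for (W, b), dB in zip(params, noise):
        x = x + np.tanh(x @ W + b) * DT + SIGMA * dB
    return x


def loss(params, x0, target, noise):
    """Terminal-cost objective: mean squared error at the final state."""
    out = forward(x0, params, noise)
    return 0.5 * np.mean((out - target) ** 2)


def numerical_grad(params, x0, target, noise, eps=1e-6):
    """Central finite differences w.r.t. the controls (W, b).
    An SMP-based method would instead solve a backward adjoint SDE."""
    grads = []
    for W, b in params:
        gW, gb = np.zeros_like(W), np.zeros_like(b)
        for idx in np.ndindex(*W.shape):
            W[idx] += eps
            lp = loss(params, x0, target, noise)
            W[idx] -= 2 * eps
            lm = loss(params, x0, target, noise)
            W[idx] += eps
            gW[idx] = (lp - lm) / (2 * eps)
        for i in range(b.size):
            b[i] += eps
            lp = loss(params, x0, target, noise)
            b[i] -= 2 * eps
            lm = loss(params, x0, target, noise)
            b[i] += eps
            gb[i] = (lp - lm) / (2 * eps)
        grads.append((gW, gb))
    return grads


# Toy regression: learn the map x0 -> 0.5 * x0.
x0 = rng.normal(size=(32, DIM))
target = 0.5 * x0
params = [(0.1 * rng.normal(size=(DIM, DIM)), np.zeros(DIM))
          for _ in range(N_LAYERS)]

losses, lr = [], 0.5
for step in range(200):
    # Fresh Brownian increments each SGD step; the same sample is reused
    # inside the gradient evaluation (common random numbers).
    noise = rng.normal(size=(N_LAYERS, 32, DIM)) * np.sqrt(DT)
    for (W, b), (gW, gb) in zip(params, numerical_grad(params, x0, target, noise)):
        W -= lr * gW
        b -= lr * gb
    losses.append(loss(params, x0, target, noise))
```

Because the noise enters the dynamics rather than the labels, the loss decreases toward a noise floor set by σ instead of to zero, which is the qualitative behavior one expects from the controlled-SDE formulation.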


