Partial differential equation regularization for supervised machine learning

10/03/2019
by Adam M. Oberman, et al.

This article is an overview of supervised machine learning problems for regression and classification. Topics include: kernel methods, training by stochastic gradient descent, deep learning architectures, losses for classification, statistical learning theory, and dimension-independent generalization bounds. Examples of implicit regularization in deep learning are presented, including data augmentation, adversarial training, and additive noise. These methods are reframed as explicit gradient regularization.
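
As a rough illustration of the reframing mentioned in the abstract, the snippet below is a minimal sketch (not code from the paper) of explicit gradient regularization: the training loss is augmented with lam * ||grad_x loss||^2, the kind of input-gradient penalty that an adversarial perturbation of size epsilon contributes to first order (as epsilon * ||grad_x loss||) and that additive input noise contributes in expectation through higher-order terms. The model, loss function, and penalty weight lam are placeholder choices.

    import torch

    def gradient_regularized_loss(model, loss_fn, x, y, lam=0.1):
        # Explicit gradient regularization: loss + lam * ||grad_x loss||^2.
        # A minimal sketch of implicit regularizers (adversarial training,
        # additive noise) rewritten as an explicit penalty on the input
        # gradient of the loss.
        x = x.clone().requires_grad_(True)
        loss = loss_fn(model(x), y)
        # create_graph=True keeps the penalty differentiable with respect
        # to the model parameters, so it can be trained through.
        (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
        return loss + lam * grad_x.pow(2).sum()

    # Hypothetical usage on a toy regression problem:
    model = torch.nn.Linear(4, 1)
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = gradient_regularized_loss(model, torch.nn.functional.mse_loss, x, y)
    loss.backward()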

Related research

10/20/2022: A note on diffusion limits for stochastic gradient descent
"In the machine learning literature stochastic gradient descent has recen..."

02/21/2023: Deep Learning via Neural Energy Descent
"This paper proposes the Neural Energy Descent (NED) via neural network e..."

06/18/2019: The Multiplicative Noise in Stochastic Gradient Descent: Data-Dependent Regularization, Continuous and Discrete Approximation
"The randomness in Stochastic Gradient Descent (SGD) is considered to pla..."

04/13/2023: Understanding Overfitting in Adversarial Training via Kernel Regression
"Adversarial training and data augmentation with noise are widely adopted..."

05/16/2018: Adversarial Training for Patient-Independent Feature Learning with IVOCT Data for Plaque Classification
"Deep learning methods have shown impressive results for a variety of med..."

10/29/2017: Regularization for Deep Learning: A Taxonomy
"Regularization is one of the crucial ingredients of deep learning, yet t..."

11/17/2020: Deep Learning Framework From Scratch Using Numpy
"This work is a rigorous development of a complete and general-purpose de..."
