Nonparametric Online Learning Using Lipschitz Regularized Deep Neural Networks

05/26/2019
by Guy Uziel, et al.

Deep neural networks are considered state-of-the-art models in many offline machine learning tasks. However, their performance and generalization abilities in online learning tasks are far less understood. We therefore focus on online learning and tackle the challenging setting where the underlying process is stationary and ergodic, thus removing the i.i.d. assumption and allowing observations to depend on each other arbitrarily. We prove the generalization abilities of Lipschitz regularized deep neural networks and show that, by using these networks, convergence to the best possible prediction strategy is guaranteed.
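The abstract does not specify the paper's regularizer or training procedure, so the following is only a minimal sketch of the general idea: an online update whose loss combines a data-fit term with a penalty that discourages a large Lipschitz constant. It uses PyTorch, a squared input-gradient penalty as a stand-in for the paper's regularizer, and illustrative layer sizes and hyperparameters; none of these choices come from the paper itself.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumptions throughout): an online learner whose loss
# adds a gradient-norm penalty to encourage Lipschitz continuity of the
# predictor. Architecture, learning rate, and penalty weight are
# illustrative, not taken from the paper.

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
lam = 0.1  # weight on the Lipschitz penalty (hypothetical choice)

def online_step(x, y):
    """One online update on a single observation (x, y)."""
    x = x.clone().requires_grad_(True)
    pred = model(x)
    fit = loss_fn(pred, y)
    # The norm of d(pred)/dx approximates the network's local Lipschitz
    # constant at x; penalizing it regularizes the learned predictor.
    (grad_x,) = torch.autograd.grad(pred.sum(), x, create_graph=True)
    loss = fit + lam * grad_x.pow(2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Observations arrive one at a time; running the procedure itself
# requires no i.i.d. assumption on the data stream.
for t in range(100):
    x_t, y_t = torch.randn(1, 8), torch.randn(1, 1)
    online_step(x_t, y_t)
```

The one-observation-at-a-time loop mirrors the online setting the abstract describes; the penalty term is where Lipschitz regularization enters, whatever its exact form in the paper.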


Related research

05/26/2019 · Deep Online Learning with Stochastic Constraints
Deep learning models are considered to be state-of-the-art in many offli...

08/28/2018 · Lipschitz regularized Deep Neural Networks converge and generalize
Lipschitz regularized neural networks augment the usual fidelity term us...

10/15/2021 · Provable Regret Bounds for Deep Online Learning and Control
The use of deep neural networks has been highly successful in reinforcem...

12/31/2016 · Lazily Adapted Constant Kinky Inference for Nonparametric Regression and Model-Reference Adaptive Control
Techniques known as Nonlinear Set Membership prediction, Lipschitz Inter...

10/17/2018 · Learning in Non-convex Games with an Optimization Oracle
We consider adversarial online learning in a non-convex setting under th...

06/09/2022 · Meet You Halfway: Explaining Deep Learning Mysteries
Deep neural networks perform exceptionally well on various learning task...

06/01/2023 · An FPGA Architecture for Online Learning using the Tsetlin Machine
There is a need for machine learning models to evolve in unsupervised ci...
