Taylor Learning

05/24/2023
by James Schmidt, et al.

Empirical risk minimization stands behind most optimization in supervised machine learning. Under this scheme, labeled data is used to approximate an expected cost (risk), and a learning algorithm updates model-defining parameters in search of an empirical risk minimizer, with the aim of approximately minimizing the expected cost. Parameter updates are typically performed by some variant of gradient descent. In this paper, we introduce a learning algorithm that constructs models of real analytic functions using neither gradient descent nor empirical risk minimization. Observing that such functions are defined by local information, we situate familiar Taylor approximation methods in the context of sampling data from a distribution, and prove a nonuniform learning result.
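
The paper's actual construction is not reproduced here, but the following minimal Python sketch illustrates the general idea under one concrete assumption: derivatives at an anchor point are estimated by central finite differences from finitely many nearby samples, and the resulting Taylor polynomial serves as the model. The function name taylor_model, the estimator, and all parameters are illustrative choices, not the paper's algorithm or its sampling scheme.

```python
import math

def taylor_model(f, x0, order, h=1e-2):
    """Fit a degree-`order` Taylor polynomial to f around x0 from samples.

    Each derivative f^(k)(x0) is estimated with a k-th central finite
    difference over equally spaced samples of f near x0.
    """
    derivs = []
    for k in range(order + 1):
        # k-th central difference: binomially weighted samples of f,
        # spaced h apart and centered at x0, divided by h^k.
        dk = sum(
            (-1) ** i * math.comb(k, i) * f(x0 + (k / 2 - i) * h)
            for i in range(k + 1)
        ) / h ** k
        derivs.append(dk)

    def model(x):
        # Evaluate the Taylor polynomial: sum_k d_k (x - x0)^k / k!
        return sum(
            d * (x - x0) ** k / math.factorial(k)
            for k, d in enumerate(derivs)
        )

    return model

# Example: model exp near 0, then evaluate away from the anchor point.
m = taylor_model(math.exp, x0=0.0, order=4)
print(m(0.5), math.exp(0.5))  # agree up to the degree-4 Taylor remainder
```

Note that nothing in the sketch performs gradient descent or minimizes an empirical risk: the model is assembled directly from local function evaluations, which is the spirit of the abstract's claim that real analytic functions are determined by local information.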
