Mathematical Perspective of Machine Learning

07/03/2020
by Yarema Boryshchak, et al.

From a mathematical perspective, we take a closer look at some theoretical challenges of machine learning: learning as function approximation, gradient descent as the default optimization algorithm, the limitations of networks of fixed length and width, and a different approach to RNNs.
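The abstract's framing of gradient descent as machine learning's default optimizer is easy to make concrete. Below is a minimal sketch of plain gradient descent on a least-squares objective L(w) = (1/2n)||Xw - y||^2, whose gradient is (1/n) X^T (Xw - y); the synthetic data, learning rate, and iteration count are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Minimal sketch: gradient descent on the least-squares loss
    # L(w) = (1/2n) ||Xw - y||^2, with gradient (1/n) X^T (Xw - y).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))           # 100 samples, 3 features (synthetic)
    w_true = np.array([1.5, -2.0, 0.5])     # ground-truth weights for the demo
    y = X @ w_true + 0.01 * rng.normal(size=100)

    w = np.zeros(3)                         # initial guess
    lr = 0.1                                # learning rate (step size)
    for step in range(500):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the loss at w
        w -= lr * grad                      # move against the gradient

    print(w)                                # close to w_true after convergence

With a small enough step size, each update moves w downhill on the loss surface; this is the basic behavior whose theoretical guarantees and limitations the paper examines in more general settings.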

