Mathematics of Deep Learning

12/13/2017
by Rene Vidal, et al.

Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for representation learning and classification. However, the mathematical reasons for this success remain elusive. This tutorial will review recent work that aims to provide a mathematical justification for several properties of deep networks, such as global optimality, geometric stability, and invariance of the learned representations.
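One of the properties mentioned above, invariance of learned representations, can be illustrated with a minimal NumPy sketch (an assumption for illustration, not code from the tutorial): a convolutional layer is equivariant to shifts of its input, so adding a global pooling step yields a feature that is invariant to (circular) shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

def circ_corr(x, w):
    # circular cross-correlation: pad x with its own start so every
    # window wraps around, making the layer exactly shift-equivariant
    xp = np.concatenate([x, x[: len(w) - 1]])
    return np.correlate(xp, w, mode="valid")

def features(x, w):
    # one conv layer + ReLU + global average pooling; pooling over all
    # positions discards location, so the feature is shift-invariant
    return np.maximum(circ_corr(x, w), 0.0).mean()

x = rng.standard_normal(256)   # toy 1-D signal
w = rng.standard_normal(7)     # toy filter

f_orig = features(x, w)
f_shift = features(np.roll(x, 5), w)  # circularly shifted input

# the two feature values agree up to floating-point error
print(abs(f_orig - f_shift))
```

Here the invariance is exact because the shift is circular; for ordinary shifts with boundary effects the representation is only approximately invariant, which is part of what the stability analyses reviewed in the tutorial quantify.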


