
Mathematics of Deep Learning

by René Vidal, et al.
New York University
Johns Hopkins University
Tel Aviv University

Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for representation learning and classification. However, the mathematical reasons for this success remain elusive. This tutorial will review recent work that aims to provide a mathematical justification for several properties of deep networks, such as global optimality, geometric stability, and invariance of the learned representations.
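One of the properties mentioned above, invariance of learned representations, can be illustrated with a minimal numpy sketch (the filter and signal here are hypothetical, not taken from the tutorial): a 1-D convolution followed by global max pooling produces a feature that is unchanged when the input pattern is translated, up to boundary effects.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_maxpool(x, w):
    """Valid 1-D convolution with filter w, followed by global max pooling."""
    feats = np.array([np.dot(x[i:i + len(w)], w)
                      for i in range(len(x) - len(w) + 1)])
    return feats.max()

w = rng.standard_normal(3)       # a random filter (stand-in for a learned one)
x = np.zeros(32)
x[5:8] = [1.0, 2.0, 1.0]         # a localized pattern

shifted = np.roll(x, 10)         # the same pattern, translated by 10 samples

# Global pooling over convolution responses is translation invariant
# (away from the signal boundary), one of the properties the tutorial
# formalizes for deep convolutional representations.
print(np.isclose(conv_maxpool(x, w), conv_maxpool(shifted, w)))  # True
```

The convolution responses shift along with the input, so taking a maximum over all positions discards the location while keeping the pattern's signature; deeper analyses in the reviewed work quantify how stable this is under more general deformations than pure translations.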


