Why does Deep Learning work? - A perspective from Group Theory

12/20/2014
by   Arnab Paul, et al.

Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of Deep Learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pre-training: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of shadow groups whose elements serve as close approximations. Over the shadow groups, the pre-training step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are, in a sense, the simplest, which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher-order representations, and why representation complexity increases as the layers get deeper.
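Two short illustrations may help make the abstract's technical ingredients concrete. First, the group-theoretic vocabulary: for a group G acting on a set of features X (the symbols G, X, x are ours, not the paper's notation), the orbit of a feature collects everything it can be mapped to, while the stabilizer collects the transformations that leave it fixed. The orbit-stabilizer theorem then makes precise why a minimal orbit means maximal symmetry, i.e. the "simplest" features.

```latex
% Standard definitions (illustrative notation, not taken from the paper):
% a group G acts on a feature set X; for a feature x \in X,
\mathrm{Orb}(x)  = \{\, g \cdot x \mid g \in G \,\}, \qquad
\mathrm{Stab}(x) = \{\, g \in G \mid g \cdot x = x \,\}.
% Orbit-stabilizer theorem (G finite):
|\mathrm{Orb}(x)| = \frac{|G|}{|\mathrm{Stab}(x)|},
% so a minimal orbit is equivalent to a maximal stabilizer, i.e. maximal symmetry.
```

Second, the pre-training step itself ("search for a good generative model for the input samples, one layer at a time"). The sketch below is a minimal greedy layer-wise scheme using tied-weight autoencoders; it is not the paper's code, and the layer sizes, learning rate, and squared-error loss are illustrative assumptions.

```python
# Minimal sketch of greedy layer-wise pre-training (stacked autoencoders).
# Hyperparameters and the tied-weight autoencoder choice are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(data, hidden_dim, lr=0.1, epochs=50, rng=None):
    """Fit one tied-weight autoencoder to `data`; return (W, b, hidden codes)."""
    rng = rng or np.random.default_rng(0)
    n, d = data.shape
    W = 0.01 * rng.standard_normal((d, hidden_dim))
    b = np.zeros(hidden_dim)   # encoder bias
    c = np.zeros(d)            # decoder bias
    for _ in range(epochs):
        h = sigmoid(data @ W + b)        # encode
        recon = sigmoid(h @ W.T + c)     # decode with tied weights
        err = recon - data               # reconstruction error
        # gradient of 0.5*||recon - data||^2 through the sigmoid units
        g_recon = err * recon * (1 - recon)
        g_h = (g_recon @ W) * h * (1 - h)
        W -= lr * (data.T @ g_h + g_recon.T @ h) / n
        b -= lr * g_h.mean(axis=0)
        c -= lr * g_recon.mean(axis=0)
    return W, b, sigmoid(data @ W + b)

def greedy_pretrain(data, layer_dims):
    """Pre-train one layer at a time, feeding each layer's codes to the next."""
    layers, x = [], data
    for dim in layer_dims:
        W, b, x = pretrain_layer(x, dim)
        layers.append((W, b))
    return layers

if __name__ == "__main__":
    x = np.random.default_rng(1).random((256, 64))   # toy input samples
    stack = greedy_pretrain(x, layer_dims=[32, 16])  # two pre-trained layers
    print([w.shape for w, _ in stack])
```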

