A Group Theoretic Perspective on Unsupervised Deep Learning

04/08/2015
by Arnab Paul, et al.

Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pretraining: first search for a good generative model for the input samples, then repeat the process one layer at a time. We show deeper implications of this simple principle by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of shadow groups whose elements serve as close approximations. Over the shadow groups, the pretraining step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a sense the simplest, which explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher-order representations, and why representation complexity increases as the layers get deeper.
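The orbit-stabilizer interplay central to the abstract can be illustrated on a toy example. The sketch below (an illustration of the general group-theoretic notions, not of the paper's actual construction) takes binary feature vectors of length 4 acted on by the cyclic group Z_4 via circular shifts, and checks the orbit-stabilizer theorem |orbit| × |stabilizer| = |G|. A constant feature has an orbit of size 1, i.e. it is fully invariant under the group, which is the sense in which minimal-orbit features are "simplest".

```python
from itertools import product

def shift(v, k):
    """Act on tuple v by a cyclic shift of k positions (the group is Z_n)."""
    n = len(v)
    return tuple(v[(i + k) % n] for i in range(n))

def orbit(v):
    """All distinct images of v under the cyclic group Z_len(v)."""
    return {shift(v, k) for k in range(len(v))}

def stabilizer_size(v):
    """Number of group elements that fix v."""
    return sum(1 for k in range(len(v)) if shift(v, k) == v)

# Every binary feature of length 4 satisfies the orbit-stabilizer theorem:
# |orbit(v)| * |stab(v)| = |Z_4| = 4.
for v in product((0, 1), repeat=4):
    assert len(orbit(v)) * stabilizer_size(v) == 4

print(len(orbit((0, 0, 0, 0))))  # minimal orbit: size 1 (fully invariant)
print(len(orbit((0, 1, 0, 1))))  # intermediate orbit: size 2
print(len(orbit((0, 1, 1, 0))))  # generic orbit: size 4
```

In this toy setting, pretraining-as-minimal-orbit-search would prefer the constant feature, whose stabilizer is the entire group; features with larger orbits carry more structure and, by the abstract's argument, emerge in deeper layers.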

