Double framed moduli spaces of quiver representations

09/29/2021
by Marco Armenta, et al.

Motivated by problems in the neural network setting, we study moduli spaces of double framed quiver representations and give both a linear algebra description and a representation theoretic description of these moduli spaces. We define a network category, in which neural networks map input data, whose isomorphism classes of objects correspond to orbits of quiver representations. We then prove that the output of a neural network depends only on the corresponding point in the moduli space. Finally, we present a different perspective on mapping neural networks with a specific activation function, ReLU, to a moduli space, using the symplectic reduction approach to quiver moduli.
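The claim that a network's output depends only on the corresponding point in the moduli space can be illustrated with a small sketch. The toy network, its sizes, and the positive rescaling below are illustrative assumptions, not the paper's exact construction: a two-layer ReLU network is viewed as a quiver representation whose arrows carry the weight matrices, and a positive diagonal rescaling of the hidden neurons, the standard base-change action in quiver moduli, leaves the output unchanged because ReLU commutes with positive diagonal matrices.

```python
import numpy as np

# Toy 2-layer ReLU network viewed as a quiver representation:
# arrows carry the matrices W1, W2; vertices carry vector spaces.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input layer -> hidden layer
W2 = rng.normal(size=(2, 4))   # hidden layer -> output layer

def relu(x):
    return np.maximum(x, 0)

def forward(W1, W2, x):
    return W2 @ relu(W1 @ x)

x = rng.normal(size=3)

# A positive diagonal matrix g acts on the representation by
# W1 -> g W1, W2 -> W2 g^{-1}.  Since ReLU commutes with positive
# diagonal matrices, both points of the orbit compute the same function,
# so the output only depends on the point in the moduli space.
g = np.diag([0.5, 2.0, 3.0, 0.1])
y_orig = forward(W1, W2, x)
y_acted = forward(g @ W1, W2 @ np.linalg.inv(g), x)
print(np.allclose(y_orig, y_acted))  # True
```

This invariance holds for any positive diagonal `g`; for activation functions other than ReLU (or for non-positive rescalings) the two outputs generally differ, which is why ReLU receives its own treatment via symplectic reduction.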


