Neural Networks on Groups

06/13/2019
by Stella Rose Biderman, et al.

Recent work on neural networks has shown that allowing them to build internal representations of data not restricted to R^n can provide significant improvements in performance. The success of Graph Neural Networks, Convolutional Kernel Networks, and Fourier Neural Networks, among other methods, has demonstrated the clear value of applying abstract mathematics to the design of neural networks. The theory of neural networks has not kept up, however, and the relevant theoretical results (when they exist at all) have been proven on a case-by-case basis without a general theory. The process of deriving new theoretical backing for each new type of network has become a bottleneck to understanding and validating new approaches. In this paper we extend the concept of neural networks to general groups and prove that neural networks with a single hidden layer and a bounded non-constant activation function can approximate any L^p function defined over a locally compact Abelian group. This framework and universal approximation theorem encompass all of the aforementioned contexts. We also derive important corollaries and extensions with minor modifications, including the case of approximating continuous functions on a compact subset, neural networks with ReLU activation functions on a linearly bi-ordered group, and neural networks with affine transformations on a vector space. Our work also obtains as special cases the recent theorems of Qi et al. [2017], Sannai et al. [2019], Keriven and Peyré [2019], and Maron et al. [2019].
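To make the statement concrete, the following is a minimal sketch (not the paper's construction) of a single-hidden-layer network on the simplest locally compact Abelian group, the cyclic group Z_n. On an Abelian group the role of the affine maps w·x + b in the classical theorem is played by characters of the group; for Z_n these are χ_k(x) = exp(2πi k x / n). The choice of sigmoid activation, random character frequencies, and a least-squares fit of the output weights are illustrative assumptions of this sketch, not part of the paper.

```python
import numpy as np

# Sketch: one-hidden-layer network on the cyclic group Z_n.
# Pre-activations use (real parts of) group characters chi_k(x) = exp(2*pi*i*k*x/n)
# in place of the Euclidean affine maps w.x + b.

n = 64                                  # order of the cyclic group Z_n
x = np.arange(n)                        # group elements 0, 1, ..., n-1
rng = np.random.default_rng(0)

def sigma(t):
    # bounded, non-constant activation (sigmoid), as the theorem requires
    return 1.0 / (1.0 + np.exp(-t))

# Hidden layer: randomly chosen characters, scaled and shifted, passed through sigma.
width = 200
ks = rng.integers(0, n, size=width)     # frequency index of each character
a = rng.normal(size=width)              # scale applied to Re(chi_k(x))
b = rng.normal(size=width)              # bias
H = sigma(a * np.cos(2 * np.pi * np.outer(x, ks) / n) + b)   # shape (n, width)

# Target: an arbitrary function f: Z_n -> R to approximate (hand-picked example).
f = np.where(x < n // 3, 1.0, np.sin(4 * np.pi * x / n))

# Fit only the output weights by least squares, random-features style, to show how
# a single hidden layer of this form can approximate f on the group.
c, *_ = np.linalg.lstsq(H, f, rcond=None)
approx = H @ c
print("max error over Z_n:", np.max(np.abs(approx - f)))
```

Increasing `width` (and, for a finer group, `n`) drives the error down, which is the qualitative behavior the universal approximation theorem guarantees in the L^p sense.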
