
On Symmetry and Initialization for Neural Networks
This work provides an additional step in the theoretical understanding o...

Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks
Despite existing work on ensuring generalization of neural networks in t...

Use of symmetric kernels for convolutional neural networks
In this work we introduce horizontally symmetric convolutional kernels f...

Effect of Various Regularizers on Model Complexities of Neural Networks in Presence of Input Noise
Deep neural networks are overparameterized, which implies that the numb...

Abelian Neural Networks
We study the problem of modeling a binary operation that satisfies some ...

Partition of unity networks: deep hp-approximation
Approximation theorists have established best-in-class optimal approxima...

Approximation capability of neural networks on spaces of probability measures and tree-structured domains
This paper extends the proof of density of neural networks in the space ...
A Functional Perspective on Learning Symmetric Functions with Neural Networks
Symmetric functions, which take as input an unordered, fixed-size set, are known to be universally representable by neural networks that enforce permutation invariance. However, these architectures only give guarantees for fixed input sizes, yet in many practical scenarios, such as particle physics, a relevant notion of generalization should include varying the input size. In this paper, we embed symmetric functions (of any size) as functions over probability measures, and study the ability of neural networks defined over this space of measures to represent and learn in that space. By focusing on shallow architectures, we establish approximation and generalization bounds under different choices of regularization (such as RKHS and variation norms), that capture a hierarchy of functional spaces with increasing amount of nonlinear learning. The resulting models can be learnt efficiently and enjoy generalization guarantees that extend across input sizes, as we verify empirically.