Universal approximations of permutation invariant/equivariant functions by deep neural networks

03/05/2019 ∙ by Akiyoshi Sannai, et al.

In this paper, we develop a theory of the relationship between permutation (S_n-) invariant/equivariant functions and deep neural networks. As a result, we prove a permutation invariant/equivariant version of the universal approximation theorem, i.e., for S_n-invariant/equivariant deep neural networks. The equivariant models consist of stacked standard single-layer neural networks Z_i: X → Y, each of which is S_n-equivariant with respect to the actions of S_n. The invariant models consist of stacked equivariant models followed by standard single-layer neural networks Z_i: X → Y, each of which is S_n-invariant with respect to the actions of S_n. These models are universal approximators of S_n-invariant/equivariant functions. This construction is a mathematically natural generalization of the models in DeepSets. We also count the number of free parameters in these models, and find that it is much smaller than in the usual models. Hence, we conclude that although the invariant/equivariant models have exponentially fewer free parameters than the usual models, they can still approximate invariant/equivariant functions to arbitrary accuracy. This gives us an understanding of why the invariant/equivariant models designed in [Zaheer et al. 2018] work well.
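As a concrete illustration of the kind of layer the abstract describes, the following is a minimal sketch of an S_n-equivariant single layer in the DeepSets style (a hypothetical example, not the authors' exact construction): each output coordinate combines its own input with an aggregate over all coordinates, so permuting the inputs permutes the outputs identically.

```python
import numpy as np

def equivariant_layer(x, lam, gam):
    """Sketch of a DeepSets-style S_n-equivariant layer.

    x has shape (n, d): n set elements, each with d features.
    Each output row mixes its own input row (weight lam) with the
    mean over all rows (weight gam), so the layer commutes with
    any permutation of the n rows.
    """
    return lam * x + gam * x.mean(axis=0, keepdims=True)

# Equivariance check: layer(P x) == P layer(x) for a permutation P.
x = np.array([[1.0], [2.0], [3.0]])
perm = np.array([2, 0, 1])
out_of_permuted = equivariant_layer(x[perm], lam=0.5, gam=0.25)
permuted_output = equivariant_layer(x, lam=0.5, gam=0.25)[perm]
assert np.allclose(out_of_permuted, permuted_output)
```

Stacking such layers and finishing with a permutation-invariant pooling (e.g. a sum or mean over rows) yields an S_n-invariant model of the kind whose parameter count the paper analyzes.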
