Learning Set-equivariant Functions with SWARM Mappings

06/22/2019
by Roland Vollgraf, et al.

In this work we propose a new neural network architecture that efficiently implements and learns general-purpose set-equivariant functions. Such a function f maps a set of entities x = {x_1, ..., x_n} from one domain to a set of the same cardinality y = f(x) = {y_1, ..., y_n} in another domain, regardless of the ordering of the entities. The architecture is based on a gated recurrent network that is applied iteratively to each entity individually while, at the same time, synchronizing with the progression of the whole population. In reminiscence of this pattern, which is frequently observable in nature, we call our approach SWARM mapping. Set-equivariant and, more generally, permutation-invariant functions are important building blocks for many state-of-the-art machine learning approaches; this holds even in applications where permutation invariance is not of primary interest, as can be seen in the recent success of attention-based transformer models (Vaswani et al., 2017). Accordingly, we demonstrate the power and usefulness of SWARM mappings in different applications. We compare the performance of our approach with another recently proposed set-equivariant function, the Set Transformer (Lee et al., 2018), and we demonstrate that a transformer built solely from SWARM layers achieves state-of-the-art results.
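The abstract only sketches the architecture, but the core idea can be illustrated concretely. Below is a minimal PyTorch sketch of a SWARM-style set-equivariant layer, based solely on the description above: one gated recurrent cell is applied to every entity individually, while a pooled population statistic is fed back into each update so the entities "sync" with the whole set. The class name SwarmLayer, the iteration count n_iter, the use of an LSTMCell, and the choice of mean pooling are all assumptions for illustration, not the paper's exact construction.

```python
# Sketch of a SWARM-style set-equivariant layer (assumptions noted above).
import torch
import torch.nn as nn

class SwarmLayer(nn.Module):
    def __init__(self, d_in, d_hidden, d_out, n_iter=5):
        super().__init__()
        # The cell sees each entity plus the pooled population state.
        self.cell = nn.LSTMCell(d_in + d_hidden, d_hidden)
        self.out = nn.Linear(d_hidden, d_out)
        self.n_iter = n_iter

    def forward(self, x):
        # x: (batch, n_entities, d_in); output must be equivariant in dim 1.
        b, n, _ = x.shape
        h = x.new_zeros(b * n, self.cell.hidden_size)
        c = x.new_zeros(b * n, self.cell.hidden_size)
        x_flat = x.reshape(b * n, -1)
        for _ in range(self.n_iter):
            # Population statistic: mean of hidden states over the set,
            # broadcast back to every entity (permutation invariant).
            pool = h.view(b, n, -1).mean(dim=1, keepdim=True).expand(b, n, -1)
            inp = torch.cat([x_flat, pool.reshape(b * n, -1)], dim=-1)
            # The same cell updates every entity individually.
            h, c = self.cell(inp, (h, c))
        return self.out(h).view(b, n, -1)
```

Set-equivariance follows because the only interaction across entities is the symmetric mean pooling: permuting the input entities permutes the outputs identically.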
