Regularizing Towards Permutation Invariance in Recurrent Models

10/25/2020
by   Edo Cohen-Karlik, et al.

In many machine learning problems, the output should not depend on the order of the input. Such "permutation invariant" functions have been studied extensively in recent years. Existing solutions mostly restrict the learning problem to hypothesis classes that are permutation invariant by design. Here we argue that temporal architectures such as RNNs are highly relevant for such problems, despite their inherent dependence on input order. We show that RNNs can be regularized towards permutation invariance, and that this can yield more compact models than non-recurrent architectures. We implement this idea via a novel form of stochastic regularization. Enforcing permutation invariance via regularization, rather than by design, also gives rise to models that are semi-permutation invariant, i.e. invariant to some permutations and not to others. We show that our method outperforms other permutation-invariant approaches on synthetic and real-world datasets.
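The regularization idea can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the authors' implementation: a scalar-state tanh RNN is run on a sequence and on a random shuffle of that sequence, and the squared gap between the two final states serves as a penalty that, when added to the task loss, pushes the model towards order-independence.

```python
import math
import random

def rnn_final_state(w_h, w_x, xs):
    """Run a scalar-state tanh RNN over the list xs; return the last hidden state."""
    h = 0.0
    for x in xs:
        h = math.tanh(w_h * h + w_x * x)
    return h

def permutation_penalty(w_h, w_x, xs, rng):
    """Squared gap between the state on xs and on a shuffled copy of xs.

    This is the stochastic regularizer: each evaluation samples one random
    permutation rather than averaging over all of them.
    """
    shuffled = list(xs)
    rng.shuffle(shuffled)
    gap = rnn_final_state(w_h, w_x, xs) - rnn_final_state(w_h, w_x, shuffled)
    return gap ** 2

rng = random.Random(0)
xs = [0.5, -1.0, 2.0, 0.3]
penalty = permutation_penalty(w_h=0.4, w_x=0.7, xs=xs, rng=rng)
# In training, one would minimize task_loss + lambda * penalty; driving the
# penalty to zero makes the final state insensitive to input order.
```

Sampling a fresh permutation per step keeps the cost of the regularizer at one extra forward pass, instead of the factorial number of passes an exact invariance constraint would require.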

Related research:

- Permutation Invariant Likelihoods and Equivariant Transformations (02/05/2019): In this work, we fill a substantial void in machine learning and statist...
- On Permutation Invariant Problems in Large-Scale Inference (10/12/2021): Simultaneous statistical inference problems are at the basis of almost a...
- PICASO: Permutation-Invariant Cascaded Attentional Set Operator (07/17/2021): Set-input deep networks have recently drawn much interest in computer vi...
- Minimal Neural Network Models for Permutation Invariant Agents (05/12/2022): Organisms in nature have evolved to exhibit flexibility in face of chang...
- Classification of Periodic Variable Stars with Novel Cyclic-Permutation Invariant Neural Networks (11/02/2020): Neural networks (NNs) have been shown to be competitive against state-of...
- Embedding a θ-invariant code into a complete one (01/16/2018): Let A be a finite or countable alphabet and let θ be a literal (anti)mor...
- Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs (11/05/2018): We consider a simple and overarching representation for permutation-inva...
