Formally, any finite joint probability distribution over exchangeable random variables (called “entities” from now on) must satisfy, for all permutations $\pi$ of $\{1, \dots, n\}$:
$p(x_1, \dots, x_n) = p(x_{\pi(1)}, \dots, x_{\pi(n)}).$
It has been shown that finite exchangeable distributions can be written as a signed mixture of i.i.d. distributions Kerns and Székely (2006). Note that this differs from de Finetti’s theorem, which is a similar statement for exchangeable processes, i.e. infinite sequences of random variables, and states that in this case the distribution is a mixture of i.i.d. processes: $p(x_1, x_2, \dots) = \int \prod_i p(x_i \mid \theta) \, \mu(d\theta)$, with $\mu$ a probability distribution over the parameter $\theta$. To illustrate the difference, take a distribution of two exchangeable random variables that sum to a fixed number (commutativity of the sum ensures exchangeability): in this case the two numbers cannot be sampled conditionally independently given a single $\theta$. Recent generative models build on top of de Finetti’s result Korshunova et al. (2018); Bender et al. (2019), and hence assume an underlying infinite sequence of exchangeable variables.
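The fixed-sum example can be made concrete with a minimal Python sketch (the numbers are hypothetical choices for illustration): two variables that always sum to 4 have a permutation-invariant joint distribution, yet the joint does not factor into its marginals, so the variables are fully dependent.

```python
from fractions import Fraction

# Two exchangeable variables X, Y with X + Y = 4:
# X is uniform on {0,...,4} and Y = 4 - X.
joint = {(x, 4 - x): Fraction(1, 5) for x in range(5)}

def p(x, y):
    return joint.get((x, y), Fraction(0))

# Exchangeability: the joint is invariant under swapping the two entities.
assert all(p(x, y) == p(y, x) for x in range(5) for y in range(5))

# Dependence: the joint does not factor into its marginals,
# so the two variables are not independent.
px = {x: sum(p(x, y) for y in range(5)) for x in range(5)}
assert p(0, 4) != px[0] * px[4]
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in the comparison.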
In this work we present an architecture to explicitly model finite exchangeable data. In other words, we are concerned with data consisting of sets that are sampled i.i.d. from an underlying distribution, while each set is itself finite and exchangeable, i.e. its entities may be non-i.i.d. samples presented in arbitrary order. We develop a density estimation model that is permutation invariant and able to model dependencies between the entities in this setting. We call the resulting architecture Set Flow, as it builds on ideas from normalizing flows, in particular compositions of bijections like Real NVP Dinh et al. (2017), and combines them with set models Zaheer et al. (2017).
The paper is structured as follows. In Section 2 we review background concepts and Section 3 has related work. Section 4 describes our model and how it is trained. In Section 5 we present experiments on synthetic and real data, and finally conclude in Section 6 with a brief summary and discussion of future directions.
One straightforward approach to obtain a set function is to treat the input as a sequence and train an RNN, augmented with all possible input permutations, in the hope that the RNN becomes invariant to the input order. This may work for small sequences, but it is hard to scale to set sizes in the thousands. Moreover, as described in Vinyals et al. (2016), the order of the sequence does matter and cannot simply be discarded.
A recently proposed neural network method that is invariant to the order of its inputs is the Deep Set architecture Zaheer et al. (2017). The key idea of this approach is to map each input to a learned feature representation, apply a pooling operation (e.g. a sum) over these representations, and pass the result through another function. With $2^{\mathcal{X}}$ the set of all sets over the domain $\mathcal{X}$ and $X \in 2^{\mathcal{X}}$ a set, the Deep Set function can be written as $f(X) = \rho\left(\sum_{x \in X} \phi(x)\right)$, where $\phi$ and $\rho$ are neural networks.
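As an illustration, here is a minimal pure-Python sketch of the Deep Set structure; the learned networks $\phi$ and $\rho$ are replaced by fixed hand-written functions (an assumption for illustration only), which already suffices to show permutation invariance.

```python
import math

def phi(x):
    # Per-element feature map (stands in for a learned network).
    return (x, x * x, math.tanh(x))

def rho(features):
    # Post-pooling map (stands in for a second learned network).
    return sum(features) + 0.5 * features[0]

def deep_set(xs):
    # Sum-pool the per-element features, then apply rho.
    pooled = [sum(f) for f in zip(*(phi(x) for x in xs))]
    return rho(pooled)

s = [0.3, -1.2, 2.0, 0.7]
# Invariance: shuffling the set leaves the output unchanged (up to float addition order).
assert abs(deep_set(s) - deep_set(list(reversed(s)))) < 1e-9
```

Any symmetric pooling (sum, mean, max) in place of the sum preserves the invariance argument.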
Recent methods like Janossy pooling Murphy et al. (2019) express a permutation invariant function as the average of a permutation variant function applied to all reorderings of the input sequence, which allows the layer to leverage complicated permutation variant functions to construct permutation invariant ones. This is computationally demanding, but can be made tractable by approximating the ordering or by using random permutations. One can also train a permutation optimization module that learns a canonical ordering Zhang et al. (2019) to permute a set and then use it in a permutation invariant fashion, typically by processing it via an RNN.
2.2 Density Estimation via Normalizing Flows
Real NVP Dinh et al. (2017) is a type of normalizing flow Tabak and Turner (2013) in which a density $p_X$ on the input space $\mathcal{X}$ is transformed into a simple base distribution $p_Z$ on a latent space $\mathcal{Z}$, like an isotropic Gaussian, via a mapping $f: \mathcal{X} \to \mathcal{Z}$ composed of stacks of bijections (invertible mappings) with the property that the inverse is easy to evaluate and the Jacobian determinant is cheap to compute. Due to the change of variables formula we can evaluate $p_X(x)$ via the Gaussian by
$p_X(x) = p_Z(f(x)) \left| \det \frac{\partial f(x)}{\partial x} \right|. \qquad (2)$
The bijection introduced by Real NVP, called the coupling layer, satisfies the above two properties. It leaves part of its input unchanged and transforms the other part via functions of the untransformed variables:
$y_{1:d} = x_{1:d}, \qquad y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d}),$
where $\odot$ is an element-wise product, $s$ is a scaling and $t$ a translation function from $\mathbb{R}^d \to \mathbb{R}^{D-d}$, given by neural networks. To model a complex nonlinear density map $f$, a number of coupling layers $f = f_K \circ \dots \circ f_1$ are composed together, while alternating which dimensions are left unchanged and which are transformed. Via the change of variables formula, the log probability density function (PDF) of the flow given a data point $x$ can be written as
$\log p_X(x) = \log p_Z(f(x)) + \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k(z_{k-1})}{\partial z_{k-1}} \right|, \quad \text{with } z_0 = x \text{ and } z_k = f_k(z_{k-1}).$
Note that the Jacobian of a Real NVP coupling layer is a block-triangular matrix and thus the log-determinant simply becomes
$\log \left| \det \frac{\partial y}{\partial x} \right| = \operatorname{sum}\Big( \log \Big| \operatorname{diag}\Big(\frac{\partial y}{\partial x}\Big) \Big| \Big) = \operatorname{sum}\big(s(x_{1:d})\big), \qquad (4)$
where $\operatorname{sum}$ is the sum over all vector elements, $\log$ is the element-wise logarithm and $\operatorname{diag}$ extracts the diagonal of the Jacobian. This model, parameterized by the weights of the scaling and translation neural networks
, is then trained via stochastic gradient descent (SGD) on training data points, where for each batch $\mathcal{B}$ the log likelihood (3), given by
$\mathcal{L} = \sum_{x \in \mathcal{B}} \log p_X(x), \qquad (3)$
is maximized. One can trivially condition the PDF on some additional information $c$ to model $p(x \mid c)$ by concatenating $c$ to the inputs of the scaling and translation function approximators, i.e. $s(x_{1:d}, c)$ and $t(x_{1:d}, c)$, which are modified to map $\mathbb{R}^{d + \dim(c)} \to \mathbb{R}^{D-d}$. This does not change the log-determinant of the coupling layers given by (4).
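The coupling layer mechanics above can be sketched in a few lines of Python; the scaling and translation networks are replaced by fixed stand-in functions, and a single 2-dimensional input is used for brevity.

```python
import math

# Fixed stand-ins for the learned scaling s(.) and translation t(.) networks.
def s(x1): return 0.5 * math.tanh(x1)
def t(x1): return 0.1 * x1

def coupling_forward(x1, x2):
    # First coordinate passes through; second is affinely transformed.
    y1 = x1
    y2 = x2 * math.exp(s(x1)) + t(x1)
    log_det = s(x1)  # sum of the scaling outputs (here: one transformed dimension)
    return y1, y2, log_det

def coupling_inverse(y1, y2):
    # The inverse never needs to invert s or t themselves.
    x1 = y1
    x2 = (y2 - t(y1)) * math.exp(-s(y1))
    return x1, x2

x1, x2 = 0.8, -1.5
y1, y2, ld = coupling_forward(x1, x2)
ix1, ix2 = coupling_inverse(y1, y2)
assert abs(ix1 - x1) < 1e-12 and abs(ix2 - x2) < 1e-12
```

The exact invertibility and the trivial log-determinant hold regardless of how complex `s` and `t` are, which is the point of the coupling construction.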
In practice, Batch Normalization Ioffe and Szegedy (2015) is applied as a bijection to the outputs of coupling layers to stabilize training of the normalizing flow. This bijection implements the normalization procedure using a weighted moving average of the layer’s mean and standard deviation, which is used differently depending on whether we are training or doing inference.
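A minimal sketch of such an invertible batch-normalization layer (scalar case, with an assumed momentum value) might look as follows; during training it normalizes with batch statistics and updates moving averages, so that at inference the mapping is a fixed bijection.

```python
import math

class BatchNormBijection:
    """Illustrative invertible batch-norm layer (scalar case).

    The momentum and eps values are assumed choices, not taken from the paper.
    """
    def __init__(self, momentum=0.9, eps=1e-5):
        self.momentum, self.eps = momentum, eps
        self.moving_mean, self.moving_var = 0.0, 1.0

    def forward(self, xs, training=True):
        if training:
            # Batch statistics, plus a moving-average update for inference time.
            m = sum(xs) / len(xs)
            v = sum((x - m) ** 2 for x in xs) / len(xs)
            self.moving_mean = self.momentum * self.moving_mean + (1 - self.momentum) * m
            self.moving_var = self.momentum * self.moving_var + (1 - self.momentum) * v
        else:
            m, v = self.moving_mean, self.moving_var
        scale = 1.0 / math.sqrt(v + self.eps)
        # Return the statistics so the transform can be inverted exactly.
        return [(x - m) * scale for x in xs], m, v

    def inverse(self, ys, m, v):
        scale = math.sqrt(v + self.eps)
        return [y * scale + m for y in ys]
```

The per-element mapping is affine, so its log-determinant is a constant per batch, analogous to the coupling-layer case.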
3 Related Work
The Real NVP approach can be generalized as in the Masked Autoregressive Flow (MAF) Papamakarios et al. (2017), which models the random numbers used in each stack to generate data. Glow Kingma and Dhariwal (2018) augments Real NVP with invertible 1×1 convolutions, while removing other components and thus simplifying the overall architecture, to obtain qualitatively better samples for high dimensional data like images.
The BRUNO model Korshunova et al. (2018) performs exact Bayesian inference on sets of data such that the joint distribution over observations is permutation invariant. It works in an autoregressive fashion, in that new samples can be generated conditional on previous ones and a stream of new data points can easily be incorporated at test time. This is possible for our method as well, while our network architecture is considerably simpler, as it only draws upon ideas from normalizing flows. BRUNO, on the other hand, makes use of Student-$t$ processes, i.e. Bayesian models of real-valued functions admitting closed form marginal likelihood and posterior predictive expressions Shah et al. (2014). The main issue with this building block is that inference typically scales cubically in the number of data points, although the Woodbury matrix inversion lemma can be used to alleviate this issue in the streaming data setting.
Similar to BRUNO, the PILET model Bender et al. (2019) utilizes an autoregressive model, built upon normalizing flow ideas Oliva et al. (2018) instead of Student-$t$ processes. This is combined with a permutation equivariant function to capture interdependence of entities in a set while maintaining exchangeability. The authors extend their method to make use of a latent code in an exchangeable variational autoencoder framework called PILET-VAE. Note that both BRUNO and PILET transform base distributions by applying bijections along the entity dimensions.
Bayesian Sets Ghahramani and Heller (2006) also models exchangeable sets of binary features, but it is not reversible and hence does not allow sampling.
4 Set Flow
In order to make a model invariant to input permutations, one can try to sort the input into some canonical order. While sorting is a very simple solution, for high dimensional points the ordering is in general not stable with respect to perturbations of the points and thus does not fully resolve the issue: it remains hard for a model to learn a consistent mapping, even if we constrain the sets to have the same size.
We propose a normalizing flow architecture called Set Flow in which each stack transforms every entity of the set conditioned on a shared global Gaussian noise vector, and this noise vector is then transformed via a symmetric function of all the transformed elements of the set, for example a Deep Set Zaheer et al. (2017) layer.
Figure 1 shows a single Set Flow stack, which maps the output of layer $\ell$ to the next stack $\ell+1$. The block takes a set of entities $\{x_1^{\ell}, \dots, x_n^{\ell}\}$, where $x_i^{\ell} \in \mathbb{R}^D$, and a global Gaussian noise vector $z^{\ell}$, and transforms them to $(\{x_i^{\ell+1}\}, z^{\ell+1})$ given by:
$x_i^{\ell+1} = g^{\ell}\big(x_i^{\ell}; z^{\ell}\big), \qquad z^{\ell+1} = z^{\ell} \odot \exp\big(s^{\ell}(P^{\ell})\big) + t^{\ell}(P^{\ell}), \quad P^{\ell} = \operatorname{pool}^{\ell}\big(\{x_i^{\ell+1}\}\big), \qquad (5)$
where $\operatorname{pool}^{\ell}$ is a permutation invariant function given by a Deep Set, $s^{\ell}$ and $t^{\ell}$ are deep neural network function approximators and $g^{\ell}$ is a standard Real NVP conditioned on $z^{\ell}$. All of these functions are layer specific and do not share weights across layers. By stacking such set-coupling layers we arrive at our Set Flow model. As one can see from the construction, this mapping is permutation equivariant due to the Deep Set layer and invertible via the bijections.
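One such stack can be sketched in pure Python with scalar entities and noise; the learned networks are replaced by fixed stand-in functions, so this is an illustration of the structure (transform entities conditioned on the noise, then update the noise symmetrically) rather than the actual implementation.

```python
import math

# Fixed stand-ins for the learned networks (scalar entities and noise for brevity).
def s_x(z): return 0.3 * math.tanh(z)               # entity scaling, conditioned on noise
def t_x(z): return 0.2 * z                          # entity translation, conditioned on noise
def pool(ys): return sum(math.tanh(y) for y in ys)  # permutation-invariant pooling
def s_z(p): return 0.1 * math.tanh(p)               # noise scaling from pooled features
def t_z(p): return 0.05 * p                         # noise translation from pooled features

def set_flow_stack(xs, z):
    # 1) Transform every entity with the same affine map conditioned on z.
    ys = [x * math.exp(s_x(z)) + t_x(z) for x in xs]
    # 2) Update the global noise via a symmetric function of the transformed set.
    p = pool(ys)
    z_new = z * math.exp(s_z(p)) + t_z(p)
    return ys, z_new

def set_flow_stack_inverse(ys, z_new):
    # Invert in reverse order: recover z first, then the entities.
    p = pool(ys)
    z = (z_new - t_z(p)) * math.exp(-s_z(p))
    xs = [(y - t_x(z)) * math.exp(-s_x(z)) for y in ys]
    return xs, z

xs, z = [0.5, -1.0, 2.0], 0.7
ys, z2 = set_flow_stack(xs, z)

# Permutation equivariance: permuting the input permutes the output the same way.
ys_rev, z2_rev = set_flow_stack(list(reversed(xs)), z)
assert ys_rev == list(reversed(ys)) and abs(z2_rev - z2) < 1e-9

# Invertibility: the inverse recovers the inputs.
ixs, iz = set_flow_stack_inverse(ys, z2)
assert all(abs(a - b) < 1e-9 for a, b in zip(ixs, xs)) and abs(iz - z) < 1e-9
```

Because the noise update depends only on a symmetric pooling of the transformed entities, stacking such blocks preserves both equivariance and invertibility.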
The inverse transformation starts by sampling a global noise vector as well as a set of the desired number of Gaussian sample entities and going through the flow model in reverse (or from the top to bottom in Figure 1).
As in Real NVP, we can also condition this model on some additional information $c$ for each set of entities by concatenating $c$ to the inputs of the scaling and translation functions in (5), to obtain a set-conditioned model, for example when the entities of a set come from a particular category.
We train the model by sampling batches, where for each batch the set size $n$ is fixed and each set consists of $n$ entities together with a global noise vector. We use Adam Kingma and Ba (2015) with standard parameters to maximize the log likelihood
$\mathcal{L} = \sum_{j} \log p\big(x_1^{(j)}, \dots, x_n^{(j)}\big),$
where for each term (2) is employed to explicitly evaluate the likelihoods and to calculate their derivatives with respect to $\theta$, which denotes all parameters of the Set Flow model.
Our first goal in the experiments is to demonstrate and analyze the ability of the proposed model to capture non-i.i.d. dependencies within finite sets. In a second set of experiments, we show that the model scales to much larger and more complex datasets by learning 3D point clouds.
5.1 Generation of Non-i.i.d. Exchangeable Data Sets
In order to best understand the ability of the model to capture dependencies between entities, we generate a toy dataset of finite sets with a non-i.i.d. structure: equidistant 2D points on circles with varying radius and position. The generative process of each set is as follows: first, the center position $(c_x, c_y)$, radius $r$ and rotation offset $\delta$ are sampled uniformly. Then $n$ points are generated with coordinates $x_k = c_x + (r + \epsilon_k)\cos(\varphi_k)$ and $y_k = c_y + (r + \epsilon_k)\sin(\varphi_k)$, where $\varphi_k = 2\pi k / n + \delta + \xi_k$, with independent radial noise $\epsilon_k$ and angular noise $\xi_k$.
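This generative process can be sketched as follows; the uniform sampling ranges for center, radius and offset are left to the caller, and the Gaussian noise scales are free parameters, since the concrete values are not fixed here.

```python
import math
import random

def sample_circle_set(n, center, radius, offset, eps=0.0, xi=0.0, rng=random):
    """Equidistant points on a circle with radial noise eps and angular noise xi.

    eps and xi are standard deviations of Gaussian perturbations; with the
    defaults of 0 the points lie exactly on the circle.
    """
    cx, cy = center
    points = []
    for k in range(n):
        # Equidistant base angle, plus global rotation offset and angular noise.
        angle = 2 * math.pi * k / n + offset + rng.gauss(0, xi)
        # Radius perturbed independently per point.
        r = radius + rng.gauss(0, eps)
        points.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    return points

pts = sample_circle_set(8, (1.0, -2.0), 3.0, 0.4)  # noise-free for checking
# All points lie exactly on the circle of radius 3 around (1, -2).
assert all(abs(math.hypot(x - 1.0, y + 2.0) - 3.0) < 1e-9 for x, y in pts)
```

Exchangeability of a generated set follows because the points are distinguished only by the index $k$, which is made unidentifiable by the uniformly random offset.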
Figure 2 (left) shows sample sets drawn from this generative model, where colors indicate set membership. For the experiment, we trained a model on uniformly sampled set sizes, where each minibatch of sets contained the same set size. The second subfigure from the left shows that early in training, the model groups elements of sets together in clusters, but fails to produce discernible circles with equidistant points on them. After further training, the model reproduces the dataset more faithfully, as can be seen in the second subfigure from the right. The rightmost subfigure in Figure 2 (top) shows the distribution of phases inferred from circles fitted to the sets (the mean phase across each set is subtracted for alignment). The model (green) nicely captures the equidistant peaks, similar to the original data (blue). Note that this implies that the model captured the generative process of the finite set; otherwise there would be more mass in between the peaks. The variance, however, is larger than in the ground truth phases. Similarly, the model has a bias towards smaller circles, as can be seen in the distribution of inferred radii.
5.2 3D Point Cloud Experiments
We train Set Flow on point clouds of the airplane and chair classes of the ModelNet40 Wu et al. (2015) dataset, where for each model we sample a random subset of its 10,000-point cloud to construct a set of the chosen class. We split the model files into training and test sets with an 80% split. We train the model on the airplane and chair classes separately and report the mean test likelihoods in Table 1. We also show some generated point clouds in Figure 3 for different set sizes.
We also train the model on three classes (airplane, chair and lamp) together. Given two sets, we obtain their noise vectors by passing the sets through our model; we can then linearly interpolate between these noise vectors, both the global one and those of the entities, and generate samples by passing the interpolated noise backwards through the flow. Figure 4 shows the results of this experiment for a chair to another chair, a chair to a lamp and a chair to an airplane.
Finally, we train the model on all 40 classes, both without class labels and with class labels supplied via a class embedding vector. We report the mean test log-likelihoods per entity in the set in Table 1, together with results from other methods.
We have implemented all experiments in PyTorch Paszke et al. (2017) and will make the code available after the review process at https://www.github.com/xxx/xxx. We used the same hyperparameters for all our experiments: the batch size, the dimension of the global noise vector, the size of the Deep Set pooling output, the size of the conditioning embedding vector, the number of Set Flow stacks, the number of random entities in a set, and the learning rate.
6 Discussion and Conclusions
We have introduced a simple generative architecture for learning and sampling from exchangeable data in finite sets via a normalizing flow that uses permutation invariant functions such as Deep Sets. As shown in the experiments, our model captures dependencies between the entities of a set in a computationally feasible manner. We demonstrated the capability of the model to capture finite exchange-invariant generative processes on toy data, and demonstrated state-of-the-art performance for generative modeling of 3D point clouds. In principle, the proposed model can also be applied to higher dimensional data points, such as sets of images, e.g. the items of an outfit.
In future work we will further explore alternative architectures of these models, utilize them to learn on sets of images and experiment to see if these methods can be used to learn correlations in time series data across a large number of entities.
- Bender et al.  C. Bender, J. J. Garcia, K. O’Connor, and J. Oliva. Permutation invariant likelihoods and equivariant transformations. abs/1902.01967, 2019. URL http://arxiv.org/abs/1902.01967.
- Dinh et al.  L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using Real NVP, 2017. URL https://arxiv.org/abs/1605.08803.
- Ghahramani and Heller  Z. Ghahramani and K. A. Heller. Bayesian sets. In Y. Weiss, B. Schölkopf, and J. C. Platt, editors, Advances in Neural Information Processing Systems 18, pages 435–442. MIT Press, 2006. URL http://papers.nips.cc/paper/2817-bayesian-sets.pdf.
- Ioffe and Szegedy  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pages 448–456. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045167.
- Kerns and Székely  G. J. Kerns and G. J. Székely. De Finetti’s theorem for abstract finite exchangeable sequences. Journal of Theoretical Probability, 19(3):589–608, Aug. 2006. doi: 10.1007/s10959-006-0028-z. URL https://doi.org/10.1007/s10959-006-0028-z.
- Kingma and Ba  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
- Kingma and Dhariwal  D. P. Kingma and P. Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 10236–10245. Curran Associates, Inc., 2018.
- Korshunova et al.  I. Korshunova, J. Degrave, F. Huszar, Y. Gal, A. Gretton, and J. Dambre. Bruno: A deep recurrent model for exchangeable data. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7190–7198. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7949-bruno-a-deep-recurrent-model-for-exchangeable-data.pdf.
- Murphy et al.  R. L. Murphy, B. Srinivasan, V. Rao, and B. Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJluy2RcFm.
- Oliva et al.  J. Oliva, A. Dubey, M. Zaheer, B. Poczos, R. Salakhutdinov, E. Xing, and J. Schneider. Transformation autoregressive networks. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3898–3907, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/oliva18a.html.
- Papamakarios et al.  G. Papamakarios, T. Pavlakou, and I. Murray. Masked autoregressive flow for density estimation. Advances in Neural Information Processing Systems 30, 2017.
- Paszke et al.  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
- Qi et al.  C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- Shah et al.  A. Shah, A. Wilson, and Z. Ghahramani. Student-t processes as alternatives to gaussian processes. In Artificial Intelligence and Statistics, pages 877–885, 2014.
- Tabak and Turner  E. G. Tabak and C. V. Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.
- Vinyals et al.  O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. In International Conference on Learning Representations (ICLR), 2016. URL http://arxiv.org/abs/1511.06391.
- Wu et al.  Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, USA, June 2015.
- Zaheer et al.  M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola. Deep Sets. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3391–3401. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/6931-deep-sets.pdf.
- Zhang et al.  Y. Zhang, J. Hare, and A. Prügel-Bennett. Learning representations of sets through optimized permutations. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HJMCcjAcYX.