and convolutional neural networks are the state-of-the-art for many image-based problems (He et al., 2016). Recently, however, models for unstructured inputs in the form of sets have rapidly gained attention (Ravanbakhsh et al., 2016; Zaheer et al., 2017; Qi et al., 2017a; Lee et al., 2018; Murphy et al., 2018; Korshunova et al., 2018).
Importantly, a range of machine learning problems can naturally be formulated in terms of sets: e.g. parsing a scene composed of a set of objects (Eslami et al., 2016; Kosiorek et al., 2018), making predictions from a set of points forming a 3D point cloud (Qi et al., 2017a, b), or training a set of agents in reinforcement learning (Sunehag et al., 2017). Furthermore, attention-based models perform a weighted summation of a set of features (Vaswani et al., 2017; Lee et al., 2018). Hence, understanding the mathematical properties of set-based models is valuable both for set-structured applications and for better understanding the capabilities and limitations of attention-based models.
Many popular machine learning models, including neural networks and Gaussian processes, are fundamentally based on vector inputs (or inputs of higher rank, i.e. matrices and tensors) rather than set inputs. In order to adapt these models for use with sets, we must enforce the property of permutation invariance: the output of the model must not change if the inputs are reordered. Multiple authors, including Ravanbakhsh et al. (2016), Zaheer et al. (2017) and Qi et al. (2017a), have considered enforcing this property using a technique which we term sum-decomposition, illustrated in Figure 1. Mathematically speaking, we say that a function $f$ defined on sets of size $M$ is sum-decomposable via $Z$ if there are functions $\phi: \mathfrak{X} \to Z$ and $\rho: Z \to \mathbb{R}$ such that
$$f(X) = \rho\Big(\sum_{x \in X} \phi(x)\Big)$$
(we state the simple case here for brevity – see Definition 2.2 for the fully general definition).
We refer to $Z$ here as the latent space. Since summation is permutation-invariant, a sum-decomposition is also permutation-invariant. Ravanbakhsh et al. (2016), Zaheer et al. (2017) and Qi et al. (2017b) have also considered the idea of enforcing permutation invariance using other operations, e.g. $\max$. In this paper we concentrate on a detailed analysis of sum-decomposition, but some of the limitations we discuss also apply when $\max$ is used instead of summation.
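A minimal numpy sketch of this structure, with hypothetical stand-ins for the learned functions (the particular choices of $\phi$, $\rho$ and the latent dimension are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical phi and rho standing in for learned networks: phi lifts each
# element into a latent space, rho maps the summed latent vector to the output.
def phi(x):
    return np.stack([x, x**2, np.sin(x)], axis=-1)  # latent dimension 3

def rho(z):
    return np.tanh(z).sum()

def f(X):
    # Sum-decomposition: f(X) = rho(sum over x in X of phi(x)).
    return rho(phi(X).sum(axis=0))

X = rng.uniform(size=5)
perm = rng.permutation(5)
# Summation is permutation-invariant, hence so is f.
assert np.isclose(f(X), f(X[perm]))
```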
Our main contributions can be summarised as follows.
Recent proofs, e.g. in Zaheer et al. (2017), consider functions on countable domains. We explain why considering countable domains can lead to results of limited practical value (i.e. which cannot be implemented with a neural network), and why considering continuity on uncountable domains such as $\mathbb{R}$ is necessary. With reference to neural networks, we ground this discussion in the universal approximation theorem, which relies on continuity on uncountable domains.
In contrast to previous work (Zaheer et al., 2017; Qi et al., 2017a), which considers sufficient conditions for universal function representation, we establish a necessary condition for a sum-decomposition-based model to be capable of universal function representation. Additionally, we provide weaker sufficient conditions which imply a stronger version of universality. Specifically, we show that the dimension of the latent space being at least as large as the maximum number of input elements is both necessary and sufficient for universal function representation.
While primarily targeted at neural networks, these results hold for any implementation of sum-decomposition, e.g. using Gaussian processes, as long as it provides universal function approximation for continuous functions. Proofs of all novel results are available in Appendix B.
In this section we recount the theorems and proofs on sum-decomposition from Zaheer et al. (2017). We begin by introducing important definitions and the notation used throughout our work. Note that we focus on permutation-invariant functions and do not discuss permutation equivariance, which is also considered in Zaheer et al. (2017).
A function $f: \mathfrak{X}^M \to \mathbb{R}$ is permutation-invariant if $f(x_1, \ldots, x_M) = f(x_{\pi(1)}, \ldots, x_{\pi(M)})$ for all permutations $\pi$.
We say that a function $f$ is sum-decomposable if there are functions $\rho$ and $\phi$ such that
$$f(X) = \rho\Big(\sum_{x \in X} \phi(x)\Big).$$
In this case, we say that $(\rho, \phi)$ is a sum-decomposition of $f$.
Given a latent space $Z$, we say that $f$ is sum-decomposable via $Z$ when this expression holds for some $\phi$ whose codomain is $Z$, i.e. $\phi: \mathfrak{X} \to Z$.
We say that $f$ is continuously sum-decomposable when this expression holds for some continuous functions $\rho$ and $\phi$.
We will also consider sum-decomposability where the inputs to $f$ are vectors rather than sets – in this context, the sum is over the elements of the input vector.
A set is countable if its number of elements, i.e. its cardinality, is smaller than or equal to the number of elements in $\mathbb{N}$. This includes both finite and countably infinite sets, e.g. $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{Q}$, and subsets thereof.
A set is uncountable if its number of elements is greater than the number of elements in $\mathbb{N}$, e.g. $\mathbb{R}$ and certain subsets thereof.
We write $2^{\mathfrak{X}}$ for the power set of a set $\mathfrak{X}$. We will also refer to the set of finite subsets of $\mathfrak{X}$, and to the set of subsets of $\mathfrak{X}$ containing at most $M$ elements.
Throughout, we discuss expressions of the form $\sum_{x \in X} \phi(x)$, where $X$ is a set. Note that care must be taken in interpreting this expression when $X$ is not finite – we discuss this issue fully in Section A.1.
2.2 Background Theorems
Zaheer et al. (2017) consider the two cases where the input set $X$ is a subset of, or drawn from, a countable and an uncountable universe $\mathfrak{X}$. We now outline the theorems and proofs relating to these two cases.
Theorem 2.8 (Countable case).
Let $f: 2^{\mathfrak{X}} \to \mathbb{R}$, where $\mathfrak{X}$ is countable. Then $f$ is permutation-invariant if and only if it is sum-decomposable via $\mathbb{R}$.
Since $\mathfrak{X}$ is countable, each $x \in \mathfrak{X}$ can be mapped to a unique element in $\mathbb{N}$ by a function $c: \mathfrak{X} \to \mathbb{N}$. Let $\Phi(X) = \sum_{x \in X} \phi(x)$. If we can choose $\phi$ so that $\Phi$ is injective, then we can set $\rho = f \circ \Phi^{-1}$, giving
$$\rho\Big(\sum_{x \in X} \phi(x)\Big) = f\big(\Phi^{-1}(\Phi(X))\big) = f(X),$$
i.e. $f$ is sum-decomposable via $\mathbb{R}$.
Now consider $\phi(x) = 4^{-c(x)}$. Under this mapping, each $X$ corresponds to a unique real number expressed in base 4. Therefore $\Phi$ is injective, and the conclusion follows. ∎
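The base-4 construction can be checked numerically. The sketch below assumes an enumeration in which each element is simply identified with its index ($c(x) = k$), and uses exact rational arithmetic to avoid floating-point ties:

```python
from fractions import Fraction
from itertools import combinations

# Hypothetical countable universe: elements are identified with their
# index k in an enumeration, so c(x) = k.
def phi(k):
    # phi(x) = 4^{-c(x)}; exact arithmetic avoids floating-point ties.
    return Fraction(1, 4**k)

def Phi(X):
    # Sum over a finite set of distinct element indices.
    return sum(phi(k) for k in X)

# Distinct finite sets yield distinct sums: each set corresponds to a
# unique base-4 expansion with digits 0/1.
universe = range(8)
sums = {}
count = 0
for r in range(4):
    for X in combinations(universe, r):
        sums.setdefault(Phi(X), X)
        count += 1
assert len(sums) == count  # injective on sets without repeats

# With repeats, injectivity fails: the multiset {1, 1, 1, 1} sums to
# 4 * 4^{-1} = 4^0, colliding with the set {0}.
assert 4 * phi(1) == phi(0)
```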
This construction works for any finite set size, and even for sets of infinite size. However, it assumes that $X$ is a set with no repeated elements, i.e. multisets are not supported. Specifically, the construction will fail with multisets because $\Phi$ fails to be injective if its domain includes multisets. In Section A.3, we extend Theorem 2.8 to also support multisets, with the restriction that infinite sets are no longer supported.
Theorem 2.9 (Uncountable case).
Let $M \in \mathbb{N}$, and let $f: [0,1]^M \to \mathbb{R}$ be a continuous function. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^{M+1}$.
Show that the mapping $\Phi: [0,1]^M \to \mathbb{R}^{M+1}$ defined by $\Phi(\mathbf{x})_q = \sum_{m=1}^{M} (x_m)^q$ for $q = 0, \ldots, M$ is injective and continuous. (In the original proof, $\Phi$ is denoted $E$.)
Show that $\Phi$ has a continuous inverse on its image.
Define $\phi: [0,1] \to \mathbb{R}^{M+1}$ by $\phi(x) = (x^0, x^1, \ldots, x^M)$.
Define $\rho: \mathbb{R}^{M+1} \to \mathbb{R}$ by $\rho = f \circ \Phi^{-1}$.
Note that, by definition of $\phi$ and $\rho$, $(\rho, \phi)$ is a continuous sum-decomposition of $f$ via $\mathbb{R}^{M+1}$. ∎
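The injectivity of the power-sum map can be illustrated numerically: the power sums determine the elementary symmetric polynomials via Newton's identities (a standard fact used here for illustration, not part of the original proof), and hence recover the inputs as roots of a polynomial. A sketch:

```python
import numpy as np

M = 4
x = np.sort(np.random.default_rng(1).uniform(size=M))

# Power-sum map: Phi(x)_q = sum_m x_m^q for q = 0..M.
power_sums = np.array([np.sum(x**q) for q in range(M + 1)])

# Invert Phi: Newton's identities give the elementary symmetric
# polynomials e_1..e_M from the power sums p_1..p_M ...
e = [1.0]
for k in range(1, M + 1):
    ek = sum((-1) ** (i - 1) * e[k - i] * power_sums[i] for i in range(1, k + 1)) / k
    e.append(ek)

# ... whose alternating signs give the coefficients of the monic
# polynomial whose roots are exactly the inputs x.
coeffs = [(-1) ** k * e[k] for k in range(M + 1)]
recovered = np.sort(np.roots(coeffs).real)
assert np.allclose(recovered, x, atol=1e-6)
```

So the map is invertible up to reordering of the inputs, which is exactly what a permutation-invariant representation requires.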
Zaheer et al. (2017) conjecture that any continuous permutation-invariant function on $2^{[0,1]}$, the power set of $[0,1]$, is continuously sum-decomposable. In Section 3, we show that this is not possible, and in Section 4 we show that even if the domain of $f$ is restricted to the finite subsets of $[0,1]$ containing at most $M$ elements, a latent dimension of at least $M$ is a necessary condition for arbitrary continuous functions to be continuously sum-decomposable. Additionally, we prove that this is a sufficient condition – implying, together with the above, that it is not possible to do better than this.
3 The Importance of Continuity
In this section, we argue that continuity is essential to discussions of function representation, that it has been neglected in prior work on permutation-invariant functions, and that this neglect has implications for the strength and generality of existing results.
Intuitively speaking, a function is continuous if, at every point in the domain, the variation of the output can be made arbitrarily small by limiting the variation in the input. Continuity is the reason that, for instance, working to machine precision usually produces sensible results. Truncating to machine precision alters the input to a function slightly, but continuity ensures that the change in output is also slight.
In Zaheer et al. (2017), the authors demonstrate that when $\mathfrak{X}$ is a countable set, e.g. the rational numbers, any function $f: 2^{\mathfrak{X}} \to \mathbb{R}$ is sum-decomposable via $\mathbb{R}$. This is taken as a hopeful indication that sum-decomposability may extend to uncountable domains, e.g. $\mathbb{R}$. Extending to the uncountable case may appear, at first glance, to be a mere formality – we are, after all, ultimately interested in implementing algorithms on finite hardware. Nevertheless, it is not true that a theoretical result for a countably infinite domain must be strong enough for practical purposes. In fact, considering functions on uncountably infinite domains such as $\mathbb{R}$ is of real importance.
Turning specifically to neural networks, the universal approximation theorem says that any continuous function can be approximated by a neural network, but not that any function can be approximated by a neural network (Cybenko, 1989). A similar statement is true for other approximators, such as some Gaussian processes (Rasmussen & Williams, 2006). The notion of continuity required here is specifically that of continuity on compact subsets of $\mathbb{R}^M$.
Crucially, if we wish to work mathematically with continuity in a way that closely matches our intuitions, we must consider uncountable domains. To illustrate this point, consider the rational numbers $\mathbb{Q}$. $\mathbb{Q}$ is dense in $\mathbb{R}$, and it is tempting to think that $\mathbb{Q}$ is therefore "all we need". However, a theoretical guarantee of continuity on $\mathbb{Q}$ is weak, and does not imply continuity on $\mathbb{R}$. The universal approximation theorem for neural networks relies on continuity on $\mathbb{R}$, and we cannot usefully take continuity on $\mathbb{Q}$ as a proxy for this property. Figure 2 shows a function which is continuous on $\mathbb{Q}$, and illustrates that a continuous function on $\mathbb{Q}$ may not extend continuously to $\mathbb{R}$. This figure also illustrates that continuity on $\mathbb{Q}$ defies our intuitions about what continuity should mean, and is too weak for the universal approximation theorem for neural networks. We require the stronger notion of continuity on $\mathbb{R}$.
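A concrete instance of this phenomenon, in the spirit of Figure 2 (the specific function below is illustrative, not necessarily the one depicted): a function which jumps "at" $\sqrt{2}$ is continuous at every rational point, because no rational point witnesses the jump, yet admits no continuous extension to $\mathbb{R}$.

```python
from fractions import Fraction

# Continuous at every rational point, but with no continuous extension
# to the reals: the jump happens "at" sqrt(2), which is irrational.
def f(q: Fraction) -> int:
    return 0 if q * q < 2 else 1

# Around any rational q, f is locally constant (hence continuous on Q) ...
assert f(Fraction(14, 10)) == 0 and f(Fraction(141, 100)) == 0
assert f(Fraction(15, 10)) == 1 and f(Fraction(142, 100)) == 1

# ... yet rationals arbitrarily close together straddle the jump, so any
# extension to R must be discontinuous at sqrt(2).
eps = Fraction(1, 10**9)
below = Fraction(1414213562, 10**9)
above = Fraction(1414213563, 10**9)
assert above - below == eps and f(below) == 0 and f(above) == 1
```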
In light of the above, it is clear that continuity is a key property for function representation, and also that there is a crucially important difference between countable and uncountable domains. This raises two problems for Theorem 2.8. First, the theorem does not consider the continuity of the sum-decomposition when the domain has some non-trivial topological structure (e.g. $\mathbb{Q}$). Second, we still care about continuity on $\mathbb{R}$, and there is no guarantee that this is possible given continuity on $\mathbb{Q}$.
In fact, the continuity issue cannot be overcome – we can demonstrate that in general the sum-decomposition of Theorem 2.8, which goes via $\phi(x) = 4^{-c(x)}$, cannot be made continuous for $\mathfrak{X} = \mathbb{Q}$:
Theorem. There exist functions $f: 2^{\mathbb{Q}} \to \mathbb{R}$ such that, whenever $(\rho, \phi)$ is a sum-decomposition of $f$ via $\mathbb{R}$, $\phi$ is discontinuous at every point $x \in \mathbb{Q}$.
We can actually say something more general than the above. Our proof can easily be adapted to demonstrate that if $\Phi$ is injective, or if we want a fixed $\phi$ to suffice for any $f$, then $\phi$ can only be continuous at isolated points of the underlying set $\mathfrak{X}$, regardless of whether $\mathfrak{X} = \mathbb{Q}$. I.e., it is not specifically due to the structure of $\mathbb{Q}$ that continuous sum-decomposability fails. In fact, it fails whenever we have a non-trivial topological structure. For functions which we want to model using a neural network, this is worrying.
It is not possible to represent an everywhere-discontinuous $\phi$ with a neural network. We therefore view Theorem 2.8 as being of limited practical relevance and as not providing a reliable intuition for what should be possible in the uncountable case. We do however see this result as mathematically interesting, and have obtained the following result extending it to the case where the domain is uncountable. This result is slightly weaker than the countable case, in that the domain of $f$ can contain arbitrarily large finite sets, but not infinite sets.
Theorem. Let $f$ be any function defined on the finite subsets of $\mathbb{R}$. Then $f$ is sum-decomposable via $\mathbb{R}$.
Once again, the sum-decomposition is highly discontinuous. The limitation that $f$ is not defined on infinite sets cannot be overcome:
Theorem. If $\mathfrak{X}$ is uncountable, then there exist functions $f: 2^{\mathfrak{X}} \to \mathbb{R}$ which are not sum-decomposable. Note that this holds even if the sum-decomposition is allowed to be discontinuous.
To summarise, we show why considering countable domains can lead to results of limited practical value and why considering continuity on uncountable domains is necessary. We point out that some of the previous work is therefore of limited practical relevance, but regard it as mathematically interesting. In this vein, we extend the analysis of sum-decomposability when continuity is not required.
4 Practical Function Representation
Having established the necessity of considering continuity on $\mathbb{R}$, we now explore the implications for sum-decomposability of permutation-invariant functions. These considerations lead to concrete recommendations for model design and provide theoretical support for elements of current practice in the area. Specifically, we present three theorems whose implications can be summarised as follows.
A latent dimensionality of $M$ is sufficient for representing all continuous permutation-invariant functions on sets of size $M$.
To guarantee that all continuous permutation-invariant functions can be represented for sets of size $M$, a latent dimensionality of at least $M$ is necessary.
The key result which is the basis of the second statement and which underpins this discussion is as follows.
Theorem. Let $N < M$. Then there exist permutation-invariant continuous functions $f: [0,1]^M \to \mathbb{R}$ which are not continuously sum-decomposable via $\mathbb{R}^N$.
Restated in more practical terms, this implies that for a sum-decomposition-based model to be capable of representing arbitrary continuous functions on sets of size $M$, the latent space in which the summation happens must be chosen to have dimension at least $M$. A similar statement is true for the analogous concept of max-decomposition – details are available in Section B.6.
To prove this theorem, we first need to state and prove the following lemma.
Let $M \in \mathbb{N}$, and suppose $\phi: [0,1] \to \mathbb{R}^N$, $\rho: \mathbb{R}^N \to \mathbb{R}$ are functions such that:
$$\rho\Big(\sum_{x \in X} \phi(x)\Big) = \max(X) \quad \text{for all sets } X \subseteq [0,1] \text{ with } |X| \leq M. \qquad (2)$$
Now let $\Phi(X) = \sum_{x \in X} \phi(x)$, and write $\Phi_k$ for the restriction of $\Phi$ to sets of size $k$.
Then $\Phi_k$ is injective for all $k \leq M$.
We proceed by induction on $k$. The base case $k = 1$ is clear, since $\rho(\Phi_1(\{x\})) = x$.
Now let $k \leq M$, and suppose that $\Phi_{k-1}$ is injective. Now suppose there are sets $X, Y$ of size $k$ such that $\Phi_k(X) = \Phi_k(Y)$. First note that, by (2), we must have:
$$\max(X) = \rho(\Phi_k(X)) = \rho(\Phi_k(Y)) = \max(Y). \qquad (3)$$
So now write:
$$X = X' \cup \{m\}, \qquad Y = Y' \cup \{m\},$$
where $m = \max(X)$, $X' = X \setminus \{m\}$ has $k - 1$ elements, and similarly for $Y'$.
From the central equality $\Phi_k(X) = \Phi_k(Y)$, and (3), we have:
$$\Phi_{k-1}(X') = \Phi_k(X) - \phi(m) = \Phi_k(Y) - \phi(m) = \Phi_{k-1}(Y'),$$
so by the inductive hypothesis $X' = Y'$, and therefore $X = Y$. ∎
Equipped with this lemma, we can now prove the theorem.
We proceed by contradiction. Suppose that continuous functions $\phi: [0,1] \to \mathbb{R}^N$ and $\rho: \mathbb{R}^N \to \mathbb{R}$ exist satisfying (2), with $N < M$. Define $\Phi: [0,1]^M \to \mathbb{R}^N$ by:
$$\Phi(x_1, \ldots, x_M) = \sum_{m=1}^{M} \phi(x_m).$$
Denote the set of all $(x_1, \ldots, x_M)$ with $x_1 < x_2 < \cdots < x_M$ by $O_M$, and let $\Phi_O$ be the restriction of $\Phi$ to $O_M$. Since $\Phi$ is a sum of continuous functions, it is also continuous, and by Lemma 4.1, $\Phi_O$ is injective.
Now note that $O_M$ is a convex open subset of $\mathbb{R}^M$, and is therefore homeomorphic to $\mathbb{R}^M$. Therefore, our continuous injective $\Phi_O$ can be used to construct a continuous injection from $\mathbb{R}^M$ to $\mathbb{R}^N$. But it is well known that no such continuous injection exists when $N < M$. Therefore our decomposition (2) cannot exist. ∎
It is crucial to note that functions for which a lower-dimensional sum-decomposition does not exist need not be "badly-behaved" or difficult to specify. The limitation extends to functions of genuine interest. For our proof, we have specifically demonstrated that even $\max$ is not continuously sum-decomposable via $\mathbb{R}^N$ when $N < M$.
From Theorem 2.9, we also know that for a fixed input set size $M$, any continuous permutation-invariant function is continuously sum-decomposable via $\mathbb{R}^{M+1}$. It is, however, possible to adapt the construction of Zaheer et al. (2017) to strengthen the result in two ways. Firstly, we can perform the sum-decomposition via $\mathbb{R}^M$:
Theorem (Fixed set size). Let $f: [0,1]^M \to \mathbb{R}$ be continuous. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^M$.
Secondly, we can deal with variable set sizes of at most $M$:
Theorem (Variable set size). Let $f$ be a continuous function on sets of at most $M$ elements drawn from $[0,1]$. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^{M+1}$.
Note that we must take some care over the notion of continuity in this theorem – see Section A.2.
The theorem above does not imply that all functions require $N \geq M$. Some functions, such as the mean, can be represented in a lower-dimensional space. The statement rather says that if we do not want to impose any limitations on the complexity of the function, the latent space needs to have dimensionality at least $M$.
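As an illustration, the mean admits a hand-written sum-decomposition with a two-dimensional latent space, independent of the set size ($\phi$ and $\rho$ below are chosen by hand, not learned):

```python
import numpy as np

# The mean needs only a low-dimensional latent space: phi(x) = (x, 1)
# accumulates a running (sum, count) pair, and rho divides the two.
# This handles variable set sizes; for a fixed set size M, a single
# dimension suffices, with phi(x) = x / M and rho the identity.
def phi(x):
    return np.array([x, 1.0])

def rho(z):
    return z[0] / z[1]

for n in (1, 3, 10):
    X = np.random.default_rng(n).uniform(size=n)
    assert np.isclose(rho(sum(phi(x) for x in X)), X.mean())
```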
Our results suggest that sum-decomposition via a latent space with dimension $N = M$ should suffice to model any function. Neural network models in the recent literature, however, deviate from these guidelines in several ways, indicating a disconnect between theory and practice. For example, the models in Zaheer et al. (2017) and Qi et al. (2017a) are considerably more complex than Equation 1, e.g. they apply several permutation-equivariant layers to the input before a permutation-invariant layer.
In light of the above results, this disconnect becomes less surprising. We have shown that, for a target function of sufficient complexity, $N = M$ is the bare minimum required for the model to be capable of representing the target function. Achieving this would rely on the parameterisation of $\phi$ and $\rho$ being flexible enough and on the availability of a suitable optimisation method. In practice, we should not be surprised that more than the bare minimum capacity in our model is required for good performance. Even with $N \geq M$, the model might not converge to the desired solution. At the same time, when we are dealing with real datasets, the training data may contain noise and redundant information, e.g. in the form of correlations between elements in the input, inducing functions of limited complexity that may in fact be representable with $N < M$.
4.2 Illustrative Example
We now use a toy example to illustrate some practical implications of our results. Based on the theorems above, we expect the number of input elements $M$ to have an influence on the required latent dimension $N$, and in particular, we expect that the required latent dimension may increase without bound.
We train a neural network with the architecture presented in Figure 1 to predict the median of a set of values. We choose the median because it is relatively simple but cannot be trivially represented via a sum in a fixed-dimensional latent space, in contrast to e.g. the mean, which is sum-decomposable via $\mathbb{R}$. (The construction via $\mathbb{R}$ is not entirely trivial for variable set size, but going via $\mathbb{R}^2$ is straightforward.)
We vary the latent dimension $N$ and the input set size $M$ to investigate the link between these two variables and the predictive performance. The MLPs parameterising $\phi$ and $\rho$ are given comparatively many layers and hidden units, relative to the simplicity of the task, to ensure that the latent dimension is the bottleneck. Further details are described in Appendix D.
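A minimal forward-pass sketch of the Figure 1 architecture (with arbitrary random weights standing in for the trained MLPs, whose actual sizes are specified in Appendix D, not here):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random-weight MLP; a stand-in for the trained networks.
    Ws = [rng.normal(size=(a, b)) / np.sqrt(a) for a, b in zip(sizes, sizes[1:])]
    def apply(h):
        for W in Ws[:-1]:
            h = np.tanh(h @ W)
        return h @ Ws[-1]
    return apply

latent_dim, set_size = 8, 16
phi = mlp([1, 64, 64, latent_dim])   # per-element encoder
rho = mlp([latent_dim, 64, 64, 1])   # decoder acting on the pooled latent

def model(X):
    # Figure-1 architecture: embed each element, pool by summation,
    # then decode; the latent dimension is the only bottleneck.
    return rho(phi(X[:, None]).sum(axis=0, keepdims=True))[0, 0]

X = rng.uniform(size=set_size)
assert np.isclose(model(X), model(rng.permutation(X)))
```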
Figure 3(a) shows the RMSE depending on the latent dimension for different input sizes. We make three observations.
For each set size, the error decreases monotonically with the dimension of the latent space.
Beyond a certain point, increasing the dimension of the latent space does not further reduce the error. We denote this the “critical point”.
As the set size increases, so does the latent dimension at the critical point.
Figure 3(b) shows the critical points as a function of the input size, indicating a roughly linear relationship between the two. Note that the critical points occur at latent dimensions smaller than the input set size, i.e. at $N < M$. This can be explained by the fact that the models do not learn an algorithmic solution for computing the median, but rather learn to estimate it given samples drawn from the specific input distribution seen during training. Furthermore, estimating the median of a distribution, like other functions, renders some information in the input redundant. Therefore, the mapping from input to latent space does not need to be injective, allowing a model to solve the task with a smaller value of $N$.
5 Related Work
Much of the recent work on deep learning with unordered sets follows the paradigm discussed in Ravanbakhsh et al. (2016), Zaheer et al. (2017), and Qi et al. (2017a), which leverages the structure illustrated in Figure 1. Zaheer et al. (2017) provide an in-depth theoretical analysis which is discussed in detail in Section 2. Qi et al. (2017a) also derive a sufficient condition for universal function approximation. In their proof, however, they set the latent dimension to a value which depends on the error tolerance, i.e. on how closely the target function has to be approximated. As a result, the latent dimension goes to infinity for exact representation. In a similar vein, Herzig et al. (2018) consider permutation-invariant functions on graphs.
A key application domain of set-based methods is the processing of point clouds, as the constituent points do not have an intrinsic ordering. The work by Qi et al. (2017a) on 3D point clouds, one of the first to use permutation-invariant neural networks, is extended in Qi et al. (2017b) by sampling and grouping points in a hierarchical fashion to model the interaction between nearby points in the input space more explicitly. Qi et al. (2018) combine RGB and lidar data for object detection by using image detectors to generate bounding box proposals which are then further processed by a set-based model. Achlioptas et al. (2018) and Yi et al. (2018) show that set-based models can also be used to learn generative models of point clouds.
Vinyals et al. (2015) suggest that even though recurrent networks are universal approximators, the ordering of the input is crucial for good performance. Hence, they propose a model that relies on attention to achieve permutation invariance in order to solve a sorting task. In general, it is worth noting that there is a connection between the model in Zaheer et al. (2017) and recent attention-based models such as the one proposed in Vaswani et al. (2017). In the latter, the aggregation layer includes weighting parameters which are computed by a key-query system that is also permutation-invariant. Since the weighting parameters could be learned to be constant, such an attention mechanism is in principle also able to approximate any permutation-invariant function, depending of course on the remaining parts of the architecture. Inspired by inducing-point methods, the Set Transformer (Lee et al., 2018) proposes a computationally more efficient attention module and demonstrates better performance on a range of set-based tasks. While stacking several attention modules can capture higher-order dependencies, a more general treatment of this is offered by permutation-invariant, learnable Janossy Pooling (Murphy et al., 2018).
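The connection can be made concrete in a few lines: attention pooling is a weighted summation whose weights come from a key-query system, so it is permutation-invariant, and with constant keys it reduces to uniform, sum-like aggregation (the dimensions below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(V, q, K):
    # Weighted summation of a set of value vectors V, with softmax weights
    # from a key-query system; permutation-invariant because the same
    # permutation reorders weights and values together.
    scores = K @ q
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

n, d = 6, 4
V, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))
q = rng.normal(size=d)
perm = rng.permutation(n)
assert np.allclose(attention_pool(V, q, K), attention_pool(V[perm], q, K[perm]))

# With identical keys the weights are uniform (1/n) and attention pooling
# reduces to a scaled sum -- the aggregation used in sum-decomposition.
K_const = np.ones((n, d))
assert np.allclose(attention_pool(V, q, K_const), V.mean(axis=0))
```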
Similar to the methods considered here, Neural Processes (Garnelo et al., 2018b) and Conditional Neural Processes (Garnelo et al., 2018a) also rely on aggregation via summation in order to infer a distribution from a set of data points. Kim et al. (2019) add an attention mechanism to neural processes to improve empirical performance. Generative Query Networks (Eslami et al., 2018; Kumar et al., 2018) can be regarded as an instantiation of neural processes to learn useful representations of 3D scenes from multiple 2D views. Yang et al. (2018) also aggregate information from multiple views to compute representations of 3D objects.
Bloem-Reddy & Teh (2018) consider exchangeable sequences – sequences consisting of random variables with a joint likelihood which is invariant under permutations. They propose a model including an additional noise variable, leveraging the reparametrisation trick introduced by Kingma & Welling (2014) and Rezende et al. (2014). Korshunova et al. (2018) use RealNVP (Dinh et al., 2016) as a bijective function which sequentially computes the parameters of a Student-t process.
6 Conclusion
This work derives theoretical limitations on the representation of arbitrary permutation-invariant functions on sets via a finite-dimensional latent space. To this end, we demonstrate why statements must consider continuity on uncountable domains, as opposed to countable domains, in order to be practically useful. Under this constraint, we prove that a latent space whose dimension is at least as large as the maximum input set size is both sufficient and necessary for a model to be capable of universal function representation. The models which we have covered in this analysis are popular for a range of practical applications and can be implemented e.g. by neural networks or Gaussian processes. In future work, we would like to investigate the effect of constructing models with both permutation-equivariant and permutation-invariant modules on the required dimension of the latent space. Examining the implications of using self-attention, e.g. as in Lee et al. (2018), would be of similar interest.
The authors would like to thank Sudhanshu Kasewa for proof reading a draft of the paper.
- Achlioptas et al. (2018) Achlioptas, P., Diamanti, O., Mitliagkas, I., and Guibas, L. Learning Representations and Generative Models for 3D Point Clouds. International Conference on Machine Learning, 2018.
- Bloem-Reddy & Teh (2018) Bloem-Reddy, B. and Teh, Y. W. Neural network models of exchangeable sequences. Advances in Neural Information Processing Systems - Workshop, 2018.
- Cybenko (1989) Cybenko, G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4):303–314, 1989.
- Dinh et al. (2016) Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.
- Eslami et al. (2016) Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Szepesvari, D., Kavukcuoglu, K., and Hinton, G. E. Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. Advances in Neural Information Processing Systems, 2016.
- Eslami et al. (2018) Eslami, S. M. A., Rezende, D. J., Besse, F., Viola, F., Morcos, A. S., Garnelo, M., Ruderman, A., Rusu, A. A., Danihelka, I., Gregor, K., Reichert, D. P., Buesing, L., Weber, T., Vinyals, O., Rosenbaum, D., Rabinowitz, N. C., King, H., Hillier, C., Botvinick, M. M., Wierstra, D., Kavukcuoglu, K., and Hassabis, D. Neural scene representation and rendering. Science, 360:1204–1210, 2018.
- Garnelo et al. (2018a) Garnelo, M., Rosenbaum, D., Maddison, C. J., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y. W., Rezende, D. J., and Eslami, S. M. A. Conditional Neural Processes. International Conference on Machine Learning, 2018a.
- Garnelo et al. (2018b) Garnelo, M., Schwarz, J., Rosenbaum, D., Viola, F., Rezende, D. J., Eslami, S. M. A., and Teh, Y. W. Neural Processes. International Conference on Machine Learning, 2018b.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2016.
- Herzig et al. (2018) Herzig, R., Raboh, M., Chechik, G., Berant, J., and Globerson, A. Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction. arXiv preprint arXiv:1802.05451, 2018.
- Kim et al. (2019) Kim, H., Mnih, A., Schwarz, J., Garnelo, M., Eslami, A., Rosenbaum, D., Vinyals, O., and Teh, Y. W. Attentive Neural Processes. arXiv preprint arXiv:1901.05761, 2019.
- Kingma & Welling (2014) Kingma, D. P. and Welling, M. Auto-Encoding Variational Bayes. International Conference on Learning Representations, 2014.
- Korshunova et al. (2018) Korshunova, I., Degrave, J., Huszár, F., Gal, Y., Gretton, A., and Dambre, J. BRUNO: A Deep Recurrent Model for Exchangeable Data. Advances in Neural Information Processing Systems, 2018.
- Kosiorek et al. (2018) Kosiorek, A. R., Kim, H., Posner, I., and Teh, Y. W. Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. Advances in Neural Information Processing Systems, 2018.
- Kumar et al. (2018) Kumar, A., Eslami, S., Rezende, D. J., Garnelo, M., Viola, F., Lockhart, E., and Shanahan, M. Consistent Generative Query Networks. arXiv preprint arXiv:1807.02033, 2018.
- Lee et al. (2018) Lee, J., Lee, Y., Kim, J., Kosiorek, A. R., Choi, S., and Teh, Y. W. Set Transformer. arXiv preprint arXiv:1810.00825, 2018.
- Murphy et al. (2018) Murphy, R. L., Srinivasan, B., Rao, V., and Ribeiro, B. Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs. arXiv preprint arXiv:1811.01900, 2018.
- Qi et al. (2017a) Qi, C. R., Su, H., Mo, K., and Guibas, L. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. IEEE Conference on Computer Vision and Pattern Recognition, 2017a.
- Qi et al. (2017b) Qi, C. R., Yi, L., Su, H., and Guibas, L. J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Advances in Neural Information Processing Systems, 2017b.
- Qi et al. (2018) Qi, C. R., Liu, W., Wu, C., Su, H., and Guibas, L. J. Frustum PointNets for 3D Object Detection from RGB-D Data. IEEE Conference on Computer Vision and Pattern Recognition, 2018.
- Rasmussen & Williams (2006) Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. The MIT Press, 2006. ISBN 026218253X.
- Ravanbakhsh et al. (2016) Ravanbakhsh, S., Schneider, J., and Poczos, B. Deep Learning with Sets and Point Clouds. arXiv preprint arXiv:1611.04500, 2016.
- Rezende et al. (2014) Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. International Conference on Machine Learning, 2014.
- Sunehag et al. (2017) Sunehag, P., Lever, G., Gruslys, A., Czarnecki, W. M., Zambaldi, V., Jaderberg, M., Lanctot, M., Sonnerat, N., Leibo, J. Z., Tuyls, K., et al. Value-Decomposition Networks For Cooperative Multi-Agent Learning. International Conference on Autonomous Agents and MultiAgent Systems, 2017.
- Sutskever et al. (2014) Sutskever, I., Vinyals, O., and Le, Q. Sequence to Sequence Learning with Neural Networks. Advances in Neural Information Processing Systems, 2014.
- Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmer, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, L., and Polosukhin, I. Attention Is All You Need. Advances in Neural Information Processing Systems, 2017.
- Vinyals et al. (2015) Vinyals, O., Bengio, S., and Kudlur, M. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015.
- Yang et al. (2018) Yang, B., Wang, S., Markham, A., and Trigoni, N. Attentional Aggregation of Deep Feature Sets for Multi-view 3D Reconstruction. arXiv preprint arXiv:1808.00758, 2018.
- Yi et al. (2018) Yi, L., Zhao, W., Wang, H., Sung, M., and Guibas, L. GSPN: Generative Shape Proposal Network for 3D Instance Segmentation in Point Cloud. arXiv preprint arXiv:1812.03320, 2018.
- Zaheer et al. (2017) Zaheer, M., Kottur, S., Ravanbhakhsh, S., Póczos, B., Salakhutdinov, R., and Smola, A. Deep Sets. In Advances in Neural Information Processing Systems, 2017.
Appendix A Mathematical Remarks
A.1 Infinite Sums
Throughout this paper we consider expressions of the following form:
$$\sum_{x \in X} \phi(x), \qquad (5)$$
where $X$ is an arbitrary set. The meaning of this expression is clear when $X$ is finite, but when $X$ is infinite, we must be precise about what we mean.
A.1.1 Countable Sums
We usually denote countable sums as e.g. $\sum_{i=1}^{\infty} x_i$. Note that there is an ordering of the $x_i$ here, whereas there is no ordering in our expression (5). The reason that we consider sums is for their permutation invariance in the finite case, but note that in the infinite case, permutation invariance of sums does not necessarily hold! For instance, the alternating harmonic series $\sum_{i=1}^{\infty} (-1)^{i+1}/i$ can be made to converge to any real number simply by reordering the terms of the sum. For expressions like (5) to make sense, we must require that the sums in question are indeed permutation-invariant. This property is known as absolute convergence, and it is equivalent to the property that the sum of absolute values of the series converges. So for (5) to make sense, we will require everywhere that $\sum_{x \in X} |\phi(x)|$ is convergent. For any $X$ where this is not the case, we regard the expression (5) as undefined.
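The rearrangement effect is easy to observe numerically; the grouping below (two positive terms per negative term) is one standard choice, which converges to $(3/2)\log 2$ rather than $\log 2$:

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# converge to log(2); the same terms in a different order converge
# elsewhere. Infinite sums are permutation-invariant only under
# absolute convergence.
def standard(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(n_blocks):
    # Blocks of (two positive terms, one negative term):
    # 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
    total, pos, neg = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total

assert abs(standard(10**6) - math.log(2)) < 1e-5
assert abs(rearranged(10**6) - 1.5 * math.log(2)) < 1e-5
```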
A.1.2 Uncountable Sums
It is well known that a sum over an uncountable set of elements only converges if all but countably many elements are 0. Allowing sums over uncountable sets is therefore of little interest, since it essentially reduces to the countable case.
A.2 Continuity of Functions on Sets
We are interested in functions on subsets of $\mathbb{R}$, i.e. elements of $2^{\mathbb{R}}$, and the notion of continuity on $2^{\mathbb{R}}$ is not straightforward. As a convenient shorthand, we discuss "continuous" functions on $2^{\mathbb{R}}$, but what we mean by this is that the function induced by $f$ on $\mathbb{R}^M$ by $(x_1, \ldots, x_M) \mapsto f(\{x_1, \ldots, x_M\})$ is continuous for every $M$.
A.3 Remark on Theorem 2.8
The proof of Theorem 2.8 from Zaheer et al. (2017) can be extended to deal with multisets, i.e. sets with repeated elements. To that end, we replace the mapping to the natural numbers, $c(x)$, with a mapping to the prime numbers, $p(x)$. We then choose $\phi(x) = \log p(x)$. Therefore,
$$\Phi(X) = \sum_{x \in X} \log p(x) = \log \prod_{x \in X} p(x),$$
which, by the uniqueness of prime factorisation, takes a unique value for each distinct multiset $X$, therefore extending the validity of the proof to multisets. However, unlike the original series, this choice of $\Phi$ diverges with infinite set size.
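The prime-based encoding can be checked numerically for a small universe. This sketch (our own illustration, with a hard-coded prime table) verifies that distinct small multisets receive distinct values of $\Phi$:

```python
from math import log
from itertools import combinations_with_replacement

# p maps element i of a small countable universe to the i-th prime
PRIMES = [2, 3, 5, 7, 11]

def encode(multiset):
    """Phi(X) = sum of log p(x) = log of the product of primes, so by
    uniqueness of prime factorisation no two multisets share a value."""
    return sum(log(PRIMES[i]) for i in multiset)

# every multiset of size 1..3 over the 5-element universe gets a distinct code
multisets = [ms for size in (1, 2, 3)
             for ms in combinations_with_replacement(range(5), size)]
codes = {round(encode(ms), 9) for ms in multisets}
assert len(codes) == len(multisets)   # 55 multisets, 55 distinct codes
```

Note also that repeating one element without bound makes the encoded value grow without bound, in line with the divergence remark above.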
In fact, it is straightforward to show that there is no function $\phi$ for which $\Phi(X) = \sum_{x \in X} \phi(x)$ provides a unique mapping for arbitrary multisets while at the same time guaranteeing convergence for infinitely large sets. Assume a function $\phi$ and an arbitrary point $x_0$ such that $\phi(x_0) \neq 0$. Then the multiset $X$ comprising infinitely many identical members $x_0$ would give:
$$\Phi(X) = \sum_{n=1}^{\infty} \phi(x_0) = \pm\infty,$$
which diverges.
Appendix B Proofs of Theorems
B.1 Figure 2
Consider $f(X) = \sup X$, the least upper bound of $X$. Suppose $f$ is sum-decomposed by $(\rho, \phi)$, and write $\Phi(X) = \sum_{x \in X} \phi(x)$. So we have:
$$\sup X = \rho(\Phi(X)).$$
First note that $\phi(x) \neq 0$ for any $x$. If we had $\phi(x) = 0$, then we would have, for every $X$:
$$\Phi(X \cup \{x\}) = \Phi(X).$$
But then, for instance, we would have:
$$x = \sup(\{x - 1\} \cup \{x\}) = \rho(\Phi(\{x - 1\} \cup \{x\})) = \rho(\Phi(\{x - 1\})) = \sup\{x - 1\} = x - 1.$$
This is a contradiction, so $\phi(x) \neq 0$ for all $x$.
Next, note that $\Phi(X)$ must be finite for every upper-bounded $X$ (since $\sup$ is undefined for unbounded $X$, we do not consider such sets, and may allow $\Phi$ to diverge on them). Even if we allowed the domain of $\rho$ to include $\pm\infty$, suppose $\Phi(X) = \pm\infty$ for some upper-bounded set $X$. Then, for $y = \sup X + 1$, we would also have $\Phi(X \cup \{y\}) = \pm\infty$, yet $\sup(X \cup \{y\}) = y \neq \sup X$. This is a contradiction, so $\Phi(X)$ is finite for any upper-bounded set $X$.
Now from the above it is immediate that, for any upper-bounded set $X$ and any $\varepsilon > 0$, only finitely many $x \in X$ can have $|\phi(x)| > \varepsilon$. Otherwise we could find an infinite upper-bounded set $X' \subseteq X$ with $|\phi(x)| > \varepsilon$ for every $x \in X'$, and $\Phi(X')$ would diverge.
Finally, let $s \in \mathbb{R}$. We have already shown that $\phi(s) \neq 0$, and we will now construct a sequence $(x_k)$ with:
$$x_k \to s \quad \text{and} \quad \phi(x_k) \to 0.$$
If $\phi$ were continuous at $s$, we would have $\phi(x_k) \to \phi(s) \neq 0$, so the above two points together will give us that $\phi$ is discontinuous at $s$.
So now, for each $k$, consider the set of points which lie within $1/k$ of $s$. Since only finitely many of these points have $|\phi(x)| > 1/k$, and the set is infinite, there must be a point $x_k$ with $|x_k - s| < 1/k$ and $|\phi(x_k)| \leq 1/k$. The sequence of such $x_k$ clearly satisfies both points above, and so $\phi$ is discontinuous everywhere. ∎
B.2 Figure 2
Define $\Phi$ by $\Phi(X) = \sum_{x \in X} \phi(x)$. If we can demonstrate that there exists some $\phi$ such that $\Phi$ is injective, then we can simply choose $\rho = f \circ \Phi^{-1}$ and the result is proved.
Say that a set $S \subseteq \mathbb{R}$ is finite-sum-distinct if, for any distinct finite subsets $A, B \subseteq S$, we have $\sum_{a \in A} a \neq \sum_{b \in B} b$. Now, if we can show that there is a finite-sum-distinct set $S$ with the same cardinality as $\mathbb{R}$ (which we denote by $\mathfrak{c}$), then we can simply choose $\phi$ to be a bijection from $\mathbb{R}$ to $S$. Then, by finite-sum-distinctness, $\Phi$ will be injective, and the result is proved.
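For finite sets, the finite-sum-distinct property can be checked by brute force. A small sketch (function name is our own):

```python
from itertools import chain, combinations

def is_finite_sum_distinct(s):
    """A finite set is finite-sum-distinct iff all of its subsets
    (including the empty set) have pairwise distinct sums."""
    subsets = chain.from_iterable(combinations(sorted(s), r)
                                  for r in range(len(s) + 1))
    sums = [sum(sub) for sub in subsets]
    return len(sums) == len(set(sums))

assert is_finite_sum_distinct({1, 2, 4, 8, 16})   # subset sums = distinct binary expansions
assert not is_finite_sum_distinct({1, 2, 3})      # 1 + 2 == 3
```

Powers of two are the classic example of a finite-sum-distinct set, since every subset sum is a distinct binary expansion; the proof below is about extending this idea to a set of cardinality $\mathfrak{c}$.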
Now recall the statement of Zorn's Lemma: suppose $P$ is a partially ordered set (or poset) in which every totally ordered subset has an upper bound. Then $P$ has a maximal element.
The set of f.s.d. subsets of $\mathbb{R}$ (which we will denote $P$) forms a poset ordered by inclusion. Supposing that $P$ satisfies the conditions of Zorn's Lemma, it must have a maximal element, i.e. there is an f.s.d. set $S$ such that any set $T$ with $S \subsetneq T$ is not f.s.d. We claim that $S$ has cardinality $\mathfrak{c}$.
To see this, let $S$ be an f.s.d. set with infinite cardinality strictly less than $\mathfrak{c}$ (a maximal f.s.d. set clearly cannot be finite). We will show that $S$ is not maximal. Define the forbidden elements with respect to $S$ to be those elements $y \in \mathbb{R} \setminus S$ such that $S \cup \{y\}$ is not f.s.d. We denote this set of forbidden elements $F(S)$. Now note that, if $S$ is maximal, then $S \cup F(S) = \mathbb{R}$. In particular, this implies that $|S \cup F(S)| = \mathfrak{c}$. But now consider the elements of $F(S)$. By the definition of f.s.d., we have $y \in F(S)$ if and only if there exist finite subsets $A, B \subseteq S$ such that $y = \sum_{a \in A} a - \sum_{b \in B} b$. So we can write any $y \in F(S)$ as a sum of finitely many elements of $S$, minus a sum of finitely many other elements of $S$. So there is a surjection from pairs of finite subsets of $S$ to elements of $F(S)$, i.e.:
$$|F(S)| \leq |\mathrm{Fin}(S) \times \mathrm{Fin}(S)|.$$
But since $S$ is infinite:
$$|\mathrm{Fin}(S) \times \mathrm{Fin}(S)| = |S| < \mathfrak{c}.$$
So $|S \cup F(S)| \leq |S| + |F(S)| < \mathfrak{c}$, and therefore $S$ is not maximal. This demonstrates that a maximal f.s.d. set must have cardinality $\mathfrak{c}$.
To complete the proof, it remains to show that $P$ satisfies the conditions of Zorn's Lemma, i.e. that every totally ordered subset (or chain) $C$ of $P$ has an upper bound. So consider:
$$U = \bigcup_{S \in C} S.$$
We claim that $U$ is an upper bound for $C$. It is clear that $S \subseteq U$ for every $S \in C$, so it remains to be shown that $U \in P$, i.e. that $U$ is f.s.d.
We proceed by contradiction. Suppose that $U$ is not f.s.d. Then there exist distinct finite subsets $A, B \subseteq U$ with:
$$\sum_{a \in A} a = \sum_{b \in B} b. \tag{8}$$
But now by construction of $U$ there must be sets $S_1, \ldots, S_n \in C$ with $A \cup B \subseteq S_1 \cup \cdots \cup S_n$. Let $D = \{S_1, \ldots, S_n\}$. $D$ is totally ordered by inclusion and all sets contained in it are f.s.d., since it is a subset of $C$. Since $D$ is finite it has a maximal element $S_{\max}$. By maximality, we have $A \cup B \subseteq S_{\max}$. But then by (8), $S_{\max}$ is not f.s.d., which is a contradiction. So we have that $U$ is f.s.d.
In summary: $P$ satisfies the conditions of Zorn's Lemma.
Therefore there exists a maximal f.s.d. set, $S$.
We have shown that any such set must have cardinality $\mathfrak{c}$.
Given an f.s.d. set $S$ with cardinality $\mathfrak{c}$, we can choose $\phi$ to be a bijection between $\mathbb{R}$ and $S$.
Given such a $\phi$, we have that $\Phi$ is injective.
Given injective $\Phi$, choose $\rho = f \circ \Phi^{-1}$.
This choice gives us $f = \rho \circ \Phi$ by construction.
This completes the proof. ∎
B.3 Figure 2
As discussed above, a sum over uncountably many elements can converge only if all but countably many elements are zero. But as in the proof in Section B.1, $\phi(x) \neq 0$ for any $x$. So it is immediate that sum-decomposition is not possible for functions operating on uncountable subsets of $\mathbb{R}$.
Even restricting to countable subsets is not enough. As in the proof in Section B.1, we must have that, for each $n \in \mathbb{N}$, only finitely many $x$ satisfy $|\phi(x)| > 1/n$. But then, if this is the case, let $X_n$ be the set of all $x$ with $|\phi(x)| > 1/n$. Since $\phi(x) \neq 0$ everywhere, we know that $\mathbb{R} = \bigcup_{n \in \mathbb{N}} X_n$. But this is a countable union of finite sets, which is impossible because $\mathbb{R}$ is uncountable.
B.4 Lemma 4.1
The reverse implication is clear. The proof relies on demonstrating that the function defined as follows is a homeomorphism onto its image:
Now define by:
Note that for all , so . Since is a singleton, these two images are homeomorphic, with a homeomorphism given by:
Now by definition, . Since this is a composition of homeomorphisms, is also a homeomorphism. Therefore is a continuous sum-decomposition of via . ∎
B.5 Lemma 4.1
We use the adapted sum-of-power mapping from above, denoted in this section by $\Phi$, which was shown above to be injective. Without loss of generality, we take the input domain as in Theorem 2.9.
We separate $\Phi$ into two terms:
For an input set with $M'$ elements, where $M' \leq M$, we say that the set contains $M'$ "actual elements" as well as $M - M'$ "empty" elements which are not in fact part of the input set. These empty elements can be regarded as place fillers for the case where the size of the input set is smaller than $M$, i.e. $M' < M$.
We map those empty elements to a constant value $c$, preserving the injectivity of $\Phi$ for input sets of arbitrary size $M' \leq M$:
Equation 10 is no longer, strictly speaking, a sum-decomposition. This can be overcome by rearranging it:
The last term in Equation 11 is a constant value which depends only on the choice of $c$ and $M$, and is independent of the input set. Hence, we can replace $\rho$ accordingly. This leads to a new sum-of-power mapping with:
The new mapping is injective since $\Phi$ is injective and the last term in the above sum is constant. It is also in the form of a sum-decomposition.
For each $M'$, we can follow the reasoning used in the rest of the proof of Theorem 2.9 to note that the mapping is a homeomorphism when restricted to sets of size $M'$ – we denote these restricted functions by $\Phi_{M'}$. Now each $\Phi_{M'}$ is a continuous function into the latent space. We can associate with each $\Phi_{M'}$ a continuous function which maps into the full latent space, with the trailing dimensions filled with the constant value.
Now the domains of the $\Phi_{M'}$ are compact and disjoint. We can therefore find a function which is continuous on their union and agrees with each $\Phi_{M'}$ on its domain.
To complete the proof, let be a connected compact set with . Let be a function on subsets of of size exactly satisfying:
We can choose to be continuous under the notion of continuity in Section A.2. Then is a continuous sum-decomposition of .
Analogously to sum-decomposition, we define the notion of max-decomposition. A function $f$ is max-decomposable if there are functions $\rho$ and $\phi$ such that:
$$f(X) = \rho\left( \max_{x \in X} \phi(x) \right),$$
where the max is taken over each dimension independently in the latent space. Our definitions of decomposability via $Z$ and continuous decomposability also extend to the notion of max-decomposition.
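As a concrete illustration of the definition, a minimal sketch of dimension-wise max-pooling (function names are our own):

```python
def max_decompose(phi, rho, X):
    """Evaluate f(X) = rho(max_{x in X} phi(x)), where the max is taken
    independently in each dimension of the latent space."""
    Z = [phi(x) for x in X]                    # one latent vector per element
    pooled = [max(col) for col in zip(*Z)]     # dimension-wise maximum
    return rho(pooled)

# Example: max itself is max-decomposable via R, with identity phi and rho.
f_max = lambda X: max_decompose(lambda x: [x], lambda z: z[0], X)
assert f_max([3.0, 1.0, 2.0]) == 3.0
```

Like summation, the dimension-wise max is invariant to the order of the elements, so any function of this form is permutation invariant.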
We now state and prove a theorem which is closely related to Section 4, but which establishes limitations on max-decomposition, rather than sum-decomposition.
Let $Z = \mathbb{R}^{M-1}$. Then there exist permutation-invariant continuous functions on sets of size at most $M$ which are not max-decomposable via $Z$.
Note that this theorem rules out any max-decomposition, whether continuous or discontinuous. We specifically demonstrate that summation is not max-decomposable – as with Section 4, this theorem applies to ordinary well-behaved functions.
Consider $f(X) = \sum_{x \in X} x$. Suppose, for contradiction, that $f$ is max-decomposable via $\mathbb{R}^{M-1}$ with functions $\rho$ and $\phi$, and let $X = \{x_1, \ldots, x_M\}$ with $x_i > 0$ and $x_i \neq x_j$ when $i \neq j$.
For $j = 1, \ldots, M - 1$, let $i_j$ be such that:
$$\phi(x_{i_j})_j = \max_{1 \leq i \leq M} \phi(x_i)_j.$$
That is, $x_{i_j}$ attains the maximal value in the $j$-th dimension of the latent space among all $x_i$. Now since $M > M - 1$, there is some $i^*$ such that $i^* \neq i_j$ for any $j$. So now consider the set $X'$ defined by:
$$X' = X \setminus \{x_{i^*}\}.$$
Since $x_{i^*}$ attains the maximum in no dimension of the latent space, we have $\max_{x \in X'} \phi(x) = \max_{x \in X} \phi(x)$, and hence $\rho$ assigns the same value to both sets. But since we chose the $x_i$ such that all were distinct and positive, we have $f(X') \neq f(X)$ by the definition of $f$. This shows that $(\rho, \phi)$ cannot form part of a max-decomposition for $f$. But $(\rho, \phi)$ was arbitrary, so no max-decomposition exists. ∎
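The pigeonhole step in this proof can be illustrated numerically: with $M$ elements embedded into $\mathbb{R}^{M-1}$, some element never attains a per-dimension maximum, so removing it leaves the pooled latent vector unchanged while changing the sum. The embedding below is an arbitrary hypothetical choice; any $\phi$ exhibits the same effect.

```python
import math

M = 4                                   # set size; latent dimension is M - 1 = 3

def phi(x):
    """An arbitrary illustrative embedding R -> R^(M-1)."""
    return [x, x ** 2, math.sin(x)]

X = [0.5, 1.0, 1.5, 2.0]
Z = [phi(x) for x in X]
# indices attaining the per-dimension maxima: at most M - 1 of the M elements
winners = {max(range(M), key=lambda i: Z[i][j]) for j in range(M - 1)}
spare = min(set(range(M)) - winners)    # pigeonhole: some element attains no maximum

X_reduced = [x for i, x in enumerate(X) if i != spare]
pooled = [max(phi(x)[j] for x in X) for j in range(M - 1)]
pooled_reduced = [max(phi(x)[j] for x in X_reduced) for j in range(M - 1)]

assert pooled == pooled_reduced         # the latent max cannot tell the sets apart
assert sum(X) != sum(X_reduced)         # but f(X) = sum of X differs
```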
Appendix C A Continuous Function on $\mathbb{Q}$
This section defines and analyses the function shown in Figure 2, which is continuous on $\mathbb{Q}$ but not on $\mathbb{R}$. The function is defined as the pointwise limit of a sequence of functions $f_k$, illustrated in Figure 4. We proceed as follows:
Define a sequence of functions $f_k$ on the unit interval.
Show that the pointwise limit $g$ is continuous except at points of the form $a / 2^b$ for some integers $a$ and $b$, i.e. except at the dyadic rationals.
Define a function on $\mathbb{R}$ by shifting $g$ by a constant $c$.
Note that this function is continuous except at points of the form $c + a / 2^b$ for some integers $a$ and $b$.
Choose $c$ to be irrational, so that all points of discontinuity are also irrational, to obtain a function which is continuous on $\mathbb{Q}$. (In all figures, we have chosen a fixed irrational value of $c$.)
Informally, we set $f_0(x) = x$, and at iteration $k$, we split the unit interval into $2^k$ even subintervals. In every even-numbered subinterval, we reflect the function horizontally around the midpoint of the subinterval. We may write this formally as follows.
Let $x$ be in the unit interval. Let:
$$m_k(x) = 2^{-k} \left( \lceil 2^k x \rceil - \tfrac{1}{2} \right).$$
That is, $m_k(x)$ is the midpoint of the unique half-open interval containing $x$:
$$\left( a \, 2^{-k}, \ (a + 1) \, 2^{-k} \right], \qquad a = \lceil 2^k x \rceil - 1.$$
Write $b_n(x)$ for the $n$-th digit in the binary expansion of $x$, and write $B_k(x)$ for the number of $n \leq k$ with $b_n(x) = 1$.
Importantly, $b_n(x)$ is ambiguous if $x$ is a dyadic rational, since in this case $x$ has both a terminating and a non-terminating expansion. For consistency with our choice of the upward-closed interval for the definition of $m_k$, we choose the non-terminating expansion in this case.
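The iterated-reflection construction can be sketched numerically. This is our own sketch: it takes the "even-numbered" subintervals to be the 2nd, 4th, ... in 1-based counting, and it sidesteps the dyadic endpoint convention by evaluating only at non-dyadic-boundary points.

```python
def f_k(x, k):
    """Evaluate the k-th iterate of the construction: f_0 is the identity,
    and at level i the graph is reflected around the midpoint of every
    even-numbered subinterval of width 2**-i (2nd, 4th, ..., 1-based)."""
    y = x
    for i in range(k, 0, -1):            # apply the finest reflection first
        width = 2.0 ** -i
        idx = int(x // width)            # level-i subinterval index of x
        if idx % 2 == 1:                 # even-numbered in 1-based counting
            mid = (idx + 0.5) * width
            y = 2.0 * mid - y            # reflect around the subinterval midpoint
    return y

assert f_k(0.3, 0) == 0.3                # f_0 is the identity
assert abs(f_k(0.7, 1) - 0.8) < 1e-12    # 0.7 reflects around the midpoint 0.75
```

Each reflection maps a subinterval to itself, so a reflection at a finer level never changes which coarser subinterval a point lies in; this is why the subinterval indices can all be computed from the original input.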
First, it is clear that the series for