On the Limitations of Representing Functions on Sets

25 January 2019 · Edward Wagstaff et al.

Recent work on the representation of functions on sets has considered the use of summation in a latent space to enforce permutation invariance. In particular, it has been conjectured that the dimension of this latent space may remain fixed as the cardinality of the sets under consideration increases. However, we demonstrate that the analysis leading to this conjecture requires mappings which are highly discontinuous and argue that this is only of limited practical use. Motivated by this observation, we prove that an implementation of this model via continuous mappings (as provided by e.g. neural networks or Gaussian processes) actually imposes a constraint on the dimensionality of the latent space. Practical universal function representation for set inputs can only be achieved with a latent dimension at least the size of the maximum number of input elements.


1 Introduction

Machine learning models have had great success in taking advantage of structure in their input spaces: recurrent neural networks are popular models for sequential data (Sutskever et al., 2014), and convolutional neural networks are the state-of-the-art for many image-based problems (He et al., 2016). Recently, however, models for unstructured inputs in the form of sets have rapidly gained attention (Ravanbakhsh et al., 2016; Zaheer et al., 2017; Qi et al., 2017a; Lee et al., 2018; Murphy et al., 2018; Korshunova et al., 2018).

Importantly, a range of machine learning problems can naturally be formulated in terms of sets, e.g. parsing a scene composed of a set of objects (Eslami et al., 2016; Kosiorek et al., 2018), making predictions from a set of points forming a 3D point cloud (Qi et al., 2017a, b), or training a set of agents in reinforcement learning (Sunehag et al., 2017). Furthermore, attention-based models perform a weighted summation of a set of features (Vaswani et al., 2017; Lee et al., 2018). Hence, understanding the mathematical properties of set-based models is valuable both for set-structured applications and for better understanding the capabilities and limitations of attention-based models.

Many popular machine learning models, including neural networks and Gaussian processes, are fundamentally based on vector inputs (or inputs of higher rank, i.e. matrices and tensors) rather than set inputs. In order to adapt these models for use with sets, we must enforce the property of permutation invariance: the output of the model must not change if the inputs are reordered. Multiple authors, including Ravanbakhsh et al. (2016), Zaheer et al. (2017) and Qi et al. (2017a), have considered enforcing this property using a technique which we term sum-decomposition, illustrated in Figure 1. Mathematically speaking, we say that a function $f$ defined on sets of size $M$ is sum-decomposable via $Z$ if there are functions $\phi: \mathbb{R} \to Z$ and $\rho: Z \to \mathbb{R}$ such that (we take the set elements to be real numbers here for brevity – see Definition 2.2 for the fully general definition):

$$f(X) = \rho\bigg(\sum_{x \in X} \phi(x)\bigg) \qquad (1)$$

We refer to $Z$ here as the latent space. Since summation is permutation-invariant, a sum-decomposition is also permutation-invariant. Ravanbakhsh et al. (2016), Zaheer et al. (2017) and Qi et al. (2017b) have also considered the idea of enforcing permutation invariance using other operations, e.g. max. In this paper we concentrate on a detailed analysis of sum-decomposition, but some of the limitations we discuss also apply when max is used instead of summation.

Figure 1: Illustration of the model structure proposed in several works (Zaheer et al., 2017; Qi et al., 2017a) for representing permutation-invariant functions. The sum operation enforces permutation invariance for the model as a whole. $\phi$ and $\rho$ can be implemented by e.g. neural networks.
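To make the structure in Figure 1 concrete, the following is a minimal sketch of a sum-decomposition model (our illustration in PyTorch; the framework, layer sizes and latent dimension are arbitrary choices, not prescribed by the works cited above). Permutation invariance comes purely from the summation over set elements:

```python
import torch
import torch.nn as nn

class SumDecomposition(nn.Module):
    """f(X) = rho(sum_{x in X} phi(x)) -- permutation-invariant by construction."""

    def __init__(self, latent_dim: int, hidden: int = 64):
        super().__init__()
        # phi: R -> R^N, applied independently to each set element
        self.phi = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
        # rho: R^N -> R, applied to the summed latent representation
        self.rho = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        # x has shape (batch, set_size, 1); summing over dim 1 discards order
        return self.rho(self.phi(x).sum(dim=1))

model = SumDecomposition(latent_dim=8)
x = torch.rand(32, 5, 1)                # 32 sets of 5 scalar elements
x_perm = x[:, torch.randperm(5), :]     # reorder the elements of every set
assert torch.allclose(model(x), model(x_perm), atol=1e-5)
```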

Our main contributions can be summarised as follows.

  1. Recent proofs, e.g. in Zaheer et al. (2017), consider functions on countable domains. We explain why considering countable domains can lead to results of limited practical value (i.e. results which cannot be implemented with a neural network), and why considering continuity on uncountable domains such as $\mathbb{R}$ is necessary. With reference to neural networks, we ground this discussion in the universal approximation theorem, which relies on continuity on uncountable domains.

  2. In contrast to previous work (Zaheer et al., 2017; Qi et al., 2017a), which considers sufficient conditions for universal function representation, we establish a necessary condition for a sum-decomposition-based model to be capable of universal function representation. Additionally, we provide weaker sufficient conditions which imply a stronger version of universality. Specifically, we show that the dimension of the latent space being at least as large as the maximum number of input elements is both necessary and sufficient for universal function representation.

While primarily targeted at neural networks, these results hold for any implementation of sum-decomposition, e.g. using Gaussian processes, as long as it provides universal function approximation for continuous functions. Proofs of all novel results are available in Appendix B.

2 Preliminaries

In this section we review the theorems and proofs on sum-decomposition from Zaheer et al. (2017). We begin by introducing important definitions and the notation used throughout our work. Note that we focus on permutation-invariant functions and do not discuss permutation equivariance, which is also considered in Zaheer et al. (2017).

2.1 Definitions

Definition 2.1.

A function $f: \mathfrak{X}^M \to \mathcal{Y}$ is permutation-invariant if $f(x_1, \ldots, x_M) = f(x_{\pi(1)}, \ldots, x_{\pi(M)})$ for all permutations $\pi$ of $\{1, \ldots, M\}$.

Definition 2.2.

We say that a function $f$ is sum-decomposable if there are functions $\rho$ and $\phi$ such that

$$f(X) = \rho\bigg(\sum_{x \in X} \phi(x)\bigg).$$

In this case, we say that $(\rho, \phi)$ is a sum-decomposition of $f$.

Given a latent space $Z$, we say that $f$ is sum-decomposable via $Z$ when this expression holds for some $\phi$ whose codomain is $Z$, i.e. $\phi: \mathfrak{X} \to Z$ and $\rho: Z \to \mathbb{R}$.

We say that $f$ is continuously sum-decomposable when this expression holds for some continuous functions $\rho$ and $\phi$.

We will also consider sum-decomposability where the inputs to $f$ are vectors rather than sets – in this context, the sum is over the elements of the input vector.

Definition 2.3.

A set $\mathfrak{X}$ is countable if its number of elements, i.e. its cardinality, is smaller than or equal to the number of elements in $\mathbb{N}$. This includes both finite and countably infinite sets; e.g. $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{Q}$, and subsets thereof.

Definition 2.4.

A set $\mathfrak{X}$ is uncountable if its number of elements is greater than the number of elements in $\mathbb{N}$, e.g. $\mathbb{R}$ and certain subsets thereof.

Notation 2.5.

Denote the power set of a set $\mathfrak{X}$ by $2^{\mathfrak{X}}$.

Notation 2.6.

Denote the set of finite subsets of a set $\mathfrak{X}$ by $\mathcal{F}(\mathfrak{X})$.

Notation 2.7.

Denote the set of subsets of a set $\mathfrak{X}$ containing at most $M$ elements by $\mathfrak{X}^{\leq M}$.

Remark.

Throughout, we discuss expressions of the form $\sum_{x \in X} \phi(x)$, where $X$ is a set. Note that care must be taken in interpreting this expression when $X$ is not finite – we discuss this issue fully in Section A.1.

2.2 Background Theorems

Zaheer et al. (2017) consider the two cases where $X$ is a subset of, or drawn from, a countable or an uncountable universe $\mathfrak{X}$. We now outline the theorems and proofs relating to these two cases.

Theorem 2.8 (Countable case).

Let $f: 2^{\mathfrak{X}} \to \mathbb{R}$, where $\mathfrak{X}$ is countable. Then $f$ is permutation-invariant if and only if it is sum-decomposable via $\mathbb{R}$.

Proof.

Since $\mathfrak{X}$ is countable, each $x \in \mathfrak{X}$ can be mapped to a unique element in $\mathbb{N}$ by a function $c: \mathfrak{X} \to \mathbb{N}$. Let $\Phi(X) = \sum_{x \in X} \phi(x)$. If we can choose $\phi$ so that $\Phi$ is injective, then we can set $\rho = f \circ \Phi^{-1}$, giving

$$\rho\bigg(\sum_{x \in X} \phi(x)\bigg) = f(X),$$

i.e. $f$ is sum-decomposable via $\mathbb{R}$.

Now consider $\phi(x) = 4^{-c(x)}$. Under this mapping, each $X$ corresponds to a unique real number expressed in base 4. Therefore $\Phi$ is injective, and the conclusion follows. ∎

Remark.

This construction works for any set size $M$, and even for sets of infinite size. However, it assumes that $X$ is a set with no repeated elements, i.e. multisets are not supported. Specifically, the construction will fail with multisets because $\Phi$ fails to be injective if its domain includes multisets: for instance, a multiset containing four copies of the element with $c(x) = 2$ receives the code $4 \cdot 4^{-2} = 4^{-1}$, colliding with the singleton whose element has $c(x) = 1$. In Section A.3, we extend Theorem 2.8 to also support multisets, with the restriction that infinite sets are no longer supported.
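As a toy illustration of this construction and its failure mode (our sketch; the small universe and the indexing function $c(x) = x$ are chosen for convenience), the following verifies injectivity of $\Phi$ on sets and exhibits the multiset collision just described:

```python
from itertools import combinations

universe = range(1, 11)              # toy countable universe with c(x) = x
phi = lambda x: 4.0 ** (-x)          # phi(x) = 4^(-c(x))
Phi = lambda X: sum(phi(x) for x in X)

# Injectivity on sets: every subset receives a distinct code.
codes = set()
for r in range(11):
    for X in combinations(universe, r):
        code = Phi(X)
        assert code not in codes     # no two subsets collide
        codes.add(code)

# Failure on multisets: 4 * 4^(-2) == 4^(-1), so the multiset {2, 2, 2, 2}
# receives the same code as the set {1}.
assert Phi([2, 2, 2, 2]) == Phi([1])
```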

Theorem 2.9 (Uncountable case).

Let $M \in \mathbb{N}$, and let $f: [0,1]^M \to \mathbb{R}$ be a continuous function. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^{M+1}$.

The proof by Zaheer et al. (2017) of Theorem 2.9 is more involved than that of Theorem 2.8. We do not include it here in full detail, but briefly summarise it below.

  1. Show that the mapping $E: [0,1]^M \to \mathbb{R}^{M+1}$ defined by $E_q(\mathbf{x}) = \sum_{m=1}^{M} (x_m)^q$ for $q = 0, \ldots, M$ is injective and continuous.³ (³In the original proof, $E$ is denoted $\Phi$.)

  2. Show that $E$ has a continuous inverse on its image.

  3. Define $\phi: [0,1] \to \mathbb{R}^{M+1}$ by $\phi(x) = (x^0, x^1, \ldots, x^M)$, so that $E(\mathbf{x}) = \sum_{m=1}^{M} \phi(x_m)$.

  4. Define $\rho: E([0,1]^M) \to \mathbb{R}$ by $\rho = f \circ E^{-1}$.

  5. Note that, by definition of $\phi$ and $\rho$, $(\rho, \phi)$ is a continuous sum-decomposition of $f$ via $\mathbb{R}^{M+1}$. ∎
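To see why the mapping $E$ is injective, note that the power sums of the inputs determine their elementary symmetric polynomials via Newton's identities, and hence determine the inputs themselves as the roots of a polynomial. The sketch below (ours, in NumPy) recovers a set of points from its image under $E$:

```python
import numpy as np

def power_sums(xs, M):
    # E(x): p_q = sum_m (x_m)^q for q = 0, ..., M
    return np.array([np.sum(xs ** q) for q in range(M + 1)])

def recover_set(p):
    # Newton's identities: q * e_q = sum_{i=1}^{q} (-1)^(i-1) e_{q-i} p_i
    M = len(p) - 1
    e = np.zeros(M + 1)
    e[0] = 1.0
    for q in range(1, M + 1):
        e[q] = sum((-1) ** (i - 1) * e[q - i] * p[i] for i in range(1, q + 1)) / q
    # The inputs are the roots of x^M - e_1 x^(M-1) + e_2 x^(M-2) - ...
    coeffs = [(-1) ** q * e[q] for q in range(M + 1)]
    return np.sort(np.roots(coeffs).real)

xs = np.array([0.2, 0.5, 0.9])
assert np.allclose(recover_set(power_sums(xs, 3)), np.sort(xs))
```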

Remark.

Zaheer et al. (2017) conjecture that any continuous permutation-invariant function on $2^{[0,1]}$, the power set of $[0,1]$, is continuously sum-decomposable. In Section 3, we show that this is not possible, and in Section 4 we show that, even if the domain of $f$ is restricted to $\mathcal{F}([0,1])$, the finite subsets of $[0,1]$, a latent dimension $N$ at least as large as the maximum input set size $M$ is a necessary condition for arbitrary continuous functions to be continuously sum-decomposable via $\mathbb{R}^N$. Additionally, we prove that $N = M$ is a sufficient condition – implying, together with the above, that it is not possible to do better than this.

3 The Importance of Continuity

In this section, we argue that continuity is essential to discussions of function representation, that it has been neglected in prior work on permutation-invariant functions, and that this neglect has implications for the strength and generality of existing results.

Intuitively speaking, a function is continuous if, at every point in the domain, the variation of the output can be made arbitrarily small by limiting the variation in the input. Continuity is the reason that, for instance, working to machine precision usually produces sensible results. Truncating to machine precision alters the input to a function slightly, but continuity ensures that the change in output is also slight.


Figure 2: The function $F$ shown here is continuous at every rational point in $[0,1]$. Intuitively, this is because all jumps occur at irrational values, namely at certain fractions of an irrational constant $\lambda$. It defies our intuitions for what continuity should mean, and illustrates the fact that continuity on $\mathbb{Q}$ is a much weaker property than continuity on $\mathbb{R}$. The latter property is required for the universal approximation theorem for neural networks to apply. $F$ is defined and discussed in Appendix C.

In Zaheer et al. (2017), the authors demonstrate that when the universe $\mathfrak{X}$ is a countable set, e.g. the rational numbers, any function $f$ on subsets of $\mathfrak{X}$ is sum-decomposable via $\mathbb{R}$. This is taken as a hopeful indication that sum-decomposability may extend to uncountable domains, e.g. $\mathbb{R}$. Extending to the uncountable case may appear, at first glance, to be a mere formality – we are, after all, ultimately interested in implementing algorithms on finite hardware. Nevertheless, it is not true that a theoretical result for a countably infinite domain must be strong enough for practical purposes. In fact, considering functions on uncountably infinite domains such as $\mathbb{R}$ is of real importance.

Turning specifically to neural networks, the universal approximation theorem says that any continuous function can be approximated arbitrarily well by a neural network, but not that any function can be so approximated (Cybenko, 1989). A similar statement is true for other approximators, such as some Gaussian processes (Rasmussen & Williams, 2006). The notion of continuity required here is specifically that of continuity on compact subsets of $\mathbb{R}^n$.

Crucially, if we wish to work mathematically with continuity in a way that closely matches our intuitions, we must consider uncountable domains. To illustrate this point, consider the rational numbers $\mathbb{Q}$. $\mathbb{Q}$ is dense in $\mathbb{R}$, and it is tempting to think that $\mathbb{Q}$ is therefore "all we need". However, a theoretical guarantee of continuity on $\mathbb{Q}$ is weak, and does not imply continuity on $\mathbb{R}$. The universal approximation theorem for neural networks relies on continuity on $\mathbb{R}$, and we cannot usefully take continuity on $\mathbb{Q}$ as a proxy for this property. Figure 2 shows a function which is continuous on $\mathbb{Q} \cap [0,1]$, and illustrates that a continuous function on $\mathbb{Q}$ may not extend continuously to $\mathbb{R}$. This figure also illustrates that continuity on $\mathbb{Q}$ defies our intuitions about what continuity should mean, and is too weak for the universal approximation theorem for neural networks. We require the stronger notion of continuity on $\mathbb{R}$.

In light of the above, it is clear that continuity is a key property for function representation, and also that there is a crucially important difference between countable and uncountable domains. This raises two problems for Theorem 2.8. First, the theorem does not consider the continuity of the sum-decomposition when the domain has some non-trivial topological structure (e.g. $\mathfrak{X} = \mathbb{Q}$). Second, we still care about continuity on $\mathbb{R}$, and there is no guarantee that this is achievable given only continuity on $\mathbb{Q}$.

In fact, the continuity issue cannot be overcome – we can demonstrate that, in general, the sum-decomposition of Theorem 2.8, which goes via $\mathbb{R}$, cannot be made continuous for $\mathfrak{X} = \mathbb{Q}$:

Theorem 3.1. There exist functions $f: 2^{\mathbb{Q}} \to \mathbb{R}$ such that, whenever $(\rho, \phi)$ is a sum-decomposition of $f$ via $\mathbb{R}$, $\phi$ is discontinuous at every point $q \in \mathbb{Q}$.

We can actually say something more general than the above. Our proof can easily be adapted to demonstrate that if $\Phi$ is injective, or if we want a fixed $\phi$ to suffice for any set size $M$, then $\phi$ can only be continuous at isolated points of the underlying set $\mathfrak{X}$, regardless of whether $\mathfrak{X} = \mathbb{Q}$. I.e., it is not specifically due to the structure of $\mathbb{Q}$ that continuous sum-decomposability fails. In fact, it fails whenever we have a non-trivial topological structure. For functions which we want to model using a neural network, this is worrying.

It is not possible to represent an everywhere-discontinuous $\phi$ with a neural network. We therefore view Theorem 2.8 as being of limited practical relevance, and as not providing a reliable intuition for what should be possible in the uncountable case. We do, however, see this result as mathematically interesting, and have obtained the following result extending it to the case where the universe is uncountable. This result is slightly weaker than the countable case, in that the domain of $f$ can contain arbitrarily large finite sets, but not infinite sets.

Theorem 3.2. Let $f: \mathcal{F}(\mathbb{R}) \to \mathbb{R}$. Then $f$ is sum-decomposable via $\mathbb{R}$.

Once again, the sum-decomposition is highly discontinuous. The limitation that $f$ is not defined on infinite sets cannot be overcome:

Theorem 3.3. If $\mathfrak{X}$ is uncountable, then there exist functions $f: 2^{\mathfrak{X}} \to \mathbb{R}$ which are not sum-decomposable. Note that this holds even if the sum-decomposition is allowed to be discontinuous.

To summarise, we show why considering countable domains can lead to results of limited practical value and why considering continuity on uncountable domains is necessary. We point out that some of the previous work is therefore of limited practical relevance, but regard it as mathematically interesting. In this vein, we extend the analysis of sum-decomposability when continuity is not required.

4 Practical Function Representation

Having established the necessity of considering continuity on $\mathbb{R}$, we now explore the implications for sum-decomposability of permutation-invariant functions. These considerations lead to concrete recommendations for model design and provide theoretical support for elements of current practice in the area. Specifically, we present three theorems whose implications can be summarised as follows.

  1. A latent dimensionality of $M$ is sufficient for representing all continuous permutation-invariant functions on sets of size at most $M$.

  2. To guarantee that all continuous permutation-invariant functions on sets of size $M$ can be represented, a latent dimensionality of at least $M$ is necessary.

The key result, which is the basis of the second statement and which underpins this discussion, is as follows.

Theorem 4.1. Let $N < M$. Then there exist permutation-invariant continuous functions $f: \mathbb{R}^M \to \mathbb{R}$ which are not continuously sum-decomposable via $\mathbb{R}^N$.

Restated in more practical terms, this implies that for a sum-decomposition-based model to be capable of representing arbitrary continuous functions on sets of size $M$, the latent space in which the summation happens must be chosen to have dimension at least $M$. A similar statement is true for the analogous concept of max-decomposition – details are available in Section B.6.

To prove this theorem, we first need to state and prove the following lemma.

Lemma 4.1.

Let $M \in \mathbb{N}$, and suppose $\phi: \mathbb{R} \to \mathbb{R}^N$, $\rho: \mathbb{R}^N \to \mathbb{R}$ are functions such that:

$$\rho\bigg(\sum_{x \in X} \phi(x)\bigg) = \max X \quad \text{for all } X \subseteq \mathbb{R} \text{ with } |X| \leq M. \qquad (2)$$

Now let $\Phi(X) = \sum_{x \in X} \phi(x)$, and write $\Phi_k$ for the restriction of $\Phi$ to sets of size $k$.

Then $\Phi_k$ is injective for all $k \leq M$.

Proof.

We proceed by induction. The base case $k = 1$ is clear: if $\Phi(\{x\}) = \Phi(\{y\})$, then $x = \rho(\Phi(\{x\})) = \rho(\Phi(\{y\})) = y$.

Now let $k < M$, and suppose that $\Phi_k$ is injective. Now suppose there are sets $X, Y$ with $|X| = |Y| = k + 1$ such that $\Phi(X) = \Phi(Y)$. First note that, by (2), we must have:

$$\max X = \rho(\Phi(X)) = \rho(\Phi(Y)) = \max Y. \qquad (3)$$

So now write:

$$X = X' \cup \{\max X\}, \qquad Y = Y' \cup \{\max Y\}, \qquad (4)$$

where $|X'| = k$, and similarly for $Y'$.

But now:

$$\Phi(X') + \phi(\max X) = \Phi(X) = \Phi(Y) = \Phi(Y') + \phi(\max Y).$$

From the central equality, and (3), we have:

$$\Phi(X') = \Phi(Y').$$

Now by injectivity of $\Phi_k$, we have $X' = Y'$. Combining this with (3) and (4), we must have $X = Y$, and so $\Phi_{k+1}$ is injective. ∎

Equipped with this lemma, we can now prove Theorem 4.1.

Proof.

We proceed by contradiction. Suppose that continuous functions $\phi: \mathbb{R} \to \mathbb{R}^N$ and $\rho: \mathbb{R}^N \to \mathbb{R}$ exist satisfying (2). Define $\Phi: \mathbb{R}^M \to \mathbb{R}^N$ by:

$$\Phi(x_1, \ldots, x_M) = \sum_{m=1}^{M} \phi(x_m).$$

Denote the set of all $\mathbf{x} \in \mathbb{R}^M$ with $x_1 < x_2 < \cdots < x_M$ by $O_M$, and let $\Phi_<$ be the restriction of $\Phi$ to $O_M$. Since $\Phi$ is a sum of continuous functions, it is also continuous, and by Lemma 4.1, $\Phi_<$ is injective.

Now note that $O_M$ is a convex open subset of $\mathbb{R}^M$, and is therefore homeomorphic to $\mathbb{R}^M$. Therefore, our continuous injective $\Phi_<$ can be used to construct a continuous injection from $\mathbb{R}^M$ to $\mathbb{R}^N$. But it is well known that no such continuous injection exists when $N < M$. Therefore our decomposition (2) cannot exist. ∎

It is crucial to note that functions for which a lower-dimensional sum-decomposition does not exist need not be "badly-behaved" or difficult to specify. The limitation extends to functions of genuine interest. For our proof, we have specifically demonstrated that even $\max$ is not continuously sum-decomposable via $\mathbb{R}^N$ when $N < M$.

From Theorem 2.9, we also know that for a fixed input set size $M$, any continuous permutation-invariant function is continuously sum-decomposable via $\mathbb{R}^{M+1}$. It is, however, possible to adapt the construction of Zaheer et al. (2017) to strengthen the result in two ways. Firstly, we can perform the sum-decomposition via $\mathbb{R}^M$:

Theorem 4.2 (Fixed set size). Let $f: [0,1]^M \to \mathbb{R}$ be continuous. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^M$.

Secondly, we can deal with variable set sizes $|X| \leq M$:

Theorem 4.3 (Variable set size). Let $f: [0,1]^{\leq M} \to \mathbb{R}$ be continuous. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^M$.

Note that we must take some care over the notion of continuity in this theorem – see Section A.2.

4.1 Discussion

Theorem 4.1 does not imply that all functions require $N \geq M$. Some functions, such as the mean, can be represented via a lower-dimensional latent space; for a fixed set size $M$, for instance, the mean is sum-decomposable via $\mathbb{R}$ with $\phi(x) = x$ and $\rho(s) = s/M$. The statement rather says that if we do not want to impose any limitations on the complexity of the function, the latent space needs to have dimensionality at least $M$.

Theorems 4.2 and 4.3 suggest that sum-decomposition via a latent space with dimension $N = M$ should suffice to model any function. Neural network models in the recent literature, however, deviate from these guidelines in several ways, indicating a disconnect between theory and practice. For example, the models in Zaheer et al. (2017) and Qi et al. (2017a) are considerably more complex than Equation 1, e.g. they apply several permutation-equivariant layers to the input before a permutation-invariant layer.

In light of Theorem 4.1, this disconnect becomes less surprising. We have shown that, for a target function of sufficient complexity, $N = M$ is the bare minimum required for the model to be capable of representing the target function. Achieving this minimum would rely on the parameterisations of $\phi$ and $\rho$ being flexible enough, and on the availability of a suitable optimisation method. In practice, we should not be surprised that more than the bare minimum capacity in our model is required for good performance. Even with $N \geq M$, the model might not converge to the desired solution. At the same time, when we are dealing with real datasets, the training data may contain noise and redundant information, e.g. in the form of correlations between elements in the input, inducing functions of limited complexity that may in fact be representable with $N < M$.

4.2 Illustrative Example

We now use a toy example to illustrate some practical implications of our results. Based on Theorem 4.1, we expect the number of input elements $M$ to have an influence on the required latent dimension $N$, and in particular, we expect that the required latent dimension may increase without bound.

We train a neural network with the architecture presented in Figure 1 to predict the median of a set of values. We choose the median because it is relatively simple but cannot be trivially represented via a sum in a fixed-dimensional latent space, in contrast to e.g. the mean, which is sum-decomposable via $\mathbb{R}$.⁴ (⁴The construction via $\mathbb{R}$ is not entirely trivial for variable set size, but going via $\mathbb{R}^2$ is straightforward, e.g. $\phi(x) = (x, 1)$ and $\rho(s, c) = s/c$.) $\phi$ and $\rho$ are parameterised by multi-layer perceptrons (MLPs). The input sets are randomly drawn from either a uniform, a Gaussian, or a Gamma distribution.

Figure 3: Illustrative toy example: a neural network is trained to predict the median of an unordered set. (a) Performance on median estimation depending on the latent dimension. Different colours depict different set sizes. Each data point is averaged over 500 runs with different seeds. Shaded areas indicate confidence intervals. Coloured dashed lines indicate $N = M$. (b) Extracted 'critical points' from the graph in (a). The coloured data points depict the minimum latent dimension needed for optimal performance for different set sizes.

We vary the latent dimension $N$ and the input set size $M$ to investigate the link between these two variables and the predictive performance. The MLPs parameterising $\phi$ and $\rho$ are given comparatively many layers and hidden units, relative to the simplicity of the task, to ensure that the latent dimension is the bottleneck. Further details are described in Appendix D.
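A minimal version of this experiment might look as follows (our sketch in PyTorch; the layer sizes, optimiser settings and the uniform input distribution are illustrative stand-ins, not the exact configuration of Appendix D):

```python
import torch
import torch.nn as nn

def make_model(latent_dim: int, hidden: int = 128) -> nn.ModuleDict:
    # phi and rho as in Figure 1; capacity is deliberately generous so that
    # the latent dimension is the bottleneck.
    return nn.ModuleDict({
        "phi": nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                             nn.Linear(hidden, hidden), nn.ReLU(),
                             nn.Linear(hidden, latent_dim)),
        "rho": nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                             nn.Linear(hidden, 1)),
    })

def rmse_after_training(set_size: int, latent_dim: int, steps: int = 2000) -> float:
    model = make_model(latent_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    forward = lambda x: model["rho"](model["phi"](x).sum(dim=1))
    for _ in range(steps):
        x = torch.rand(128, set_size, 1)        # sets drawn from U(0, 1)
        loss = ((forward(x) - x.median(dim=1).values) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                       # evaluate on fresh sets
        x = torch.rand(4096, set_size, 1)
        return ((forward(x) - x.median(dim=1).values) ** 2).mean().sqrt().item()

# Sweeping the latent dimension for a fixed set size locates the critical point.
for n in (1, 2, 4, 8, 16):
    print(n, rmse_after_training(set_size=9, latent_dim=n))
```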

Figure 3(a) shows the RMSE as a function of the latent dimension for different input set sizes. We make three observations.

  1. For each set size, the error decreases monotonically with the dimension of the latent space.

  2. Beyond a certain point, increasing the dimension of the latent space does not further reduce the error. We call this the "critical point".

  3. As the set size increases, so does the latent dimension at the critical point.

Figure 3(b) shows the critical points as a function of the input set size, indicating a roughly linear relationship between the two. Note that the critical points occur at $N < M$. This can be explained by the fact that the models do not learn an algorithmic solution for computing the median, but rather learn to estimate it given samples drawn from the specific input distribution seen during training. Furthermore, estimating the median of a distribution, like other functions, renders some of the information in the input redundant. Therefore, the mapping from input to latent space does not need to be injective, allowing a model to solve the task with a smaller value of $N$.

5 Related Work

Much of the recent work on deep learning with unordered sets follows the paradigm discussed in Ravanbakhsh et al. (2016), Zaheer et al. (2017), and Qi et al. (2017a), which leverages the structure illustrated in Figure 1. Zaheer et al. (2017) provide an in-depth theoretical analysis, which is discussed in detail in Section 2. Qi et al. (2017a) also derive a sufficiency condition for universal function approximation. In their proof, however, they set the latent dimension to a value which depends on the error tolerance, i.e. on how closely the target function has to be approximated. As a result, the latent dimension goes to infinity for exact representation. In a similar vein, Herzig et al. (2018) consider permutation-invariant functions on graphs.

A key application domain of set-based methods is the processing of point clouds, as the constituent points do not have an intrinsic ordering. The work by Qi et al. (2017a) on 3D point clouds, one of the first to use a permutation-invariant neural network, is extended in Qi et al. (2017b) by sampling and grouping points in a hierarchical fashion to model the interaction between nearby points in the input space more explicitly. Qi et al. (2018) combine RGB and lidar data for object detection by using image detectors to generate bounding box proposals which are then further processed by a set-based model. Achlioptas et al. (2018) and Yi et al. (2018) show that set-based models can also be used to learn generative models of point clouds.

Vinyals et al. (2015) suggest that even though recurrent networks are universal approximators, the ordering of the input is crucial for good performance. Hence, they propose a model that relies on attention to achieve permutation invariance in order to solve a sorting task. In general, it is worth noting that there exists a connection between the model in Zaheer et al. (2017) and recent attention-based models such as the one proposed in Vaswani et al. (2017). In the latter, the aggregation layer includes weighting parameters computed by a key-query system which is itself permutation-invariant. Since the weighting parameters could be learned to be constant, it is trivial to show that such an attention algorithm is also in principle able to approximate any permutation-invariant function, depending of course on the remaining parts of the architecture. Inspired by inducing point methods, the Set Transformer (Lee et al., 2018) proposes a computationally more efficient attention module and demonstrates better performance on a range of set-based tasks. While stacking several attention modules can capture higher-order dependencies, a more general treatment of this is offered by permutation-invariant, learnable Janossy Pooling (Murphy et al., 2018).

Similar to the methods considered here, Neural Processes (Garnelo et al., 2018b) and Conditional Neural Processes (Garnelo et al., 2018a) also rely on aggregation via summation in order to infer a distribution from a set of data points. Kim et al. (2019) add an attention mechanism to neural processes to improve empirical performance. Generative Query Networks (Eslami et al., 2018; Kumar et al., 2018) can be regarded as an instantiation of neural processes which learns useful representations of 3D scenes from multiple 2D views. Yang et al. (2018) also aggregate information from multiple views to compute representations of 3D objects.

Bloem-Reddy & Teh (2018) and Korshunova et al. (2018) consider exchangeable sequences – sequences of random variables whose joint likelihood is invariant under permutations. Bloem-Reddy & Teh (2018) propose a model including an additional noise variable, leveraging the reparametrisation trick introduced by Kingma & Welling (2014) and Rezende et al. (2014). Korshunova et al. (2018) use RealNVP (Dinh et al., 2016) as a bijective function which sequentially computes the parameters of a Student-t process.

6 Conclusions

This work derives theoretical limitations on the representation of arbitrary permutation-invariant functions on sets via a finite-dimensional latent space. To this end, we demonstrate why statements about continuity must be made on uncountable domains, as opposed to countable domains, in order to be of practical use. Under this constraint, we prove that a latent space whose dimension is at least as large as the maximum input set size is both sufficient and necessary for a model to be capable of universal function representation. The models covered by this analysis are popular for a range of practical applications and can be implemented e.g. by neural networks or Gaussian processes. In future work, we would like to investigate the effect of constructing models with both permutation-equivariant and permutation-invariant modules on the required dimension of the latent space. Examining the implications of using self-attention, e.g. as in Lee et al. (2018), would be of similar interest.

Acknowledgements

The authors would like to thank Sudhanshu Kasewa for proofreading a draft of the paper.

References

Appendix A Mathematical Remarks

A.1 Infinite Sums

Throughout this paper we consider expressions of the following form:

$$\sum_{x \in X} \phi(x), \qquad (5)$$

where $X$ is an arbitrary set. The meaning of this expression is clear when $X$ is finite, but when $X$ is infinite, we must be precise about what we mean.

A.1.1 Countable Sums

We usually denote countable sums as e.g. $\sum_{n=1}^{\infty} x_n$. Note that there is an ordering of the $x_n$ here, whereas there is no ordering in our expression (5). The reason that we consider sums is for their permutation invariance in the finite case, but note that in the infinite case, permutation invariance of sums does not necessarily hold! For instance, the alternating harmonic series $\sum_{n=1}^{\infty} (-1)^{n+1}/n$ can be made to converge to any real number simply by reordering the terms of the sum. For expressions like (5) to make sense, we must require that the sums in question are indeed permutation-invariant. This property is known as absolute convergence, and it is equivalent to the property that the sum of absolute values of the series converges. So for (5) to make sense, we will require everywhere that $\sum_{x \in X} |\phi(x)|$ converges; for any $X$ where this is not the case, we treat the sum as undefined.
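A quick numerical illustration (ours) of this failure of permutation invariance: summed in the usual order, the alternating harmonic series approaches $\ln 2 \approx 0.693$, while taking one positive term followed by two negative terms converges to $\tfrac{1}{2}\ln 2$:

```python
import math

N = 200_000
usual = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# Rearranged: one positive term (odd denominators), then two negative
# terms (even denominators). Same terms, different order.
rearranged, pos, neg = 0.0, 1, 2
for _ in range(N // 3):
    rearranged += 1 / pos - 1 / neg - 1 / (neg + 2)
    pos += 2
    neg += 4

print(usual, math.log(2))           # ~0.6931 in both cases
print(rearranged, math.log(2) / 2)  # ~0.3466 in both cases
```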

A.1.2 Uncountable Sums

It is well known that a sum over an uncountable set of elements only converges if all but countably many elements are 0. Allowing sums over uncountable sets is therefore of little interest, since it essentially reduces to the countable case.

A.2 Continuity of Functions on Sets

We are interested in functions on subsets of $\mathbb{R}$, i.e. on elements of $2^{\mathbb{R}}$, and the notion of continuity on $2^{\mathbb{R}}$ is not straightforward. As a convenient shorthand, we discuss "continuous" functions on $[0,1]^{\leq M}$, but what we mean by this is that, for every $k \leq M$, the function induced by $f$ on $[0,1]^k$ via $(x_1, \ldots, x_k) \mapsto f(\{x_1, \ldots, x_k\})$ is continuous.

A.3 Remark on Theorem 2.8

The proof of Theorem 2.8 from Zaheer et al. (2017) can be extended to deal with multisets, i.e. sets with repeated elements. To that end, we replace the mapping $c: \mathfrak{X} \to \mathbb{N}$ to the natural numbers with a mapping $p: \mathfrak{X} \to \mathbb{P}$ to the prime numbers. We then choose $\phi(x) = \log p(x)$. Therefore,

$$\sum_{x \in X} \phi(x) = \log \prod_{x \in X} p(x), \qquad (6)$$

which, by uniqueness of prime factorisation, takes a unique value for each distinct multiset $X$, therefore extending the validity of the proof to multisets. However, unlike the original series, this choice of $\phi$ diverges with infinite set size.

In fact, it is straightforward to show that there is no function $\phi$ for which $\sum_{x \in X} \phi(x)$ provides a unique mapping for arbitrary multisets while at the same time guaranteeing convergence for infinitely large sets. Assume a function $\phi$ and an arbitrary point $x_0$ such that $\phi(x_0) \neq 0$. Then the multiset $X$ comprising infinitely many copies of $x_0$ would give:

$$\sum_{x \in X} \phi(x) = \sum_{n=1}^{\infty} \phi(x_0) = \pm\infty. \qquad (7)$$
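A quick check of this construction (our sketch; the universe is indexed so that element $x$ is mapped to the $x$-th prime): with $\phi(x) = \log p(x)$, the kind of multiset collision that defeats the base-4 construction of Theorem 2.8 disappears:

```python
import math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # p(x) for a 10-element universe

def Phi(multiset):
    # A sum of logs is the log of a product of primes; by uniqueness of
    # prime factorisation, distinct finite multisets give distinct products.
    return sum(math.log(PRIMES[x - 1]) for x in multiset)

# The multiset {2, 2, 2, 2} no longer collides with the set {1}:
assert not math.isclose(Phi([2, 2, 2, 2]), Phi([1]))
# Element 1 maps to prime 2, element 2 to prime 3, so {1, 2, 2} encodes 2*3*3:
assert math.isclose(Phi([1, 2, 2]), math.log(2 * 3 * 3))
```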

Appendix B Proofs of Theorems

B.1 Proof of Theorem 3.1

Theorem 3.1. There exist functions $f: 2^{\mathbb{Q}} \to \mathbb{R}$ such that, whenever $(\rho, \phi)$ is a sum-decomposition of $f$ via $\mathbb{R}$, $\phi$ is discontinuous at every point $q \in \mathbb{Q}$.

Proof.

Consider $f = \sup$, the least upper bound of $X$. Write $\Phi(X) = \sum_{x \in X} \phi(x)$. So we have:

$$\rho(\Phi(X)) = \sup X.$$

First note that $\phi(q) \neq 0$ for any $q \in \mathbb{Q}$. If we had $\phi(q) = 0$, then we would have, for every $X$:

$$\Phi(X \cup \{q\}) = \Phi(X).$$

But then, for instance, we would have:

$$q - 1 = \sup\{q - 1\} = \rho(\Phi(\{q - 1\})) = \rho(\Phi(\{q - 1, q\})) = \sup\{q - 1, q\} = q.$$

This is a contradiction, so $\phi(q) \neq 0$.

Next, note that $\Phi(X)$ must be finite for every upper-bounded $X$ (since sup is undefined for unbounded $X$, we do not consider such sets, and may allow $\Phi$ to diverge on them). Even if we allowed the domain of $\rho$ to be $\mathbb{R} \cup \{\pm\infty\}$, suppose $\Phi(X) = \pm\infty$ for some upper-bounded set $X$. Then, for any $q > \sup X$:

$$\Phi(X \cup \{q\}) = \Phi(X) = \pm\infty, \quad \text{but} \quad \sup(X \cup \{q\}) = q > \sup X.$$

This is a contradiction, so $\Phi(X)$ is finite for any upper-bounded set $X$.

Now from the above it is immediate that, for any upper-bounded set $X$ and any $\varepsilon > 0$, only finitely many $x \in X$ can have $|\phi(x)| \geq \varepsilon$. Otherwise we can find an infinite upper-bounded set $X'$ with $|\phi(x)| \geq \varepsilon$ for every $x \in X'$, and $\Phi(X') = \pm\infty$.

Finally, let $q \in \mathbb{Q}$. We have already shown that $\phi(q) \neq 0$, and we will now construct a sequence $(q_n)$ with:

$$q_n \to q \quad \text{and} \quad \phi(q_n) \to 0.$$

If $\phi$ were continuous at $q$, we would have $\phi(q_n) \to \phi(q) \neq 0$, so the above two points together will give us that $\phi$ is discontinuous at $q$.

So now, for each $n$, consider the set of rational points which lie within $1/n$ of $q$. Since only finitely many of these points have $|\phi| \geq 1/n$, and the set is infinite, there must be a point $q_n$ with $|q_n - q| \leq 1/n$ and $|\phi(q_n)| < 1/n$. The sequence of such $q_n$ clearly satisfies both points above, and so $\phi$ is discontinuous everywhere. ∎

B.2 Proof of Theorem 3.2

Theorem 3.2. Let $f: \mathcal{F}(\mathbb{R}) \to \mathbb{R}$. Then $f$ is sum-decomposable via $\mathbb{R}$.

Proof.

Define $\Phi(X) = \sum_{x \in X} \phi(x)$. If we can demonstrate that there exists some $\phi$ such that $\Phi$ is injective on $\mathcal{F}(\mathbb{R})$, then we can simply choose $\rho = f \circ \Phi^{-1}$ and the result is proved.

Say that a set $S \subseteq \mathbb{R}$ is finite-sum-distinct (f.s.d.) if, for any distinct finite subsets $A, B \subseteq S$, $\sum_{a \in A} a \neq \sum_{b \in B} b$. Now, if we can show that there is a finite-sum-distinct set $S$ with the same cardinality as $\mathbb{R}$ (which we denote by $\mathfrak{c}$), then we can simply choose $\phi$ to be a bijection from $\mathbb{R}$ to $S$. Then, by finite-sum-distinctness, $\Phi$ will be injective, and the result is proved.

Now recall the statement of Zorn's Lemma: suppose $P$ is a partially ordered set (or poset) in which every totally ordered subset has an upper bound. Then $P$ has a maximal element.

The set of f.s.d. subsets of $\mathbb{R}$ (which we will denote $\mathcal{D}$) forms a poset ordered by inclusion. Supposing that $\mathcal{D}$ satisfies the conditions of Zorn's Lemma, it must have a maximal element, i.e. there is an f.s.d. set $S$ such that any set $T$ with $S \subsetneq T$ is not f.s.d. We claim that any such $S$ has cardinality $\mathfrak{c}$.

To see this, let $S$ be an f.s.d. set with infinite cardinality less than $\mathfrak{c}$ (a maximal $S$ clearly cannot be finite). We will show that $S$ is not maximal. Define the forbidden elements with respect to $S$ to be those elements $y \in \mathbb{R}$ such that $S \cup \{y\}$ is not f.s.d. We denote this set of forbidden elements $F$. Now note that, if $S$ is maximal, then $F \cup S = \mathbb{R}$. In particular, this implies that $|F| = \mathfrak{c}$. But now consider the elements of $F$. By definition of $F$, we have $y \in F$ if and only if there exist finite sets $A, B \subseteq S$ such that $y + \sum_{a \in A} a = \sum_{b \in B} b$. So every $y \in F$ can be written as a sum of finitely many elements of $S$, minus a sum of finitely many other elements of $S$. So there is a surjection from pairs of finite subsets of $S$ to elements of $F$, i.e.:

$$|F| \leq |\mathcal{F}(S) \times \mathcal{F}(S)|.$$

But since $S$ is infinite:

$$|\mathcal{F}(S) \times \mathcal{F}(S)| = |S| < \mathfrak{c}.$$

So $|F \cup S| < \mathfrak{c}$, i.e. $F \cup S \neq \mathbb{R}$, and therefore $S$ is not maximal. This demonstrates that a maximal f.s.d. set must have cardinality $\mathfrak{c}$.

To complete the proof, it remains to show that $\mathcal{D}$ satisfies the conditions of Zorn's Lemma, i.e. that every totally ordered subset (or chain) $\mathcal{C} \subseteq \mathcal{D}$ has an upper bound. So consider:

$$U = \bigcup_{C \in \mathcal{C}} C.$$

We claim that $U$ is an upper bound for $\mathcal{C}$. It is clear that $C \subseteq U$ for every $C \in \mathcal{C}$, so it remains to be shown that $U \in \mathcal{D}$, i.e. that $U$ is f.s.d.

We proceed by contradiction. Suppose that $U$ is not f.s.d. Then:

$$\exists \text{ distinct finite } A, B \subseteq U \text{ such that } \sum_{a \in A} a = \sum_{b \in B} b. \qquad (8)$$

But now, by construction of $U$, for each $x \in A \cup B$ there must be a set $C_x \in \mathcal{C}$ with $x \in C_x$. Let $\mathcal{C}' = \{C_x : x \in A \cup B\}$. $\mathcal{C}'$ is totally ordered by inclusion and all sets contained in it are f.s.d., since it is a subset of $\mathcal{C}$. Since $\mathcal{C}'$ is finite, it has a maximal element $C^*$. By maximality, we have $x \in C^*$ for all $x \in A \cup B$. But then by (8), $C^*$ is not f.s.d., which is a contradiction. So we have that $U$ is f.s.d.

In summary:

  1. $\mathcal{D}$ satisfies the conditions of Zorn's Lemma.

  2. Therefore there exists a maximal f.s.d. set, $S$.

  3. We have shown that any such set must have cardinality $\mathfrak{c}$.

  4. Given an f.s.d. set $S$ with cardinality $\mathfrak{c}$, we can choose $\phi$ to be a bijection between $\mathbb{R}$ and $S$.

  5. Given such a $\phi$, we have that $\Phi$ is injective on $\mathcal{F}(\mathbb{R})$.

  6. Given injective $\Phi$, choose $\rho = f \circ \Phi^{-1}$.

  7. This choice gives us $f(X) = \rho\big(\sum_{x \in X} \phi(x)\big)$ by construction.

This completes the proof. ∎

B.3 Proof of Theorem 3.3

Theorem 3.3. If $\mathfrak{X}$ is uncountable, then there exist functions $f: 2^{\mathfrak{X}} \to \mathbb{R}$ which are not sum-decomposable. Note that this holds even if the sum-decomposition is allowed to be discontinuous.

Proof.

Consider $f = \sup$ (for concreteness, we take $\mathfrak{X} \subseteq \mathbb{R}$).

As discussed above, a sum over uncountably many elements can only converge if all but countably many elements are 0. But as in the proof of Theorem 3.1, we must have $\phi(x) \neq 0$ for every $x \in \mathfrak{X}$. So it is immediate that sum-decomposition is not possible for functions operating on uncountable subsets of $\mathfrak{X}$.

Even restricting to countable subsets is not enough. Some bounded subset $\mathfrak{X}' \subseteq \mathfrak{X}$ must be uncountable. As in the proof of Theorem 3.1, for each $n \in \mathbb{N}$, only finitely many $x \in \mathfrak{X}'$ can have $|\phi(x)| \geq 1/n$. So let $X_n$ be the set of all $x \in \mathfrak{X}'$ with $|\phi(x)| \geq 1/n$. Since $\phi(x) \neq 0$ for all $x$, we know that $\mathfrak{X}' = \bigcup_{n \in \mathbb{N}} X_n$. But this is a countable union of finite sets, and is therefore countable, which is impossible because $\mathfrak{X}'$ is uncountable. ∎

B.4 Proof of Theorem 4.2

Theorem 4.2 (Fixed set size). Let $f: [0,1]^M \to \mathbb{R}$ be continuous. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^M$.

Proof.

The reverse implication is clear. The proof relies on demonstrating that the function $E: [0,1]^M \to \mathbb{R}^M$ defined as follows is a homeomorphism onto its image:

$$E(\mathbf{x}) = \bigg(\sum_{m=1}^{M} x_m, \sum_{m=1}^{M} x_m^2, \ldots, \sum_{m=1}^{M} x_m^M\bigg).$$

Now define $E^+: [0,1]^M \to \mathbb{R}^{M+1}$ by:

$$E^+(\mathbf{x}) = \bigg(M, \sum_{m=1}^{M} x_m, \ldots, \sum_{m=1}^{M} x_m^M\bigg),$$

i.e. the sum-of-powers mapping from the proof of Theorem 2.9, which is a homeomorphism onto its image. Note that $E^+_0(\mathbf{x}) = M$ for all $\mathbf{x}$, so $E^+([0,1]^M) = \{M\} \times E([0,1]^M)$. Since $\{M\}$ is a singleton, these two images are homeomorphic, with a homeomorphism given by:

$$\pi: (M, z_1, \ldots, z_M) \mapsto (z_1, \ldots, z_M).$$

Now by definition, $E = \pi \circ E^+$. Since this is a composition of homeomorphisms, $E$ is also a homeomorphism onto its image. Therefore, with $\phi(x) = (x, x^2, \ldots, x^M)$ and $\rho = f \circ E^{-1}$, $(\rho, \phi)$ is a continuous sum-decomposition of $f$ via $\mathbb{R}^M$. ∎

B.5 Proof of Theorem 4.3

Theorem 4.3 (Variable set size). Let $f: [0,1]^{\leq M} \to \mathbb{R}$ be continuous. Then $f$ is permutation-invariant if and only if it is continuously sum-decomposable via $\mathbb{R}^M$.

Proof.

We use the adapted sum-of-powers mapping from above, denoted in this section by $E$:

$$E(X) = \bigg(\sum_{x \in X} x, \sum_{x \in X} x^2, \ldots, \sum_{x \in X} x^M\bigg),$$

which is shown above to be injective for sets of a fixed size. Without loss of generality, let the set elements lie in $[0,1]$ as in Theorem 2.9.

We separate $E$ into two terms:

$$E(X) = \sum_{k=1}^{m} \phi(x_k) + \sum_{k=m+1}^{M} \phi(x_k), \qquad (9)$$

where $\phi(x) = (x, x^2, \ldots, x^M)$. For an input set $X$ with $m$ elements and $m \leq M$, we say that the set contains $m$ "actual elements" $x_1, \ldots, x_m$ as well as $M - m$ "empty" elements $x_{m+1}, \ldots, x_M$ which are not in fact part of the input set. Those "empty" elements can be regarded as place fillers used when the size of the input set is smaller than $M$.

We map those empty elements to a constant value $c \notin [0,1]$, preserving the injectivity of $E$ for input sets of arbitrary size $m \leq M$:

$$E(X) = \sum_{k=1}^{m} \phi(x_k) + (M - m)\,\phi(c). \qquad (10)$$

Equation 10 is, strictly speaking, no longer a sum-decomposition. This can be overcome by re-arranging it:

$$E(X) = \sum_{k=1}^{m} \big(\phi(x_k) - \phi(c)\big) + M \phi(c). \qquad (11)$$

The last term in Equation 11 is a constant value which depends only on the choice of $c$ and is independent of $X$ and $m$. Hence, we can replace $\phi$ by $\tilde{\phi}(x) = \phi(x) - \phi(c)$. This leads to a new sum-of-powers mapping $\tilde{E}$ with:

$$\tilde{E}(X) = \sum_{k=1}^{m} \tilde{\phi}(x_k) = E(X) - M \phi(c). \qquad (12)$$

$\tilde{E}$ is injective since $E$ is injective, $c \notin [0,1]$, and the last term in the above equation is constant. $\tilde{E}$ is also in the form of a sum-decomposition.

For each $m \leq M$, we can follow the reasoning used in the rest of the proof of Theorem 2.9 to note that $\tilde{E}$ is a homeomorphism when restricted to sets of size $m$ – we denote these restricted functions by $\tilde{E}_m$. Now each $\tilde{E}_m^{-1}$ is a continuous function into $[0,1]^m$. We can associate with each a continuous function $g_m$ which maps into $\mathbb{R}^M$, with the trailing $M - m$ dimensions filled with the value $c$.

Now the domains of the $g_m$ are compact and disjoint, since each $\tilde{E}_m$ is continuous on a compact set and $\tilde{E}$ is injective across set sizes. We can therefore find a function $g$ which is continuous on $\mathbb{R}^M$ and agrees with each $g_m$ on its domain.

To complete the proof, let $K$ be a connected compact set with $[0,1] \cup \{c\} \subseteq K$. Let $\bar{f}$ be a function on subsets of $K$ of size exactly $M$ satisfying:

$$\bar{f}\big(\{x_1, \ldots, x_m, c, \ldots, c\}\big) = f\big(\{x_1, \ldots, x_m\}\big).$$

We can choose $\bar{f}$ to be continuous under the notion of continuity in Section A.2. Then, with $\rho = \bar{f} \circ g$, $(\rho, \tilde{\phi})$ is a continuous sum-decomposition of $f$ via $\mathbb{R}^M$. ∎

B.6 Max-Decomposition

Analogously to sum-decomposition, we define the notion of max-decomposition. A function $f$ is max-decomposable if there are functions $\rho$ and $\phi$ such that:

$$f(X) = \rho\Big(\max_{x \in X} \phi(x)\Big),$$

where the max is taken independently over each dimension of the latent space. Our definitions of decomposability via $Z$ and continuous decomposability also extend to the notion of max-decomposition.
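As a concrete example of a max-decomposition (our illustration, not taken from the paper): the range $\max X - \min X$ is max-decomposable via $\mathbb{R}^2$ with $\phi(x) = (x, -x)$ and $\rho(z_1, z_2) = z_1 + z_2$, since the dimension-wise max recovers $(\max X, -\min X)$:

```python
import numpy as np

phi = lambda x: np.array([x, -x])   # phi: R -> R^2
rho = lambda z: z[0] + z[1]         # rho: R^2 -> R

def f(X):
    # Dimension-wise max over the latent vectors, as in the definition above;
    # the result is (max X, -min X).
    z = np.max(np.stack([phi(x) for x in X]), axis=0)
    return rho(z)

X = [0.3, -1.2, 4.0, 2.5]
assert np.isclose(f(X), max(X) - min(X))
```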

We now state and prove a theorem which is closely related to Theorem 4.1, but which establishes limitations on max-decomposition, rather than sum-decomposition.

Theorem B.1.

Let $N < M$. Then there exist permutation-invariant continuous functions $f: \mathbb{R}^M \to \mathbb{R}$ which are not max-decomposable via $\mathbb{R}^N$.

Note that this theorem rules out any max-decomposition, whether continuous or discontinuous. We specifically demonstrate that summation is not max-decomposable – as with Theorem 4.1, this theorem applies to ordinary, well-behaved functions.

Proof.

Consider $f(X) = \sum_{x \in X} x$. Let $\phi: \mathbb{R} \to \mathbb{R}^N$ be arbitrary, and let $x_1, \ldots, x_M > 0$ be such that $x_i \neq x_j$ when $i \neq j$.

For $n = 1, \ldots, N$, let $i(n) \in \{1, \ldots, M\}$ be such that:

$$\phi_n\big(x_{i(n)}\big) = \max_{1 \leq i \leq M} \phi_n(x_i).$$

That is, $x_{i(n)}$ attains the maximal value in the $n$-th dimension of the latent space among all $x_i$. Now since $N < M$, there is some $j$ such that $j \neq i(n)$ for any $n$. So now consider $X, Y$ defined by:

$$X = \{x_1, \ldots, x_M\}, \qquad (13)$$

$$Y = X \setminus \{x_j\}. \qquad (14)$$

Then:

$$\max_{x \in X} \phi(x) = \max_{x \in Y} \phi(x).$$

But since we chose the $x_i$ to be distinct and positive, we have $f(X) \neq f(Y)$ by the definition of $f$. This shows that $\phi$ cannot form part of a max-decomposition for $f$. But $\phi$ was arbitrary, so no max-decomposition exists. ∎

Appendix C A Continuous Function on $\mathbb{Q} \cap [0,1]$

This section defines and analyses the function $F$ shown in Figure 2, which is continuous on $\mathbb{Q} \cap [0,1]$ but not on $[0,1]$. $F$ is defined as the pointwise limit of a sequence of functions $f_n$, illustrated in Figure 4. We proceed as follows:

  1. Define a sequence of functions $f_n$ on $[0,1]$.

  2. Show that the pointwise limit $f$ is continuous except at points of the form $k/2^n$ for some integers $k$ and $n$, i.e. except at the dyadic rationals.

  3. Define the function $F$ on $[0,1]$ by $F(x) = f(x/\lambda)$.

  4. Note that $F$ is continuous except at points of the form $\lambda k/2^n$ for some integers $k$ and $n$.

  5. Choose $\lambda$ to be irrational, so that all points of discontinuity are also irrational, to obtain a function which is continuous on $\mathbb{Q} \cap [0,1]$. (All figures use one fixed irrational choice of $\lambda$.)

Figure 4: Several iterations of $f_n$.

Informally, we set $f_0(x) = x$, and at iteration $n$, we split the unit interval into $2^n$ subintervals of equal width. In every even-numbered subinterval, we reflect the function horizontally around the midpoint of the subinterval. We may write this formally as follows.

Let $x \in [0,1]$, and let $m_n(x)$ denote the midpoint of the unique half-open interval of the form $\big(k\,2^{-n}, (k+1)\,2^{-n}\big]$ containing $x$.

Write $b_i(x)$ for the $i$-th digit in the binary expansion of $x$, and write $r_n(x)$ for the number of $i \leq n$ with $b_i(x) = 1$.

Importantly, the binary expansion is ambiguous if $x$ is a dyadic rational, since in this case $x$ has both a terminating and a non-terminating expansion. For consistency with our choice of the upward-closed interval for the definition of $m_n$, we choose the non-terminating expansion in this case.

Then:

First, it is clear that the series for