# Universal approximation of symmetric and anti-symmetric functions

We consider universal approximations of symmetric and anti-symmetric functions, which are important for applications in quantum physics, as well as in other scientific and engineering computations. We give constructive approximations with explicit bounds on the number of parameters with respect to the dimension and the target accuracy ϵ. While the approximation still suffers from the curse of dimensionality, to the best of our knowledge, these are the first results in the literature with explicit error bounds. Moreover, we also discuss neural network architectures that can be suitable for approximating symmetric and anti-symmetric functions.


## 1 Introduction

We consider in this study universal approximations of symmetric and anti-symmetric functions. A function $f: \mathbb{R}^{d\times N} \to \mathbb{R}$ is (totally) symmetric if

$$ f(x_{\sigma(1)},\ldots,x_{\sigma(N)}) = f(x_1,\ldots,x_N), \tag{1.1} $$

for any permutation $\sigma \in S(N)$ and elements $x_1,\ldots,x_N \in \mathbb{R}^d$. Similarly, $f$ is (totally) anti-symmetric if

$$ f(x_{\sigma(1)},\ldots,x_{\sigma(N)}) = \mathrm{sgn}(\sigma)\, f(x_1,\ldots,x_N), \tag{1.2} $$

for any permutation $\sigma$, where $\mathrm{sgn}(\sigma)$ is the signature of $\sigma$. Note that the permutation is only applied to the particle indices $i = 1,\ldots,N$, but not to the Cartesian indices $\alpha = 1,\ldots,d$ within each $x_i \in \mathbb{R}^d$. In other words, $f$ is not totally symmetric / anti-symmetric when viewed as a function on $\mathbb{R}^{Nd}$. This is the relevant setup in many applications in scientific and engineering computation. A totally symmetric function is also called a permutation invariant function. A closely related concept is the permutation equivariant mapping $Y$, which satisfies

$$ Y_i(x_{\sigma(1)},\ldots,x_{\sigma(N)}) = Y_{\sigma(i)}(x_1,\ldots,x_N), \quad i = 1,\ldots,N, \tag{1.3} $$

for any permutation $\sigma$ and $X = (x_1,\ldots,x_N)$. Here each component $Y_i \in \mathbb{R}^{d'}$, and $d'$ can be different from $d$.
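As a quick numerical illustration of the invariance and equivariance properties in (1.1) and (1.3), the following sketch checks both over all permutations of a small integer-valued configuration. The specific functions (sum-pooling for invariance, a pooled-context map for equivariance) are our own illustrative choices, not constructions from this paper:

```python
from itertools import permutations

# N = 3 particles in d = 2 dimensions; integer entries keep the checks exact
X = [[1, 4], [7, 2], [3, 9]]

def f_sym(X):
    # sum-pooling over particles is totally symmetric by construction
    return sum(x[0] * x[1] + x[0] ** 2 for x in X)

def Y_equiv(X):
    # a simple equivariant map: each output mixes its own particle with a
    # permutation-invariant pooled context
    ctx = sum(x[0] + x[1] for x in X)
    return [[x[0] + ctx, x[1] * ctx] for x in X]

for s in permutations(range(len(X))):
    Xs = [X[i] for i in s]
    assert f_sym(Xs) == f_sym(X)                      # invariance, Eq. (1.1)
    assert Y_equiv(Xs) == [Y_equiv(X)[i] for i in s]  # equivariance, Eq. (1.3)
```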

Perhaps the most important example of totally symmetric and anti-symmetric functions is the wavefunction of identical particles in quantum mechanics. The indistinguishability of identical particles implies that their wavefunction should be either totally symmetric or totally anti-symmetric upon exchange of the variables associated with any two particles, corresponding to the two categories of particles: bosons and fermions. The former can share quantum states, giving rise to, e.g., the celebrated Bose–Einstein condensate, while the latter cannot share quantum states, as described by the famous Pauli exclusion principle. Such exchange/permutation symmetry also arises in applications other than identical particles in quantum mechanics, mostly in the form of symmetric functions. For instance, in chemistry and materials science, the interatomic potential energy should be invariant under permutation of atoms of the same chemical species. Another example is in computer vision, where the classification of point clouds should not depend on the ordering of the points.

The dimension of symmetric and anti-symmetric functions is usually large in practice, because it is proportional to the number of elements considered. This means that the notorious “curse of dimensionality” is often encountered when computing with such functions. Recent years have witnessed compelling success of neural networks in representing high-dimensional symmetric functions with great accuracy and efficiency; see, e.g., [2, 18, 20, 21] for interatomic potential energies and [13, 14] for 3D classification and segmentation of point sets. For anti-symmetric functions, some recent work in the past year [4, 7, 8, 12] has shown exciting potential for solving the many-electron Schrödinger equation with neural networks. Within the framework of variational Monte Carlo (VMC), for some benchmark systems, the anti-symmetric wavefunctions parameterized by neural networks can be on a par with state-of-the-art wavefunctions constructed from chemical or physical knowledge.

Despite the empirical success of neural networks in approximating symmetric and anti-symmetric functions, theoretical understanding of these approximations is still limited. There are numerous results (see e.g. [1, 5, 9]) concerning the universal approximation of general continuous functions on compact domains. Nevertheless, if the target function is symmetric or anti-symmetric, it is much less understood whether one can achieve universal approximation with a class of functions obeying the same symmetry constraints. Explicitly guaranteeing the symmetric or anti-symmetric property of an ansatz is often mandatory. For example, for electronic systems, if the wavefunction is not constrained within the space of anti-symmetric functions, the resulting variational energy could be lower than the exact ground-state energy and would no longer be physically meaningful. From a machine learning perspective, symmetries can also significantly reduce the number of effective degrees of freedom, improve the efficiency of training, and enhance the generalizability of the model. However, one first needs to make sure that the constrained function class is still universal and sufficiently expressive. Moreover, in many scientific applications, besides being symmetric or anti-symmetric, the target function of interest should be at least continuous. For example, a many-body wavefunction should be continuous so that the local energy, defined through second-order derivatives, is finite everywhere; an interatomic potential energy should be continuous so that the total energy is conserved during molecular dynamics simulations. Therefore we wish the universal ansatz for symmetric/anti-symmetric functions to be continuous as well.

The universal approximation of symmetric functions was partially studied in [19]. However, as will be illustrated in Section 2, the proof of [19] only holds for the case $d = 1$. Moreover, no error estimate is provided for the proposed approximation. More recent work considered the universal approximation of permutation invariant functions and equivariant mappings for $d = 1$ as well. By respecting the permutation symmetry, the resulting neural network involves much fewer parameters than the corresponding dense neural networks. Similarly, when $d = 1$, any anti-symmetric polynomial can be factorized as the product of a Vandermonde determinant and a totally symmetric polynomial. Such a universal representation has been known since Cauchy. However, to our knowledge there is no such simple factorization for $d > 1$ to guide the design of neural network architectures.

In this paper we aim to study the universal approximation of general symmetric and anti-symmetric functions for any $d \ge 1$. We now summarize the main results of the paper. First, for symmetric functions, we give two different proofs of the universality of the ansatz proposed in [19], both with explicit error bounds. The first proof is based on the Ryser formula for permanents, and the second is based on a partition of the state space, as elaborated in Section 2 and Section 3, respectively. Moreover, we also show in Section 4 that, for a general anti-symmetric function with $N$ elements in any dimension $d$, a simple ansatz combining Vandermonde determinants with the ansatz for symmetric functions is universal, with explicit error bounds similar to those in the symmetric case. For the readers' convenience, we summarize below in two theorems the ansatzes that we prove to have the universal approximation property for symmetric and anti-symmetric functions. The approximation rate relies only on the weak condition that the gradient $\nabla f$ is uniformly bounded. Note that neither ansatz requires sorting of the elements, so both can be made continuous functions, in favor of many scientific applications. We conclude in Section 5 with some practical considerations and future directions for further investigation. The proofs of Theorems 1.1 and 1.2 are given in Sections 3 and 4, respectively.

**Theorem 1.1** (Approximation to symmetric functions). Let $f$ be a continuously differentiable, totally symmetric function on $\Omega^N$, where $\Omega$ is a compact subset of $\mathbb{R}^d$. Let $\epsilon > 0$. Then there exist $g: \mathbb{R}^d \to \mathbb{R}^m$ and $\phi: \mathbb{R}^m \to \mathbb{R}$ such that for any $X \in \Omega^N$,

$$ \left| f(X) - \phi\Big( \sum_{j=1}^N g(x_j) \Big) \right| \le \epsilon, $$

where $m$, the number of feature variables, is bounded from above by

$$ O\big( 2^N (Nd)^{Nd/2} \big/ (\epsilon^{Nd} N!) \big). \tag{1.4} $$

**Theorem 1.2** (Approximation to anti-symmetric functions). Let $f$ be a continuously differentiable, totally anti-symmetric function on $\Omega^N$, where $\Omega$ is a compact subset of $\mathbb{R}^d$. Let $\epsilon > 0$. Then there exist permutation equivariant mappings $Y_k: \mathbb{R}^{d\times N} \to \mathbb{R}^N$ and permutation invariant functions $U_k$, $k = 1,\ldots,K$, such that for any $X \in \Omega^N$,

$$ \left| f(X) - \sum_{k=1}^K U_k(X) \prod_{i<j} \big( [Y_k(X)]_i - [Y_k(X)]_j \big) \right| \le \epsilon, $$

where $K$ is bounded from above by

$$ O\big( (Nd)^{Nd/2} \big/ (\epsilon^{Nd} N!) \big). $$

For each $k$, there exist $g_k: \mathbb{R}^d \to \mathbb{R}^{m_k}$ and $\phi_k: \mathbb{R}^{m_k} \to \mathbb{R}$, with $m_k = O(2^N)$, such that for any $X \in \Omega^N$,

$$ U_k(X) = \phi_k\Big( \sum_{j=1}^N g_k(x_j) \Big). $$

## 2 Totally symmetric functions

Let $X = (x_1,\ldots,x_N)$, with $x_i \in \Omega \subset \mathbb{R}^d$. Consider a totally symmetric function $f(X)$. It is proved in [19] that when $d = 1$ (therefore each $x_i$ is a scalar), the following universal representation holds:

$$ f(X) = \phi\Big( \sum_{j=1}^N g(x_j) \Big) \tag{2.1} $$

for continuous functions $g: \mathbb{R} \to \mathbb{R}^{N+1}$ and $\phi: \mathbb{R}^{N+1} \to \mathbb{R}$. For completeness we briefly recall the proof.

Let $\Omega = [0,1]$. Define the mapping $E: \Omega^N \to \mathbb{R}^{N+1}$, with each component function defined as

$$ z_q = E_q(X) := \sum_{n=1}^N (x_n)^q, \quad q = 0, 1, \ldots, N. $$

It can be shown that the mapping $E$ is a homeomorphism between the symmetrized domain and its image in $\mathbb{R}^{N+1}$ [19]. Hence, if we let $\phi = f \circ E^{-1}$ and $g(x) = (x^0, x^1, \ldots, x^N)$, then we have

$$ f(X) = \phi\Big( \sum_{j=1}^N g(x_j) \Big). $$

Here the number of feature variables is $m = N + 1$ by construction. The main difficulty associated with this construction is that the mapping $\phi = f \circ E^{-1}$ can be arbitrarily complex to approximate in practice. In fact the construction is similar in flavor to the Kolmogorov–Arnold representation theorem, which provides a universal representation for multivariate continuous functions, but without any a priori guarantee of accuracy with respect to the number of parameters.

In order to generalize to the case $d > 1$, the proof of [19, Theorem 9] in fact suggests an alternative proof for the case $d = 1$, as follows. Using the Stone–Weierstrass theorem, a totally symmetric function can be approximated by a polynomial of high degree. After symmetrization, this polynomial becomes a totally symmetric polynomial. By the fundamental theorem of symmetric polynomials, any symmetric polynomial can be represented as a polynomial of the elementary symmetric polynomials. In other words, for any symmetric polynomial $P(X)$, we have

$$ P(X) = Q\big( e_1(X), \ldots, e_N(X) \big), $$

where $Q$ is some polynomial, and the elementary symmetric polynomials are defined as

$$ e_k(X) = \sum_{1 \le j_1 < j_2 < \cdots < j_k \le N} x_{j_1} x_{j_2} \cdots x_{j_k}. $$

Using the Newton–Girard formula, an elementary symmetric polynomial can be represented with the power sums by

$$ e_k(X) = \frac{1}{k!} \begin{vmatrix} E_1(X) & 1 & 0 & 0 & \cdots & 0 \\ E_2(X) & E_1(X) & 2 & 0 & \cdots & 0 \\ E_3(X) & E_2(X) & E_1(X) & 3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ E_{k-1}(X) & E_{k-2}(X) & E_{k-3}(X) & E_{k-4}(X) & \cdots & k-1 \\ E_k(X) & E_{k-1}(X) & E_{k-2}(X) & E_{k-3}(X) & \cdots & E_1(X) \end{vmatrix}. \tag{2.2} $$
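Expanding the determinant in (2.2) along its last row yields the standard Newton–Girard recurrence $k\, e_k = \sum_{i=1}^{k} (-1)^{i-1} e_{k-i} E_i$. The following sketch (with our own toy input; exact rational arithmetic avoids rounding) checks this recurrence against the direct definition of $e_k$:

```python
from itertools import combinations
from math import prod
from fractions import Fraction

def elementary(xs, k):
    # e_k(X): sum over all k-subsets of the product of their elements
    return sum(prod(c) for c in combinations(xs, k))

def power_sum(xs, q):
    # E_q(X) = sum_n x_n^q
    return sum(x ** q for x in xs)

def elementary_via_newton(xs, k):
    # Newton-Girard recurrence: m * e_m = sum_{i=1}^m (-1)^{i-1} e_{m-i} E_i,
    # which is what the determinant formula (2.2) expands to
    e = [Fraction(1)]  # e_0 = 1
    for m in range(1, k + 1):
        s = sum((-1) ** (i - 1) * e[m - i] * power_sum(xs, i)
                for i in range(1, m + 1))
        e.append(s / m)
    return e[k]

xs = [Fraction(1), Fraction(2), Fraction(3), Fraction(5)]
assert all(elementary(xs, k) == elementary_via_newton(xs, k) for k in range(5))
```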

Now let $g$ be the same function defined in the previous proof, and define $\phi$ in terms of $Q$ and the determinant computation (2.2). We can obtain a polynomial approximation in the form of

$$ P(X) = \phi\Big( \sum_{j=1}^N g(x_j) \Big) = f(X) + O(\epsilon). $$

Here the $O(\epsilon)$ error is due to the Stone–Weierstrass approximation. Letting $\epsilon \to 0$ we obtain the desired representation.

However, it is in fact not straightforward to extend the two proofs above to the case $d > 1$. For the first proof, when each $x_i \in \mathbb{R}^d$ we cannot define an ordering with which to construct the homeomorphism $E$. For the second proof, in the case $d > 1$ a monomial (before symmetrization) takes the form

$$ \prod_{i=1}^N \prod_{\alpha=1}^d x_{i,\alpha}^{\gamma_{i,\alpha}}, \quad \gamma_{i,\alpha} \in \mathbb{N}. $$

Note that the symmetrization is only performed with respect to the particle index $i$, but not the component index $\alpha$. Hence the symmetrized monomial is not a totally symmetric function with respect to all $Nd$ variables, and the fundamental theorem of symmetric polynomials does not apply.

Below we prove that the representation (2.1) indeed holds for any $d \ge 1$, and therefore we complete the proof of Theorem 1.1. For technical reasons to be illustrated below, and without loss of generality, we shift the domain and assume that all components of the $x_i$'s are positive. Following the Stone–Weierstrass theorem and after symmetrization, $f$ can be approximated by a symmetric polynomial. Every symmetric polynomial can be written as a linear combination of symmetrized monomials of the form

$$ \mathrm{perm}\big( [f_i(x_j)] \big) = \sum_{\sigma \in S(N)} \prod_{i=1}^N f_i(x_{\sigma(i)}). $$

Here each $f_i$ is a monomial in the components of a single element, and $\mathrm{perm}(A)$ stands for the permanent of the matrix $A$.

Following the Ryser formula for representing a permanent (noting that the permanent is invariant under transposition), we have

$$ \mathrm{perm}\big([f_i(x_j)]\big) = (-1)^N \sum_{S \subseteq \{1,\ldots,N\}} (-1)^{|S|} \prod_{i=1}^N \sum_{j \in S} f_j(x_i) = (-1)^N \sum_{S \subseteq \{1,\ldots,N\}} (-1)^{|S|} e^{\sum_{i=1}^N \log\left( \sum_{j \in S} f_j(x_i) \right)}. $$
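Ryser's formula replaces the $N!$-term definition of the permanent with a $2^N$-term inclusion–exclusion sum, which is exactly the structure exploited above. A minimal self-contained check on a toy matrix of our own choosing:

```python
from itertools import chain, combinations, permutations
from math import prod

def perm_bruteforce(A):
    # permanent by definition: sum over all N! permutations
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def perm_ryser(A):
    # Ryser formula:
    # perm(A) = (-1)^n sum_{S subset of columns} (-1)^{|S|} prod_i sum_{j in S} A[i][j]
    # (the empty subset contributes 0, matching the log-sum form in the text)
    n = len(A)
    total = 0
    for S in chain.from_iterable(combinations(range(n), r) for r in range(n + 1)):
        total += (-1) ** len(S) * prod(sum(A[i][j] for j in S) for i in range(n))
    return (-1) ** n * total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert perm_ryser(A) == perm_bruteforce(A)
```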

Here we have used that $\sum_{j \in S} f_j(x_i) > 0$ for all $i$ and nonempty $S$, which is guaranteed by the domain shift. Now we write down the approximation using a symmetric polynomial, which is a linear combination of symmetrized monomials:

$$ f(X) + O(\epsilon) = P(X) = \sum_{l=1}^L c^{(l)} \,\mathrm{perm}\big([f^{(l)}_i(x_j)]\big) = (-1)^N \sum_{l=1}^L c^{(l)} \sum_{S \subseteq \{1,\ldots,N\}} (-1)^{|S|} e^{\sum_{i=1}^N \log\left( \sum_{j \in S} f^{(l)}_j(x_i) \right)}. $$

Define $g: \mathbb{R}^d \to \mathbb{R}^{L \cdot 2^N}$ with each component function

$$ g^{(l)}_S(x) = \log\Big( \sum_{j \in S} f^{(l)}_j(x) \Big). $$

Then we define $\phi: \mathbb{R}^{L \cdot 2^N} \to \mathbb{R}$ given by

$$ \phi(Y) = (-1)^N \sum_{l=1}^L c^{(l)} \sum_{S \subseteq \{1,\ldots,N\}} (-1)^{|S|} e^{Y^{(l)}_S}, $$

where $Y^{(l)}_S$ is the $(l,S)$-th component of $Y$. We now have an approximation of the target totally symmetric function in the desired form

$$ f(X) = \phi\Big( \sum_{j=1}^N g(x_j) \Big) + O(\epsilon), $$

and we finish the proof. Here the number of feature variables is $m = L \cdot 2^N$, where $L$ is the number of symmetrized monomials used in the approximation.

## 3 Totally symmetric function, revisited

In this section, we prove Theorem 1.1 for any $d \ge 1$. In particular, this proof is more explicit and does not rely on the Stone–Weierstrass theorem. The main idea is to partition the space into a lattice and use a piecewise-constant function to approximate the target permutation invariant function.

Again without loss of generality we assume $\Omega = [0,1]^d$. We then partition the domain into a lattice $\mathbb{L}$ with grid size $\delta$ along each direction. Due to symmetry, we can assign a lexicographical order $\preceq$ to all lattice points $z \in \mathbb{L}$: that is, $z \prec z'$ if $z_\alpha < z'_\alpha$ at the first index $\alpha$ where $z$ and $z'$ differ. We define the tensor product of the $N$ copies of the lattice as $\mathbb{L}^N$, and a wedge of $\mathbb{L}^N$ is defined accordingly as

$$ {\textstyle\bigwedge^N}\mathbb{L} := \big\{ Z = (z_1,\ldots,z_N) \ \big|\ z_1 \preceq z_2 \preceq \cdots \preceq z_N \big\}. $$

For each $Z \in \bigwedge^N\mathbb{L}$, a corresponding union of boxes in $\Omega^N$ can be written as

$$ B_{Z;\delta} = \bigcup_{\sigma \in S(N)} \big\{ X \ \big|\ x_i = z_{\sigma(i)} + \delta u_i,\ u_i \in [0,1]^d \big\}. $$

By construction, the piecewise-constant approximation to the target permutation invariant function is then

$$ f(X) = \sum_{Z \in \bigwedge^N\mathbb{L}} f(Z)\, \mathbb{1}_{B_{Z;\delta}}(X) + O(\delta \sqrt{Nd}). $$

Here we have assumed that the gradient $\nabla f$ is uniformly bounded on $\Omega^N$. Note that the indicator function $\mathbb{1}_{B_{Z;\delta}}$ is permutation invariant and can be rewritten as

$$ \begin{aligned} \mathbb{1}_{B_{Z;\delta}}(X) &= \frac{1}{C_Z} \sum_{\sigma \in S(N)} \mathbb{1}_{\{X \,|\, x_i = z_{\sigma(i)} + \delta u_i,\ u_i \in [0,1]^d,\ 1 \le i \le N\}}(X) \\ &= \frac{1}{C_Z} \sum_{\sigma \in S(N)} \prod_{i=1}^N \mathbb{1}_{\{x \,|\, x = z_{\sigma(i)} + \delta u,\ u \in [0,1]^d\}}(x_i) \\ &= \frac{1}{C_Z} \sum_{\sigma \in S(N)} \prod_{i=1}^N \mathbb{1}_{\{x \,|\, x = z_i + \delta u,\ u \in [0,1]^d\}}(x_{\sigma(i)}) = \frac{1}{C_Z}\, \mathrm{perm}\big( [f^Z_i(x_j)] \big), \end{aligned} $$

where $f^Z_i(x) = \mathbb{1}_{\{x \,|\, x = z_i + \delta u,\ u \in [0,1]^d\}}(x)$. The constant $C_Z$ takes care of the repetition that can happen depending on $Z$. When all elements in $Z$ are distinct, the box that $X$ lives in corresponds to only one permutation, so $C_Z = 1$ in this case. If, say, $z_1 = z_2$ with all other elements distinct, then the box that $X$ lives in may have two corresponding permutations differing by a swap of the first two elements; in this case $C_Z = 2$ accounts for the arising repetition. Next we apply the Ryser formula to the permanent,

$$ \mathbb{1}_{B_{Z;\delta}}(X) = \frac{1}{C_Z}\, \mathrm{perm}\big([f^Z_i(x_j)]\big) = \frac{(-1)^N}{C_Z} \sum_{S \subseteq \{1,\ldots,N\}} (-1)^{|S|} e^{\sum_{i=1}^N \log\left( \sum_{j \in S} f^Z_j(x_i) \right)}. $$

We can now define $g: \mathbb{R}^d \to \mathbb{R}^{|\bigwedge^N\mathbb{L}| \cdot 2^N}$, where each component function is given by

$$ g^Z_S(x) = \log\Big( \sum_{j \in S} f^Z_j(x) \Big), $$

and we define $\phi$ as

$$ \phi(Y) = \sum_{Z \in \bigwedge^N\mathbb{L}} \frac{(-1)^N f(Z)}{C_Z} \sum_{S \subseteq \{1,\ldots,N\}} (-1)^{|S|} e^{Y^Z_S}. \tag{3.1} $$

Since the $f^Z_j$'s are indicator functions, we naturally have $e^{Y^Z_S} = \prod_{i=1}^N \sum_{j \in S} f^Z_j(x_i)$. In the case when $\sum_{j \in S} f^Z_j(x_i) = 0$ for some $i$, we have $g^Z_S(x_i) = -\infty$ under the convention $\log 0 = -\infty$. In this case $e^{Y^Z_S} = 0$, and therefore its contribution to $\phi$ vanishes as desired. In summary, we arrive at the universal approximation

$$ f(X) = \phi\Big( \sum_{j=1}^N g(x_j) \Big) + O(\delta\sqrt{Nd}). \tag{3.2} $$

Due to the explicit tabulation strategy, the number of terms needed in the approximation (3.2) can be counted as follows. The number of points in $\bigwedge^N\mathbb{L}$ is approximately $\delta^{-Nd}/N!$, where the $N!$ comes from the lexicographic ordering. Note that formally as $N \to \infty$, $\delta^{-Nd}/N!$ can vanish for fixed $\delta$. However, this means that the number of elements $N$ has exceeded the number of grid points in $\mathbb{L}$, which is unreasonable. So we should at least have $\delta^{-d} \gtrsim N$. In order to obtain an $\epsilon$-close approximation of $f$, we require $\delta\sqrt{Nd} \sim \epsilon$. When $\delta \sim \epsilon/\sqrt{Nd}$, the number of points in $\bigwedge^N\mathbb{L}$ becomes $(Nd)^{Nd/2}/(\epsilon^{Nd} N!)$. For each $Z$, the number of terms to be summed over in Eq. (3.1) is $2^N$. Therefore, in order to obtain an $\epsilon$-approximation, the number of feature variables is given by Eq. (1.4). This proves Theorem 1.1.
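To make the tabulation strategy concrete, here is a small illustrative sketch (all concrete choices, the target $f$, the grid size, and the lookup-based $\phi$, are ours, and $\phi$ evaluates $f$ at a representative lattice configuration rather than through Ryser's formula): each particle is binned into a one-hot lattice feature $g$, the features are sum-pooled, and $\phi$ recovers the representative sorted configuration $Z$ from the pooled counts, realizing $f(X) \approx \phi\big(\sum_j g(x_j)\big)$ with $O(\delta)$ accuracy:

```python
import math

N, delta = 3, 0.02          # particles (d = 1 here) and grid size
bins = round(1 / delta)     # lattice resolution on [0, 1)

def f(xs):
    # a smooth totally symmetric target (our illustrative choice)
    return sum(x * x for x in xs) + math.prod(xs)

def g(x):
    # one-hot lattice feature: which grid cell does x fall into?
    v = [0] * bins
    v[min(int(x / delta), bins - 1)] = 1
    return v

def phi(counts):
    # counts[k] = number of particles in cell k; rebuild the representative
    # lattice configuration Z (cell centers, automatically sorted) and look up f
    zs = [(k + 0.5) * delta for k, c in enumerate(counts) for _ in range(c)]
    return f(zs)

def ansatz(xs):
    pooled = [sum(col) for col in zip(*(g(x) for x in xs))]
    return phi(pooled)

xs = [0.31, 0.74, 0.11]
assert abs(ansatz(xs) - f(xs)) < 0.1              # O(delta sqrt(Nd)) error
assert ansatz([0.74, 0.11, 0.31]) == ansatz(xs)   # exactly permutation invariant
```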

This is of course a very pessimistic bound, and we discuss the practical implications for designing neural network architectures in Section 5. We remark that one may expect that, following the same tabulation strategy, one could also provide a quantitative bound for the construction via the homeomorphism mapping $E$ as discussed in Section 2. However, the difference is that our bound relies only on the smoothness of the original function $f$, and hence on the bound for $\nabla f$. On the other hand, the mapping $E^{-1}$, and hence $\phi = f \circ E^{-1}$, can be arbitrarily pathological, and therefore it is not even clear how to obtain a double-exponential type of bound as discussed above. We also remark that if the indicator functions $f^Z_i$ and $\mathbb{1}_{B_{Z;\delta}}$ in the proof are replaced by proper smooth cutoff functions on the corresponding domains, the ansatz in Eq. (3.2) can be made continuous, to accommodate the applications that require continuity.

## 4 Totally anti-symmetric functions

Now we consider an anti-symmetric function $f$. Similar to the symmetric case, when $d = 1$ and $f$ is an anti-symmetric polynomial of $X$, it is known that

$$ f(x_1,\ldots,x_N) = U(X) \prod_{i<j} (x_i - x_j), $$

where $U(X)$ is a symmetric polynomial and the second factor is a Vandermonde determinant. This was first proved by Cauchy, who of course also first introduced the concept of determinant in its modern sense.

Our aim is to generalize this construction to $d > 1$. Without loss of generality we again assume $\Omega = [0,1]^d$. The construction of the ansatz is parallel to the totally symmetric case. Recall the lattice $\mathbb{L}$, the wedge $\bigwedge^N\mathbb{L}$, and the corresponding union of boxes $B_{Z;\delta}$. In the totally symmetric case, we make a piecewise-constant approximation over the union of boxes. For the anti-symmetric case, we need to insert an anti-symmetric factor (with respect to $X$):

$$ f(X) = \sum_{Z \in \bigwedge^N\mathbb{L}} f(Z)\, \frac{\psi_Z(X)}{\psi_Z(Z)}\, \mathbb{1}_{B_{Z;\delta}}(X) + O(\delta\sqrt{Nd}), \tag{4.1} $$

where $\psi_Z$ is a totally anti-symmetric function which may be chosen depending on $Z$. Note that in principle any anti-symmetric function can be chosen as long as $\psi_Z(Z)$ is bounded away from $0$. Motivated by the one-dimensional result, we focus on constructions of $\psi_Z$ given by the Vandermonde determinant. Given a permutation equivariant map $Y^Z: \mathbb{R}^{d\times N} \to \mathbb{R}^N$, we consider

$$ \psi_Z(X) = \prod_{i<j} \big( [Y^Z(X)]_i - [Y^Z(X)]_j \big). \tag{4.2} $$

It is clear that, due to the permutation equivariance of $Y^Z$, the $\psi_Z$ so defined is anti-symmetric. It thus suffices to choose the map such that $\psi_Z(Z) \neq 0$. Observe that due to anti-symmetry $f(Z) = 0$ whenever $z_i = z_j$ for some $i \neq j$. In particular, this means that we only need to consider those $Z$ whose elements $z_i$ are all distinct.

We will consider two concrete constructions below, corresponding to different choices of the equivariant map. The first achieves equivariance through sorting in an intuitive way, for the purpose of illustration. The second is a linear transformation showing that the ansatz proved in Theorem 1.2 can be made continuous, after replacing the indicator function with a proper smooth cutoff function.

Construction 1. As the $z_i$'s are distinct, for $X \in B_{Z;\delta}$ there exists a unique $\sigma \in S(N)$ such that

$$ X \in \big\{ X' \ \big|\ x'_i = z_{\sigma(i)} + \delta u_i,\ u_i \in [0,1]^d \big\}. \tag{4.3} $$

Denote this unique permutation as $\sigma_{Z,X}$, and take the permutation equivariant map $Y^Z$ such that

$$ y^Z_i(X) = \sigma_{Z,X}(i). \tag{4.4} $$

Since $\sigma_{Z,X}$ gives the sorting of $X$ according to $Z$, it is easy to see that the above $Y^Z$ is equivariant, and

$$ \psi_Z(X) = \prod_{i<j} \big( \sigma_{Z,X}(i) - \sigma_{Z,X}(j) \big). $$

Note that $\psi_Z(Z) = \prod_{i<j} (i - j) \neq 0$, so we arrive at

$$ f(X) = \sum_{Z \in \bigwedge^N\mathbb{L}} U_Z(X) \prod_{i<j} \big( y^Z_i(X) - y^Z_j(X) \big) + O(\delta\sqrt{Nd}), $$

with

$$ U_Z(X) = \frac{f(Z)}{\prod_{i<j}(i-j)}\, \mathbb{1}_{B_{Z;\delta}}(X). $$

Construction 2. Our second construction is based on the choice of a linear permutation equivariant map $Y^Z$ given by

$$ [Y^Z(X)]_i = (a^Z)^\top x_i. $$

The corresponding Vandermonde determinant is then given by

$$ \psi_Z(X) = \prod_{i<j} (a^Z)^\top (x_i - x_j). $$

The resulting approximation to $f$ is

$$ f(X) = \sum_{Z \in \bigwedge^N\mathbb{L}} U_Z(X) \prod_{i<j} (a^Z)^\top (x_i - x_j) + O(\delta\sqrt{Nd}), $$

where we denote the symmetric part

$$ U_Z(X) = \frac{f(Z)}{\prod_{i<j} (a^Z)^\top (z_i - z_j)}\, \mathbb{1}_{B_{Z;\delta}}(X). $$

It thus suffices to choose $a^Z$ such that $\prod_{i<j} (a^Z)^\top (z_i - z_j) \neq 0$. As the set $\{ z_i - z_j \}_{i<j}$ consists of a discrete set of nonzero vectors, such an $a^Z$ exists, since $\bigcup_{i<j} \{ a \mid a^\top (z_i - z_j) = 0 \}$ has measure zero in $\mathbb{R}^d$.
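A quick numerical check of Construction 2 (with a random direction standing in for $a^Z$ and a random configuration standing in for $X$; both are our own toy choices): the projected Vandermonde factor flips sign under a particle exchange, and a generic direction avoids the measure-zero set where it vanishes.

```python
import random

def psi(a, X):
    # psi_Z(X) = prod_{i<j} a^T (x_i - x_j), with a playing the role of a^Z;
    # equivariance of x -> a^T x makes this anti-symmetric in the particles
    proj = [sum(ai * xi for ai, xi in zip(a, x)) for x in X]
    out = 1.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            out *= proj[i] - proj[j]
    return out

random.seed(0)
d, N = 3, 4
a = [random.gauss(0, 1) for _ in range(d)]               # a generic direction
X = [[random.random() for _ in range(d)] for _ in range(N)]

swapped = [X[1], X[0]] + X[2:]                           # exchange two particles
assert abs(psi(a, X) + psi(a, swapped)) < 1e-12          # sign flip: anti-symmetry
assert psi(a, X) != 0.0                                  # generic a: nonvanishing
```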

In both constructions, $U_Z$ is a scaled version of the characteristic function $\mathbb{1}_{B_{Z;\delta}}$, and can thus be treated similarly as in the totally symmetric case based on Ryser's formula. As both constructions depend on the same tabulation strategy as the totally symmetric case, the number of terms involved in the sum over $Z$ is the same too, which we will not repeat. The total number of feature variables is the same as that in Eq. (1.4), due to the use of the indicator function. This proves Theorem 1.2.

## 5 Practical considerations and discussion

In this paper we study the universal approximation of symmetric and anti-symmetric functions. Following the line of learning theory, many questions remain open. For instance, the impact of symmetry on the generalization error remains unclear. This requires an in-depth understanding of the suitable function class for symmetric and anti-symmetric functions, such as a suitably adapted Barron space. Note that a recent work investigates the approximation and generalization bounds of permutation invariant deep neural networks in the general case $d \ge 1$, however with two limitations. The first is a rather strong assumption that the target function is Lipschitz with respect to the $\ell^\infty$ norm (rather than the usual Euclidean norm). This can be a severe limitation as the dimension (both $N$ and $d$) increases. Indeed, under the same Lipschitz assumption on $f$, the number of feature variables in our Theorem 1.1 can be improved accordingly. The second limitation is that the ansatz proposed there introduces sorting layers to represent the sorting procedure at the first step. The sorting procedure brings discontinuity, which leads to serious problems in some scientific applications, as explained in the introduction.

For anti-symmetric functions, an ansatz suitable for practical computation is of particular interest, since one main motivation for studying anti-symmetric functions is to integrate neural network-based wavefunctions into VMC. The Vandermonde determinant considered here is proved to provide a simple yet universal ansatz. Its universality suggests considering the following trial wavefunction in VMC:

$$ f(X) = \sum_{k=1}^K U_k(X) \prod_{i<j} \big( [Y_k(X)]_i - [Y_k(X)]_j \big), $$

where $U_k$ is symmetric and $Y_k$ is a permutation equivariant map. The ansatz for each $U_k$ and $Y_k$ can still be quite flexible. Another more general yet more complicated ansatz is based on replacing the above Vandermonde determinant with the Slater determinant (see e.g. [12, 8])

$$ \det[Y^k(X)] = \begin{vmatrix} y^k_1(x_1) & \cdots & y^k_1(x_N) \\ \vdots & & \vdots \\ y^k_N(x_1) & \cdots & y^k_N(x_N) \end{vmatrix}. $$

The Slater determinant is widely used in quantum chemistry. It is known to be universal in the complete basis set limit and, indeed, the basis set derived from the Hartree–Fock approximation provides a fairly good starting point for most modern quantum chemistry methods. However, the complexity of computing a Slater determinant is $O(N^3)$, while it is only $O(N^2)$ for a Vandermonde determinant. This gap may become more severe when one calculates the local energy, which involves evaluating the Laplacian of the trial wavefunction. Therefore, it remains interesting to devote more effort to the Vandermonde ansatz and its variants, in the hope of finding ones that strike a good balance between accuracy and efficiency. It would also be interesting to learn from the second quantized representation of quantum systems, which lifts the symmetry requirement from functions to linear operators, and leads to powerful representations such as matrix product states.
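The cost gap above comes from the identity behind the Vandermonde ansatz: the determinant of a Vandermonde matrix equals the product of pairwise differences, so it can be evaluated with $O(N^2)$ multiplications and no matrix factorization, whereas a general Slater matrix requires an $O(N^3)$ dense determinant. A short check of that identity (the projected coordinates are our own toy values):

```python
import numpy as np

# projected coordinates a^T x_i for N particles (toy values of our choosing)
y = np.array([0.3, 0.7, 1.1, 1.6, 2.0, 2.5])
N = len(y)

# O(N^2): the Vandermonde determinant is just the product of pairwise differences
pairwise = np.prod([y[j] - y[i] for i in range(N) for j in range(i + 1, N)])

# O(N^3): the dense-determinant route needed for a general Slater matrix
V = np.vander(y, increasing=True)  # row i is (1, y_i, y_i^2, ..., y_i^{N-1})
assert np.isclose(np.linalg.det(V), pairwise)
```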

## Acknowledgement

We thank the American Institute of Mathematics (AIM) for its hospitality during the workshop “Deep learning and partial differential equations” in October 2019, which led to this collaborative effort. The work of LL and JZ was supported in part by the Department of Energy under Grants No. DE-SC0017867 and No. DE-AC02-05CH11231. The work of YL and JL was also supported in part by the National Science Foundation via grants DMS-1454939 and ACI-1450280.