1 Introduction
Valiant’s conjecture [22], that VP ≠ VNP, is often referred to as the algebraic counterpart to the conjecture that P ≠ NP. It has proved as elusive as the latter. The conjecture is equivalent to the statement that there is no polynomial-size family of arithmetic circuits for computing the permanent of a matrix, over any field of characteristic other than 2. Here, arithmetic circuits are circuits with input gates labelled by variables from some set X or constants from a fixed field F, and internal gates labelled with the operations + and ×. The output of such a circuit is some polynomial in F[X], and we think of the circuit as a compact representation of this polynomial. In particular, if the set of variables forms the entries of an n × n matrix, i.e. X = {x_ij : i, j ∈ [n]}, then perm_n denotes the polynomial Σ_{σ ∈ Sym_n} Π_{i ∈ [n]} x_{iσ(i)}, which is the permanent of the matrix X. For conciseness, we refer to the family of polynomials (perm_n)_{n ∈ ℕ} as the permanent. While a lower bound for the size of general arithmetic circuits computing the permanent remains out of reach, lower bounds have been established for some restricted classes of circuits. In particular, it is known that there is no subexponential family of monotone circuits for the permanent. This was first shown for the field of real numbers [17], and a proof for general fields, with a suitably adapted notion of monotonicity, is given in [18]. An exponential lower bound for the permanent is also known for depth-3 arithmetic circuits [15] for all finite fields. It should be noted that in both these cases, the exponential lower bound obtained for the permanent also applies to the determinant, i.e. the family of polynomials (det_n)_{n ∈ ℕ}, where det_n is Σ_{σ ∈ Sym_n} sgn(σ) Π_{i ∈ [n]} x_{iσ(i)}. However, the determinant is in VP, and so there do exist polynomial-size families of general circuits for the determinant. In this paper, we consider a new restriction on arithmetic circuits based on a natural notion of symmetry, and we show that it distinguishes between the determinant and the permanent.
That is to say, we are able to show nearly exponential lower bounds on the size of any family of symmetric arithmetic circuits for computing the permanent. On the other hand, we are able to show that there are polynomial-size symmetric circuits for computing the determinant. We prove the upper bound on the determinant for fields of characteristic zero, and conjecture that it holds for all fields. On the other hand, our lower bound for the permanent is established for all fields of characteristic other than 2. This is the best that can be hoped for, as the permanent and the determinant coincide for fields of characteristic 2. We next define (informally) the notion of symmetry we use. A formal definition follows in Section 3. Note that the permanent and the determinant are not symmetric polynomials in the usual meaning of the word, in that they are not invariant under arbitrary permutations of their variables. However, they do have natural symmetries, i.e. permutations of the variables induced by row and column permutations. Specifically, perm_n is invariant under arbitrary permutations of the rows and columns of the matrix X, while det_n is invariant under simultaneous permutations of the rows and columns. We say that an arithmetic circuit C (seen as a labelled directed acyclic graph) for computing perm_n is symmetric if the action of any pair of permutations (σ, π) ∈ Sym_n × Sym_n on its input variables (i.e. taking x_ij to x_{σ(i)π(j)}) extends to an automorphism of C. Similarly, a circuit C for computing det_n is symmetric if the action of any σ ∈ Sym_n on the inputs (taking x_ij to x_{σ(i)σ(j)}) extends to an automorphism of C. This notion of symmetry in circuits has been studied previously in the context of Boolean circuits for deciding graph properties, or properties of relational structures (see [13, 19, 2]). Specifically, such symmetric circuits arise naturally in the translation into circuit form of specifications of properties in a logic or similar high-level formalism.
Similarly, we can think of a symmetric arithmetic circuit as a straight-line program which treats the rows and columns of a matrix as being indexed by unordered sets. It is clear that many natural algorithms have this property. For example, Ryser’s formula for computing the permanent naturally yields a symmetric circuit. Polynomial-size families of symmetric Boolean circuits with threshold gates form a particularly robust class, with links to fixed-point logics [2]. In particular, this allows us to deploy methods for proving inexpressibility in such logics to prove lower bounds on the size of symmetric circuits. A close link has also been established between the power of such circuits and linear programming extended formulations with a geometric notion of symmetry
[5]. Our lower bound for the permanent is established by first giving a symmetry-preserving translation of arithmetic circuits to Boolean circuits with threshold gates, and then establishing a lower bound there for computing the permanent of a 0-1 matrix. The lower bounds for symmetric Boolean circuits are based on a measure we call the counting width of graph parameters (the term is introduced in [11]). This is also sometimes known as the Weisfeiler-Leman dimension. In short, we have, for each k, an equivalence relation ≡_k, known as the k-dimensional Weisfeiler-Leman equivalence, that is a coarse approximation of isomorphism, getting finer with increasing k. The counting width of a graph parameter ν is the smallest k, as a function of the graph size n, such that ν is constant on ≡_k-classes of graphs of size n. From known results relating Boolean circuits and counting width [2, 5], we know that the existence of subexponential-size symmetric circuits computing a graph parameter implies a sublinear upper bound on its counting width. Hence, using the standard relationship between the permanent of a 0-1 matrix and the number of perfect matchings in a bipartite graph, we obtain our lower bound for the permanent in fields of characteristic zero by showing a linear lower bound on the counting width of the number of perfect matchings of a bipartite graph. Indeed, showing the same for the number of perfect matchings modulo p, for every prime p, also establishes the lower bound for the permanent in all positive characteristics. The linear lower bound on the counting width of the number of perfect matchings is a result of interest in its own right, quite apart from the lower bounds it yields for circuits for the permanent. Indeed, there is an interest in determining the counting width of concrete graph parameters (see, for instance, [4]), and the result here is somewhat surprising. The decision problem of determining whether a graph has any perfect matching is known to have constant counting width. Indeed, for bipartite graphs the width is a small constant [7].
For general graphs, it is known to be strictly greater, but still bounded above by a constant [3]. In Section 2 we introduce some preliminary definitions and notation. In Section 3, we introduce the key definitions and properties of symmetric circuits. Some of this material is a review of existing literature and some introduces new notions in relation to arithmetic circuits. Section 4 establishes the upper bound on symmetric circuit size for the determinant, by translating Le Verrier’s method to symmetric circuits. Finally, the lower bound for the permanent is established in Sections 5 and 6. The first of these gives the symmetry-preserving translation from arithmetic circuits to Boolean circuits with threshold gates, and the second gives the main construction proving the linear lower bound for the counting width of the number of perfect matchings in a bipartite graph.

2 Background
In this section we discuss relevant background and introduce notation. We write ℕ for the positive integers and ℕ₀ for the non-negative integers. For n ∈ ℕ, [n] denotes the set {1, …, n} and [n]₀ the set {0, 1, …, n}. For a set X, we write P(X) to denote the powerset of X.
2.1 Groups
For a set X, Sym(X) is the symmetric group on X. For n ∈ ℕ we write Sym_n to abbreviate Sym([n]). The sign of a permutation σ, written sgn(σ), is defined so that sgn(σ) = 1 if σ is even and sgn(σ) = −1 otherwise. Let G be a group acting on a set X. We denote this as a left action, i.e. for σ ∈ G and x ∈ X we write σx. The action extends in a natural way to powers of X. So, for x⃗ = (x_1, …, x_k) ∈ X^k, σx⃗ = (σx_1, …, σx_k). It also extends to the powerset of X and to functions on X as follows. The action of G on P(X) is defined for σ ∈ G and S ⊆ X by σS = {σx : x ∈ S}. For Y any set, the action of G on the functions Y^X is defined for σ ∈ G and f ∈ Y^X by (σf)(x) = f(σ⁻¹x) for all x ∈ X. We refer to all of these as the natural action of G on the relevant set. Let X = ⊎_{i ∈ [k]} X_i and for each i ∈ [k] let G_i be a group acting on X_i. The action of the direct product ∏_{i ∈ [k]} G_i on X is defined for σ⃗ = (σ_1, …, σ_k) and x ∈ X by σ⃗x = σ_i x, where i is such that x ∈ X_i. If instead X = ∏_{i ∈ [k]} X_i, then the action of ∏_{i ∈ [k]} G_i on X is defined for σ⃗ = (σ_1, …, σ_k) and x⃗ = (x_1, …, x_k) by σ⃗x⃗ = (σ_1x_1, …, σ_kx_k). Again, we refer to either of these as the natural action of ∏_{i ∈ [k]} G_i on X.
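These natural actions are easy to make concrete. The following sketch (in Python, with names of our own choosing, not the paper's) illustrates the action of a permutation on tuples, sets and functions:

```python
# Natural actions of a permutation sigma (given as a dict) on points,
# tuples, sets and functions. All function names here are illustrative.

def act_tuple(sigma, t):
    """Action on X^k: apply sigma coordinatewise."""
    return tuple(sigma[x] for x in t)

def act_set(sigma, S):
    """Action on the powerset: sigma.S = {sigma(x) : x in S}."""
    return {sigma[x] for x in S}

def act_function(sigma, f):
    """Action on functions f : X -> Y: (sigma.f)(x) = f(sigma^{-1}(x))."""
    inv = {v: k for k, v in sigma.items()}
    return {x: f[inv[x]] for x in f}

sigma = {1: 2, 2: 3, 3: 1}                     # a 3-cycle on X = {1, 2, 3}
assert act_tuple(sigma, (1, 2)) == (2, 3)
assert act_set(sigma, {1, 3}) == {1, 2}
f = {1: 'a', 2: 'b', 3: 'c'}
assert act_function(sigma, f) == {1: 'c', 2: 'a', 3: 'b'}
```

Note that the action on functions uses σ⁻¹ so that it is a left action: (στ)f = σ(τf).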
2.2 Fields and Linear Algebra
Let I and J be finite non-empty sets. An I × J matrix M with entries in a set R is a function M : I × J → R. For (i, j) ∈ I × J, we write M_ij for M(i, j). We recover the more familiar notion of an m × n matrix with rows and columns indexed by ordered sets by taking I = [m] and J = [n]. The permanent of a matrix is invariant under taking arbitrary row and column permutations, while the determinant and trace are invariant under taking simultaneous row and column permutations. With this observation in mind, we define these three functions for unordered matrices. Let R be a commutative ring and M an I × J matrix with entries in R, where |I| = |J|. Let Bij(I, J) be the set of bijections from I to J. The permanent of M over R is perm(M) = Σ_{σ ∈ Bij(I,J)} Π_{i ∈ I} M_{iσ(i)}. Suppose I = J. The determinant of M over R is det(M) = Σ_{σ ∈ Sym(I)} sgn(σ) Π_{i ∈ I} M_{iσ(i)}. The trace of M over R is tr(M) = Σ_{i ∈ I} M_ii. In all three cases we omit reference to the ring R when it is obvious from context or otherwise irrelevant. We always use F to denote a field and char(F) to denote the characteristic of F. For any prime or prime power q we write F_q for the finite field of order q. We are often interested in polynomials defined over a set of variables with a natural matrix structure, i.e. X = {x_ij : i ∈ I, j ∈ J}. We identify X with this matrix. We also identify any function of the form ξ : X → R with the I × J matrix with entries in R defined by replacing each x_ij with ξ(x_ij). For n ∈ ℕ, let X_n = {x_ij : i, j ∈ [n]}. Let perm_n = perm(X_n) and det_n = det(X_n). In other words, perm_n is the formal polynomial defined by taking the permanent of an n × n matrix with (i, j)-th entry x_ij, and similarly for the determinant. We write perm to abbreviate (perm_n)_{n ∈ ℕ} and det to abbreviate (det_n)_{n ∈ ℕ}.
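The three functions can be checked directly on small ordered matrices. The following sketch implements them naively from the defining sums (function names are ours, not the paper's):

```python
# Naive implementations of permanent, determinant and trace for
# matrices indexed by [n], directly from the defining sums.
from itertools import permutations
from math import prod

def sgn(p):
    """Sign of a permutation p, given as a tuple of images of 0..n-1."""
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def permanent(M):
    """Sum over all bijections sigma of prod_i M[i][sigma(i)]."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def determinant(M):
    """As the permanent, but each term is weighted by sgn(sigma)."""
    n = len(M)
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

M = [[1, 2], [3, 4]]
assert permanent(M) == 10      # 1*4 + 2*3
assert determinant(M) == -2    # 1*4 - 2*3
assert trace(M) == 5
```

Both sums range over all n! permutations, so this is only useful for small n; the point of the paper is precisely that the determinant, unlike (conjecturally) the permanent, admits much smaller circuits.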
2.3 Counting Width
For any k ∈ ℕ, the k-dimensional Weisfeiler-Leman equivalence (see [8]), denoted ≡_k, is an equivalence relation on graphs that provides an over-approximation of isomorphism in the sense that for isomorphic graphs G and H, we have G ≡_k H for all k. Increasing values of k give finer relations, so G ≡_{k+1} H implies G ≡_k H for all k. The equivalence relation ≡_k is decidable in time n^{O(k)}, where n is the size of the graphs. If k ≥ n, then G ≡_k H implies that G and H are isomorphic. The Weisfeiler-Leman equivalences have been widely studied and they have many equivalent characterizations in combinatorics, logic, algebra and linear optimization. One of their many uses has been to establish inexpressibility results in logic. These can be understood through the notion of counting width. A graph parameter is a function ν from graphs to ℕ which is isomorphism invariant. Examples are the chromatic number, the number of connected components, or the number of perfect matchings. For a graph parameter ν and any fixed n, there is a smallest value of k such that ν restricted to graphs of size n is ≡_k-invariant. This motivates the definition.
Definition 1.
For any graph parameter ν, the counting width of ν is the function that maps each n ∈ ℕ to the smallest k such that for all graphs G and H of size n, if G ≡_k H, then ν(G) = ν(H).
The counting width of a class of graphs is the counting width of its indicator function. This notion of counting width for classes of graphs was introduced in [11]; here we extend it to graph parameters. Note that the counting width of any graph parameter is at most n. Cai, Fürer and Immerman [8] first showed that there is no fixed k for which ≡_k coincides with isomorphism. Indeed, in our terminology, they construct a class of graphs with counting width Ω(n). Since then, many classes of graphs have been shown to have linear counting width, including the class of Hamiltonian graphs and the class of 3-colourable graphs (see [5]). In other cases, such as the class of graphs that contain a perfect matching, it has been proved that they have counting width bounded by a constant [3]. Our interest in counting width stems from the relation between this measure and lower bounds for symmetric circuits. Roughly speaking, we know that if a class of graphs is recognized by a family of polynomial-size symmetric threshold circuits, it has bounded counting width (a more precise version of this statement is given in Theorem 15 below). Our lower bound construction in Section 6 is based on the graphs constructed by Cai et al. [8]. While we review some of the details of the construction in Section 6, a reader unfamiliar with the construction may wish to consult a more detailed introduction. The original construction can be found in [8] and a version closer to what we use is given in [10].
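For intuition, the 1-dimensional case of the Weisfeiler-Leman equivalence is ordinary colour refinement, which can be sketched in a few lines. This only illustrates the iterated-refinement idea; the results above concern the k-dimensional generalization. All names here are ours:

```python
# 1-dimensional Weisfeiler-Leman (colour refinement): repeatedly refine
# vertex colours by the multiset of neighbouring colours, until stable.

def wl1(adj):
    """adj: dict mapping each vertex to the set of its neighbours.
    Returns the stable colouring as a dict vertex -> colour id."""
    colours = {v: 0 for v in adj}
    while True:
        # Signature: own colour plus sorted multiset of neighbour colours.
        sig = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        refined = {v: palette[sig[v]] for v in adj}
        if refined == colours:
            return colours
        colours = refined

# The 6-cycle and two disjoint triangles are both 2-regular, so colour
# refinement cannot tell them apart: every vertex gets the same colour.
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
             3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
assert sorted(wl1(c6).values()) == sorted(wl1(triangles).values())

# A path on 3 vertices, by contrast, is split into two colour classes.
path = {0: {1}, 1: {0, 2}, 2: {1}}
assert len(set(wl1(path).values())) == 2
```

The 6-cycle/triangles pair shows that 1-dimensional refinement is strictly coarser than isomorphism; the Cai-Fürer-Immerman construction cited above strengthens this to every fixed dimension k.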
2.4 Circuits
We give a general definition of a circuit that incorporates both Boolean and arithmetic circuits.
Definition 2 (Circuit).
A circuit C over the basis B with variables X and constants K is a directed acyclic graph with a labelling where each vertex of in-degree 0 is labelled by an element of X ∪ K and each vertex of in-degree greater than 0 is labelled by an element of B.
Let C be a circuit with gates G, wires W ⊆ G × G, and constants K. We call the elements of G gates, and the elements of W wires. We call the gates with in-degree 0 input gates and the gates with out-degree 0 output gates. We call those input gates labelled by elements of K constant gates. We call those gates that are not input gates internal gates. For g, h ∈ G, we say that h is a child of g if (h, g) ∈ W. We write child(g) to denote the set of children of g. We write C_g to denote the sub-circuit of C rooted at g. Unless otherwise stated, we always assume a circuit has exactly one output gate. If K is a field F, and B is the set {+, ×}, we have an arithmetic circuit over F. If K = {0, 1}, and B is a collection of Boolean functions, we have a Boolean circuit over the basis B. We define two bases here. The first is the standard basis B_std containing the functions AND, OR, and NOT. The second is the threshold basis B_t, which is the union of B_std and {THR_k : k ∈ ℕ}, where for each k ∈ ℕ, THR_k is defined for a string s ∈ {0, 1}* so that THR_k(s) = 1 if, and only if, the number of 1s in s is at least k. We call a circuit defined over this basis a threshold circuit. Another useful Boolean function is MAJ (majority), which is defined by MAJ(s) = 1 if, and only if, at least half of the bits in s are 1. We do not explicitly include it in the basis as it is easily defined in B_t. In general, we require that a basis contain only functions that are invariant under all permutations of their inputs (we define this notion formally in Definition 4). This is the case for the arithmetic functions + and × and for all of the Boolean functions in B_std and B_t. Let C be a circuit defined over such a basis with variables X and constants K. We evaluate C for an assignment ξ : X → K by evaluating each gate labelled by some x ∈ X to ξ(x) and each gate labelled by some k ∈ K to k, and then recursively evaluating each internal gate according to its corresponding basis element. We write C[ξ](g) to denote the value of the gate g and C[ξ] to denote the value of the output gate. We say that C computes the function ξ ↦ C[ξ]. It is conventional to consider an arithmetic circuit over F with variables X to be computing a polynomial in F[X], rather than a function from F^X to F.
This polynomial is defined via a similar recursive evaluation, except that now each gate labelled by a variable evaluates to the corresponding formal variable, and we treat addition and multiplication as ring operations in F[X]. Each gate then evaluates to some polynomial in F[X]. The polynomial computed by C is the value of the output gate. For more details on arithmetic circuits see [21] and for Boolean circuits see [23].
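As an illustration of this circuit model, here is a toy evaluator for arithmetic circuits given as labelled DAGs. This is a sketch of the recursive semantics just described, not the paper's formalism; all names are ours:

```python
# A toy evaluator for circuits as labelled DAGs: leaves are labelled by
# variables or constants, internal gates by '+' or '*'.
from math import prod

def evaluate(gates, output, assignment):
    """gates: dict id -> ('var', name) | ('const', c) | ('+', [ids]) | ('*', [ids])."""
    memo = {}
    def val(g):
        if g not in memo:
            kind, arg = gates[g]
            if kind == 'var':
                memo[g] = assignment[arg]
            elif kind == 'const':
                memo[g] = arg
            else:
                children = [val(c) for c in arg]
                memo[g] = sum(children) if kind == '+' else prod(children)
        return memo[g]
    return val(output)

# A circuit for x*y + 2; memoization means shared sub-DAGs are evaluated once.
gates = {
    'x': ('var', 'x'), 'y': ('var', 'y'), 'two': ('const', 2),
    'm': ('*', ['x', 'y']), 'out': ('+', ['m', 'two']),
}
assert evaluate(gates, 'out', {'x': 3, 'y': 4}) == 14
```

Evaluating over the polynomial ring rather than the field, as in the passage above, amounts to running the same recursion with formal variables in place of the assignment.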
3 Symmetric Circuits
In this section we discuss different symmetry conditions for functions and polynomials. We also introduce the notion of a symmetric circuit.
3.1 Symmetric Functions
Definition 3.
For any group G, we say that a function f : K^X → K, along with an action of G on X, is a symmetric function if for every σ ∈ G and every ξ ∈ K^X, f(σξ) = f(ξ).
We are interested in some specific group actions, and we define these and give them names next, as well as illustrating them with examples.
Definition 4.
If G = Sym(X), we call a symmetric function f fully symmetric.
Examples of fully symmetric functions are those that appear as labels of gates in a circuit, including +, ×, AND, and OR.
Definition 5.
If X = I × J and f is symmetric with the natural action of Sym(I) × Sym(J) on X, then we say it is matrix symmetric.
Matrix symmetric functions are those where the input is naturally seen as a matrix and the result is invariant under arbitrary row and column permutations. The canonical example for us of a matrix-symmetric function is the permanent. The determinant is not matrix-symmetric over fields of characteristic other than 2, but does satisfy a more restricted notion of symmetry that we define next.
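The distinction can be checked numerically. In the following sketch (names ours), the permanent is unchanged by independent row and column permutations, while the determinant is only guaranteed to be unchanged by simultaneous ones, or by transposition:

```python
# Matrix symmetry vs square symmetry, checked on a concrete 3x3 matrix.
from itertools import permutations
from math import prod

def permanent(M):
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def determinant(M):
    n = len(M)
    def sgn(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def permute(M, rows, cols):
    """Apply the row permutation `rows` and column permutation `cols`."""
    n = len(M)
    return [[M[rows[i]][cols[j]] for j in range(n)] for i in range(n)]

M = [[1, 2, 0], [3, 4, 5], [6, 7, 8]]
rho, kappa = (1, 2, 0), (1, 0, 2)       # an even and an odd permutation

# Matrix symmetric: independent row/column permutations preserve perm.
assert permanent(permute(M, rho, kappa)) == permanent(M)
# Square symmetric: simultaneous permutation preserves det ...
assert determinant(permute(M, rho, rho)) == determinant(M)
# ... but independent permutations of opposite parity flip its sign.
assert determinant(permute(M, rho, kappa)) == -determinant(M)
# Transpose symmetry of the determinant.
T = [[M[j][i] for j in range(3)] for i in range(3)]
assert determinant(T) == determinant(M)
```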
Definition 6.
If X = I × I and f is symmetric with the natural action of Sym(I) on X (where σ(i, j) = (σ(i), σ(j))), then we say it is square symmetric.
The determinant is one example of a square symmetric function. However, as the determinant of a matrix is also invariant under the operation of transposing the matrix, we also consider this variation. To be precise, let τ ∈ Sym(I × I) be the permutation that takes (i, j) to (j, i) for all i, j ∈ I. Let Δ be the diagonal copy of Sym(I) in Sym(I × I) (i.e. the image of Sym(I) in its natural action on I × I). We write Δ_τ for the group generated by Δ ∪ {τ}.
Definition 7.
A function f that is symmetric with the natural action of Δ_τ on I × I is said to be transpose-symmetric.
Finally, another useful notion of symmetry in functions is where the inputs are naturally partitioned into sets.
Definition 8.
If X = ⊎_{i ∈ [k]} X_i, G = ∏_{i ∈ [k]} Sym(X_i), and f is symmetric with the natural action of G on X, we say that it is partition symmetric.
In Section 5, we consider a generalization of circuits to the case where the labels in the basis are not necessarily fully symmetric functions, but are still partition symmetric. The structure of such a circuit cannot be described simply as a DAG, but requires additional labels on wires, as we shall see. In this paper, we mainly treat the permanent as a matrix-symmetric function, and the determinant as a transpose-symmetric function.
3.2 Symmetric Circuits
Symmetric Boolean circuits have been considered in the literature, particularly in connection with definability in logic. In that context, we are considering circuits which take relational structures (such as graphs) as inputs and we require their computations to be invariant under reorderings of the elements of the structure. Here, we generalize the notion to arbitrary symmetry groups, and also consider them in the context of arithmetic circuits. In order to define symmetric circuits, we first need to define the automorphisms of a circuit.
Definition 9 (Circuit Automorphism).
Let C be a circuit over the basis B with variables X and constants K. For σ ∈ Sym(X), we say that a bijection π from the gates of C to the gates of C is an automorphism of C extending σ if for every gate g in C we have that

if g is a constant gate, then π(g) = g,

if g is a non-constant input gate labelled by x ∈ X, then π(g) is labelled by σ(x),

if (g, h) is a wire, then so is (π(g), π(h)), and

if g is labelled by b ∈ B, then so is π(g).
We say that a circuit C with variables X is rigid if for every permutation σ ∈ Sym(X) there is at most one automorphism of C extending σ. We are now ready to define the key notion of a symmetric circuit.
Definition 10 (Symmetric Circuit).
For a symmetric function f with an action of a group G on its variables X, a circuit C computing f is said to be symmetric if for every σ ∈ G, the action of σ on X extends to an automorphism of C. We say C is strictly symmetric if it has no other automorphisms.
For a gate g in a symmetric circuit C, the orbit of g, denoted Orb(g), is the set of all gates h such that there exists an automorphism π of C with π(g) = h. We write maxorb(C) for the maximum size of an orbit in C. Though symmetric arithmetic circuits have not previously been studied, symmetric Boolean circuits have [13, 19, 2]. It is known that polynomial-size symmetric threshold circuits (i.e. over the basis B_t) are more powerful than polynomial-size symmetric circuits over the standard Boolean basis [2]. In particular, the majority function is not computable by any family of polynomial-size symmetric circuits over B_std. On the other hand, it is also known [12] that adding any fully symmetric functions to the basis does not take us beyond the power of B_t. Thus, the threshold basis gives the robust notion, and that is what we use here. It is also this that has the tight connection with counting width mentioned above.
3.3 Polynomials
In the study of arithmetic complexity, we usually think of a circuit over a field F with variables in X as expressing a polynomial in F[X], rather than computing a function from F^X to F. The distinction is significant, particularly when F is a finite field, as it is possible for distinct polynomials to represent the same function. The definitions of symmetric functions given in Section 3.1 extend easily to polynomials. So, for a group G acting on X, a polynomial p ∈ F[X] is said to be symmetric if σp = p for all σ ∈ G. We define fully symmetric, matrix symmetric, square symmetric and transpose symmetric polynomials analogously. Every matrix symmetric polynomial is also square symmetric. Also, every transpose symmetric polynomial is square symmetric. The permanent is both matrix symmetric and transpose symmetric, while the determinant is transpose symmetric, but not matrix symmetric. In this paper, we treat perm_n as a matrix symmetric polynomial and det_n as a transpose symmetric polynomial. It is clear that a symmetric polynomial determines a symmetric function. An arithmetic circuit C expressing a symmetric polynomial is said to be symmetric if the action of each σ ∈ G on the inputs of C extends to an automorphism of C. What are standardly called the symmetric polynomials are, in our terminology, fully symmetric. In particular, the power sum polynomial Σ_{i ∈ [n]} x_i^d is fully symmetric. There is a known lower bound of Ω(n log d) on the size of any circuit expressing this polynomial [6]. It is worth remarking that the matching upper bound is achieved by a symmetric circuit. Thus, at least in this case, there is no gain to be made by breaking symmetries in the circuit. Similarly, we have tight upper and lower bounds for the elementary symmetric polynomials over infinite fields [20]. Again, the upper bound is achieved by symmetric circuits. The best known upper bound for general arithmetic circuits for expressing the permanent is given by Ryser’s formula:
perm_n = (−1)^n Σ_{S ⊆ [n]} (−1)^{|S|} Π_{i ∈ [n]} Σ_{j ∈ S} x_ij.

It is easily seen that this expression is symmetric, and it yields a symmetric circuit of size O(n² · 2^n). Our main result, Theorem 14, gives us a near matching lower bound on the size of symmetric circuits for expressing perm_n. A symmetric circuit expressing a symmetric polynomial p is also a symmetric circuit computing the function determined by p. In establishing our upper bound for the determinant, we show the existence of small symmetric circuits for the polynomial, and hence also for the function. For the lower bound on the permanent, we show that there are no small symmetric circuits for computing the function, and hence also none for the polynomial.
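For integer matrices, Ryser's inclusion-exclusion formula can be checked directly against the naive sum over permutations (a sketch; function names are ours):

```python
# Ryser's formula for the permanent, verified against the naive definition.
from itertools import combinations, permutations
from math import prod

def naive_permanent(M):
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def ryser_permanent(M):
    """perm(M) = (-1)^n * sum_{S subset of [n]} (-1)^{|S|} prod_i sum_{j in S} M[i][j]."""
    n = len(M)
    total = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total += (-1) ** k * prod(sum(M[i][j] for j in S) for i in range(n))
    return (-1) ** n * total

M = [[1, 2, 0], [3, 1, 4], [0, 5, 2]]
assert ryser_permanent(M) == naive_permanent(M)
assert ryser_permanent([[1, 2], [3, 4]]) == 10
```

Note the symmetry visible in the formula itself: the outer sum ranges over unordered sets of columns and the product over rows, so the expression is invariant under arbitrary row and column permutations, which is why it yields a symmetric circuit.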
4 An Upper Bound for the Determinant
In this section we show that for any field F of characteristic 0 there is a polynomial-size family of symmetric arithmetic circuits over F computing det_n. We define this family using Le Verrier’s method for calculating the characteristic polynomial of a matrix. We review this method briefly. The characteristic polynomial of an n × n matrix A is

p_A(λ) = Π_{i ∈ [n]} (λ − λ_i) = λ^n + c_{n−1} λ^{n−1} + ⋯ + c_1 λ + c_0,

where λ_1, …, λ_n are the eigenvalues of A, counted with multiplicity. It is known that c_{n−1} = −tr(A) and c_0 = (−1)^n det(A). Le Verrier’s method gives, for each k ∈ [n], the following linear recurrence for c_{n−k} in terms of c_{n−k+1}, …, c_n:

c_{n−k} = −(1/k) Σ_{i ∈ [k]} c_{n−k+i} p_i,

where c_n = 1 and, for each i, p_i = tr(A^i). The determinant can thus be computed by recursively computing each c_{n−k} and finally computing (−1)^n c_0. We direct the reader to Section 3.4.1 in [16] for a detailed review of this approach. It follows from the above that we can compute the determinant as follows:

for each k ∈ [n], compute the entries of the matrix power A^k,

for each k ∈ [n], compute p_k = tr(A^k), and

for k = 1, …, n, recursively compute c_{n−k}, and output det(A) = (−1)^n c_0.
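Assuming exact rational arithmetic as a stand-in for a field of characteristic 0, the three steps above can be sketched as follows (names ours):

```python
# Le Verrier's method: determinant from traces of matrix powers.
from fractions import Fraction

def leverrier_det(M):
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    def mul(X, Y):
        return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
                for i in range(n)]
    # Steps 1-2: the power traces p_k = tr(A^k) for k = 1..n.
    p, P = [], A
    for _ in range(n):
        p.append(sum(P[i][i] for i in range(n)))
        P = mul(P, A)
    # Step 3: the recurrence c_{n-k} = -(1/k) sum_{i=1}^{k} c_{n-k+i} p_i,
    # with c_n = 1; here c[k] stores the coefficient c_{n-k}.
    c = [Fraction(1)]
    for k in range(1, n + 1):
        c.append(-sum(c[k - i] * p[i - 1] for i in range(1, k + 1)) / k)
    return (-1) ** n * c[n]          # det(A) = (-1)^n c_0

assert leverrier_det([[2, 1], [1, 3]]) == 5
assert leverrier_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```

The division by k in the recurrence is exactly where characteristic 0 is used; over a field of positive characteristic p the step fails whenever p divides k.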
We now show how this algorithm can be implemented via a uniform family of symmetric arithmetic circuits. Roughly speaking, for the first step, we can compute all entries of each matrix power in parallel, and this guarantees that it can be done symmetrically. The second step involves a sum over the diagonal and is clearly invariant under all permutations of the diagonal. This produces a single value p_k for each k, and thus the final step, which is an iterative calculation involving these previously computed values, is independent of the order of the rows and columns. We now formalize this procedure.
Theorem 11.
For a field F of characteristic 0, there exists a family (C_n)_{n ∈ ℕ} of symmetric arithmetic circuits over F computing det_n, for which the function n ↦ C_n is computable in time polynomial in n.
Proof.
Let n ∈ ℕ and let X be an I × I matrix of variables, for an index set I with |I| = n. We now describe an implementation of Le Verrier’s method for I × I matrices as an arithmetic circuit C_n over the set of variables X. We construct this circuit as follows.

For each k ∈ [n] we include a family of gates intended to compute the entries in the k-th power of the matrix X. For each (i, j) ∈ I × I we include a gate g^k_ij intended to compute (X^k)_ij. Let g^1_ij = x_ij and, for all k > 1, g^k_ij = Σ_{l ∈ I} g^{k−1}_il · x_lj.

For each k ∈ [n] we include a gate p_k intended to compute the trace of X^k. Let p_k = Σ_{i ∈ I} g^k_ii.

For each k ∈ [n]₀ we include a gate c_k intended to compute the coefficient c_k of λ^k in the characteristic polynomial. Let c_n be a constant gate with value 1 and, for all k ∈ [n], let c_{n−k} = −(1/k) Σ_{i ∈ [k]} c_{n−k+i} · p_i.
Let the output gate of C_n compute (−1)^n c_0. It follows from the discussion preceding the statement of the theorem that C_n computes det_n. It remains to show that the circuit is symmetric. Let σ ∈ Sym(I). Let π be defined such that for each input gate labelled x_ij we have π(x_ij) = x_{σ(i)σ(j)}, for each gate of the form g^k_ij we have π(g^k_ij) = g^k_{σ(i)σ(j)}, and for every other gate g we have π(g) = g. It can be verified that π is a circuit automorphism extending σ. Similarly, if τ is the transpose permutation, i.e. τ(i, j) = (j, i), then we can extend it to an automorphism of C_n by letting π(g^k_ij) = g^k_ji. It follows that C_n is a symmetric arithmetic circuit. The circuit contains O(n) constant gates. There are n² other input gates. There are O(n⁴) additional gates required to compute all gates of the form g^k_ij. There are O(n²) additional gates required to compute all gates of the form p_k. There are O(n²) additional gates required to compute all gates of the form c_k. It follows that the circuit is of size O(n⁴). The above description of the circuit can be adapted to define an algorithm that computes the function n ↦ C_n in time polynomial in n. ∎
Le Verrier’s method explicitly involves multiplications by the field elements 1/k for k ∈ [n], and so cannot be directly applied to fields of positive characteristic. We conjecture that it is also possible to give symmetric arithmetic circuits of polynomial size to compute the determinant over arbitrary fields. Indeed, there are many known algorithms that yield polynomial-size families of arithmetic circuits over fields of positive characteristic computing det_n. It seems likely that some of these could be implemented symmetrically.
5 From Arithmetic To Boolean Circuits
We establish our lower bound on the size of symmetric arithmetic circuits for the permanent by giving a lower bound for symmetric Boolean threshold circuits for related decision problems. The main construction for those is given in Section 6 below. In this section, we show that symmetric arithmetic circuits for the permanent can be translated into Boolean threshold circuits for the related decision problems, while preserving the condition of symmetry. This is the main result of this section, Theorem 13. We prove the main result in three stages. First, for each field F we define a basis B_F of partition-symmetric functions intended to act as Boolean analogues of addition and multiplication over F. Secondly, we show in Lemma 12 that each function in B_F can be computed by a rigid strictly symmetric threshold circuit. Thirdly, we prove Theorem 13 by showing that for a family of symmetric arithmetic circuits over F we can define a family of symmetric circuits over B_F for a related decision problem. We complete the proof using Lemma 12 to replace each gate in every circuit labelled by a function in B_F with the symmetric threshold circuit that computes it. We now define for each field F the basis B_F. Let W ⊆ F be a finite set, X = ⊎_{w ∈ W} X_w be a disjoint union of non-empty sets, and s ∈ F. We define a Boolean function ADD^{W,s} that, given ξ ∈ {0,1}^X, computes the sum over all w ∈ W of the number of elements of X_w that ξ maps to 1, weighted by w, and returns 1 if this sum is exactly s. We also define an analogous function MUL^{W,s} for multiplication. Formally, these functions are defined for ξ ∈ {0,1}^X as follows:
ADD^{W,s}(ξ) = 1 if, and only if, Σ_{w ∈ W} w · |{x ∈ X_w : ξ(x) = 1}| = s, and
MUL^{W,s}(ξ) = 1 if, and only if, Π_{w ∈ W} w^{|{x ∈ X_w : ξ(x) = 1}|} = s. It is easily seen that both ADD^{W,s} and MUL^{W,s} are partition-symmetric. Let B_F be the set of all functions ADD^{W,s} and MUL^{W,s}. In order to define a circuit over a basis that may include partition-symmetric functions, we need some additional structure so that the children of gates labelled by partition-symmetric functions can be identified with an appropriate part. Let C be a circuit over a set of variables and let g be a gate in C labelled by a partition-symmetric function f : {0,1}^Y → {0,1}, where Y = ⊎_{i ∈ [k]} Y_i for some finite set Y and non-empty sets Y_i. We associate with g a bijection λ_g between Y and the children of g. We evaluate g for an input ξ as follows: for each y ∈ Y, let η(y) be the value of the child λ_g(y) under ξ, so that η ∈ {0,1}^Y, and let the value of g be f(η). We now show that any partition-symmetric function can be computed by a rigid strictly symmetric threshold circuit.
Lemma 12.
Let f be a partition-symmetric function. Then there exists a rigid strictly symmetric threshold circuit computing f.
Proof.
Let X = ⊎_{i ∈ [k]} X_i be a disjoint union of finite sets indexed by [k], and let f : {0,1}^X → {0,1} be a partition-symmetric function. The fact that f is partition symmetric means that whether f(ξ) = 1 for some ξ ∈ {0,1}^X is determined by the number of x ∈ X_i (for each i ∈ [k]) for which ξ(x) = 1. Write n_i(ξ) for this number. Then, there is a set S ⊆ ℕ₀^k such that f(ξ) = 1 if, and only if, (n_1(ξ), …, n_k(ξ)) ∈ S. Moreover, since the sets X_i are finite, so is S. Then f(ξ) = 1 if, and only if, the following Boolean expression is true:
⋁_{(s_1, …, s_k) ∈ S} ⋀_{i ∈ [k]} (THR_{s_i}(ξ|_{X_i}) ∧ ¬THR_{s_i+1}(ξ|_{X_i})).

We can turn this expression into a circuit C with an OR gate at the output, whose children are AND gates, one for each s⃗ = (s_1, …, s_k) ∈ S; let us call it g_{s⃗}. The children of g_{s⃗} are a set of gates, one for each i ∈ [k], computing the i-th conjunct, each of which has as children all the inputs in X_i. This circuit is symmetric and rigid, but not necessarily strictly symmetric, as it may admit automorphisms that do not respect the partition of the inputs as ⊎_{i ∈ [k]} X_i. To remedy this, we create pairwise non-isomorphic gadgets H_1, …, H_k, one for each i ∈ [k]. Each H_i is a one-input, one-output circuit computing the identity function. For example, H_i could be a tower of single-input AND gates, and we choose a different height for each i. We now modify C to obtain C′ by inserting between each input x ∈ X_i and each gate that has x as a child a copy of the gadget H_i. The circuit C′ clearly computes f by construction. We argue that it is rigid and strictly symmetric. To see that it is symmetric, consider any σ ∈ ∏_{i ∈ [k]} Sym(X_i) in its natural action on X. This extends to an automorphism of C′ that takes the copy of H_i above the input x to the copy above σ(x), while fixing all gates g_{s⃗} and the output. To see that there are no other automorphisms, suppose π is an automorphism of C′. It must fix the output gate. Also, π cannot map a gate in a copy of H_i to a gate in a copy of H_j for i ≠ j, because the gadgets H_i and H_j are non-isomorphic. Suppose that π maps g_{s⃗} to g_{t⃗}. Then, for each i ∈ [k], it must map the gate computing the i-th conjunct below g_{s⃗} to the gate computing the i-th conjunct below g_{t⃗}. Since the labels of these gates are determined by s_i and t_i respectively, we conclude that s_i = t_i for all i, and therefore π fixes every gate g_{s⃗}. It follows that C′ is rigid and strictly symmetric. ∎
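The normal form used at the start of this proof — that a partition-symmetric function depends only on the vector of 1-counts on each part — can be illustrated with a small sketch (the names and the example accepted set are ours):

```python
# A partition-symmetric function is determined by the tuple of 1-counts
# on each part, i.e. by membership of that tuple in an accepted set S.

def make_partition_symmetric(parts, S):
    """parts: list of lists of input names; S: set of accepted count tuples."""
    def f(xi):
        counts = tuple(sum(xi[x] for x in part) for part in parts)
        return counts in S
    return f

parts = [['a', 'b'], ['c']]
f = make_partition_symmetric(parts, {(1, 1), (2, 0)})
assert f({'a': 1, 'b': 0, 'c': 1}) is True    # counts (1, 1) are accepted
assert f({'a': 0, 'b': 1, 'c': 1}) is True    # permuting within a part preserves f
assert f({'a': 1, 'b': 1, 'c': 1}) is False   # counts (2, 1) are not in S
```

The circuit of Lemma 12 implements exactly this membership test, using threshold gates to check "exactly s_i ones in part i" for each accepted tuple.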
We now prove the main result of the section. This provides a translation of a symmetric arithmetic circuit to an equivalent symmetric threshold circuit, without a blow-up in the size of orbits. When we say the circuits are equivalent, we mean that we consider the function computed by the arithmetic circuit on 0-1 inputs, and an arbitrary decision problem on the possible outputs.
Theorem 13.
Let G be a group acting on a set of variables X. Let C be a symmetric arithmetic circuit over a field F with variables X computing a symmetric function. Let W ⊆ F be finite and let A ⊆ F. Then there is a symmetric threshold circuit C′ with variables X, such that for all ξ : X → W we have

C′[ξ] = 1 if, and only if, C[ξ] ∈ A, and

maxorb(C′) = O(maxorb(C)).
Proof.
For each gate g in C, let W_g be the set of values taken by the gate g for assignments to the input gates, i.e. W_g = {C[ξ](g) : ξ ∈ W^X}. The restriction to assignments into the finite set W ensures that W_g is finite. Let r be the output gate of C. If W_r ⊆ A, let C′ be the circuit consisting of a single gate labelled by 1, and if W_r ∩ A = ∅, let C′ consist of a single gate labelled by 0. Suppose that neither of these two cases holds. We construct C′ by first constructing a Boolean circuit C′′ over B_F satisfying the statement of the theorem and then, using Lemma 12, replacing each gate in C′′ labelled by a partition-symmetric function with an appropriate rigid strictly symmetric threshold circuit. We define C′′ from C by replacing each internal gate g in C by a family of gates (g, a) for a ∈ W_g such that C′′[ξ](g, a) = 1 if, and only if, C[ξ](g) = a. We also add a single output gate in C′′ that has as children exactly those gates (r, a) where a ∈ A. We define C′′ from C recursively as follows. Let g be a gate in C.

If g is a non-constant input gate in C, let there be an input gate in C′′ labelled by the same variable as g, and let each gate (g, a) be a gate with that input gate as its child.

If is a constant gate in labelled by some field element let be a constant gate in labelled by .

Suppose is an internal gate. Let . For let . Let . For each let be a gate in such that if is an addition gate or multiplication gate then is labelled by or , respectively. The labelling function is defined for such that if then .
We add one final gate to form with . Let . We now show by induction that for all and , if, and only if, . Let . If is an input gate then the claim holds trivially. Suppose is an internal gate and let . Suppose is an addition gate. Then is labelled by the function where , for , , and . Then
A similar argument suffices if is a multiplication gate. It follows that if, and only if, there exists such that if, and only if, . We now show that is a symmetric circuit. Let and be an automorphism of extending . Let be defined such that for each gate , and for the output gate , . It can be verified by induction that is an automorphism of extending . We now show that . It suffices to prove that for and that if, and only if, . The forward direction follows from the above argument establishing that is symmetric. Let and and suppose . For each gate pick some such that if or then and for all , if then . Let be an automorphism of such that . Let be defined for such that . We now show that is an automorphism of , and so . Note that, since preserves the labelling on the gates in , it follows that for all , and so . Let and suppose . Then , and so and . It follows that is injective, and so bijective. Let . Then
The first and last equivalences follow from the construction of the circuit. The remaining conditions for to be an automorphism are easily verified. We define from by recursively replacing each internal gate labelled by some partition symmetric function with the rigid strictly symmetric threshold circuit computing it defined in Lemma 12. This circuit computes the same function as . Since is symmetric, and the threshold circuit for a partition symmetric function is symmetric, is symmetric. It follows from the fact that each such threshold circuit is both rigid and strictly symmetric that . The result follows. ∎
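The finite value sets used at the start of the proof can be computed by brute force on small examples. A minimal sketch (the circuit encoding and all names here are ours, not the paper's): for each gate we enumerate the $0$-$1$ assignments, collect the set of values the gate takes, and check that on every assignment exactly one indicator gate of the form $(u, a)$ would fire:

```python
from itertools import product

# Hypothetical circuit encoding: a gate is ('var', name), ('const', c),
# ('add', children) or ('mul', children), where children is a list of gates.

def evaluate(gate, assignment):
    kind = gate[0]
    if kind == 'var':
        return assignment[gate[1]]
    if kind == 'const':
        return gate[1]
    vals = [evaluate(c, assignment) for c in gate[1]]
    if kind == 'add':
        return sum(vals)
    total = 1
    for v in vals:  # multiplication gate
        total *= v
    return total

def value_set(gate, variables):
    """D(gate): all values the gate takes over 0-1 assignments."""
    return {evaluate(gate, dict(zip(variables, bits)))
            for bits in product((0, 1), repeat=len(variables))}

# Example circuit: Phi = (x + y) * (x + 2).
x, y = ('var', 'x'), ('var', 'y')
phi = ('mul', [('add', [x, y]), ('add', [x, ('const', 2)])])
D = value_set(phi, ['x', 'y'])
assert D == {0, 2, 3, 6}

# For each 0-1 assignment, exactly one indicator [Phi = a], a in D, is true.
for bits in product((0, 1), repeat=2):
    a = evaluate(phi, dict(zip(['x', 'y'], bits)))
    assert sum(1 for v in D if v == a) == 1
```

Note that the value sets are finite precisely because the inputs range over $\{0,1\}$; over all field assignments a gate would in general take infinitely many values.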
6 A Lower Bound for the Permanent
In this section, we establish the lower bound on the size of symmetric arithmetic circuits for the permanent.
Theorem 14.
If $\mathbb{F}$ is a field with $\operatorname{char}(\mathbb{F}) \neq 2$, then for any $\epsilon > 0$ there is no family of symmetric arithmetic circuits over $\mathbb{F}$ of orbit size $O(2^{n^{1-\epsilon}})$ computing $\operatorname{perm}_n$.
Our proof establishes something stronger. We actually show that there are no symmetric arithmetic circuits of orbit size that compute the function for $0$-$1$ matrices . Clearly, a circuit that computes the polynomial also computes this function. For a discussion of functional lower bounds, as opposed to polynomial lower bounds, see [14]. Theorem 14 is proved by showing lower bounds on the counting widths of functions that determine the number of perfect matchings in a bipartite graph. The connection between circuit orbit size and counting width comes through the following theorem (see [2, 5]).
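To see concretely why characteristic $2$ is excluded, recall that the permanent and determinant differ only in the signs of the permutations, and those signs vanish modulo $2$. A quick brute-force check (our own illustration, not from the paper) over all $0$-$1$ matrices of order $3$:

```python
from itertools import permutations, product
from math import prod

def sign(sigma):
    """Parity of a permutation given as a tuple, via inversion count."""
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def permanent(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def determinant(M):
    n = len(M)
    return sum(sign(s) * prod(M[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# perm(M) = det(M) (mod 2) for every 0-1 matrix: any lower bound for the
# permanent in characteristic 2 would also apply to the determinant.
for bits in product((0, 1), repeat=9):
    M = [list(bits[0:3]), list(bits[3:6]), list(bits[6:9])]
    assert permanent(M) % 2 == determinant(M) % 2
```

This is exactly the coincidence mentioned in the introduction: over a field of characteristic $2$ the two polynomials are equal, so no separation is possible there.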
Theorem 15.
Let $(C_n)_{n \in \mathbb{N}}$ be a family of symmetric Boolean threshold circuits, of orbit size $O(2^{n^{1-\epsilon}})$ for some $\epsilon > 0$, deciding a class of graphs $\mathcal{C}$. Then, the counting width of $\mathcal{C}$ is $o(n)$.
This theorem is easily obtained by the methods of [2] and [5]. Indeed, [2, Theorem 4] shows, for circuit families of size at most , a bound on the size of supports of size . In Theorem 6 of that paper, an explicit link between orbit size and counting width is stated for circuits with polynomial orbit size, and hence constant-size supports. Combining the methods of the two easily yields Theorem 15, at least for . The improvement to orbit size is obtained by the methods of [5, Theorem 1]. The latter is stated in terms of the size of the circuit rather than its orbit size; however, the proof easily yields the bound for orbit size. If is a bipartite graph, let denote the number of perfect matchings in and, for a prime number , write for the congruence class of modulo . It is well known that if is a balanced bipartite graph with vertex bipartition , and is the biadjacency matrix of , then the permanent of (say, over the rational field ) is the number of distinct perfect matchings of . Moreover, since is a $0$-$1$ matrix, whenever is a subfield of . In particular, for any field of characteristic zero, and for any field of characteristic , . To avoid unnecessary case distinctions, we write where is either or a prime , with the understanding that . Then, we can say that for any field with , . Combining Theorem 13 with Theorem 15 gives us the following consequences.
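The correspondence between the permanent of a biadjacency matrix and the number of perfect matchings can be checked directly on small graphs. A sketch under our own naming (rows index one side of the bipartition, columns the other; both functions enumerate permutations, which is fine at this scale):

```python
from itertools import permutations

def permanent(M):
    """Permanent via the sum over permutations (fine for small n)."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += prod
    return total

def count_perfect_matchings(M):
    """Perfect matchings of the balanced bipartite graph with biadjacency M."""
    n = len(M)
    return sum(1 for sigma in permutations(range(n))
               if all(M[i][sigma[i]] == 1 for i in range(n)))

# K_{2,2} (a 4-cycle as a bipartite graph): exactly 2 perfect matchings.
M = [[1, 1],
     [1, 1]]
assert permanent(M) == count_perfect_matchings(M) == 2

# A path u0-v0-u1-v1: exactly 1 perfect matching.
P = [[1, 0],
     [1, 1]]
assert permanent(P) == count_perfect_matchings(P) == 1
```

Each perfect matching corresponds to a permutation whose product of matrix entries is $1$, and all other permutations contribute $0$, which is the identity used in the text.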
Corollary 16.
If is a field of characteristic and there is a family of symmetric circuits over computing of orbit size , then the counting width of is .
Proof.
Let be the counting width of . Then, by definition, we can find, for each , a pair of balanced bipartite graphs and on at most vertices such that but . Let . Then, by Theorem 13 and the assumption that there is a family of symmetric circuits over computing of orbit size