Combinatorial Resultants in the Algebraic Rigidity Matroid

Motivated by a rigidity-theoretic perspective on the Localization Problem in 2D, we develop an algorithm for computing circuit polynomials in the algebraic rigidity matroid associated to the Cayley-Menger ideal for $n$ points in 2D. We introduce combinatorial resultants, a new operation on graphs that captures properties of the Sylvester resultant of two polynomials in the algebraic rigidity matroid. We show that every rigidity circuit has a construction tree from $K_4$ graphs based on this operation. Our algorithm performs an algebraic elimination guided by the construction tree, and uses classical resultants, factorization and ideal membership. To demonstrate its effectiveness, we implemented our algorithm in Mathematica: it took less than 15 seconds on an example where a Gröbner basis calculation took 5 days and 6 hours.

1 Introduction

This paper addresses combinatorial, algebraic and algorithmic aspects of a question motivated by the following ubiquitous problem from distance geometry:

Localization.

A graph together with weights associated to its edges is given. The goal is to find placements of the graph in some Euclidean space, so that the edge lengths match the given weights. In this paper, we work in 2D. A system of quadratic equations can be easily set up so that the possible placements are among the (real) solutions of this system. Rigidity Theory can help predict, a priori, whether the set of solutions will be discrete (if the given weighted graph is rigid) or continuous (if the graph is flexible). In the rigid case, the double-exponential Gröbner basis algorithm can be used, in principle, to eliminate all but one of the variables. Once a polynomial in a single variable is obtained, numerical methods are used to solve it. We then select one solution, substitute it in the original equations, eliminate to get a polynomial in a new variable and repeat.
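To make this pipeline concrete, here is a minimal Mathematica sketch (our illustration, not code from the paper) for a triangle with squared edge lengths 9, 16 and 25, with vertex 1 pinned at the origin and vertex 2 on the x-axis to remove the planar isometries; GroebnerBasis eliminates all coordinates but one:

    (* placement equations: p1 = (0,0), p2 = (x2,0), p3 = (x3,y3) *)
    eqs = {x2^2 - 9,                  (* |p1 - p2|^2 = 9  *)
           x3^2 + y3^2 - 16,          (* |p1 - p3|^2 = 16 *)
           (x3 - x2)^2 + y3^2 - 25};  (* |p2 - p3|^2 = 25 *)
    (* eliminate x2 and x3, leaving a univariate polynomial in y3 *)
    GroebnerBasis[eqs, {y3}, {x2, x3}]  (* {y3^2 - 16}, up to normalization *)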

Single Unknown Distance Problem.

Instead of attempting to directly compute the coordinates of all the vertices, we restrict our attention to the related problem of finding the possible values of a single unknown distance corresponding to a non-edge (a pair of vertices that are not connected by an edge). Indeed, if we could solve this problem for a collection of non-edge pairs that form a trilateration when added to the original collection of edges, then a single solution in Cartesian coordinates could be easily computed afterwards in linearly many steps of quadratic equation solving.

Rigidity circuits.

We formulate the single unknown distance problem in terms of Cayley rather than Cartesian coordinates. Known theorems from Distance Geometry, Rigidity Theory and Matroid Theory help reduce this problem to finding a certain irreducible polynomial in the Cayley-Menger ideal, called the circuit polynomial. Its support is a graph called a circuit in the rigidity matroid or, for short, a rigidity circuit. Substituting given edge lengths into the circuit polynomial results in a univariate polynomial which can be solved for the unknown distance.
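For the smallest rigidity circuit, $K_4$, this is completely explicit: its circuit polynomial is the $5 \times 5$ Cayley-Menger determinant of four points (the example also used in [rosen:sidman:theran:algebraicMatroidsAction:2020]). The Mathematica sketch below is our illustration (the variable names x[i,j] are ours); it builds the polynomial and solves for the unknown squared distance x[1,4] after substituting the five known ones:

    (* Cayley matrix of 4 points: squared-distance matrix bordered by 1's,
       zeros on the diagonal; x[i,j] is the squared distance between i and j *)
    cayley[n_] := Table[Which[i == j, 0, i == 1 || j == 1, 1,
        True, x[Min[i, j] - 1, Max[i, j] - 1]], {i, n + 1}, {j, n + 1}];
    pK4 = Det[cayley[4]];   (* the circuit polynomial supported on K_4 *)
    (* substituting five known squared distances leaves a univariate
       quadratic in x[1,4] *)
    known = {x[1, 2] -> 9, x[1, 3] -> 16, x[2, 3] -> 25,
             x[2, 4] -> 25, x[3, 4] -> 9};
    Solve[(pK4 /. known) == 0, x[1, 4]]   (* the two possible values *)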

The focus of this paper is the following:

Main Problem.

Given a rigidity circuit, compute its corresponding circuit polynomial.

Related work.

While both distance geometry and rigidity theory have a distinguished history for which a comprehensive overview would be too long to include here, very little is known about computing circuit polynomials. To the best of our knowledge, their study in arbitrary polynomial ideals was initiated in the PhD thesis of Rosen [rosen:thesis]. His Macaulay2 code [rosen:GitHubRepo] is useful for exploring small cases, but the Cayley-Menger ideal is beyond its reach. A recent article [rosen:sidman:theran:algebraicMatroidsAction:2020] popularizes algebraic matroids and uses for illustration the smallest circuit polynomial in the Cayley-Menger ideal. We could not find non-trivial examples anywhere. Indirectly related to our problem are results such as [WalterHusty], where an explicit univariate polynomial of degree 8 is computed (for an unknown angle in a configuration given by edge lengths, from which the placement of the vertices is determined) and [sitharam:convexConfigSpaces:2010], for its usage of Cayley coordinates in the study of configuration spaces of some families of distance graphs. A closely related problem is that of computing the number of embeddings of a minimally rigid graph [streinu:borcea:numberEmbeddings:2004], which has received a lot of attention in recent years (e.g. [capco:schicho:realizations:2017, bartzos:emiris:etAl:realizations:2021, emiris:tsigaridas:varvitsiotis:mixedVolume, emiris:mourrain], to name a few). References to specific results in the literature that are relevant to the theory developed here and to our proofs are given throughout the paper.

How tractable is the problem?

Circuit polynomial computations can be done, in principle, with the double-exponential time Gröbner basis algorithm with an elimination order. On one example, the GroebnerBasis function of Mathematica 12 (running on a 2019 iMac computer with 6 cores at 3.6 GHz) took 5 days and 6 hours, but in most cases it timed out or crashed. Our goal is to make such calculations more tractable by taking advantage of structural information inherent in the problem.

Our Results.

We describe a new algorithm to compute a circuit polynomial with known support. It relies on resultant-based elimination steps guided by a novel inductive construction for rigidity circuits. While inductive constructions have often been used in Rigidity Theory, most notably the Henneberg sequences for Laman graphs [henneberg:graphischeStatik:1911-68] and Henneberg II sequences for rigidity circuits [BergJordan], we argue that our construction is more natural due to its direct algebraic interpretation. In fact, this paper originated from our attempt to interpret Henneberg II algebraically. We have implemented our method in Mathematica and applied it successfully to compute all circuit polynomials on up to 6 vertices and a few on 7 vertices, the largest of which has over two million terms. The previously mentioned example that took over 5 days to complete with GroebnerBasis was solved by our algorithm in less than 15 seconds.

Main Theorems.

We first define the combinatorial resultant of two graphs as an abstraction of the classical resultant. Our main theoretical result is split into a combinatorial theorem (Theorem 1) and an algebraic one, each with an algorithmic counterpart.

Each rigidity circuit can be obtained, inductively, by applying combinatorial resultant operations starting from $K_4$ circuits. The construction is captured by a binary resultant tree whose nodes are intermediate rigidity circuits and whose leaves are $K_4$ graphs.

Theorem 1 leads to a graph algorithm for finding a combinatorial resultant tree of a circuit. Each step of the construction can be carried out in polynomial time using variations on the Pebble Game matroidal sparsity algorithms [streinu:lee:pebbleGames:2008] combined with Hopcroft and Tarjan’s linear time 3-connectivity algorithm [hopcroft:tarjan:73]. However, it is conceivable that the tree could be exponentially large and thus the entire construction could take an exponential number of steps: understanding in detail the algorithmic complexity of our method remains a problem for further investigation.

Each circuit polynomial can be obtained, inductively, by applying resultant operations. The procedure is guided by the combinatorial resultant tree from Theorem 1 and builds up from $K_4$ circuit polynomials. At each step, the resultant produces a polynomial that may not be irreducible. A polynomial factorization and a test of membership in the ideal are applied to identify the factor which is the circuit polynomial.
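In Mathematica terms, one elimination step can be sketched as follows. This is a simplified sketch, not the paper's implementation; it assumes gb is a Gröbner basis of (enough of) the Cayley-Menger ideal over the variables vars, so that PolynomialReduce certifies ideal membership:

    (* one step: the resultant of the two child circuit polynomials
       eliminates the shared variable xe; factor the result and keep the
       factor lying in the ideal -- the parent circuit polynomial *)
    step[pA_, pB_, xe_, gb_, vars_] := Module[{r, factors},
      r = Resultant[pA, pB, xe];
      factors = First /@ Rest[FactorList[r]];  (* drop the constant factor *)
      SelectFirst[factors,
        (Last[PolynomialReduce[#, gb, vars]] === 0) &]];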

The resulting algebraic elimination algorithm runs in exponential time, in part because of the growth in size of the polynomials that are being produced. Several theoretical open questions remain, whose answers may affect the precise time complexity analysis.

Computational experiments.

We implemented our algorithms in Mathematica V12.1.1.0 on two personal computers with the following specifications: Intel i5-9300H 2.4GHz, 32 GB RAM, Windows 10 64-bit; and Intel i5-9600K 3.7GHz, 16 GB RAM, macOS Mojave 10.14.5. We also explored Macaulay2, but it was much slower than Mathematica (hours vs. seconds) in computing one of our examples. The polynomials resulting from our calculations are made available on a github repository [malic:streinu:GitHubRepo].

Overview of the paper.

In Section 2 we introduce the background concepts from matroid theory and rigidity theory. We introduce combinatorial resultants in Section 3, prove Theorem 1 and describe the algorithm for computing a combinatorial resultant tree. In Section 4 we introduce the background concepts pertaining to resultants and elimination ideals. In Section 5 we introduce algebraic matroids. In Section 6 we introduce the Cayley-Menger ideal and define properties of its circuit polynomials. In Section 7 we prove the algebraic theorem and in Section 8 we present a summary of the preliminary experimental results we carried out with our implementation. We conclude in Section 9 with a summary of the remaining open questions.

2 Preliminaries: rigidity circuits

We start with the combinatorial aspects of our problem. In this section we review the essential notions and results from the combinatorial rigidity theory of bar-and-joint frameworks in dimension 2 that are relevant for our paper.

Notation.

We work with (sub)graphs given by subsets of edges of the complete graph $K_n$ on $n$ vertices. If $G$ is a (sub)graph, then $V(G)$, resp. $E(G)$, denote its vertex, resp. edge set. The support of $G$ is its edge set $E(G)$. The vertex span of a set of edges is the set of all edge-endpoint vertices. A subgraph is spanning if its edge set spans all of $V(K_n)$. The neighbours $N(v)$ of a vertex $v$ are the vertices adjacent to $v$ in $G$.

Frameworks.

A 2D bar-and-joint framework is a pair $(G, p)$ of a graph $G$ whose vertices are mapped to points in $\mathbb{R}^2$ via the placement map $p$ given by $i \mapsto p_i$. We view the edges as rigid bars and the vertices as rotatable joints which allow the framework to deform continuously as long as the bars retain their original lengths. The realization space of the framework is the set of all its possible placements in the plane with the same bar lengths. Two realizations are congruent if they are related by a planar isometry. The configuration space of the framework is made of congruence classes of realizations. The deformation space of a given framework is the particular connected component of the configuration space that contains it.

A framework is rigid if its deformation space consists of exactly one configuration, and flexible otherwise. Combinatorial rigidity theory of bar-and-joint frameworks seeks to understand the rigidity and flexibility of frameworks in terms of their underlying graphs.

Laman Graphs.

The following theorem relates rigidity theory of 2D bar-and-joint frameworks to graph sparsity.

[Laman’s Theorem] A bar-and-joint framework on $n$ vertices is generically minimally rigid in 2D iff its underlying graph has exactly $2n - 3$ edges, and any proper subset of $n' \ge 2$ vertices spans at most $2n' - 3$ edges.

The genericity condition appearing in the statement of this theorem refers to the placements of the vertices. We’ll introduce this concept rigorously in Section 6. For now we retain the most important consequence of the genericity condition, namely that small perturbations of the vertex placements do not change the rigidity or flexibility properties of the framework. This allows us to refer to the rigidity and flexibility of a (generic) framework solely in terms of its underlying graph.

A graph satisfying the conditions of Laman’s theorem is called a Laman graph. It is minimally rigid in the sense that it has just enough edges to be rigid: if one edge is removed, it becomes flexible. Adding extra edges to a Laman graph keeps it rigid, but the minimality is lost. Such graphs are said to be rigid and overconstrained. In short, for a graph to be rigid, its vertex set must span a Laman graph; otherwise the graph is flexible.

Matroids.

A matroid is an abstraction capturing (in)dependence relations among collections of elements from a ground set, and is inspired by both linear dependencies (among, say, rows of a matrix) and by algebraic constraints imposed by algebraic equations on a collection of otherwise free variables. The standard way to specify a matroid is via its independent sets, which have to satisfy certain axioms [Oxley:2011] (skipped here, since they are not relevant for our presentation). A base is a maximal independent set and a set which is not independent is said to be dependent. A minimal dependent set is called a circuit. Relevant for our purposes are the following general aspects: (a) (hereditary property) a subset of an independent set is also independent; (b) all bases have the same cardinality, called the rank of the matroid. Further properties will be introduced in context, as needed.

In this paper we encounter three types of matroids: a graphic matroid, defined on the ground set given by all the edges of the complete graph $K_n$ (this is the $(2,3)$-sparsity matroid or the generic rigidity matroid described below); a linear matroid, defined on an isomorphic ground set of row vectors of the rigidity matrix associated to a bar-and-joint framework; and an algebraic matroid, defined on an isomorphic ground set of variables $x_{ij}$ (this is the algebraic matroid associated to the Cayley-Menger ideal and will be defined in Section 6).

The $(2,3)$-sparsity matroid: independent sets, bases, circuits.

A graph is $(2,3)$-sparse if every subset of $n' \ge 2$ vertices spans at most $2n' - 3$ edges. The $(2,3)$-sparse graphs on $n$ vertices form the collection of independent sets for a matroid on the ground set of edges of the complete graph $K_n$ [whiteley:Matroids:1996], called the (generic) 2D rigidity matroid. The bases of the matroid are the maximal independent sets, hence the Laman graphs. A set of edges which is not sparse is a dependent set. For instance, adding one edge to a Laman graph creates a dependent set of edges, called a Laman-plus-one graph (Fig. 1).

Figure 1: A Laman-plus-one graph contains a unique circuit (highlighted): (Left two) The circuit is not spanning the entire vertex set. (Right) A spanning circuit.

A minimal dependent set is a (sparsity) circuit. The edges of a circuit span a subset of the vertices of $K_n$. A circuit spanning all $n$ vertices is said to be a spanning or maximal circuit in the sparsity matroid. See Fig. 2 for examples.

Figure 2: The four types of circuits on 6 vertices: the 2D double-banana, the 5-wheel $W_5$, Desargues-plus-one and $K_{3,3}$-plus-one.

A Laman-plus-one graph contains a unique subgraph which is minimally dependent, in other words, a unique circuit. A spanning rigidity circuit is a special case of a Laman-plus-one graph: it has a total of $2n - 2$ edges but it satisfies the $(2,3)$-sparsity condition on all proper subsets of at most $n - 1$ vertices. Simple sparsity considerations can be used to show that the removal of any edge from a circuit results in a Laman graph.
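These counts are easy to test by brute force; the Mathematica sketch below (ours, and exponential in the number of vertices, unlike the polynomial-time pebble games cited later) checks the Laman counts on every vertex subset, and checks a spanning circuit by verifying that every single-edge deletion leaves a Laman graph:

    lamanQ[edges_, n_] := Length[edges] == 2 n - 3 &&
      AllTrue[Subsets[Range[n], {2, n}],
        Function[s, Count[edges, e_ /; SubsetQ[s, e]] <= 2 Length[s] - 3]];
    circuitQ[edges_, n_] := Length[edges] == 2 n - 2 &&
      AllTrue[Range[Length[edges]],
        Function[i, lamanQ[Delete[edges, i], n]]];
    circuitQ[Subsets[Range[4], {2}], 4]   (* K_4 is a spanning circuit: True *)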

Figure 3: Splitting a 2-connected circuit into 3-connected circuits via inverses of 2-sum operations.

Operations on circuits.

If $A$ and $B$ are two graphs, we use a consistent notation for their numbers of vertices and edges, $n_A$, $m_A$, $n_B$, $m_B$, and for their unions and intersections of vertices and edges, as in $n_{A \cup B} = |V(A) \cup V(B)|$ and $n_{A \cap B} = |V(A) \cap V(B)|$, and similarly for edges, with $m_{A \cup B}$ and $m_{A \cap B}$. Let $A$ and $B$ be two circuits with exactly one common edge $e$. Their 2-sum is the graph $A \oplus_2 B$ with $V(A \oplus_2 B) = V(A) \cup V(B)$ and $E(A \oplus_2 B) = (E(A) \cup E(B)) \setminus \{e\}$. The inverse operation of splitting $A \oplus_2 B$ into $A$ and $B$ is called a 2-split (Fig. 3).

Any 2-sum of two circuits is a circuit. Any 2-split of a circuit is a pair of circuits.

Proof.

From sparsity considerations, we prove that the 2-sum has $2n - 2$ edges (on its $n = n_A + n_B - 2$ vertices), and that $(2,3)$-sparsity is also maintained on subsets. Indeed, the total number of edges in the 2-sum is $m_A + m_B - 2 = (2 n_A - 2) + (2 n_B - 2) - 2 = 2(n_A + n_B - 2) - 2 = 2n - 2$.

See also [BergJordan], Lemmas 4.1, 4.2. ∎

Note that not every circuit admits a 2-separation; e.g. the Desargues-plus-one circuit does not admit a 2-separation. However, every circuit that is not 3-connected does admit a 2-separation (see also Lemma 2.4(c) in [BergJordan]).

Connectivity.

It is well known and easy to show that a circuit is always a 2-connected graph. If a circuit is not 3-connected, we refer to it simply as a 2-connected circuit. The Tutte decomposition [tutte:connectivity:1966] of a 2-connected graph into 3-connected components amounts to identifying separating pairs of vertices. For a circuit, the separating pairs induce 2-split (inverse of 2-sum) operations and produce smaller circuits. Thus a 2-connected circuit can be constructed from 3-connected circuits via 2-sums (Fig. 3).

Figure 4: A Henneberg II extension of the Desargues-plus-one circuit.

Inductive constructions for 3-connected circuits.

A Henneberg II extension (also called an edge splitting operation) is defined for an edge $uv$ and a non-incident vertex $w$, as follows: the edge $uv$ is removed, and a new vertex $z$ and three new edges $zu$, $zv$ and $zw$ are added. Berg and Jordan [BergJordan] have shown that, if $C$ is a 3-connected circuit, then a Henneberg II extension of $C$ is also a 3-connected circuit. The inverse Henneberg II operation on a circuit removes one vertex of degree 3 and adds a new edge among its three neighbors in such a way that the result is also a circuit. Berg and Jordan have shown that every 3-connected circuit admits an inverse Henneberg II operation which also maintains 3-connectivity. As a consequence, a 3-connected circuit has an inductive construction, i.e. it can be obtained from $K_4$ by Henneberg II extensions that maintain 3-connectivity. Their proof is based on the existence of two non-adjacent vertices with 3-connected inverse Henneberg II circuits. We will make use in Section 3 of the following weaker result, which does not require the maintenance of 3-connectivity in the inverse Henneberg II operation.

[Theorem 3.8 in [BergJordan]] Let $C$ be a 3-connected circuit on $n \ge 5$ vertices. Then either $C$ has four vertices that admit an inverse Henneberg II resulting in a circuit, or $C$ has three pairwise non-adjacent vertices that admit an inverse Henneberg II resulting in a circuit (not necessarily 3-connected).
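The extension operation itself is immediate to transcribe on edge lists (a small sketch of ours, with edges as sorted vertex pairs): removing the edge $uv$ and joining a new vertex $z$ to $u$, $v$ and a third vertex $w$ turns $K_4$ into the 4-wheel, the smallest Henneberg II extension of a circuit.

    hennebergII[edges_, {u_, v_}, w_, z_] :=
      Union[DeleteCases[edges, Sort[{u, v}]],
        {Sort[{z, u}], Sort[{z, v}], Sort[{z, w}]}];
    k4 = Subsets[Range[4], {2}];
    hennebergII[k4, {1, 2}, 3, 5]   (* the 4-wheel: a circuit on 5 vertices *)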

3 Combinatorial Resultants

We define now the combinatorial resultant operation on two graphs, prove Theorem 1 and describe its algorithmic implications.

Figure 5: A wheel, a complete graph, their common Laman graph (dotted, with red elimination edge) and their combinatorial resultant, which is a Laman-plus-one graph but not a circuit.

Combinatorial resultant.

Let $A$ and $B$ be two distinct graphs with non-empty edge intersection and let $e \in E(A) \cap E(B)$ be a common edge. The combinatorial resultant of $A$ and $B$ on the elimination edge $e$ is the graph $\mathrm{CRes}(A, B, e)$ with vertex set $V(A) \cup V(B)$ and edge set $(E(A) \cup E(B)) \setminus \{e\}$.

The 2-sum appears as a special case of a combinatorial resultant when the two graphs have exactly one edge in common, which is eliminated by the operation. Circuits are closed under the 2-sum operation, but they are not closed under general combinatorial resultants (Fig. 5). We are interested in combinatorial resultants that produce circuits from circuits; an example appears in the sketch below.
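The operation itself is a one-liner on edge lists. In the following sketch (ours; edges are sorted vertex pairs), a 4-wheel and a $K_4$ overlap in a triangle, which is Laman, and their combinatorial resultant is the 5-wheel circuit, as in Fig. 6:

    (* combinatorial resultant: union the edge sets, drop the eliminated edge *)
    cres[ea_, eb_, e_] := DeleteCases[Union[ea, eb], e];
    w4 = {{1,2}, {2,3}, {3,4}, {1,4}, {1,5}, {2,5}, {3,5}, {4,5}}; (* hub 5 *)
    k4 = {{1,2}, {1,6}, {2,6}, {1,5}, {2,5}, {5,6}};  (* K_4 on {1,2,5,6} *)
    cres[w4, k4, {1, 2}]  (* the 5-wheel with hub 5: a circuit on 6 vertices *)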

Circuit-valid combinatorial resultant sequences.

Two circuits are said to be properly intersecting if their common subgraph is Laman.

The combinatorial resultant of two circuits has $2n - 2$ edges, where $n$ is its number of vertices, iff the common subgraph of the two circuits is Laman.

Figure 6: A 4-wheel $W_4$ and a complete graph $K_4$, their common Laman graph (dotted, with red elimination edge) and their combinatorial resultant, the 5-wheel circuit $W_5$.
Proof.

Let $A$ and $B$ be two circuits with $n_A$, resp. $n_B$ vertices and $m_A = 2 n_A - 2$, resp. $m_B = 2 n_B - 2$ edges, and let $C = \mathrm{CRes}(A, B, e)$ be their combinatorial resultant with $n$ vertices and $m$ edges. By inclusion-exclusion, $n = n_A + n_B - n_{A \cap B}$ and $m = m_A + m_B - m_{A \cap B} - 1$. Substituting here the values for $m_A$ and $m_B$, we get $m = 2 n_A + 2 n_B - m_{A \cap B} - 5$. We have $m = 2n - 2$ iff $m_{A \cap B} = 2 n_{A \cap B} - 3$. Since both $A$ and $B$ are circuits, it is not possible that one edge set be included in the other: circuits are minimally dependent sets of edges and thus cannot contain other circuits. As a proper subset of both $E(A)$ and $E(B)$, the common edge set satisfies the hereditary $(2,3)$-sparsity property. If furthermore it has exactly $2 n_{A \cap B} - 3$ edges, then the common subgraph is Laman. ∎

A combinatorial resultant operation applied to two properly intersecting circuits is said to be circuit-valid if it results in a spanning circuit. An example is shown in Fig. 6. Being properly intersecting is a necessary but not sufficient condition for the combinatorial resultant of two circuits to produce a circuit (Fig. 5).

Open Problem 1.

Find necessary and sufficient conditions for the combinatorial resultant of two circuits to be a circuit.

In Section 2 we have seen that a 2-connected circuit can be obtained from 3-connected circuits via 2-sums. The proof of Theorem 1 is completed by Proposition 3 below.

Let $C$ be a 3-connected circuit spanning $n \ge 5$ vertices. Then, in polynomial time, we can find two circuits $A$ and $B$ such that $A$ has $n - 1$ vertices, $B$ has at most $n - 1$ vertices and $C$ can be represented as the combinatorial resultant of $A$ and $B$.

Proof.

We apply a weaker version of Lemma 2 to find two non-adjacent vertices $u$ and $v$ of degree 3 such that a circuit $A$ can be produced via an inverse Henneberg II operation on vertex $u$ in $C$ (see Fig. 7). Let the neighbors of vertex $u$ be $a$, $b$ and $c$, labeled such that $e = ab$ was not an edge of $C$ and is the one added to obtain the new circuit $A$.

Figure 7: The 3-connected circuit $C$ spanning $n$ vertices, with two non-adjacent vertices $u$ (red) and $v$ (blue) of degree 3. Note that the neighborhoods of $u$ and $v$ may not be disjoint. An inverse Henneberg II at $u$ removes the red edges at $u$ and adds the dotted red edge $e = ab$. Circuit $A$ (red).

To define circuit $B$, we first let $L$ be the subgraph of $C$ induced by $V(C) \setminus \{v\}$. Simple sparsity considerations show that $L$ is a Laman graph. The graph $D$ obtained from $L$ by adding the edge $e$, as in Fig. 8 (left), is a Laman-plus-one graph containing the three edges incident to $u$ (which are not in $A$) and the edge $e$ (which is in $A$). $D$ contains a unique circuit $B$ (Fig. 8 left) with edge $e$ (see e.g. [Oxley:2011, Proposition 1.1.6]). It remains to prove that $B$ contains $u$ and its three incident edges. If $B$ does not contain $u$, then it is a proper subgraph of $A$. But this contradicts the minimality of $A$ as a circuit. Therefore $u$ is a vertex in $B$, and because a vertex in a circuit can not have degree less than 3, $B$ contains all its three incident edges.

Figure 8: Remove from $C$ the vertex $v$ and its edges (blue dotted) and add the red edge $e$. Circuit $B$ (blue).

The combinatorial resultant of the circuits $A$ and $B$ with the eliminated edge $e$ satisfies the desired property that $\mathrm{CRes}(A, B, e) = C$.

The algorithm below captures this procedure. The main steps, the inverse Henneberg II step on a circuit at line 4 and finding the unique circuit in a Laman-plus-one graph at line 6, can be done in polynomial time using properties of the $(2,2)$- and $(2,3)$-sparsity pebble games from [streinu:lee:pebbleGames:2008]. ∎

Input: 3-connected circuit $C$
Output: circuits $A$ and $B$, and edge $e$ such that $C = \mathrm{CRes}(A, B, e)$

1:for each vertex $u$ of degree 3 do
2:     if inverse Henneberg II is possible on $u$
3:    and there is a non-adjacent degree 3 vertex $v$ then
4:         Get circuit $A$ and edge $e$ by inverse Henneberg II in $C$ on $u$
5:         Let $D$ be $C$ without $v$ (and its edges) and with new edge $e$
6:         Compute unique circuit $B$ in $D$
7:         return circuits $A$, $B$ and edge $e$
8:     end if
9:end for
Algorithm 1 Inverse Combinatorial Resultant

Resultant tree.

The inductive construction of a circuit using combinatorial resultant operations can be represented in a tree structure. Let $C$ be a rigidity circuit with $n$ vertices. A resultant tree for the circuit $C$ is a rooted binary tree $T_C$ with $C$ as its root and such that: (a) the nodes of $T_C$ are circuits; (b) circuits on level $\ell$ have at most $n - \ell$ vertices; (c) the two children $A$ and $B$ of a parent circuit $D$ are such that $D = \mathrm{CRes}(A, B, e)$, for some common edge $e$, and (d) the leaves are complete graphs on 4 vertices. The complexity of finding a resultant tree depends on the size of the tree, whose depth is at most $n - 4$. The combinatorial resultant tree may thus be anywhere between linear and exponential in size. The best case occurs when the resultant tree is path-like, with each internal node having a leaf child; an example is traced in the sketch below. The worst case could be a complete binary tree: each internal node at level $\ell$ would combine two circuits with the same number of vertices into a circuit on $n - \ell$ vertices. Sporadic examples of small balanced combinatorial resultant trees exist (e.g. $K_{3,3}$-plus-one), but it remains an open problem to find infinite families of such examples. Even if such a family were found, it is still conceivable that alternative, non-balanced combinatorial resultant trees could yield the same circuit.
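For intuition, the path-like tree of the 5-wheel can be traced bottom-up with the cres function from the earlier sketch (our illustration): two $K_4$'s overlapping in a triangle combine into the 4-wheel, which combines with a third $K_4$ into the 5-wheel.

    cres[ea_, eb_, e_] := DeleteCases[Union[ea, eb], e];
    w4 = cres[Subsets[{1, 2, 3, 4}, {2}], Subsets[{1, 2, 3, 5}, {2}], {1, 2}];
    w5 = cres[w4, Subsets[{1, 3, 4, 6}, {2}], {1, 4}]  (* 5-wheel with hub 3 *)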

Open Problem 2.

Characterize the circuits whose combinatorial resultant trees have worst-case size.

Open Problem 3.

Are there infinite families of circuits with only balanced combinatorial resultant trees?

Open Problem 4.

Refine the time complexity analysis of the combinatorial resultant tree algorithm.

The representation of a circuit as the combinatorial resultant of two smaller circuits is not unique, in general. An example is the “double-banana” 2-connected circuit shown in Figure 9.

Figure 9: The 2-connected double-banana circuit can be obtained as a combinatorial resultant from two $K_4$ graphs (left, a 2-sum), and from two 4-wheels (right). Dashed lines indicate the eliminated edges, and in each case one of the two circuits is highlighted to distinguish it from the other.

4 Preliminaries: Resultants and Elimination Ideals

We turn now to the algebraic aspects of our problem. In this section we review known concepts and facts about resultants and elimination ideals that are essential ingredients of our proofs in Section 7.

Resultants.

The resultant can be introduced in several equivalent ways [GelfandKapranovZelevinsky]. Here we use its definition as the determinant of the Sylvester matrix.

Let $R$ be a ring of polynomials and $f, g \in R[x]$, with $m = \deg_x f$ and $n = \deg_x g$, be such that at least one of $m$ or $n$ is positive and

$f = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_0, \qquad g = b_n x^n + b_{n-1} x^{n-1} + \cdots + b_0.$

The resultant of $f$ and $g$ with respect to the indeterminate $x$, denoted $\mathrm{Res}(f, g, x)$, is the determinant of the $(m+n) \times (m+n)$ Sylvester matrix

$\mathrm{Syl}(f, g, x) = \begin{pmatrix} a_m & a_{m-1} & \cdots & a_0 & & \\ & \ddots & & & \ddots & \\ & & a_m & a_{m-1} & \cdots & a_0 \\ b_n & b_{n-1} & \cdots & b_0 & & \\ & \ddots & & & \ddots & \\ & & b_n & b_{n-1} & \cdots & b_0 \end{pmatrix}$

where the submatrix containing only the coefficients of $f$ is of dimension $n \times (m+n)$, and the submatrix containing only the coefficients of $g$ is of dimension $m \times (m+n)$. Unless $m = n$, the columns containing $a_0$ and $b_0$ in the first rows of the two submatrices are not aligned in the same column of $\mathrm{Syl}(f, g, x)$, as displayed above, but rather the first is shifted to the left or right of the second, depending on the relationship between $m$ and $n$. We will make implicit use of the following well-known symmetric and multiplicative properties of the resultant:

([GelfandKapranovZelevinsky, pp. 398]) Let $f, g, h \in R[x]$, with $m = \deg_x f$ and $n = \deg_x g$. The resultant of $f$ and $g$ satisfies

  • $\mathrm{Res}(f, g, x) = (-1)^{mn}\, \mathrm{Res}(g, f, x)$,

  • $\mathrm{Res}(f g, h, x) = \mathrm{Res}(f, h, x) \cdot \mathrm{Res}(g, h, x)$.

Let $R$ be a unique factorization domain and $f, g \in R[x]$. Then $f$ and $g$ have a common factor of positive degree in $R[x]$ if and only if $\mathrm{Res}(f, g, x) = 0$.

This proposition is stated in [GriffithsHarris, pp. 9] without proof. When $R$ is a field, a proof of this property can be found in [CoxLittleOshea, Chapter 3, Proposition 3 of §6], which directly generalizes to polynomial rings via Hilbert’s Nullstellensatz.
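As a sanity check on the definition, the built-in Resultant agrees with the determinant of a Sylvester matrix assembled by hand (our example):

    f = 2 x^2 + 3 x + 1;
    g = x^3 - 4 x + 5;
    Resultant[f, g, x]   (* 440 *)
    (* n = deg g = 3 rows of f-coefficients, m = deg f = 2 rows of g *)
    syl = {{2, 3,  1,  0, 0},
           {0, 2,  3,  1, 0},
           {0, 0,  2,  3, 1},
           {1, 0, -4,  5, 0},
           {0, 1,  0, -4, 5}};
    Det[syl]             (* also 440 *)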

Homogeneous properties.

Circuit polynomials in the Cayley-Menger ideal are homogeneous polynomials (see Proposition 6), hence we are interested in the properties of the resultant of homogeneous polynomials. It is well known that the resultant of two homogeneous polynomials is itself homogeneous, of degree $nh + mk - mn$, where $h$, resp. $k$ are the homogeneous degrees of $f$, resp. $g$, and $m$, resp. $n$ their degrees in the eliminated indeterminate. For completeness we prove these two facts; the exposition follows [CoxLittleOshea].

Let $m$, $n$, $h$ and $k$ be positive integers such that $m \le h$ and $n \le k$. Let $f$ and $g$ be polynomials of degree $m$ and $n$ in $x$, with generic coefficients $a_i$, $0 \le i \le m$, and $b_j$, $0 \le j \le n$, of homogeneous degrees $\deg a_i = h - i$ and $\deg b_j = k - j$, respectively, i.e.

$f = a_m x^m + \cdots + a_1 x + a_0, \qquad g = b_n x^n + \cdots + b_1 x + b_0.$

The resultant $\mathrm{Res}(f, g, x)$ is a homogeneous polynomial in the ring $\mathbb{Q}[a_0, \dots, a_m, b_0, \dots, b_n]$ of degree $nh + mk - mn$.

Proof.

This property follows from the Leibniz expansion of the determinant of the Sylvester matrix $S = \mathrm{Syl}(f, g, x)$, where $S_{ij}$ is the $(i,j)$-th entry, which shows that each term of $\det S$ is, up to sign, equal to

$\prod_{i=1}^{m+n} S_{i \sigma(i)}$

for some permutation $\sigma$ of the set $\{1, \dots, m+n\}$. This term is non-zero if and only if $S_{i \sigma(i)} \neq 0$ for all $i$; a non-zero entry $S_{i \sigma(i)}$ has degree $(h - m) + \sigma(i) - i$ when $i \le n$ and degree $k + \sigma(i) - i$ when $i > n$, and since $\sum_{i=1}^{m+n} (\sigma(i) - i) = 0$, the homogeneous degree of the term is $n(h - m) + mk = nh + mk - mn$.∎

Let $f$ and $g$ be homogeneous polynomials in $\mathbb{Q}[X]$ of homogeneous degree $h$ and $k$, respectively, so that the coefficients $a_i$ of $x^i$ in $f$ and $b_j$ of $x^j$ in $g$ are homogeneous of degree $h - i$, resp. $k - j$, for all $0 \le i \le m = \deg_x f$ and all $0 \le j \le n = \deg_x g$. If $\mathrm{Res}(f, g, x) \neq 0$, then it is a homogeneous polynomial in $\mathbb{Q}[X \setminus \{x\}]$ of degree $nh + mk - mn$.

We were not able to find a reference for this proposition in the literature; however, the case when $f$, resp. $g$ are homogeneous of degree $h$, resp. $k$ and such that $\deg_x f = h$ and $\deg_x g = k$, so that

$f = a_h x^h + \cdots + a_1 x + a_0, \qquad g = b_k x^k + \cdots + b_1 x + b_0,$

can be found in e.g. [CoxLittleOshea, pp. 454] as Lemma 5 of Chapter 8, stating that in that case $\mathrm{Res}(f, g, x)$ is of homogeneous degree $hk$. The proof below is a direct adaptation of the proof of Lemma 5 in [CoxLittleOshea, pp. 454], and the Lemma 5 itself follows directly from Proposition 4 by substituting $m = h$ and $n = k$, so as to obtain $nh + mk - mn = hk$.

Proof.

Let $S$ be the Sylvester matrix of $f$ and $g$ with respect to $x$, and let, up to sign, $\prod_{i=1}^{m+n} S_{i \sigma(i)}$ be a non-zero term in the Leibniz expansion of its determinant, for some permutation $\sigma$ of $\{1, \dots, m+n\}$.

A non-zero entry $S_{i \sigma(i)}$ has degree $(h - m) + \sigma(i) - i$ if $i \le n$ and degree $k + \sigma(i) - i$ if $i > n$. Therefore, the total degree of the term is

$n(h - m) + mk + \sum_{i=1}^{m+n} (\sigma(i) - i) = nh + mk - mn. \qquad ∎$
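The degree count is easy to confirm on a small example (ours): with $h = k = 2$, $m = 2$ and $n = 1$, the predicted degree is $nh + mk - mn = 2 + 4 - 2 = 4$.

    f = x^2 + y z;       (* homogeneous of degree h = 2, deg_x f = 2 *)
    g = x y + z^2;       (* homogeneous of degree k = 2, deg_x g = 1 *)
    Resultant[f, g, x]   (* z^4 + y^3 z: homogeneous of degree 4 *)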

Elimination ideals.

Let $I$ be an ideal of $\mathbb{Q}[X]$ and $S \subseteq X$ non-empty. The elimination ideal of $I$ with respect to $S$ is the ideal $I \cap \mathbb{Q}[S]$ of the ring $\mathbb{Q}[S]$.

Elimination ideals frequently appear in the context of Gröbner bases [Buchberger, CoxLittleOshea], which give a general approach for computing elimination ideals: if $G$ is a Gröbner basis for $I$ with respect to an elimination order (see Exercises 5 and 6 in §1 of Chapter 3 in [CoxLittleOshea]), e.g. the lexicographic order $x_1 > x_2 > \cdots > x_n$, then the elimination ideal which eliminates the first $k$ indeterminates from $I$ in the specified order has $G \cap \mathbb{Q}[x_{k+1}, \dots, x_n]$ as its Gröbner basis.
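Concretely, in Mathematica (our example, not the paper's code), computing a lexicographic Gröbner basis and discarding the elements that involve the eliminated variable realizes the statement above:

    gb = GroebnerBasis[{x^2 + y + z - 1, x + y^2 + z - 1}, {x, y, z},
       MonomialOrder -> Lexicographic];   (* x > y > z *)
    Select[gb, FreeQ[#, x] &]   (* a Groebner basis of the elimination
                                   ideal retaining y and z *)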

We will frequently make use of the following well-known result.

If $I$ is a prime ideal of $\mathbb{Q}[X]$ and $S \subseteq X$ is non-empty, then the elimination ideal $I \cap \mathbb{Q}[S]$ is prime.

Proof.

Let $f, g \in \mathbb{Q}[S]$ with $fg \in I \cap \mathbb{Q}[S]$. Then certainly $fg \in I$, so at least one of $f$ or $g$ is in $I$, and hence in $I \cap \mathbb{Q}[S]$. ∎

Let $S \subseteq X$ be non-empty and $x \in X \setminus S$. Furthermore, let $f, g \in \mathbb{Q}[S \cup \{x\}]$, where $\deg_x f, \deg_x g \ge 1$. It is clear from the definition of the resultant that $\mathrm{Res}(f, g, x) \in \mathbb{Q}[S]$. In Section 7 we will make use of the following proposition.

Let $I$ be an ideal of $\mathbb{Q}[X]$ and $f, g \in I$. Then $\mathrm{Res}(f, g, x)$ is in the elimination ideal $I \cap \mathbb{Q}[X \setminus \{x\}]$.

A proof of this proposition can be found in [CoxLittleOshea, pp. 167].
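The proposition can also be observed directly (our example): the resultant of two ideal elements reduces to zero against an eliminating Gröbner basis, certifying membership in the elimination ideal.

    f = x^2 + y^2 - 1;
    g = x y - 1;
    r = Resultant[f, g, x];                (* y^4 - y^2 + 1 *)
    gb = GroebnerBasis[{f, g}, {y}, {x}];  (* eliminate x *)
    Last[PolynomialReduce[r, gb, {y}]]     (* remainder 0: r is in I ∩ Q[y] *)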

5 Preliminaries: Ideals and Algebraic Matroids

Recall that a set of vectors in a vector space is linearly dependent if there is a non-trivial linear relationship between them. Similarly, given a finite collection $S$ of complex numbers, we say that $S$ is algebraically dependent if there is a non-trivial polynomial relationship between the numbers in $S$.

More precisely, let $K$ be a field (e.g. $\mathbb{Q}$) and $L$ a field extension of $K$. Let $S$ be a finite subset of $L$.

We say that $S$ is algebraically dependent over $K$ if there is a non-zero (multivariate) polynomial with coefficients in $K$ vanishing on $S$. Otherwise, we say that $S$ is algebraically independent over $K$.

Algebraic independence and algebraic matroids.

It was noticed by van der Waerden that the algebraically independent subsets of a finite subset $S$ of $L$ satisfy the matroid axioms [VDWmoderne, VDW2] and therefore define a matroid, called the algebraic matroid on $S$ over $K$.

Let $K$ be a field and $L$ a field extension of $K$. Let $S$ be a finite subset of $L$. The algebraic matroid on $S$ over $K$ is the matroid $(S, \mathcal{I})$ such that $A \in \mathcal{I}$ if and only if $A$ is algebraically independent over $K$.

In this paper we use an equivalent definition of algebraic matroids in terms of polynomial ideals. Before stating this equivalent definition, we will first recall some elementary definitions and properties of ideals in polynomial rings. For a general reference on polynomial rings the reader may consult [Lang].

Notations and conventions.

To keep the presentation focused on the goal of the paper, we refrain from giving the most general form of a statement or a proof. We work over the field of rational numbers $\mathbb{Q}$. In this section, the set of variables $X$ denotes $\{x_1, \dots, x_n\}$. Polynomial rings are always of the form $\mathbb{Q}[S]$, over subsets of variables $S \subseteq X$. The support of a polynomial $f$ is the set of indeterminates appearing in $f$. The degree of a variable $x$ in a polynomial $f$ is denoted by $\deg_x f$.

Polynomial ideals.

A set of polynomials $I$ is an ideal of $\mathbb{Q}[X]$ if it is closed under addition and under multiplication by elements of $\mathbb{Q}[X]$. Every ideal contains the zero polynomial, and if an ideal contains a non-zero element of $\mathbb{Q}$, then it is all of $\mathbb{Q}[X]$. A generating set for an ideal $I$ is a set of polynomials $F \subseteq I$ such that every polynomial in the ideal is an algebraic combination (addition and multiplication) of elements in $F$ with coefficients in $\mathbb{Q}[X]$. Hilbert's Basis Theorem (see [CoxLittleOshea]) guarantees that every ideal in a polynomial ring has a finite generating set. Ideals generated by a single polynomial are called principal. An ideal $I$ is a prime ideal if, whenever $fg \in I$, then either $f \in I$ or $g \in I$. A polynomial is irreducible (over $\mathbb{Q}$) if it cannot be decomposed into a product of non-constant polynomials in $\mathbb{Q}[X]$. A principal ideal is prime iff it is generated by an irreducible polynomial. However, an ideal generated by two or more irreducible polynomials is not necessarily prime.

Let $I$ be an ideal of $\mathbb{Q}[X]$. A minimal prime ideal over $I$ is a prime ideal of $\mathbb{Q}[X]$, minimal with respect to set inclusion among all prime ideals containing $I$. By Zorn’s lemma, a proper ideal of $\mathbb{Q}[X]$ always has at least one minimal prime ideal above it.

Dimension.

By definition, the dimension of the ring $\mathbb{Q}[X]$ is $|X| = n$. This definition of dimension of a polynomial ring is a special case of the more general concept of Krull dimension of a commutative ring. The dimension $\dim I$ of an ideal $I$ of $\mathbb{Q}[X]$ is the cardinality of a maximal subset $S \subseteq X$ with the property $I \cap \mathbb{Q}[S] = \{0\}$.

We say that a (strict) chain of prime ideals $P_0 \subsetneq P_1 \subsetneq \cdots \subsetneq P_c$ of $\mathbb{Q}[X]$ has length $c$. The codimension or height $\mathrm{codim}\, P$ of a prime ideal $P$ of $\mathbb{Q}[X]$ is defined as the supremum of the lengths of chains of prime ideals $P_0 \subsetneq P_1 \subsetneq \cdots \subsetneq P_c$ such that $P_c = P$.

In a polynomial ring all prime ideals have a finite height and any two maximal chains of prime ideals terminating at a prime ideal $P$ have the same length. Furthermore, we have $\dim P + \mathrm{codim}\, P = |X|$.

An important bound on the codimension of a prime ideal is given by: [Krull’s Height Theorem] Let $I$ be an ideal generated by $r$ elements, and let $P$ be a minimal prime ideal over $I$. Then $\mathrm{codim}\, P \le r$. Conversely, if $P$ is a prime ideal such that $\mathrm{codim}\, P \le r$, then it is a minimal prime of an ideal generated by $r$ elements.

Krull’s Height Theorem holds more generally for all Noetherian rings, see [Eisenbud, Chapter 10] or [Jacobson2, Chapter 7]. An immediate consequence of the Height Theorem is that prime ideals of codimension 1 are principal.

If $P$ is a prime ideal of $\mathbb{Q}[X]$ of codimension 1, then $P$ is principal.

Proof.

By Krull’s Height Theorem, $P$ is minimal over an ideal $\langle f \rangle$ generated by some $f \in \mathbb{Q}[X]$. If $f$ is not irreducible, then it has an irreducible factor $f'$ that is in $P$, because $P$ is prime. Therefore, we have $\langle f \rangle \subseteq \langle f' \rangle \subseteq P$ with $\langle f' \rangle$ prime. By minimality we must have $P = \langle f' \rangle$. ∎

Algebraic matroid of a prime ideal.

Intuitively, a collection of variables is independent if it is not constrained by any polynomial in the ideal, and dependent otherwise. Thus the algebraic matroid induced by the ideal is, informally, a matroid on the ground set of variables whose independent sets are the subsets of variables on which no polynomial in the ideal is supported. Its dependent sets are supports of polynomials in the ideal.

Every algebraic matroid of a prime ideal arises as an algebraic matroid of a field extension in the sense of Definition 5, and vice-versa. This equivalence is well-known and we include it for completeness.

Formally, let $I$ be a prime ideal of the polynomial ring $\mathbb{Q}[X]$. We define a matroid on $X$, depending on the ideal $I$, called the algebraic matroid of $I$ and denoted $\mathcal{A}(I)$, in the following way.

The quotient ring $\mathbb{Q}[X]/I$ is an integral domain with a well-defined fraction field $\mathrm{Frac}(\mathbb{Q}[X]/I)$, which contains $\mathbb{Q}$ as a subfield. The image of $X$ under the canonical maps

$\mathbb{Q}[X] \to \mathbb{Q}[X]/I \to \mathrm{Frac}(\mathbb{Q}[X]/I)$

is the subset $\bar{X} = \{\bar{x} : x \in X\}$ of $\mathrm{Frac}(\mathbb{Q}[X]/I)$, where $\bar{x}$ denotes the equivalence class of $x$ in both $\mathbb{Q}[X]/I$ and $\mathrm{Frac}(\mathbb{Q}[X]/I)$.

Let $S$ be a non-empty subset of $X$ and consider its image $\bar{S}$ in $\mathrm{Frac}(\mathbb{Q}[X]/I)$ under the canonical maps. For clarity, let $S = \{x_1, \dots, x_k\}$ and $\bar{S} = \{\bar{x}_1, \dots, \bar{x}_k\}$ for some fixed $k$. The set $\bar{S}$ is by definition algebraically dependent over $\mathbb{Q}$ if and only if there exists a non-zero polynomial $f \in \mathbb{Q}[y_1, \dots, y_k]$ vanishing on $\bar{S}$, i.e.

$f(\bar{x}_1, \dots, \bar{x}_k) = 0.$

Clearly, $\bar{S}$ is algebraically dependent over $\mathbb{Q}$ if and only if $f(x_1, \dots, x_k) \in I$, that is if and only if

$I \cap \mathbb{Q}[S] \neq \{0\},$

where $\mathbb{Q}[S]$ denotes the ring of polynomials supported on subsets of $S$. Similarly, $\bar{S}$ is algebraically independent over $\mathbb{Q}$ if and only if $I \cap \mathbb{Q}[S] = \{0\}$.

Let $I$ be a prime ideal in the polynomial ring $\mathbb{Q}[X]$. The algebraic matroid of $I$, denoted $\mathcal{A}(I)$, is the matroid on the ground set $X$ with independent sets

$\mathcal{I} = \{ S \subseteq X : I \cap \mathbb{Q}[S] = \{0\} \},$

where $\mathbb{Q}[S]$ denotes the ring of polynomials supported on subsets of $S$.

Equivalence of the two definitions.

The above construction shows that any algebraic matroid with respect to a prime ideal $I$ can be realized as an algebraic matroid over $\mathbb{Q}$ with the ground set $\bar{X}$ in the field extension $\mathrm{Frac}(\mathbb{Q}[X]/I)$ of $\mathbb{Q}$. Conversely, given a finite set $S = \{s_1, \dots, s_n\}$ of elements in a field extension $L$ of $K$, we can realize any algebraic matroid on $S$ over $K$ as an algebraic matroid of a prime ideal of $K[x_1, \dots, x_n]$ in the following way: let $\varphi$ be the homomorphism

$\varphi : K[x_1, \dots, x_n] \to L$

mapping $x_i \mapsto s_i$ for all $i$ and $c \mapsto c$ for all $c \in K$. Its kernel $\ker \varphi$ is a prime ideal, being the kernel of a homomorphism into a field. Let $A$ be a dependent set in $S$. Then a non-zero polynomial over $K$ vanishes on $A$, so the kernel $\ker \varphi$ is non-zero, and clearly any polynomial in $\ker \varphi$ defines a dependency in $S$. Therefore, if we denote by $K[X_A]$ the ring of polynomials supported on subsets of $X_A = \{x_i : s_i \in A\}$, we have

$\ker \varphi \cap K[X_A] \neq \{0\}$

if and only if $A$ is a dependent set of the algebraic matroid on $S$ over $K$.

For the rest of the paper we will work exclusively with algebraic matroids of prime ideals.

Circuits and circuit polynomials.

A circuit is a minimal set of variables supported by a polynomial in $I$. A polynomial whose support is a circuit is called a circuit polynomial. A theorem of Lovász and Dress [DressLovasz] states that a circuit polynomial is unique in the ideal with the given support $C$, up to multiplication by a constant (we’ll just say, shortly, that it is unique). Furthermore, the circuit polynomial is irreducible.

We retain the following property, stating that circuit polynomials generate elimination ideals supported on circuits.

Let $I$ be a prime ideal in $\mathbb{Q}[X]$ and $C \subseteq X$ a circuit of the algebraic matroid $\mathcal{A}(I)$. The ideal $I \cap \mathbb{Q}[C]$ is principal and generated by an irreducible circuit polynomial $p_C$, which is unique up to multiplication by a constant.

Proof.

Since $C$ is a circuit, every proper subset of $C$ is independent, so the ideal $I \cap \mathbb{Q}[C]$ has dimension at least equal to $|C| - 1$. It can not have dimension greater than or equal to $|C|$ since $I \cap \mathbb{Q}[C] \neq \{0\}$. Therefore $I \cap \mathbb{Q}[C]$ has dimension $|C| - 1$ and codimension 1. By Corollary 5, $I \cap \mathbb{Q}[C]$ is principal, and since the circuit polynomial $p_C$ is irreducible over $\mathbb{Q}$, it also generates $I \cap \mathbb{Q}[C]$.∎

6 The Cayley-Menger ideal and its algebraic matroid

In this section we introduce the 2D Cayley-Menger ideal $\mathrm{CM}_n$. We then define the corresponding circuit polynomials and their supports, and make the connection with combinatorial rigidity circuits.

We will show that the algebraic matroid of $\mathrm{CM}_n$ is isomorphic to the $(2,3)$-sparsity matroid. This equivalence is well-known; however, we were not able to track down the original reference, and include a proof for completeness.

The Cayley-Menger ideal and its algebraic matroid.

We use variables $x_{ij}$, $1 \le i < j \le n$, for unknown squared distances between pairs of points. The distance matrix of $n$ labeled points is the $n \times n$ matrix of squared distances between pairs of points. The Cayley matrix is the distance matrix bordered by a new row and column of 1's, with zeros on the diagonal:

$\begin{pmatrix} 0 & 1 & 1 & \cdots & 1 \\ 1 & 0 & x_{12} & \cdots & x_{1n} \\ 1 & x_{12} & 0 & \cdots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{1n} & x_{2n} & \cdots & 0 \end{pmatrix}$

Cayley’s Theorem says that, if the distances come from a point set in the Euclidean space $\mathbb{R}^d$, then the rank of this matrix must be at most $d + 2$. Thus all the $(d+3) \times (d+3)$ minors of the Cayley-Menger matrix should be zero. These minors induce polynomials in the variables $x_{ij}$ which generate the $d$-dimensional Cayley-Menger ideal $\mathrm{CM}^d_n$. They are called the standard generators, are homogeneous polynomials with integer coefficients and are irreducible over $\mathbb{Q}$. The $d$-dimensional Cayley-Menger ideal is a prime ideal of dimension $dn - \binom{d+1}{2}$ [borcea:cayleyMengerVariety:2002, Giambelli, HarrisTu, JozefiakLascouxPragacz] and codimension $\binom{n-d}{2}$. We work with the 2D Cayley-Menger ideal $\mathrm{CM}_n$, generated by the $5 \times 5$ minors of the Cayley matrix. The algebraic matroid of the 2D Cayley-Menger ideal is denoted by $\mathcal{A}(\mathrm{CM}_n)$.
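The standard generators are straightforward to enumerate in Mathematica (our sketch; the variable names x[i,j] are ours):

    (* Cayley matrix of n points, and the 5x5 minors generating the 2D
       Cayley-Menger ideal; shown here for n = 5 *)
    cayley[n_] := Table[Which[i == j, 0, i == 1 || j == 1, 1,
        True, x[Min[i, j] - 1, Max[i, j] - 1]], {i, n + 1}, {j, n + 1}];
    minors5[m_] := With[{idx = Subsets[Range[Length[m]], {5}]},
      Flatten[Table[Det[m[[r, c]]], {r, idx}, {c, idx}]]];
    gens = minors5[cayley[5]];   (* includes repeats, by symmetry *)
    Length[gens]                 (* 36 choices of row/column subsets *)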

The algebraic matroid of $\mathrm{CM}_n$.

As defined in Section 5, the algebraic matroid $\mathcal{A}(\mathrm{CM}_n)$ is the matroid on the ground set $X = \{x_{ij} : 1 \le i < j \le n\}$ where $S \subseteq X$ is independent if

$\mathrm{CM}_n \cap \mathbb{Q}[S] = \{0\},$

where, as before, $\mathbb{Q}[S]$ denotes the ring of polynomials over $\mathbb{Q}$ supported on the indeterminates in $S$.

The rank of $\mathcal{A}(\mathrm{CM}_n)$ is equal to $2n - 3$.

Proof.

Immediate from the definition of the dimension of an ideal in a ring of polynomials (Section 5), since $\mathrm{CM}_n$ has dimension $2n - 3$. ∎

When $d = 2$, the rank of the algebraic matroid is precisely the rank of the $(2,3)$-sparsity matroid on $n$ vertices. In fact, the two matroids are isomorphic. For the rest of the paper we fix $d = 2$ and abbreviate $\mathrm{CM}^2_n$ with just $\mathrm{CM}_n$.

The algebraic matroid $\mathcal{A}(\mathrm{CM}_n)$ and the $(2,3)$-sparsity matroid on $n$ vertices are isomorphic.

We will prove the equivalence of $\mathcal{A}(\mathrm{CM}_n)$ and the $(2,3)$-sparsity matroid by proving that both are equivalent to the 2-dimensional generic linear rigidity matroid that we now introduce.

Let $G$ be a graph and $(G, p)$ a bar-and-joint framework with points $p_i = p(i)$, $i \in V(G)$.

The rigidity matrix $R(G, p)$ (or just $R$ when there is no possibility of confusion) of the bar-and-joint framework $(G, p)$ is the $|E(G)| \times 2n$ matrix with pairs of columns indexed by the vertices and rows indexed by the edges $ij$ with $i < j$. The $i$-th (vector) entry in the row $ij$ is $p_i - p_j$, the $j$-th entry is $p_j - p_i$, and all other entries are $0$.

The rigidity matrix is defined up to an order of the vertices and the edges; to eliminate this ambiguity we fix the order on the vertices as $1 < 2 < \cdots < n$ and we order the edges $ij$, with $i < j$, lexicographically.

For example, let $G = K_4$. Then the rows are ordered as $12$, $13$, $14$, $23$, $24$ and $34$, and the corresponding rigidity matrix is given by

$R(K_4, p) = \begin{pmatrix} p_1 - p_2 & p_2 - p_1 & 0 & 0 \\ p_1 - p_3 & 0 & p_3 - p_1 & 0 \\ p_1 - p_4 & 0 & 0 & p_4 - p_1 \\ 0 & p_2 - p_3 & p_3 - p_2 & 0 \\ 0 & p_2 - p_4 & 0 & p_4 - p_2 \\ 0 & 0 & p_3 - p_4 & p_4 - p_3 \end{pmatrix}$

where each entry $p_i - p_j$ stands for a pair of coordinate entries and each $0$ for a pair of zeros.
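This matrix is easy to generate and test numerically (our sketch): a random placement is generic with probability 1, and the rank of the rigidity matrix of $K_4$ is $2 \cdot 4 - 3 = 5$.

    (* row for edge {i,j}: p_i - p_j in the columns of i, p_j - p_i in the
       columns of j, zeros elsewhere *)
    rigidityRow[{i_, j_}, p_] := Flatten[Table[
       Which[v == i, p[[i]] - p[[j]], v == j, p[[j]] - p[[i]], True, {0, 0}],
       {v, Length[p]}]];
    rigidityMatrix[edges_, p_] := rigidityRow[#, p] & /@ edges;
    p = RandomReal[{-1, 1}, {4, 2}];
    MatrixRank[rigidityMatrix[Subsets[Range[4], {2}], p]]   (* 5 = 2*4 - 3 *)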

Let $G$ be a graph and $(G, p)$ a bar-and-joint framework. The 2-dimensional linear rigidity matroid $\mathcal{R}(G, p)$ is the matroid on the ground set $E(G)$ where $E' \subseteq E(G)$ is independent if and only if the rows of $R(G, p)$ indexed by $E'$ are linearly independent.

Note that the matroid $\mathcal{R}(G, p)$ depends on the plane configuration $p$. For example, if $p$ is a configuration in which at most two vertices of $G$ are on a line, and $q$ is a configuration in which all the vertices are on the same line, then $\mathcal{R}(G, p) \neq \mathcal{R}(G, q)$.

Let $G$ be a graph and consider the set of all possible plane configurations for its vertices.

We say that a 2D bar-and-joint framework $(G, p)$ is generic if the rank of the row space of $R(G, p)$ is maximal among all plane configurations.

If $p$ and $q$ are distinct generic plane configurations for a graph $G$, the 2D linear matroids $\mathcal{R}(G, p)$ and $\mathcal{R}(G, q)$ are isomorphic [graverServatiusServatius, Theorem 2.2.1], hence the following matroid is well-defined.

Let $G$ be a graph. The 2-dimensional generic linear matroid $\mathcal{R}(G)$ is the 2D linear matroid $\mathcal{R}(G, p)$ where $p$ is any generic plane configuration for $G$.

That for a given graph $G$ on $n$ vertices the generic linear matroid and the $(2,3)$-sparsity matroid are isomorphic follows from Laman’s theorem (Theorem 2).

We now have to show that $\mathcal{A}(\mathrm{CM}_n)$ is equivalent to the generic linear rigidity matroid $\mathcal{R}(K_n)$. This equivalence will be the consequence of a classical result of Ingleton [Ingleton, Section 6] (see also [EhrenborgRota, Section 2]) stating that algebraic matroids over a field of characteristic zero are linearly representable over an extension of the field, with the linear representation given by the Jacobian. We now note that the Cayley-Menger variety is realized as the Zariski closure of the image of the edge function $\mathbb{R}^{2n} \to \mathbb{R}^{\binom{n}{2}}$ given by

$p = (p_1, \dots, p_n) \mapsto \left( \lVert p_i - p_j \rVert^2 \right)_{i < j}.$

The Jacobian of the edge function at a generic point is, up to a constant factor, precisely the matrix $R(K_n, p)$ for a generic configuration $p$.

This completes the proof of Theorem 6. From now on, we will use the isomorphism to move freely between the formulation of algebraic circuits as subsets of variables and their graph-theoretic interpretation as graphs that are rigidity circuits. Given a (rigidity) circuit $C$, we denote by $p_C$ the corresponding circuit polynomial in the Cayley-Menger ideal $\mathrm{CM}_n$.

Comments: beyond dimension 2?

Note that the $d$-dimensional linear rigidity matroid and the algebraic matroid of the $d$-dimensional Cayley-Menger ideal are isomorphic for all $d$, by the same Jacobian argument; however, a combinatorial characterization of their independent sets, akin to Laman's theorem, is not known for $d \ge 3$.