# A Constraint Propagation Algorithm for Sums-of-Squares Formulas over the Integers

Sums-of-squares formulas over the integers have been studied extensively using their equivalence to consistently signed intercalate matrices. This representation, combined with combinatorial arguments, has been used to produce sums-of-squares formulas and to show that formulas of certain types cannot exist. In this paper, we introduce an algorithm that produces consistently signed intercalate matrices, or proves their nonexistence, extending previous methods beyond what is computationally feasible by hand.


## 1. Introduction

A sums-of-squares formula of type $[r,s,n]$ over $R$ is an identity of the form

$$(x_1^2 + \cdots + x_r^2)(y_1^2 + \cdots + y_s^2) = z_1^2 + \cdots + z_n^2,$$

where each $z_k$ is a bilinear expression in the $x_i$'s and $y_j$'s over $R$.

Sums-of-squares formulas have been studied since 1898, when Hurwitz proved that the only real normed division algebras are the real numbers, the complex numbers, the quaternions, and the octonions. This theorem is proved by considering sums-of-squares formulas of type $[n,n,n]$ over $\mathbb{R}$. In his paper, Hurwitz posed the general question: for what types $[r,s,n]$ does a sums-of-squares formula exist over a given field $F$? [1] [2] We may also consider sums-of-squares formulas over rings.

Whether the existence of a sums-of-squares formula depends on the base ring or field remains an open question. Sums-of-squares formulas over the integers are particularly important, since a sums-of-squares formula over $\mathbb{Z}$ maps to a formula over any field $F$ via the natural map $\mathbb{Z} \to F$.

The general question about the existence of sums-of-squares formulas has valuable connections to topology and geometry, since sums-of-squares formulas induce immersions of projective space into Euclidean space, and they induce Hopf maps between spheres. Shapiro’s book [4] covers the history of sums-of-squares formulas and past results.

The special case of a sums-of-squares formula over the integers has been studied extensively using combinatorics. Yuzvinsky [5] introduced an equivalence between sums-of-squares formulas over the integers and consistently signed intercalate matrices. By studying these matrices, Yuzvinsky and others were able to produce many new formulas, and to prove many new results on the existence of formulas of various types.

In this paper, we revisit the equivalence between sums-of-squares formulas over the integers and consistently signed intercalate matrices. We introduce a constraint propagation algorithm which produces a consistently signed intercalate matrix of a given type, or shows that such a matrix does not exist. Thus, we get the corresponding conclusion for sums-of-squares formulas over the integers. We also discuss canonical forms for consistently signed intercalate matrices, which significantly improve the efficiency of the algorithm.

## 2. Equivalence with Consistently Signed Intercalate Matrices

In this section, we review the equivalence of sums-of-squares formulas over the integers with consistently signed intercalate matrices.

###### Definition 2.1.

An intercalate matrix of type $[r,s,n]$ is an $r \times s$ matrix $M = (m_{ij})$ with entries in the set $\{1, \dots, n\}$ of $n$ colors such that:

• The entries along each row are distinct.

• The entries along each column are distinct.

• If $m_{ij} = m_{kl}$, then $m_{il} = m_{kj}$. (Equivalently, each $2 \times 2$ submatrix contains an even number of distinct elements.)

Such a matrix is called consistently signed if we can assign a sign ($+$ or $-$) to each entry such that whenever $m_{ij} = m_{kl}$, and so $m_{il} = m_{kj}$, the $2 \times 2$ submatrix consisting of these four elements has an odd number of minus signs.

An example of a consistently signed intercalate matrix of type $[3,5,7]$ is

$$\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & -1 & 4 & -3 & 6 \\ 3 & -4 & -1 & 2 & 7 \end{pmatrix}.$$
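Definition 2.1 translates directly into code. The following is a minimal sketch (our own illustration with an assumed function name, not the paper's implementation) that checks whether a fully specified signed matrix is a consistently signed intercalate matrix, using the example above.

```python
def is_consistently_signed_intercalate(M):
    """Check Definition 2.1 on a fully specified signed matrix M.

    Entries are nonzero integers; abs(entry) is the color.
    """
    r, s = len(M), len(M[0])
    color = abs
    # Colors along each row and each column must be distinct.
    for row in M:
        if len({color(e) for e in row}) != s:
            return False
    for j in range(s):
        if len({color(M[i][j]) for i in range(r)}) != r:
            return False
    # Check every 2x2 submatrix.
    for i1 in range(r):
        for i2 in range(i1 + 1, r):
            for j1 in range(s):
                for j2 in range(j1 + 1, s):
                    a, b = M[i1][j1], M[i1][j2]
                    c, d = M[i2][j1], M[i2][j2]
                    if color(a) == color(d) or color(b) == color(c):
                        # Intercalate: the two diagonals repeat together.
                        if color(a) != color(d) or color(b) != color(c):
                            return False
                        # Consistent signing: odd number of minus signs.
                        if sum(e < 0 for e in (a, b, c, d)) % 2 == 0:
                            return False
    return True

M = [[1, 2, 3, 4, 5],
     [2, -1, 4, -3, 6],
     [3, -4, -1, 2, 7]]
print(is_consistently_signed_intercalate(M))  # True
```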

A consistently signed intercalate matrix of type $[r,s,n]$ is equivalent to a sums-of-squares formula of type $[r,s,n]$ through the following correspondence:

• If the $(i,j)$th entry of $M$ is $\pm k$, then $\pm x_i y_j$ occurs in the expansion of $z_k$ with matching sign.

For example, the sums-of-squares formula of type $[3,5,7]$ corresponding to the consistently signed intercalate matrix above is given by

$$\begin{aligned} z_1 &= x_1 y_1 - x_2 y_2 - x_3 y_3, & z_2 &= x_1 y_2 + x_2 y_1 + x_3 y_4, \\ z_3 &= x_1 y_3 - x_2 y_4 + x_3 y_1, & z_4 &= x_1 y_4 + x_2 y_3 - x_3 y_2, \\ z_5 &= x_1 y_5, & z_6 &= x_2 y_5, \\ z_7 &= x_3 y_5. \end{aligned}$$
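One can verify this correspondence mechanically. The sketch below (our own, with assumed names) expands $\sum z_k^2$ from the matrix entries and compares coefficients against the left-hand side, so the check is exact rather than numerical.

```python
from collections import defaultdict

def expands_to_identity(M, r, s, n):
    """Check that sum(z_k^2) equals (x_1^2+...+x_r^2)(y_1^2+...+y_s^2),
    by comparing coefficients of the expanded bilinear terms."""
    # Entry M[i][j] = ±k contributes the term ±x_i*y_j to z_k.
    terms = [[] for _ in range(n)]
    for i, row in enumerate(M):
        for j, entry in enumerate(row):
            sign = 1 if entry > 0 else -1
            terms[abs(entry) - 1].append((sign, i, j))
    coeffs = defaultdict(int)
    for t in terms:                       # expand each z_k^2
        for s1, i1, j1 in t:
            for s2, i2, j2 in t:
                key = (tuple(sorted((i1, i2))), tuple(sorted((j1, j2))))
                coeffs[key] += s1 * s2
    # The identity holds iff only the monomials x_i^2 y_j^2 survive,
    # each with coefficient 1.
    expected = {((i, i), (j, j)): 1 for i in range(r) for j in range(s)}
    return {k: v for k, v in coeffs.items() if v != 0} == expected

M = [[1, 2, 3, 4, 5],
     [2, -1, 4, -3, 6],
     [3, -4, -1, 2, 7]]
print(expands_to_identity(M, 3, 5, 7))  # True
```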

## 3. Constraint Propagation Algorithm

We now summarize the methods of the constraint propagation algorithm for producing a consistently signed intercalate matrix of a given type (or concluding that no such matrix exists). A more detailed treatment of the key functions of the algorithm is included in Section 4, with pseudocode.

The input for this procedure will be a partially completed matrix. For example, for a $[4,4,4]$ formula, we might start with the matrix

$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast \end{pmatrix},$$

where the asterisks, $\ast$, indicate that an entry is not yet determined. For large types, choosing a good input is critical to the efficiency of the algorithm. This choice is discussed in Section 5.

The output of the procedure is a completed consistently signed intercalate matrix with the form of the input matrix, or the conclusion that such a matrix does not exist.

The procedure works by maintaining a list of possible values at each unknown entry, and updating these lists as entries are filled in. For example, for the input above, the lists would be initialized as

$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ \{\pm2,\pm3,\pm4\} & \{\pm1,\pm3,\pm4\} & \{\pm1,\pm2,\pm4\} & \{\pm1,\pm2,\pm3\} \\ \{\pm2,\pm3,\pm4\} & \{\pm1,\pm3,\pm4\} & \{\pm1,\pm2,\pm4\} & \{\pm1,\pm2,\pm3\} \\ \{\pm2,\pm3,\pm4\} & \{\pm1,\pm3,\pm4\} & \{\pm1,\pm2,\pm4\} & \{\pm1,\pm2,\pm3\} \end{pmatrix},$$

since the colors of entries in the same column must be distinct.

The next step is to choose a test value from the list of possibilities for one of the entries. The procedure makes a copy of the matrix including the chosen test entry, propagates the consequences of the choice, and continues to work with this matrix. This test case will eventually either result in a complete consistently signed intercalate matrix, or the conclusion that none exists. If it produces a matrix, that matrix is the output for the original procedure. If there is no such matrix, the tested value is eliminated from the original matrix, and the procedure chooses a new possible value to test.

The choice of the test entry can significantly affect the efficiency of the algorithm, and when we eliminate a test entry, we can often reach additional conclusions and eliminate additional possibilities. These refinements are discussed in Section 6.

In our example, the procedure might choose to try $2$ as a possibility for the $(2,1)$ entry. The procedure makes a copy of the matrix, enters the chosen test value, and propagates the consequences of this test value. It would first eliminate the color $2$ from the remaining entries in the second row and first column,

$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & \{\pm1,\pm3,\pm4\} & \{\pm1,\pm4\} & \{\pm1,\pm3\} \\ \{\pm3,\pm4\} & \{\pm1,\pm3,\pm4\} & \{\pm1,\pm2,\pm4\} & \{\pm1,\pm2,\pm3\} \\ \{\pm3,\pm4\} & \{\pm1,\pm3,\pm4\} & \{\pm1,\pm2,\pm4\} & \{\pm1,\pm2,\pm3\} \end{pmatrix}.$$

In order for the $2 \times 2$ submatrix in the upper left corner to satisfy the condition on squares, we can also conclude that the $(2,2)$ entry must be $-1$. We then propagate the consequences of this assignment.

$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & -1 & \{\pm4\} & \{\pm3\} \\ \{\pm3,\pm4\} & \{\pm3,\pm4\} & \{\pm1,\pm2,\pm4\} & \{\pm1,\pm2,\pm3\} \\ \{\pm3,\pm4\} & \{\pm3,\pm4\} & \{\pm1,\pm2,\pm4\} & \{\pm1,\pm2,\pm3\} \end{pmatrix}.$$

Since we know that the $(2,3)$ entry must be the color $4$ (although we don't know the sign yet), we can eliminate $4$'s from the other entries in the third column, and similarly for $3$'s in the last column.

$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & -1 & \{\pm4\} & \{\pm3\} \\ \{\pm3,\pm4\} & \{\pm3,\pm4\} & \{\pm1,\pm2\} & \{\pm1,\pm2\} \\ \{\pm3,\pm4\} & \{\pm3,\pm4\} & \{\pm1,\pm2\} & \{\pm1,\pm2\} \end{pmatrix}.$$

This process continues, making choices and propagating the consequences in rows, columns, and squares. For our example, after three further choices and their propagation, we obtain the following completed matrix.

$$\begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & -1 & 4 & -3 \\ 3 & -4 & -1 & 2 \\ 4 & 3 & -2 & -1 \end{pmatrix}.$$

## 4. More Details

We now give a detailed description of the various functions used in the constraint propagation algorithm, finishing with high-level pseudo-code for the algorithm. The code itself is available at www.math.umn.edu/~mklynn/sos_pub.

We begin with the function makeMatrix that takes values for $r$, $s$, and $n$, and sets up an $r \times s$ matrix, where the entries are lists of all possible values, $\{\pm 1, \dots, \pm n\}$. The object that we return includes this matrix, as well as various other structures that will be useful later in the algorithm. These structures include:

• A list of coordinates for the matrix

• A list of the rows of the matrix

• A list of the columns of the matrix

• A list of the squares of the matrix

• For each coordinate, a list of all other coordinates in the same row or same column.

• For each coordinate, a list of all submatrices (squares) which include that coordinate.

• An initially empty list of assignments to propagate

• An initially empty list of known colors to propagate

This provides the framework that we’ll use to produce a consistently signed intercalate matrix. As we assign entries, we’ll propagate the consequences of this assignment, eliminating possibilities from related entries. By eliminating entries in this way, we greatly reduce the search space compared to a brute force search, or even a back-tracking approach as in [3].
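The setup above can be sketched compactly. The snippet below is our own illustration (snake_case names and the dict layout are assumptions, not the published code) of a makeMatrix-style constructor holding candidate sets and the bookkeeping structures listed above.

```python
from itertools import product

def make_matrix(r, s, n):
    """Build an r x s grid of candidate sets {±1, ..., ±n} plus bookkeeping."""
    coords = list(product(range(r), range(s)))
    cells = {c: {sgn * k for k in range(1, n + 1) for sgn in (1, -1)}
             for c in coords}
    # All 2x2 "squares": pairs of rows combined with pairs of columns.
    squares = [((i1, j1), (i1, j2), (i2, j1), (i2, j2))
               for i1 in range(r) for i2 in range(i1 + 1, r)
               for j1 in range(s) for j2 in range(j1 + 1, s)]
    # For each coordinate: all other coordinates in its row or column.
    peers = {(i, j): [(i, jj) for jj in range(s) if jj != j]
                     + [(ii, j) for ii in range(r) if ii != i]
             for (i, j) in coords}
    # For each coordinate: all squares that contain it.
    squares_of = {c: [sq for sq in squares if c in sq] for c in coords}
    return {"cells": cells, "coords": coords, "squares": squares,
            "peers": peers, "squares_of": squares_of,
            "to_propagate": [], "colors_to_propagate": []}

m = make_matrix(4, 4, 4)
print(len(m["cells"][(0, 0)]))  # 8 candidate values: ±1, ±2, ±3, ±4
print(len(m["squares"]))        # 36 two-by-two submatrices in a 4x4 grid
```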

We now turn our attention to the function which assigns values to entries. This function could be called by a user who wants to test for specific consistently signed intercalate matrices. It will be called once values become known as a result of constraint propagation, and it will be called to try strategically chosen test values once all constraints have been propagated. Calling this function as a result of constraint propagation will be discussed later in this section. The user's choice of assignments will be discussed in Section 5. Choosing good test values will be discussed in Section 6.

The function assign takes our matrix as an input, along with a coordinate and a value to which it will be assigned. If the coordinate has already been assigned this value, assign immediately returns True. Then, if the value is not a possibility for that coordinate, assign returns False. Otherwise, assign makes the assignment, adds it to the list of assignments to propagate, and returns True.

Before we introduce the propagation functions, we introduce the function eliminate, which will be used by the propagation functions. This function takes our matrix with a coordinate and a value, and deletes that value as a possibility for that coordinate. If the elimination means that there is only one remaining possibility for that entry, that value is assigned. When the elimination reduces the entry to a single color with either sign as an option, the color and entry are added to the running list of colors to propagate. The function returns False if the elimination is impossible, and True otherwise.

Note that the structure of the functions assign and eliminate ensures that we will never have duplicate entries in our lists of assignments and colors to propagate.

We now turn our attention to the propagation functions, beginning with the function propagateRowsAndColumns, which takes our matrix, a coordinate, and its value. The function propagates the consequences of this assignment to the row and column of the coordinate, eliminating the color of the value from all of those entries. The function returns False if one of these eliminations is impossible, and it returns True otherwise.
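A simplified sketch of eliminate and propagateRowsAndColumns (our own, with assumed names, stripped of the assignment and known-color queue bookkeeping described above) might look like this, where `cells` maps coordinates to candidate sets:

```python
def eliminate(cells, coord, value):
    """Remove value as a possibility at coord; False means contradiction."""
    cells[coord].discard(value)
    return bool(cells[coord])  # an empty candidate set is a contradiction

def propagate_rows_and_columns(cells, coord, value, r, s):
    """Remove the color of a newly assigned entry from its row and column."""
    i, j = coord
    color = abs(value)
    peers = [(i, jj) for jj in range(s) if jj != j] + \
            [(ii, j) for ii in range(r) if ii != i]
    for p in peers:
        # Both signs of the color become impossible at every peer.
        if not eliminate(cells, p, color) or not eliminate(cells, p, -color):
            return False
    return True

cells = {(i, j): {1, -1, 2, -2} for i in range(2) for j in range(2)}
cells[(0, 0)] = {1}
print(propagate_rows_and_columns(cells, (0, 0), 1, 2, 2))  # True
print(sorted(cells[(0, 1)]))  # [-2, 2]
```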

Implementation of the propagateSquares function involves many helper functions, the details of which we omit for brevity. We provide only very high level pseudo-code for the function propagateSquare, which propagates constraints in a single square. The function propagateSquares then calls propagateSquare for each square containing our coordinate.

The function propagateSquare takes our matrix with a coordinate, its value, and a square containing the coordinate. The function propagates the consequences of this assignment to the given square. It returns False if it encounters a contradiction at some point; otherwise it returns True.
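The full propagateSquare handles many cases; the sketch below (our own simplification, not the paper's pseudocode) covers only the case where three entries of a $2 \times 2$ square are already decided, which is enough to reproduce the deduction of the $-1$ entry in the example of Section 3. A square is given as coordinates in the order (top-left, top-right, bottom-left, bottom-right).

```python
def propagate_square(cells, square):
    """Constrain the fourth entry of a 2x2 square when three are decided."""
    vals = [next(iter(cells[c])) if len(cells[c]) == 1 else None
            for c in square]
    if vals.count(None) != 1:
        return True                          # nothing to do in this sketch
    k = vals.index(None)
    partner = {0: 3, 1: 2, 2: 1, 3: 0}[k]    # same-diagonal partner
    a, b = (v for idx, v in enumerate(vals) if idx not in (k, partner))
    p = vals[partner]
    target = cells[square[k]]
    if abs(a) == abs(b):
        # The other diagonal repeats, so this one must too, with an odd
        # number of minus signs among the four entries.
        neg = sum(v < 0 for v in (p, a, b))
        forced = -abs(p) if neg % 2 == 0 else abs(p)
        if forced not in target:
            return False
        cells[square[k]] = {forced}
    else:
        # The other diagonal's colors differ, so this one's must differ too.
        target -= {abs(p), -abs(p)}
        if not target:
            return False
    return True

square = ["tl", "tr", "bl", "br"]
cells = {"tl": {1}, "tr": {2}, "bl": {2},
         "br": {1, -1, 2, -2, 3, -3, 4, -4}}
print(propagate_square(cells, square))  # True
print(cells["br"])  # {-1}
```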

All of the above functions rely on knowing the value of one entry, and determining how this affects the values of related entries. However, in the example in Section 3, we saw that we were able to make similar eliminations when only the color (but not the sign) of an entry is known. For propagating to rows and columns, we can still use the propagateRowsAndColumns function, since the sign of the entry has no influence on this function. However, for the squares, we need a separate function, which we now define.

The function propagateSquaresColor takes our matrix with a coordinate and a color, and it propagates the consequences of knowing that color to all $2 \times 2$ submatrices containing the coordinate. It returns False if it encounters a contradiction at some point; otherwise it returns True.

We now give pseudo-code for our overall propagation function, propagate, which calls the above functions as long as there are values and colors to propagate.

Before we describe the overall algorithm for finding consistently signed intercalate matrices, we describe two verification functions.

The first verification function, verify, checks that there is no contradiction in the values assigned so far in the matrix, ignoring the unknown values. It returns True if there is no contradiction so far, and False if there is a contradiction.

The second verification function, verifyComplete, checks if every entry of the matrix has been assigned a value. It returns True if the matrix is complete, and it returns False otherwise.

We now provide the pseudocode for the main algorithm, which takes a partially completed matrix of type $[r,s,n]$ as input, and either returns a consistently signed intercalate matrix of the given type, or returns False if such a matrix does not exist. This function, completeMatrix, is called recursively in order to find the desired matrix.

Prior to calling the function completeMatrix, a user can choose to make some assignments. Choosing these assignments wisely can significantly reduce the run time of the algorithm, while maintaining correctness. The choice of these initial assignments is discussed in Section 5.

The algorithm works by choosing a value to test at a particular input, and propagating the consequences of that assignment. If the propagation does not lead to a contradiction and does not complete the matrix, this process is repeated. If the propagation leads to a completed consistently signed intercalate matrix, that matrix is returned. If the propagation leads to a contradiction, we backtrack and eliminate that value as a possibility from the entry where it was tested. A new test value is then chosen, and the process repeats.
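The loop just described can be sketched as a short recursive function. This is our own high-level illustration (the function and parameter names are assumptions); the propagation and choice routines are passed in, and a toy pair of them is included only to exercise the sketch.

```python
def complete_matrix(cells, propagate, choose):
    """Propagate; if incomplete, test a value on a copy; backtrack on failure."""
    if propagate(cells) is False:
        return None                      # contradiction: backtrack
    if all(len(v) == 1 for v in cells.values()):
        return {c: next(iter(v)) for c, v in cells.items()}  # complete
    coord, value = choose(cells)         # strategically chosen test value
    trial = {c: set(v) for c, v in cells.items()}  # work on a copy
    trial[coord] = {value}
    result = complete_matrix(trial, propagate, choose)
    if result is not None:
        return result
    cells[coord].discard(value)          # the test failed: eliminate it
    if not cells[coord]:
        return None
    return complete_matrix(cells, propagate, choose)

# Toy constraint machinery, just to exercise the sketch: fill a 2x2 grid
# with colors {1, 2} so that rows and columns have distinct entries.
def demo_propagate(cells):
    changed = True
    while changed:
        changed = False
        for (i, j), vals in cells.items():
            if len(vals) == 1:
                v = next(iter(vals))
                for (a, b), other in cells.items():
                    if (a, b) != (i, j) and (a == i or b == j) and v in other:
                        other.discard(v)
                        if not other:
                            return False
                        changed = True
    return True

def demo_choose(cells):
    coord = min((c for c in cells if len(cells[c]) > 1),
                key=lambda c: len(cells[c]))
    return coord, min(cells[coord])

cells = {(i, j): {1, 2} for i in range(2) for j in range(2)}
cells[(0, 0)] = {1}
print(complete_matrix(cells, demo_propagate, demo_choose))
# {(0, 0): 1, (0, 1): 2, (1, 0): 2, (1, 1): 1}
```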

In some situations, when we backtrack and eliminate a test value, we can make other eliminations as well. The choice of the test value and these eliminations are discussed in Section 6.

An implementation of these functions in Python is available at www.math.umn.edu/~mklynn/sos_pub, with a sample use of these functions.

In this implementation, we include two different versions of the completeMatrix function, corresponding to two different methods for selecting test values. These methods are discussed in Section 6.

## 5. Group Action and Choice of Input

If we have a consistently signed intercalate matrix $M$ of type $[r,s,n]$, we can produce many more such matrices by manipulating our matrix. In particular, we can obtain another consistently signed intercalate matrix with the following operations:

• permuting the rows of $M$,

• permuting the columns of $M$,

• permuting the colors of $M$,

• flipping the signs of all elements in a row (or multiple rows) of $M$,

• flipping the signs of all elements in a column (or multiple columns) of $M$,

• flipping the signs of all elements of a color (or multiple colors) of $M$.

We can also view these operations as a group action on the set of sums-of-squares formulas of type $[r,s,n]$ over $\mathbb{Z}$, as in [3].

When searching for formulas using our algorithm, we can use this fact to significantly reduce the number of potential matrices that we test. In particular, we can choose our input based on these observations.

###### Proposition 5.1.

There exists a consistently signed intercalate matrix of type $[r,s,n]$ if and only if there is a consistently signed intercalate matrix of type $[r,s,n]$ such that

• The first row has entries $1, 2, \dots, s$.

• The $(i,i)$ entry is $1$ for $1 \le i \le \lceil rs/n \rceil$.

###### Proof.

Suppose we have a consistently signed intercalate matrix $M$ of type $[r,s,n]$.

For an $r \times s$ matrix with entries chosen from $n$ colors, by the pigeonhole principle, some color, $c$, must occur at least $\lceil rs/n \rceil$ times. Let $M'$ be the matrix obtained from $M$ by swapping the colors $c$ and $1$, so that the color $1$ occurs at least $\lceil rs/n \rceil$ times.

Suppose $1$ occurs in the entry $(i,j)$. If we swap the first and $i$th rows and swap the first and $j$th columns, that $1$ is now in the $(1,1)$ entry. Assuming $\lceil rs/n \rceil \ge 2$, there is another $1$ in the matrix. Since the colors along rows and columns must be distinct, it occurs at an entry $(k,l)$, where $k \ne 1$ and $l \ne 1$. If we swap the second and $k$th rows and swap the second and $l$th columns, then that $1$ is now in the $(2,2)$ entry. Continuing this, we get $1$'s in the entries $(i,i)$ for $1 \le i \le \lceil rs/n \rceil$.

Now, for each $i$ such that the $(i,i)$ entry is $-1$, we flip the signs in the $i$th row. This ensures that the $(i,i)$ entries are all $1$ for $1 \le i \le \lceil rs/n \rceil$.

The first row now has entries $1, a_2, a_3, \dots, a_s$. Since colors along rows must be distinct, the color of $a_2$ is not $1$. We then swap the color of $a_2$ and $2$, so the second entry is now $\pm 2$. If it is $-2$, we flip the signs of all $2$'s in the matrix. The first row then has entries $1, 2, a_3, \dots, a_s$, where the color of $a_3$ is not $1$ or $2$. We swap this color with $3$, and fix the sign if needed. Continuing this for the rest of the entries in the row, we obtain a matrix with entries $1, 2, \dots, s$ along the first row.

Note that the operations on the first row left the $1$'s along the diagonal intact, so we have a matrix of the desired form. ∎

With a matrix of this form, we also know that the first column must have $-i$ in the $(i,1)$ entry for $2 \le i \le \lceil rs/n \rceil$, due to the conditions on squares. However, the propagation algorithm immediately fills in these values, so we don't usually bother including them with our input.
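Proposition 5.1 translates into a small helper for building the canonical starting matrix: first row $1, \dots, s$ and $1$'s along the diagonal. This is a sketch with assumed names, writing `None` for entries left for the search to determine.

```python
import math

def canonical_input(r, s, n):
    """Partially filled r x s grid in the canonical form of Proposition 5.1."""
    grid = [[None] * s for _ in range(r)]
    for j in range(s):
        grid[0][j] = j + 1              # first row: 1, 2, ..., s
    t = math.ceil(r * s / n)            # color 1 occurs at least this often
    for i in range(1, min(t, r, s)):
        grid[i][i] = 1                  # 1's along the diagonal
    return grid

g = canonical_input(4, 4, 4)
print(g[0])                             # [1, 2, 3, 4]
print([g[i][i] for i in range(4)])      # [1, 1, 1, 1]
```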

More careful analysis of the form of consistently signed intercalate matrices of particular types can yield additional assumptions about our input matrix.

## 6. Choice of Test Values

The algorithm for producing a consistently signed intercalate matrix necessitates making a choice of a test coordinate and value, perhaps many times. Different selections for test values can have a dramatic effect on the run time of the algorithm, so we want to make these choices carefully.

In the implementation of the algorithm available at www.math.umn.edu/~mklynn/sos_pub, we provide two different versions of the function completeMatrix, which make different choices for these test values.

In the first version, we choose the entry with the fewest possibilities among those whose value is not yet known. We select the first of the possible values at that entry, and test that value. The idea behind this choice is that with fewer possibilities at that entry, we'll have fewer test values to cycle through, so this should help make the algorithm faster, compared to testing a coordinate with more possible values.
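This first heuristic is a few lines of code. The sketch below is our own illustration (names assumed), using a fixed ordering of the candidate values in place of "the first of the possible values":

```python
def choose_test_value(cells):
    """Pick the undecided entry with the fewest candidates, and one value."""
    undecided = [c for c in cells if len(cells[c]) > 1]
    coord = min(undecided, key=lambda c: len(cells[c]))
    return coord, min(cells[coord])  # any fixed order works; min() is one

cells = {(0, 0): {1}, (0, 1): {2, 3}, (1, 0): {-1, 2, 3}, (1, 1): {1, -2, 3}}
print(choose_test_value(cells))  # ((0, 1), 2)
```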

In the second version, we choose our test value based on combinatorial arguments about the frequency of each color in the matrix. This often involves using knowledge about smaller types of consistently signed intercalate matrices. As an example, suppose we would like to know if there exists a sums-of-squares formula of some type $[r,s,n]$ whose existence remains an open question. In refining our search for this formula, we can make use of the following observations.

###### Proposition 6.1.

If there is a consistently signed intercalate matrix of type $[r,s,n]$, then:

1. Some color must occur at least $\lceil rs/n \rceil$ times.

2. If there is no sums-of-squares formula of type $[r, s-t, n-1]$, then each color must occur at least $t+1$ times.

###### Proof.
1. We have $rs$ entries, and $n$ choices for colors. This means that each color will occur an average of $rs/n$ times. So, some color will need to occur at least $\lceil rs/n \rceil$ times.

2. For this result, we use the assumption that there is no sums-of-squares formula of type $[r, s-t, n-1]$.

Suppose some color $c$ occurs $t$ or fewer times in a consistently signed intercalate matrix of type $[r,s,n]$. Since entries along each column are distinct, these occurrences lie in at most $t$ columns. Now, delete $t$ columns, including every column where this color occurs. This results in an $r \times (s-t)$ matrix, which has no entries with color $c$. So, the number of colors in this matrix is at most $n-1$, and the matrix is a consistently signed intercalate matrix of type $[r, s-t, n-1]$. But such a matrix cannot exist, so we have a contradiction, and every color must occur at least $t+1$ times.

By permuting the colors, we may assume that the colors are in decreasing order of frequency. That is, $1$ is the most prevalent color, and $n$ is the least prevalent color. We can use information we deduce about the frequency of each color to form a minimum signature.

###### Definition 6.2.

A minimum signature for a consistently signed intercalate matrix of type $[r,s,n]$ is a function $\sigma \colon \{1, \dots, n\} \to \mathbb{Z}_{\ge 0}$ such that the number of times the color $k$ appears in the matrix is at least $\sigma(k)$.

So, a minimum signature provides a lower bound for the frequency of each color. In the second version of completeMatrix, we first focus on meeting these minimum frequencies. Given a minimum signature $\sigma$, the function finds the first color $k$ such that fewer than $\sigma(k)$ entries have been assigned color $k$. The function finds the first entry where $k$ is still a possibility, and that is chosen as the test value and coordinate. If there is no entry where $k$ is still a possibility, there is no consistently signed intercalate matrix with the desired form and minimum signature, and so the function will return False. Once there is no color whose frequency falls below its minimum, we revert to the method for choosing test values from the original completeMatrix function: choosing the first value in the entry with the fewest possibilities.
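The signature-driven choice can be sketched as follows (our own illustration with assumed names; `sig` is the minimum signature as a list, and `assigned` maps decided coordinates to their values):

```python
def signature_test_value(cells, assigned, sig):
    """Find the first color below its minimum frequency and a place to try it.

    sig[k - 1] is the minimum number of times color k must appear.
    Returns (coord, color), None if a color can't reach its minimum,
    or "done" if all minimum frequencies are already met.
    """
    counts = {}
    for value in assigned.values():
        counts[abs(value)] = counts.get(abs(value), 0) + 1
    for k, minimum in enumerate(sig, start=1):
        if counts.get(k, 0) < minimum:
            for coord in sorted(cells):
                if coord not in assigned and (k in cells[coord]
                                              or -k in cells[coord]):
                    return coord, k
            return None   # color k cannot reach its minimum frequency
    return "done"         # fall back to the version-1 heuristic

cells = {(0, 0): {1}, (0, 1): {2, -2, 3, -3},
         (1, 0): {2, -2}, (1, 1): {3, -3}}
print(signature_test_value(cells, {(0, 0): 1}, [1, 1, 1]))  # ((0, 1), 2)
```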

Comparisons of the run times for these versions are included in Section 7.

## 7. Run Times

In this section, we compare run times of our two versions of the function completeMatrix with a brute force search. The results are recorded in the following tables. Each time is the average over 10 runs of the program, on a MacBook Pro with a 3.1 GHz Intel Core i7 processor and 16 GB of 1867 MHz DDR3 memory, except for times marked with an asterisk, which were run only once.

For these tests, our initial assignments were to fill in the first row as $1, 2, \dots, s$ and to fill in $1$'s along the diagonal, as discussed in Section 5. For version 2, we require only that every color occur at least once; note that this is suboptimal, as discussed in Section 6. We run these tests for several types $[r,s,n]$ with $r \le s \le 9$.

We give run times both for types $[r,s,n]$ where $n$ is the largest integer such that a formula of type $[r,s,n]$ does not exist, and for types $[r,s,n]$ where $n$ is the smallest value such that a formula of type $[r,s,n]$ exists.

| Type | v. 1 | v. 2 | brute |
| --- | --- | --- | --- |
| (2,3,4) | <.01s | <.01s | 1.50s |
| (2,4,4) | <.01s | <.01s | 60s |
| (2,5,6) | <.01s | <.01s | – |
| (2,6,6) | <.01s | <.01s | – |
| (2,7,8) | <.01s | <.01s | – |
| (2,8,8) | <.01s | <.01s | – |
| (2,9,10) | <.01s | <.01s | – |
| (3,3,4) | <.01s | <.01s | 632s |
| (3,4,4) | <.01s | <.01s | – |
| (3,5,7) | .02s | .02s | – |
| (3,6,8) | <.01s | <.01s | – |
| (3,7,8) | <.01s | .01s | – |
| (3,8,8) | .01s | .01s | – |
| (3,9,11) | 15s | 6.99s | – |
| (4,4,4) | <.01s | <.01s | – |
| (4,5,8) | .01s | .01s | – |
| (4,6,8) | .02s | .01s | – |

| Type | v. 1 | v. 2 | brute |
| --- | --- | --- | --- |
| (4,7,8) | .02s | .02s | – |
| (4,8,8) | .02s | .02s | – |
| (4,9,12) | .04s | .04s | – |
| (5,5,8) | .02s | .02s | – |
| (5,6,8) | .02s | .02s | – |
| (5,7,8) | .02s | .03s | – |
| (5,8,8) | .03s | .03s | – |
| (6,6,8) | .02s | .03s | – |
| (6,7,8) | .03s | .03s | – |
| (6,8,8) | .03s | .03s | – |
| (7,7,8) | .04s | .04s | – |
| (7,8,8) | .04s | .04s | – |
| (7,9,15) | – | .31s | – |
| (8,8,8) | .05s | .05s | – |
| (8,9,16) | .23s | .20s | – |
| (9,9,16) | .21s | .23s | – |

For many of the test cases that we run, the run times for the two versions of the algorithm are very close. Recall that, in the second version of the algorithm, once the minimum frequencies are met, it reverts to choosing test values based on the heuristic of the first version. For small types, the minimum frequencies are often achieved with the initial assignments, so the two versions are functionally identical. For larger types such as $(3,9,11)$ and $(7,9,15)$, we observe clear differences in the run times. However, neither version consistently produces faster running times. Future improvements to the algorithm could be made by exploring different ways to choose test values, and how they affect the run times.

In an appendix, we include tables of known values of $r *_{\mathbb{Z}} s$, the smallest value of $n$ such that a sums-of-squares formula of type $[r,s,n]$ exists over the integers. In many cases, the exact value of $r *_{\mathbb{Z}} s$ is not known, and we instead include an upper bound (indicated with an asterisk). These tables are included as guides for which types would be interesting to explore using our algorithms.

## References

• [1] Hurwitz, A. Über die Komposition der Quadratischen Formen von Beliebig Vielen Variabeln. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse (1898), 309-316.
• [2] Hurwitz, A. Über die Komposition der Quadratischen Formen. Mathematische Annalen, 88 (1923), 1-25.
• [3] Lynn, M. Sum-of-Squares Formulas over Arbitrary Fields. PhD Thesis (2016). Available at escholarship.org/uc/item/9v03g1kh
• [4] Shapiro, Daniel B. Compositions of Quadratic Forms. De Gruyter Expositions in Mathematics (2000).
• [5] Yuzvinsky, S. Orthogonal Pairings of Euclidean Spaces. Michigan Mathematical Journal, 28 (1981), 131-145.

## Appendix A. Values of $r *_{\mathbb{Z}} s$

We include tables of known values of $r *_{\mathbb{Z}} s$, the smallest number $n$ such that there is a sums-of-squares formula of type $[r,s,n]$ over the integers. In the cases where only an upper bound is known, we include that upper bound with an asterisk. These tables are compiled from those in [4].

$r *_{\mathbb{Z}} s$ for $r \le s \le 17$:

| r \ s | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
| 2 |  | 2 | 4 | 4 | 6 | 6 | 8 | 8 | 10 | 10 | 12 | 12 | 14 | 14 | 16 | 16 | 18 |
| 3 |  |  | 4 | 4 | 7 | 8 | 8 | 8 | 11 | 12 | 12 | 12 | 15 | 16 | 16 | 16 | 19 |
| 4 |  |  |  | 4 | 8 | 8 | 8 | 8 | 12 | 12 | 12 | 12 | 16 | 16 | 16 | 16 | 20 |
| 5 |  |  |  |  | 8 | 8 | 8 | 8 | 13 | 14 | 15 | 16 | 16 | 16 | 16 | 16 | 21 |
| 6 |  |  |  |  |  | 8 | 8 | 8 | 14 | 14 | 16 | 16 | 16 | 16 | 16 | 16 | 22 |
| 7 |  |  |  |  |  |  | 8 | 8 | 15 | 16 | 16 | 16 | 16 | 16 | 16 | 16 | 23 |
| 8 |  |  |  |  |  |  |  | 8 | 16 | 16 | 16 | 16 | 16 | 16 | 16 | 16 | 24 |
| 9 |  |  |  |  |  |  |  |  | 16 | 16 | 16 | 16 | 16 | 16 | 16 | 16 | 25 |
| 10 |  |  |  |  |  |  |  |  |  | 16 | 26 | 26 | 27 | 27 | 28 | 28 |  |
| 11 |  |  |  |  |  |  |  |  |  |  | 26 | 26 | 28 | 28 | 30 | 30 |  |
| 12 |  |  |  |  |  |  |  |  |  |  |  | 26 | 28 | 30 | 32 | 32 | 32 |
| 13 |  |  |  |  |  |  |  |  |  |  |  |  | 28 | 32 | 32 | 32 | 32 |
| 14 |  |  |  |  |  |  |  |  |  |  |  |  |  | 32 | 32 | 32 | 32 |
| 15 |  |  |  |  |  |  |  |  |  |  |  |  |  |  | 32 | 32 | 32 |
| 16 |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | 32 | 32 |
| 17 |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | 32 |

$r *_{\mathbb{Z}} s$ for $18 \le s \le 30$:

| r \ s | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 |
| 2 | 18 | 20 | 20 | 22 | 22 | 24 | 24 | 26 | 26 | 28 | 28 | 30 | 30 |
| 3 | 20 | 20 | 20 | 23 | 24 | 24 | 24 | 27 | 28 | 28 | 28 | 31 | 32 |
| 4 | 20 | 20 | 20 | 24 | 24 | 24 | 24 | 28 | 28 | 28 | 28 | 32 | 32 |
| 5 | 22 | 23 | 24 | 24 | 24 | 24 | 24 | 29 | 30 | 31 | 32 | 32 | 32 |
| 6 | 22 | 24 | 24 | 24 | 24 | 24 | 24 | 30 | 30 | 32 | 32 | 32 | 32 |
| 7 | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 31 | 32 | 32 | 32 | 32 | 32 |
| 8 | 24 | 24 | 24 | 24 | 24 | 24 | 24 | 32 | 32 | 32 | 32 | 32 | 32 |
| 9 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 32 | 32 | 32 | 32 | 32 | 32 |
| 10 | 30 | 30 | 32 | 32 | 32 | 32 | 32 | 32 | 32 | 32 |  |  |  |
| 12 | 32 | 32 | 32 |  |  |  |  |  |  |  |  |  |  |
| 13 | 32 |  |  |  |  |  |  |  |  |  |  |  |  |
| 14 | 32 |  |  |  |  |  |  |  |  |  |  |  |  |
| 15 | 32 |  |  |  |  |  |  |  |  |  |  |  |  |
| 16 | 32 |  |  |  |  |  |  |  |  |  |  |  |  |
| 17 | 32 |  |  |  |  |  |  |  |  |  |  |  |  |