Reachability and Distances under Multiple Changes

04/23/2018, by Samir Datta et al. (Chennai Mathematical Institute, TU Dortmund)

Recently it was shown that the transitive closure of a directed graph can be updated using first-order formulas after insertions and deletions of single edges in the dynamic descriptive complexity framework of Dong, Su, and Topor, and of Patnaik and Immerman. In other words, Reachability is in DynFO. In this article we extend the framework to changes of multiple edges at a time, and study the Reachability and Distance queries under such changes. We show that the former query can be maintained in DynFO(+, ×) under changes affecting O(log n / log log n) nodes, for graphs with n nodes. If the update formulas may use a majority quantifier, then both Reachability and Distance can be maintained under changes that affect O(log^c n) nodes, for fixed c ∈ ℕ. Some preliminary results towards showing that distances are in DynFO are also discussed.


1 Introduction

In today’s databases, data sets are often large and subject to frequent changes. In use cases where only a fixed set of queries has to be evaluated on such data, it is not efficient to re-evaluate the queries after each change, and therefore dynamic approaches have been considered. The idea is that when a database D is modified by changing a set ΔD of tuples, then the result of a query is recomputed using its result on D, the set ΔD, and possibly other previously computed auxiliary data.

One such dynamic approach is the dynamic descriptive complexity approach, formulated independently by Dong, Su, and Topor [9], as well as Patnaik and Immerman [21]. In their framework the query result and the auxiliary data are represented by relations, and updates of the auxiliary relations are performed by evaluating first-order formulas. The class of queries that can be maintained in this fashion constitutes the class DynFO. The motivation to use first-order logic as the vehicle for updates is that its evaluation is highly parallelizable and, in addition, that it corresponds to the relational algebra which is the core of SQL. Hence, if a query result can be maintained using a first-order update program, this program can be translated into equivalent SQL queries.

While it is desirable to understand how to update query results under complex changes ΔD, the focus of dynamic descriptive complexity so far has been on single-tuple changes. The reason is that for many queries the available techniques did not even suffice to tackle this case.

In recent years, however, we have seen several new techniques for maintaining queries. The Reachability query — one of the main objects of study in dynamic descriptive complexity — has been shown to be in DynFO using a linear-algebraic method and a simulation technique [6]. The latter has been advanced into a very powerful tool: for showing that a query can be maintained in DynFO, it essentially suffices to show that it can be maintained for n^ε change steps, for some ε > 0, after initializing the auxiliary data by an NC^2 pre-computation [7] (readers not familiar with the circuit class NC^2 may safely think of LOGSPACE pre-computations), where n is the size of the database’s (active) domain. This tool has been successfully applied to show that all queries expressible in monadic second-order logic can be maintained in DynFO on structures of bounded treewidth.

Those new techniques motivate a new attack on more complex changes ΔD. But what are reasonable changes to look at? Updating a query after a change that replaces the whole database by a new database is essentially equivalent to the static evaluation problem with built-in relations: the stored auxiliary data has to be helpful for every possible new database, and therefore plays the role of built-in relations. Thus changes should be restricted in some way. Three approaches come to mind immediately: to only allow changes of restricted size; to restrict changes structurally; or to define changes in a declarative way.

In this article we focus on the first approach. Before discussing our results we shortly outline the other two approaches.

There is a wide variety of structural restrictions. For example, the change set ΔD could change the database only locally, or in such a way that the changes affect the auxiliary relations only locally; e.g., if edges are inserted into distinct connected components, it should be easier to maintain reachability. Another option is to restrict ΔD to be of a certain shape; examples studied in the literature are Cartesian-closed changes [9] and deletions of anti-chains [8].

A declarative mechanism for changing a database is to provide a set of parameterised rules that state which tuples should be changed, depending on a parameter provided by a user. For example, a rule could state that all edges (u, v) shall be inserted into a graph for which both u and v are connected to the parameter node p. First-order logic as a declarative means to change databases has been studied in [22], where it was shown that undirected reachability can be maintained under insertions defined by first-order formulas, combined with single-tuple deletions.

In this article we study changes of small size with a focus on the Reachability and Distance queries. As can be seen from the discussion above, the former query has been well-studied in diverse settings of dynamic descriptive complexity, and therefore results on its maintainability under small changes serve as an important reference point.

There is another reason to study Reachability under non-constant size changes. Recall that Reachability is complete for the static complexity class NL. The result that Reachability is in DynFO does not imply NL ⊆ DynFO, as DynFO is only known to be closed under very weak reductions, called bounded first-order reductions, under which Reachability is not NL-complete [21]. In short, these reductions demand that whenever a bit of an instance is changed, only constantly many bits change in the image of the instance under the reduction. When a query such as Reachability is maintainable under larger changes, then this restriction may be relaxed, which might yield new maintainability results for other queries under single-edge changes.

In this work we show that Reachability can be maintained under changes of non-constant size. Since our main interest is the study of such changes, we assume throughout the article that all classes come with built-in arithmetic, and denote, e.g., by DynFO(+, ×) the class of queries that can be maintained with first-order updates in the presence of built-in linear order, addition, and multiplication relations. How our results can be adapted to classes without built-in arithmetic is discussed towards the end of Section 3.

 Theorem 1.

Reachability can be maintained in DynFO(+, ×) under changes that affect O(log n / log log n) nodes of a graph, where n is the number of nodes of the graph.

The Distance query was shown to be in DynFO+Maj by Hesse [14], where the class DynFO+Maj allows updates to be specified by first-order formulas that may include majority quantifiers (equivalently, updates can be specified by uniform TC^0 computations). We generalize Hesse’s result to changes of size polylogarithmic in the size of the domain.

 Theorem 2.

Reachability and Distance can be maintained in DynFO+Maj(+, ×) under changes that affect O(log^c n) nodes of a graph, where c ∈ ℕ is fixed and n is the number of nodes of the graph.

One of the important open questions of dynamic descriptive complexity is whether distances can be maintained in DynFO, even under single-edge changes. We contribute to the solution of this question by discussing how distances can be maintained in a subclass of DynFO+Maj(+, ×) that is only slightly stronger than DynFO(+, ×).

Organization

After recapitulating notations in Section 2, we adapt the dynamic complexity framework to bulk changes in Section 3. Our main results, maintainability of reachability and distances under multiple changes, are proved in Section 4 and Section 5. We conclude with a discussion in Section 6.

2 Preliminaries

In this section we review basic definitions and results from finite model theory and databases.

We consider finite relational structures over relational signatures τ, where each R ∈ τ is a relation symbol of some arity ar(R). A τ-structure 𝒮 consists of a finite domain D and a relation over D of arity ar(R), for each R ∈ τ. The active domain of a structure contains all elements used in some tuple of some of its relations. Since the motivation to study dynamic complexity originates from database theory, we use terminology from this area. In particular, we use the terms “relational structure” and “relational database” synonymously.

We study the queries Reachability and Distance. Reachability asks, given a directed graph G, for all pairs (s, t) of nodes such that there is a path from s to t in G. Distance asks for the length of a shortest path between any pair of reachable nodes.

We assume familiarity with first-order logic FO and refer to [17] for an introduction. The logic FO+Maj extends FO by allowing majority quantifiers. Such quantifiers can ask whether more than half of all elements satisfy a given formula. We write FO(+, ×) and FO+Maj(+, ×) to denote that formulas have access to built-in relations which are interpreted as a linear order and as addition and multiplication on the domain of the underlying structure. We note that FO(+, ×) and FO+Maj(+, ×) are equal to the circuit classes (DLOGTIME-)uniform AC^0 and TC^0, respectively [3].

In FO(+, ×), each k-tuple over a domain of size n encodes a number from {0, …, n^k − 1}. We will henceforth freely identify tuples over the domain with numbers.

It is well known that FO(+, ×) supports arithmetic on numbers with polylogarithmically many bits. Furthermore, iterated addition and multiplication of polylogarithmically many numbers with polylogarithmically many bits each can be expressed in FO(+, ×). More precisely:

[cf. [15, Theorem 5.1]] Suppose φ is a formula that defines polylogarithmically many numbers with polylogarithmically many bits each. Then there are FO(+, ×) formulas that define the sum and the product of these numbers, respectively.

Due to these facts, many calculations can be defined in FO(+, ×). In particular, primes can be identified, and sequences of small numbers can be encoded into, and decoded from, single numbers with polylogarithmically many bits.

Suppose p_1, …, p_m are primes whose product is P. Then each number 0 ≤ a < P can be uniquely represented by the tuple (a_1, …, a_m) with a_i = a mod p_i. This tuple is called the Chinese remainder representation (CRR) of a. The number a can be recovered from its CRR via a = (Σ_i a_i · h_i · P_i) mod P, where P_i = P / p_i and h_i is the inverse of P_i modulo p_i [15, p. 702]. Due to Lemma 2, in FO(+, ×) one can encode and decode numbers with polylogarithmically many bits into and from their CRR defined by polylogarithmically many primes with polylogarithmically many bits.
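The encoding and decoding can be illustrated concretely; the following is a minimal sketch (the function names are ours, not from the paper):

```python
# A minimal sketch of the Chinese remainder representation (CRR),
# assuming primes p_1, ..., p_m with product P and a number 0 <= a < P.

from math import prod

def to_crr(a, primes):
    # Encode a as its residues modulo each prime.
    return [a % p for p in primes]

def from_crr(residues, primes):
    # Recover a via a = (sum_i a_i * h_i * P_i) mod P, where P_i = P / p_i
    # and h_i is the inverse of P_i modulo p_i.
    P = prod(primes)
    total = 0
    for a_i, p in zip(residues, primes):
        P_i = P // p
        h_i = pow(P_i, -1, p)  # modular inverse (Python 3.8+)
        total += a_i * h_i * P_i
    return total % P

primes = [5, 7, 11, 13]     # P = 5005
a = 1234
assert from_crr(to_crr(a, primes), primes) == a
```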

In this article we use basic notions and results from linear algebra, which are introduced when they are needed. Throughout the article, a matrix with m rows and n columns will be represented by a relation that contains a tuple (i, j, a) if and only if the value at row i and column j is a.

3 Dynamic Framework for Multiple Changes

We briefly repeat the essentials of dynamic complexity, closely following [23], and discuss the generalisations needed for changes of non-constant size.

The goal of a dynamic program is to answer a given query on an input database that is subjected to changes which insert or delete tuples. The program may use an auxiliary data structure represented by an auxiliary database over the same domain. Initially, both the input and the auxiliary database are empty; the domain is fixed during each run of the program.

Changes

In previous work, changes of single tuples have been represented as explicit parameters of the formulas used to update the auxiliary relations. Non-constant size changes cannot be represented in this fashion. An alternative is to represent changes implicitly by giving the update formulas access to the old input database as well as to the changed input database [12]. Here we opt for this approach.

For a database I over domain D and schema τ, a change Δ consists of sets ΔR^+ and ΔR^− of tuples for each relation symbol R ∈ τ. The result Δ(I) of applying the change Δ to I is the input database in which each relation R is changed to (R ∪ ΔR^+) \ ΔR^−. The size of Δ is the total number of tuples in the relations ΔR^+ and ΔR^−, and the set of affected elements is the (active) domain of the tuples in Δ.
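As an illustration, this change semantics can be sketched as follows (the dictionary representation and helper names are our own):

```python
# A minimal sketch of the change semantics, assuming a change provides an
# insertion set and a deletion set for each relation symbol.

def apply_change(db, change):
    # db: dict mapping relation names to sets of tuples.
    # change: dict mapping relation names to (inserted, deleted) pairs.
    new_db = {}
    for rel, tuples in db.items():
        ins, dele = change.get(rel, (set(), set()))
        new_db[rel] = (tuples | ins) - dele
    return new_db

def affected_elements(change):
    # The affected elements form the active domain of the changed tuples.
    return {x for ins, dele in change.values() for t in ins | dele for x in t}

I = {"E": {(1, 2), (2, 3)}}
delta = {"E": ({(3, 4), (4, 1)}, {(1, 2)})}
I2 = apply_change(I, delta)
assert I2["E"] == {(2, 3), (3, 4), (4, 1)}
assert affected_elements(delta) == {1, 2, 3, 4}
```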

Dynamic Programs and Maintenance of Queries

A dynamic program consists of a set of update rules that specify how the auxiliary relations are updated after changing the input database. An update rule for updating an ℓ-ary auxiliary relation S after a change is a first-order formula over the combined schema τ ∪ τ_aux with ℓ free variables, where τ_aux is the schema of the auxiliary database. After a change Δ, the new version of S consists of all tuples for which the formula holds in the structure given by the old input database, the changed input database, and the current auxiliary database. Note that a dynamic program can choose to have access to the old input database by storing it in its auxiliary relations.

For a state (I, A) of the dynamic program with input database I and auxiliary database A, we denote by α(I, A) the state of the program after applying a change sequence α and updating the auxiliary relations accordingly.

A dynamic program maintains a q-ary query Q under changes that affect k elements (under changes of size k, respectively) if it has a q-ary auxiliary relation that at each point stores the result of Q applied to the current input database. More precisely, for each non-empty sequence α of changes that affect at most k elements (changes of size at most k, respectively), this auxiliary relation in the state α(I_∅, A_∅) and the query result Q(α(I_∅)) coincide, where I_∅ is an empty input database, A_∅ is the auxiliary database with empty auxiliary relations over the domain of I_∅, and α(I_∅) is the input database after applying α.

If a dynamic program maintains a query, we say that the query is in DynFO. Similarly to DynFO one can define the class DynFO(+, ×) of queries maintainable with three particular auxiliary relations that are initialised as a linear order and the corresponding addition and multiplication relations. Other classes are defined accordingly.

For many natural queries Q, in order to show that Q can be maintained, it is enough to show that the query can be maintained for a bounded number of steps. Intuitively, this is possible for queries for which isolated elements do not influence the query result, as long as there are many such elements. Formally, a query Q is almost domain-independent if there is a c ∈ ℕ such that the results of Q on a structure and on its restriction to a subdomain B coincide on B, for every B that contains the active domain and at least c further elements.

A query Q is (C, f)-maintainable, for some complexity class C and some function f, if there is a dynamic program P and a C-algorithm A such that for each input database I over a domain of size n, each linear order ≤ on the domain, and each change sequence α of length at most f(n), the query relation in the state reached by P from (I, A(I, ≤)) via α and the result Q(α(I)) coincide, where A(I, ≤) is the auxiliary database computed by A.

The following theorem is a slight adaptation of Theorem 3 from [7] and can be proved analogously.

Every (NC^2, n^ε)-maintainable, almost domain-independent query is in DynFO(+, ×), for every fixed ε > 0.

The Role of the Domain and Arithmetic

In order to focus on the study of changes of non-constant size, we choose a simplified approach and include arithmetic in our setting. We state our results for DynFO(+, ×) and corresponding classes to make clear that we assume the presence of a linear order, an addition, and a multiplication relation on the whole domain at all times. (Different assumptions have been made in the literature: in [20], Patnaik and Immerman assume only a linear order to be present, while full arithmetic is assumed in [21]. Etessami observed that arithmetic can be built up dynamically, and therefore subsequent work usually assumed initially empty auxiliary relations, see e.g. [6, 7]. In the setting of first-order incremental evaluation systems, usually no arithmetic is assumed to be present [9].)

We briefly discuss the consequences for our results of not assuming built-in arithmetic. For single-tuple changes, the presence of built-in arithmetic essentially gives no advantage.

Proposition ([6, Theorem 4], formulation from [7, Proposition 2]).

If a query in DynFO(+, ×) under single-tuple changes is almost domain-independent, then it is also in DynFO.

This result relies on the fact that one can maintain a linear order and arithmetic on the activated domain in DynFO under single-tuple changes [10], that is, on all elements that were in the active domain at some point in time. Under larger changes this is a priori not possible, as then one would have to express in FO a linear order and arithmetic on all elements that enter the active domain at once.

An alternative approach to assuming the presence of built-in arithmetic is to demand that changes provide additional information on the changed elements, for example, a linear order and arithmetic on the domain of the change. Using this approach, our results can be stated in terms of DynFO and DynFO+Maj, with the sole modification that the sizes of changes are given relative to the size of the activated domain instead of the size of the whole domain. In this fashion our results also translate to the setting of first-order incremental evaluation systems of Dong, Su, and Topor [9], where the domain can grow and shrink.

4 Reachability under Multiple Changes

In this section we prove that Reachability can be maintained under multiple changes.

Theorem 1 (restated). Reachability can be maintained in DynFO(+, ×) under changes that affect O(log n / log log n) nodes of a graph.

The approach is to use the well-known fact that Reachability can be reduced to the computation of the inverse of a matrix, and to invoke the Sherman-Morrison-Woodbury identity (cf. [13]) to update the inverse. This identity essentially reduces the update of the inverse of an n × n matrix after a change affecting k nodes to the computation of the inverse of a matrix of dimension O(k).

The challenge is to define the updates in FO(+, ×). The key ingredients here are to compute inverses with respect to many primes, and to throw away primes for which the inverse does not exist. As, by Theorem 3, it suffices to maintain the inverse for n^ε many steps for some ε to be fixed later (see the proof of Theorem 4.1), enough primes remain valid if one starts from sufficiently – but polynomially – many primes. We show that the inverse of k × k matrices over ℤ_p can be defined in FO(+, ×) for k = O(log n / log log n).

Theorem 1 in particular generalizes the result that Reachability can be maintained under single-edge changes [6]; our proof is an alternative to the proof presented in that work. In [6], the maintenance of Reachability is reduced to the question whether a certain matrix derived from the adjacency matrix has full rank, and it was shown that the rank can be maintained by storing and updating an invertible matrix together with a second matrix from which the rank can be easily extracted, such that the product of the two equals the matrix in question.

4.1 Reachability and Matrix Inverses

There is a path from s to t in a graph with n nodes and adjacency matrix A if and only if the (s, t)-entry of the matrix ((n+1)I − A)^{-1} is non-zero (where the trivial path from a node to itself is allowed). This follows from the equation ((n+1)I − A)^{-1} = Σ_{i≥0} A^i / (n+1)^{i+1} and the fact that the (s, t)-entry of A^i counts the number of paths from s to t of length i. Notice that (n+1)I − A is invertible as a matrix over ℚ for every adjacency matrix A, since it is strictly diagonally dominant [16, Theorem 6.1.10].
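This correspondence can be checked on a small example. The sketch below (helper names are ours) inverts the matrix (n+1)I − A over the rationals with textbook Gaussian elimination and reads off reachability, counting the trivial path from a node to itself:

```python
# A sketch, assuming the reduction maps a graph with adjacency matrix A to
# the integer matrix M = (n+1)I - A; an entry of M^{-1} is non-zero exactly
# when the corresponding pair is connected by a path (s = t counts as a
# trivial path, since all terms of the series are non-negative).

from fractions import Fraction

def invert(M):
    # Gauss-Jordan elimination over the rationals.
    n = len(M)
    A = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = 1 / A[col][col]
        A[col] = [x * inv for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

n = 4
edges = {(0, 1), (1, 2)}  # node 3 is isolated
A = [[int((i, j) in edges) for j in range(n)] for i in range(n)]
M = [[(n + 1) * (i == j) - A[i][j] for j in range(n)] for i in range(n)]
Minv = invert(M)
reach = {(s, t) for s in range(n) for t in range(n) if Minv[s][t] != 0}
assert reach == {(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2), (3, 3)}
```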

When applying a change to G that affects k nodes, the adjacency matrix of G is updated by adding a suitable change matrix C with at most k non-zero rows and columns. Thus Theorem 1 follows from the following proposition. (Due to lack of space some details are hidden here: the described reduction maps the empty graph to the matrix whose diagonal entries are all n+1. Values of the inverse of this matrix cannot be determined in FO, and thus one does not immediately get the desired result for Reachability. This issue can be circumvented by mapping to matrices with only some non-zero entries on the diagonal, and studying the inverse of the matrices induced by the non-zero diagonal entries.)

When an n × n matrix D takes integer values polynomial in n and is assumed to stay invertible over ℚ, then non-zeroness of the entries of D^{-1} can be maintained in DynFO(+, ×) under changes that affect O(log n / log log n) rows and columns.

Each change affecting O(log n / log log n) rows and columns can be partitioned into constantly many changes that each affect at most log n / log log n rows and columns. We therefore concentrate on such changes in the following.

The change matrix C for such a change has only few non-zero rows and columns and can therefore be decomposed into a product U C′ V of suitable matrices U, C′ and V, where, for a change with ℓ non-zero rows and m non-zero columns, U, C′, and V have dimensions n × ℓ, ℓ × m, and m × n, respectively.

Fix a ring R. Suppose C ∈ R^{n×n} has non-zero rows i_1 < … < i_ℓ and non-zero columns j_1 < … < j_m. Then C = U C′ V with U ∈ R^{n×ℓ} and V ∈ R^{m×n}, where

  1. C′ ∈ R^{ℓ×m} is obtained from C by removing all all-zero rows and columns,

  2. U = (e_{i_1} ⋯ e_{i_ℓ}), the matrix whose columns are the unit vectors of the non-zero rows, and

  3. V = (e_{j_1} ⋯ e_{j_m})^T, the matrix whose rows are the unit vectors of the non-zero columns.

Here, e_i denotes the i-th unit (column) vector.
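The decomposition can be sketched directly (helper names are ours):

```python
# A sketch of the decomposition C = U C' V, assuming C has non-zero rows
# i_1 < ... < i_l and non-zero columns j_1 < ... < j_m: U collects the
# corresponding unit column vectors, V the unit row vectors, and C' is C
# with all all-zero rows and columns removed.

def decompose(C):
    n = len(C)
    rows = [i for i in range(n) if any(C[i])]
    cols = [j for j in range(n) if any(C[i][j] for i in range(n))]
    U = [[int(i == r) for r in rows] for i in range(n)]    # n x l
    Cp = [[C[i][j] for j in cols] for i in rows]           # l x m
    V = [[int(j == c) for j in range(n)] for c in cols]    # m x n
    return U, Cp, V

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

C = [[0, 0, 0, 0],
     [0, 2, 0, -1],
     [0, 0, 0, 0],
     [0, 3, 0, 0]]
U, Cp, V = decompose(C)
assert matmul(matmul(U, Cp), V) == C
```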

By the Sherman-Morrison-Woodbury identity (cf. [13]), the updated inverse can therefore be written as

(D + U C′ V)^{-1} = D^{-1} − D^{-1} U (C′^{-1} + V D^{-1} U)^{-1} V D^{-1}.   (4.1)
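The identity can be sanity-checked numerically modulo a prime; the matrices below are hypothetical toy data, chosen so that all required inverses exist:

```python
# A numerical check of the Sherman-Morrison-Woodbury identity modulo a
# prime p, assuming D, C' and D + U C' V are all invertible mod p
# (then the small matrix C'^{-1} + V D^{-1} U is invertible as well).

p = 101

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*B)]
            for row in A]

def matadd(A, B):
    return [[(a + b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matsub(A, B):
    return [[(a - b) % p for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def invert(M):
    # Gauss-Jordan elimination over the field Z_p.
    n = len(M)
    A = [[x % p for x in row] + [int(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] % p)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, p)
        A[col] = [x * inv % p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

D = [[2, 1, 0], [0, 3, 1], [1, 0, 4]]
U = [[1, 0], [0, 0], [0, 1]]
Cp = [[5, 0], [0, 7]]
V = [[0, 1, 0], [1, 0, 0]]

Dinv = invert(D)
# Left-hand side: invert D + U C' V directly.
lhs = invert(matadd(D, matmul(matmul(U, Cp), V)))
# Right-hand side: the Woodbury update, which only inverts a 2x2 matrix.
small = invert(matadd(invert(Cp), matmul(matmul(V, Dinv), U)))
rhs = matsub(Dinv, matmul(matmul(matmul(Dinv, U), small), matmul(V, Dinv)))
assert lhs == rhs
```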

The inverse of an integer matrix with entries that are polynomial in n is a matrix over ℚ whose entries may involve numbers exponential in n. In particular, computations on it cannot be performed in FO(+, ×) directly. For this reason all computations will be done modulo polynomially many primes, and non-zeroness of the entries of D^{-1} is extracted from these residues.

Let us first see how to update D^{-1} modulo a prime p under the assumption that both D and D + C are invertible modulo p. Observe that C′^{-1} + V D^{-1} U is a small matrix, and therefore an essential prerequisite for computing the right-hand side of Equation 4.1 is to be able to define the inverse of such small matrices. That this is possible follows from the following lemma and the fact that (A^{-1})_{i,j} = (−1)^{i+j} det(A_{j,i}) / det(A) for invertible A. Here (A)_{i,j} denotes the (i, j)-th entry of a matrix A, and A_{i,j} denotes the submatrix obtained by removing the i-th row and the j-th column.

Fix a domain of size n and a prime p polynomial in n. The value of the determinant of a k × k matrix over ℤ_p for k = O(log n / log log n) can be defined in FO(+, ×).

The technical proof of this theorem is deferred to Subsection 4.2.

That (D + C)^{-1} can be defined in FO(+, ×) using Equation 4.1 is now a consequence of a straightforward analysis of the involved matrix operations.

Proposition.

Fix a domain of size n and a prime p polynomial in n. Given the inverse modulo p of a matrix D and a matrix C with at most O(log n / log log n) non-zero rows and columns, one can determine in FO(+, ×) whether D + C is invertible modulo p and, if so, the inverse modulo p can be defined.

Proof.

A decomposition of the change matrix C into U C′ V with U, C′ and V as above can be defined in FO(+, ×) using the characterization from Lemma 4.1. A simple analysis of the right-hand side of Equation 4.1 – taking the dimensions of U and V into account – yields that C′^{-1} + V D^{-1} U, and therefore also its inverse, are matrices of dimension O(log n / log log n). Furthermore, U (C′^{-1} + V D^{-1} U)^{-1} V is an n × n matrix that has at most O(log n / log log n) non-zero rows and columns.

The only obstacle to invertibility is that the inverse of C′^{-1} + V D^{-1} U may not exist in ℤ_p. This is the case if and only if its determinant is zero modulo p, which can be tested using Theorem 4.1. If the matrix is invertible, then its inverse can be defined by invoking Theorem 4.1 twice and using the identity (A^{-1})_{i,j} = (−1)^{i+j} det(A_{j,i}) / det(A).

Finally, once one knows how to compute (C′^{-1} + V D^{-1} U)^{-1}, each entry of the right-hand side of Equation 4.1 can be defined by iterated addition of products of small numbers. This can be done in FO(+, ×) due to Lemma 2. ∎

It remains to show how to maintain non-zeroness of the entries of D^{-1}. Essentially, a dynamic program can maintain a Chinese remainder representation of D^{-1} and extract whether an entry is non-zero from this representation. An obstacle is that whenever the inverse modulo some prime p does not exist during the update process, this prime becomes invalid for the rest of the computation. The idea to circumvent this is simple: with each change, only a small number of primes become invalid. Moreover, since the determinant can be computed in NC^2 (cf. [5]), by Theorem 3 we only need to be able to maintain a correct result for n^ε many steps. Thus starting from sufficiently many primes guarantees that enough primes are still valid after these steps.
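The prime-invalidation idea can be sketched as follows; the pool of primes and the matrix are toy data, and for brevity the inverses are recomputed from scratch rather than updated via the Woodbury identity:

```python
# A sketch of the prime-invalidation scheme: keep the inverse of D modulo
# a pool of primes, discard primes modulo which D becomes singular, and
# read off non-zeroness of entries from the surviving residues (which is
# sound as long as enough primes survive).

def inverse_mod(M, p):
    # Gauss-Jordan over Z_p; returns None if M is singular mod p.
    n = len(M)
    A = [[x % p for x in row] + [int(i == j) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col] % p), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, p)
        A[col] = [x * inv % p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

primes = [3, 5, 7, 11, 13]
D = [[2, 1], [0, 5]]          # det = 10, so D is singular mod 5
inverses = {p: inverse_mod(D, p) for p in primes}
valid = {p for p, inv in inverses.items() if inv is not None}
assert valid == {3, 7, 11, 13}   # the prime 5 has been invalidated

# An entry of D^{-1} over Q is non-zero iff it is non-zero modulo some
# surviving prime. Here D^{-1} = [[1/2, -1/10], [0, 1/10]].
nonzero = {(i, j) for i in range(2) for j in range(2)
           if any(inverses[p][i][j] for p in valid)}
assert nonzero == {(0, 0), (0, 1), (1, 1)}
```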

We make these numbers more precise in the following.

Proof (of Theorem 4.1).

By Theorem 3, and since non-zeroness of the entries of the inverse of a matrix is almost domain-independent, it suffices to exhibit a dynamic program that maintains non-zeroness of the entries of D^{-1} for n^ε many changes of small size. (Actually we only describe a program that works correctly for sufficiently large n. However, small n can easily be dealt with separately.) The dynamic program maintains D^{-1} modulo p for each of the first polynomially many primes p which, by the Prime Number Theorem, can be found among the first polynomially many numbers. Denote by P the set of these primes. The initialization procedure computes D^{-1} mod p for each prime p in P. The update procedure for a change with change matrix C is simple:

  1. For each prime p ∈ P:

    1. If D + C is not invertible modulo p then remove p from P.

    2. If D + C is invertible modulo p then update the stored inverse to (D + C)^{-1} mod p.

  2. Declare an entry of (D + C)^{-1} to be non-zero if there is a prime p ∈ P for which the corresponding entry of (D + C)^{-1} mod p is non-zero.

The Steps 1a and 1b can be performed in due to Proposition 4.1.

It remains to argue that the result from Step 2 is correct. Observe that the absolute values of the entries of D are polynomial in n at all times, and therefore |det(D)| ≤ 2^{cn log n} for large enough n and a suitable constant c. Thus, since det(D) ≠ 0 over ℚ by assumption, and since every prime dividing det(D) contributes a factor of at least 2, there are at most cn log n primes p such that det(D) ≡ 0 (mod p), for every D reached after a sequence of changes.

In particular, D + C is not invertible modulo p — equivalently, (D + C)^{-1} mod p does not exist — for at most cn log n primes p. Hence, each time Step 1 is executed, at most cn log n primes are declared invalid and removed from P. All in all this step is executed at most n^ε times, and therefore not more than cn^{1+ε} log n primes are removed from P. Thus, if P initially contains sufficiently (polynomially) many primes, the inverses are computed correctly for the remaining valid primes at all times.

Each entry of the adjugate of D + C is, again, bounded by 2^{cn log n}, so if such an entry is non-zero over ℚ then it is zero modulo at most cn log n primes. Since more than cn log n valid primes remain in P, the result declared in Step 2 is correct. ∎

4.2 Defining the Determinant of Small Matrices

In this subsection we prove Theorem 4.1. The symbolic determinant of a k × k matrix is a sum of k! monomials and therefore cannot be naïvely defined in FO(+, ×). Instead we use the fact that FO(+, ×) can easily convert numbers with polylogarithmically many bits into their Chinese remainder representation and back, and show how the determinant can be computed modulo primes with O(log log n) bits.

It is easy to verify in FO(+, ×) whether the value of a determinant modulo an O(log log n)-bit prime is zero, by guessing a linear combination witnessing that the rank is less than full. We aim for a characterization that allows us to reduce the verification of determinant values to such zeroness tests. To this end we use the self-reducibility and multilinearity of determinants. Assume det(A) ≠ 0 and that the determinant of the submatrix A′ obtained by removing the last row and column is also non-zero. Since the determinant is linear in the entry a_{k,k}, it can be written as det(A) = a_{k,k} · det(A′) + b for some b. By finding an x such that the determinant is zero when a_{k,k} is replaced by x in A, we gain det(A) = (a_{k,k} − x) · det(A′). Repeating this step recursively for det(A′) — which is the determinant of a smaller matrix — one obtains a procedure for determining the value of the determinant that can be parallelized.

The following lemma is a preparation for deriving the characterization. We denote by A^{(i)} the matrix obtained from a matrix A by removing all rows and columns with index larger than i.

Suppose A is a non-singular k × k matrix over a field F. Then there is a permutation π of the columns of A such that the resulting matrix B satisfies

det(B^{(i)}) ≠ 0 for all 1 ≤ i ≤ k.

Proof.

In the Laplace expansion of det(A) with respect to the k-th row there must be at least one non-zero term; say, the j-th term. Then a_{k,j} ≠ 0 and det(A_{k,j}) ≠ 0. Thus if B is the matrix obtained by swapping the j-th and k-th columns of A, then det(B) ≠ 0 and, if k > 1, det(B^{(k−1)}) ≠ 0. Proceed inductively with the matrix B^{(k−1)}, and combine the column swaps into a permutation π. ∎
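The greedy column-swapping argument from the proof can be sketched over a small prime field (the brute-force determinant serves as reference; all helper names are ours):

```python
# A sketch of the column-swapping argument over Z_p, assuming A is
# non-singular: repeatedly pick, in the Laplace expansion of the current
# leading submatrix along its last row, a column giving a non-zero term,
# and swap it into the last position; afterwards every leading principal
# minor is non-zero.

p = 7

def det(M):
    # Reference determinant mod p via cofactor expansion (fine for tiny k).
    if len(M) == 1:
        return M[0][0] % p
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M))) % p

def permute_columns(A):
    k = len(A)
    A = [row[:] for row in A]
    for i in range(k - 1, 0, -1):
        sub = [row[:i + 1] for row in A[:i + 1]]
        # Find a column j whose expansion term a_{i,j} * det(minor) is non-zero.
        j = next(j for j in range(i + 1)
                 if sub[i][j] % p and det([r[:j] + r[j+1:] for r in sub[:i]]))
        for row in A:  # swap columns j and i
            row[j], row[i] = row[i], row[j]
    return A

A = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]   # non-singular, leading minors vanish
B = permute_columns(A)
minors = [det([row[:i + 1] for row in B[:i + 1]]) for i in range(len(B))]
assert all(m != 0 for m in minors)
```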

The following proposition characterizes the determinant of a matrix. We will see that this characterization allows for parallel computation of the determinant of small matrices.

Proposition.

Suppose A is a k × k matrix over a field F such that det(A) ≠ 0 and det(A^{(i)}) ≠ 0 for all 1 ≤ i ≤ k. For x ∈ F, let A^{(i)}_x be the matrix obtained from A^{(i)} by replacing the entry a_{i,i} by x. Then there are unique x_1, …, x_k ∈ F and d_1, …, d_k ∈ F such that

  1. det(A^{(i)}_{x_i}) = 0, and

  2. d_i = (a_{i,i} − x_i) · d_{i−1}, with d_0 = 1,

for 1 ≤ i ≤ k. Furthermore, it holds that d_i = det(A^{(i)}); in particular det(A) = d_k.

Proof.

Clearly, d_0 = 1. We inductively show that the x_i exist and are unique. The values d_i are then determined by (b), and we prove that d_i = det(A^{(i)}) for all i. Suppose this has been ensured for i − 1. Expanding the determinant of A^{(i)} with respect to the i-th row and splitting the sum into the term for the i-th column and the terms for all other columns yields

det(A^{(i)}) = a_{i,i} · det(A^{(i−1)}) + c_i

with c_i = Σ_{j<i} (−1)^{i+j} a_{i,j} · det((A^{(i)})_{i,j}).

Similarly the determinant of A^{(i)}_x expands to x · det(A^{(i−1)}) + c_i. Since det(A^{(i−1)}) ≠ 0 there is a unique x_i such that x_i · det(A^{(i−1)}) + c_i = 0. With this x_i, we have that c_i = −x_i · det(A^{(i−1)}), and plugging this into the equation above yields det(A^{(i)}) = (a_{i,i} − x_i) · det(A^{(i−1)}) = d_i. ∎
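The characterization can be instantiated directly; in the brute-force sketch below (over a small prime field, helper names ours) the search for each x_i stands in for the guess-and-verify step of the parallel procedure:

```python
# A sketch of the determinant-by-substitution characterization modulo a
# prime p, assuming all leading principal minors of A are non-zero: for
# each i, the unique x_i that zeroes det(A^{(i)}) when substituted for
# a_{i,i} satisfies det(A^{(i)}) = (a_{i,i} - x_i) * det(A^{(i-1)}), so
# det(A) is the product of the factors (a_{i,i} - x_i).

p = 97

def det(M):
    # Reference determinant mod p via cofactor expansion (fine for tiny k).
    if len(M) == 1:
        return M[0][0] % p
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M))) % p

def det_by_substitution(A):
    result = 1
    for i in range(len(A) - 1, -1, -1):
        Ai = [row[:i + 1] for row in A[:i + 1]]      # leading submatrix A^{(i)}
        # Brute-force search for x_i; in the formula this is a guess.
        x = next(v for v in range(p)
                 if det([r[:i] + [v] if ri == i else r
                         for ri, r in enumerate(Ai)]) == 0)
        result = result * (Ai[i][i] - x) % p
    return result

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]   # all leading minors non-zero mod 97
assert det_by_substitution(A) == det(A)
```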

Finally we show that the characterization from the previous proposition can be used to define the determinant of small matrices in FO(+, ×).

Proof (of Theorem 4.1).

Suppose A is a k × k matrix over ℤ_p with k = O(log n / log log n). The idea is to define the determinant of A (with entries viewed as integers) in Chinese remainder representation for primes q_1, …, q_m with O(log log n) bits each. A simple calculation shows that polylogarithmically many such primes suffice. The Chinese remainder representation can be defined from the values of the determinant modulo the q_i, and the value of the determinant can be recovered from these values in FO(+, ×) due to Lemma 2. Thus let us show how to define the determinant modulo a prime q with O(log log n) bits.

The idea is to first test whether the determinant is zero. If not, the fact that it is not zero is used to define the determinant using Proposition 4.2.

If A is singular modulo q then there exists a non-trivial linear combination of the columns that yields the all-zero vector. Such a linear combination is determined by specifying one O(log log n)-bit number for each of the k columns. It can thus be encoded in O(log n) bits, and therefore be existentially quantified by a first-order formula. Such a “guess” can be decoded (i.e., the k numbers of length O(log log n) can be extracted) in FO(+, ×), see Section 2. Checking whether a guessed linear combination yields zero requires summing k small numbers and is hence possible in FO(+, ×) due to Lemma 2.

Now, for defining the determinant when A is non-singular, a formula can guess a permutation π of the columns and verify that it satisfies the conditions from Lemma 4.2. Note that such a permutation can be represented as a sequence of k pairs of numbers with O(log log n) bits each, and hence be stored in O(log n) bits. The verification of the conditions from Lemma 4.2 requires the zero-test for determinants explained above. After fixing π, the values x_1, …, x_k as well as d_1, …, d_k from Proposition 4.2 can be guessed and verified. Again, these numbers can be stored in O(log n) bits. For verifying the conditions from Proposition 4.2 on the determinants of the matrices A^{(i)}_{x_i}, the zero-test for determinants is used. ∎

5 Distances under Multiple Changes

In this section we extend the techniques from the previous section to show how distances can be maintained under changes that affect polylogarithmically many nodes with first-order updates that may use majority quantifiers. Afterwards we discuss how the techniques extend to other dynamic complexity classes.

Theorem 2 (restated). Reachability and Distance can be maintained in DynFO+Maj(+, ×) under changes that affect O(log^c n) nodes of a graph, for fixed c ∈ ℕ.

The idea is to use generating functions for counting the number of paths of each length, following Hesse [14]. Fix a graph G with adjacency matrix A and a formal variable X. Then (I − XA)^{-1} = Σ_{i≥0} X^i A^i is a matrix of formal power series from ℤ[[X]] such that the coefficient of X^i in the (s, t)-entry is the number of paths from s to t of length i. In particular, the distance between s and t is the smallest i such that this coefficient is non-zero. Note that if such an i exists, then i < n.

Similarly to the corresponding matrix from the previous section, the matrix I − XA is invertible over ℤ[[X]], and its inverse can be written as Σ_{i≥0} X^i A^i (cf. [11, Example 3.6.1]). The maintenance of distances thus reduces to maintaining, for a matrix D and each entry (s, t) of D^{-1}, the smallest i such that the i-th coefficient of that entry is non-zero.
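The path-counting interpretation of the series can be sketched with truncated coefficients (graph and helper names are ours):

```python
# A sketch, assuming power series are truncated at degree n-1: the
# coefficients of (I - X*A)^{-1} = sum_i X^i A^i count paths by length,
# and the distance is the index of the first non-zero coefficient.

n = 4
edges = {(0, 1), (1, 2), (2, 3), (0, 3)}
A = [[int((i, j) in edges) for j in range(n)] for i in range(n)]

def matmul(M, N):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)]
            for row in M]

# coeff[i][s][t] = number of paths from s to t of length i  (= (A^i)_{s,t})
coeff = [[[int(s == t) for t in range(n)] for s in range(n)]]
for _ in range(1, n):
    coeff.append(matmul(coeff[-1], A))

def distance(s, t):
    return next((i for i in range(n) if coeff[i][s][t]), None)

assert distance(0, 3) == 1      # the direct edge beats the path 0-1-2-3
assert distance(0, 2) == 2
assert distance(1, 0) is None   # unreachable
```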

Suppose D stays invertible over ℤ[[X]]. Then, for all entries (s, t), one can maintain the smallest i < n such that the i-th coefficient of the (s, t)-entry of D^{-1} is non-zero in DynFO+Maj(+, ×) under changes that affect O(log^c n) nodes, for fixed c ∈ ℕ.

The idea is the same as for Reachability. When updating D to D + C, one can decompose the change matrix C into U C′ V for suitable matrices U, C′ and V, and apply the Sherman-Morrison-Woodbury identity (Equation 4.1), this time over the field of fractions of ℤ[[X]] (see the appendix for a short recollection of this field).

Of course, computing with inherently infinite formal power series is not possible in FO+Maj(+, ×). However, as stated in Theorem 5, in the end we are only interested in the first n coefficients of the power series. We therefore show that it suffices to truncate all occurring power series at the (n−1)-th term, and use the ability of FO+Maj(+, ×) to define iterated sums and products of polynomials [15].

Formally, we have to show that no precision in the first coefficients is lost when computing with truncated power series. This motivates the following definition. A formal power series s is an m-approximation of a formal power series t, denoted by s ≈_m t, if the coefficients of X^i in s and t coincide for all i ≤ m. This notion naturally extends to matrices over ℤ[[X]]: a matrix is an m-approximation of another matrix if each of its entries is an m-approximation of the corresponding entry of the other. The notion of m-approximation is preserved under all arithmetic operations that will be relevant.

Fix an m ∈ ℕ.

  1. Suppose s, s′, t, t′ ∈ ℤ[[X]] with s ≈_m s′ and t ≈_m t′. Then

    1. s + t ≈_m s′ + t′,

    2. s · t ≈_m s′ · t′, and

    3. s^{-1} ≈_m s′^{-1} whenever s and s′ are normalized.

  2. Suppose M, M′, N, N′ are matrices over ℤ[[X]] with M ≈_m M′ and N ≈_m N′. Then

    1. M + N ≈_m M′ + N′,

    2. M N ≈_m M′ N′, and

    3. if M is invertible over ℤ[[X]] then so is M′, and M^{-1} ≈_m M′^{-1}.

Here, a formal power series is normalized if its constant coefficient is 1.
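Truncated arithmetic of this kind can be sketched as follows, assuming series are represented by their first m+1 coefficients (helper names ours):

```python
# A sketch of arithmetic on truncated power series: with series stored as
# coefficient lists mod X^{m+1}, truncation commutes with addition,
# multiplication, and inversion of normalized series.

m = 5

def mul(s, t):
    # Product truncated at degree m.
    out = [0] * (m + 1)
    for i, a in enumerate(s[:m + 1]):
        for j, b in enumerate(t[:m + 1 - i]):
            out[i + j] += a * b
    return out

def inv(s):
    # Invert a normalized series (constant coefficient 1) mod X^{m+1} via
    # the recurrence b_0 = 1, b_i = -sum_{j=1..i} s_j * b_{i-j}.
    b = [1] + [0] * m
    for i in range(1, m + 1):
        b[i] = -sum(s[j] * b[i - j] for j in range(1, i + 1) if j < len(s))
    return b

s = [1, 2, 0, 3]        # 1 + 2X + 3X^3
assert mul(s, inv(s)) == [1, 0, 0, 0, 0, 0]
```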

An approximation of the inverse of a matrix can be updated using the Sherman-Morrison-Woodbury identity.

Proposition.

Suppose D is invertible over ℤ[[X]] and E is an m-approximation of D^{-1}. If D + C is invertible over ℤ[[X]] and C can be written as U C′ V with C′ invertible, then

(D + C)^{-1} ≈_m E − E U (C′^{-1} + V E U)^{-1} V E.

Proof.

This follows immediately from the Sherman-Morrison-Woodbury identity and Lemma 5. ∎

As already discussed in Section 4, the Sherman-Morrison-Woodbury identity involves inverting small matrices, which reduces to computing the determinant of such matrices. We show that this is possible in FO+Maj(+, ×) for k × k matrices of polynomials with k = O(log^c n).

Fix a domain of size $n$ and $c \in \mathbb{N}$. The determinant of a $k \times k$ matrix over $\mathbb{Z}[X]$, with entries of degree polynomial in $n$, can be defined in FO+Maj for $k \in O(\log^c n)$.

Proof.

We show that the determinant can be computed in DLOGTIME-uniform $\mathrm{TC}^0$, which is as powerful as FO+Maj [3].

Computing the determinant of a $k \times k$ matrix is equivalent to computing the iterated matrix product of matrices of dimension at most $k$ [5], and this reduction is a uniform $\mathrm{TC}^0$ reduction, as can be seen implicitly in [19, p. 482]. Thus the lemma statement follows from the fact that iterated products of matrices of dimension $O(\log^c n)$ can be computed in uniform $\mathrm{TC}^0$, which can be proven as in [1, p. 69]. The full proof can be found in the appendix. ∎

Proof (of Theorem 5).

The dynamic program maintains an $n$-approximation $C$ of $B^{-1}$ that truncates all power series at degree $n$. When $B$ is updated to $B' = B + \Delta$ then:

  1. $\Delta$ is decomposed into $UV^T$ for suitable $U$ and $V$;

  2. $C$ is updated via $C' = C - CU(\mathbb{1} + V^T C U)^{-1} V^T C$;

  3. All entries of $C'$ are truncated at degree $n$.

The steps can be defined in FO+Maj due to Lemma 4.1, Lemma 5, and the fact that iterated addition and multiplication of polynomials can be defined in FO+Maj, see [15]. The maintained matrix $C$ is indeed an $n$-approximation of $B^{-1}$ due to Proposition 5. ∎
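The three steps above can be sketched in Python for the special case of inserting a single edge $(s,t)$, where the change to $B = \mathbb{1} - XA$ is the rank-one matrix $\Delta = -X\,e_s e_t^T$ (so $U = -X e_s$, $V = e_t$) and all series are truncated at a fixed degree. This is our own simplified illustration, not the paper's FO+Maj construction; names are hypothetical.

```python
M = 5  # truncation degree

def pad(p):
    return (p + [0] * (M + 1))[:M + 1]

def padd(p, q):
    return [a + b for a, b in zip(pad(p), pad(q))]

def pmul(p, q):
    r = [0] * (M + 1)
    for i, a in enumerate(pad(p)):
        for j, b in enumerate(pad(q)):
            if i + j <= M:
                r[i + j] += a * b
    return r

def pdiv(p, q):
    # divide by a normalized series (constant term 1 or -1)
    p, q = pad(p), pad(q)
    c = []
    for i in range(M + 1):
        c.append((p[i] - sum(q[j] * c[i - j] for j in range(1, i + 1))) // q[0])
    return c

def initial(n):
    """Approximate inverse of B = 1 - X*A for the empty graph: the identity."""
    return [[pad([int(i == j)]) for j in range(n)] for i in range(n)]

def insert_edge(C, s, t):
    """Steps 1-3 for the change Delta = -X e_s e_t^T (edge (s,t) inserted):
    C' = C - C U (1 + V^T C U)^{-1} V^T C, truncated at degree M."""
    n = len(C)
    denom = padd([1], pmul([0, -1], C[t][s]))        # 1 - X * C[t][s], normalized
    col = [pmul([0, 1], C[i][s]) for i in range(n)]  # X * (column s of C)
    return [[padd(C[i][j], pdiv(pmul(col[i], C[t][j]), denom))
             for j in range(n)] for i in range(n)]

C = initial(3)
for s, t in [(0, 1), (1, 2), (2, 0)]:  # build a directed triangle
    C = insert_edge(C, s, t)

# entry (0,2) lists walk counts for lengths 0..M: walks of length 2, 5, ... exist
print(C[0][2])  # -> [0, 0, 1, 0, 0, 1]
```

Note that after the cycle closes, the division by $1 - X\,C[t][s]$ becomes non-trivial and correctly produces the infinitely many walk lengths, truncated at degree $M$.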

From the proof of Theorem 5 it is clear that the main obstacle towards maintaining distances for changes that affect a larger set of nodes is to compute determinants of larger matrices. Since distances can be computed in NL, only classes below NL are interesting from a dynamic perspective. As an example we state a result for a circuit class of depth $O(\log^* n)$.

Reachability and Distance can be maintained in under changes that affect nodes.

Here $\log^* n$ denotes the smallest number $i$ such that $i$-fold application of the logarithm to $n$ yields a number smaller than $1$. The corollary follows by plugging Lemma 5 into the proof above.
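For intuition, $\log^*$ grows extremely slowly. A direct Python transcription of this definition (with threshold $1$; a different constant threshold only shifts the value by an additive constant):

```python
from math import log2

def log_star(n):
    """Smallest i such that applying log2 i times to n gives a value <= 1."""
    i = 0
    x = n
    while x > 1:
        x = log2(x)
        i += 1
    return i

print([log_star(n) for n in (2, 4, 16, 65536)])  # -> [1, 2, 3, 4]
```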

Fix a domain of size . The determinant of a matrix , with entries of degree polynomial in , can be computed in uniform for .

6 Conclusion

For us it came as a surprise that Reachability can be maintained under changes of non-constant size, without any structural restrictions. In contrast, the dynamic program for Reachability from [6] can only deal with changing many outgoing edges of a single node (or, symmetrically, many incoming edges; a combination is not possible). For that program it is essential that only single rows of the adjacency matrix are changed.

It would be interesting to improve our results for FO+Maj to changes of larger size. The obstacle is the computation of determinants of matrices of this size, which we can only do for matrices of polylogarithmic size. Yet in principle our approach can deal with certain changes that affect more nodes: the matrices $U$ and $V$ in the Sherman-Morrison-Woodbury identity can be chosen differently, as long as all computations involve only iterated addition of numbers.

One of the big remaining open questions in dynamic complexity is whether distances are in DynFO. Our approach sheds some light on this question. It can be adapted so as to maintain, within DynFO(+, ×), information from which shortest distances can be extracted in FO+Maj. The technical proof of this result is deferred to the appendix.

Distances can be defined by an FO+Maj query from auxiliary relations that can be maintained in DynFO(+, ×) under changes that affect $O(\log n / \log\log n)$ nodes.

References

  • [1] Manindra Agrawal and V. Vinay. Arithmetic circuits: A chasm at depth four. In 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2008), pages 67–75. IEEE, 2008.
  • [2] Eric Allender. Arithmetic circuits and counting complexity classes. In Complexity of Computations and Proofs, Quaderni di Matematica, pages 33–72, 2004.
  • [3] David A. Mix Barrington, Neil Immerman, and Howard Straubing. On uniformity within NC$^1$. J. Comput. Syst. Sci., 41(3):274–306, 1990.
  • [4] Ashok K. Chandra, Larry J. Stockmeyer, and Uzi Vishkin. Constant depth reducibility. SIAM J. Comput., 13(2):423–439, 1984. URL: https://doi.org/10.1137/0213028, doi:10.1137/0213028.
  • [5] Stephen A. Cook. A taxonomy of problems with fast parallel algorithms. Information and Control, 64(1-3):2–21, 1985. URL: https://doi.org/10.1016/S0019-9958(85)80041-3, doi:10.1016/S0019-9958(85)80041-3.
  • [6] Samir Datta, Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, and Thomas Zeume. Reachability is in DynFO. In Magnús M. Halldórsson, Kazuo Iwama, Naoki Kobayashi, and Bettina Speckmann, editors, Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part II, volume 9135 of Lecture Notes in Computer Science, pages 159–170. Springer, 2015.
  • [7] Samir Datta, Anish Mukherjee, Thomas Schwentick, Nils Vortmeier, and Thomas Zeume. A strategy for dynamic programs: Start over and muddle through. In Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl, editors, 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland, volume 80 of LIPIcs, pages 98:1–98:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2017. URL: https://doi.org/10.4230/LIPIcs.ICALP.2017.98, doi:10.4230/LIPIcs.ICALP.2017.98.
  • [8] Guozhu Dong and Chaoyi Pang. Maintaining transitive closure in first order after node-set and edge-set deletions. Inf. Process. Lett., 62(4):193–199, 1997. URL: http://dx.doi.org/10.1016/S0020-0190(97)00066-5, doi:10.1016/S0020-0190(97)00066-5.
  • [9] Guozhu Dong, Jianwen Su, and Rodney W. Topor. Nonrecursive incremental evaluation of datalog queries. Ann. Math. Artif. Intell., 14(2-4):187–223, 1995. URL: http://dx.doi.org/10.1007/BF01530820, doi:10.1007/BF01530820.
  • [10] Kousha Etessami. Dynamic tree isomorphism via first-order updates. In Proceedings of the Seventeenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS), pages 235–243, 1998.
  • [11] Chris D. Godsil. Algebraic combinatorics. Chapman and Hall mathematics series. Chapman and Hall, 1993.
  • [12] Erich Grädel and Sebastian Siebertz. Dynamic definability. In Alin Deutsch, editor, 15th International Conference on Database Theory, ICDT ’12, Berlin, Germany, March 26-29, 2012, pages 236–248. ACM, 2012. URL: http://doi.acm.org/10.1145/2274576.2274601, doi:10.1145/2274576.2274601.
  • [13] Harold V. Henderson and Shayle R. Searle. On deriving the inverse of a sum of matrices. SIAM Review, 23(1):53–60, 1981.
  • [14] William Hesse. The dynamic complexity of transitive closure is in DynTC$^0$. Theor. Comput. Sci., 296(3):473–485, 2003. URL: https://doi.org/10.1016/S0304-3975(02)00740-5, doi:10.1016/S0304-3975(02)00740-5.
  • [15] William Hesse, Eric Allender, and David A. Mix Barrington. Uniform constant-depth threshold circuits for division and iterated multiplication. J. Comput. Syst. Sci., 65(4):695–716, 2002. URL: https://doi.org/10.1016/S0022-0000(02)00025-9, doi:10.1016/S0022-0000(02)00025-9.
  • [16] Roger A. Horn and Charles R. Johnson. Matrix analysis. Cambridge University Press, 2012.
  • [17] Neil Immerman. Descriptive complexity. Graduate texts in computer science. Springer, 1999.
  • [18] Hermann Jung. Depth efficient transformations of arithmetic into boolean circuits. In Fundamentals of Computation Theory, FCT ’85, pages 167–174, London, UK, 1985. Springer-Verlag. URL: http://dl.acm.org/citation.cfm?id=647892.739608.
  • [19] Meena Mahajan and V. Vinay. Determinant: Old algorithms, new insights. SIAM J. Discrete Math., 12(4):474–490, 1999. URL: https://doi.org/10.1137/S0895480198338827, doi:10.1137/S0895480198338827.
  • [20] Sushant Patnaik and Neil Immerman. Dyn-fo: A parallel, dynamic complexity class. In Victor Vianu, editor, Proceedings of the Thirteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, May 24-26, 1994, Minneapolis, Minnesota, USA, pages 210–221. ACM Press, 1994. URL: http://doi.acm.org/10.1145/182591.182614, doi:10.1145/182591.182614.
  • [21] Sushant Patnaik and Neil Immerman. Dyn-FO: A parallel, dynamic complexity class. J. Comput. Syst. Sci., 55(2):199–209, 1997.
  • [22] Thomas Schwentick, Nils Vortmeier, and Thomas Zeume. Dynamic complexity under definable changes. In Michael Benedikt and Giorgio Orsi, editors, 20th International Conference on Database Theory, ICDT 2017, March 21-24, 2017, Venice, Italy, volume 68 of LIPIcs, pages 19:1–19:18. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2017. URL: https://doi.org/10.4230/LIPIcs.ICDT.2017.19, doi:10.4230/LIPIcs.ICDT.2017.19.
  • [23] Thomas Schwentick and Thomas Zeume. Dynamic complexity: recent updates. SIGLOG News, 3(2):30–52, 2016. URL: http://doi.acm.org/10.1145/2948896.2948899, doi:10.1145/2948896.2948899.

Appendix

7 Background on Formal Power Series

Recall that $\mathbb{Z}$ is an integral domain whose only units are $1$ and $-1$. By $\mathbb{Z}[[X]]$ we denote the ring of formal power series over $\mathbb{Z}$, i.e. the ring with elements $\sum_{i \geq 0} a_i X^i$, with all $a_i \in \mathbb{Z}$, and natural addition and multiplication. An element $\sum_i a_i X^i$ is normalized if $a_0 \in \{1, -1\}$. Since $\mathbb{Z}$ is an integral domain, all normalized elements of $\mathbb{Z}[[X]]$ have an inverse. The integral domain $\mathbb{Z}[[X]]$ can be embedded into its field of fractions. We denote the subring of $\mathbb{Z}[[X]]$ consisting of all finite polynomials by $\mathbb{Z}[X]$, and the field of fractions of $\mathbb{Z}[X]$ by $\mathbb{Z}(X)$.

A matrix $M$ is invertible over $\mathbb{Z}[[X]]$ if there is a matrix $M^{-1}$ with $M M^{-1} = M^{-1} M = \mathbb{1}$. The matrix $M$ is invertible over $\mathbb{Z}[[X]]$ if and only if it is invertible in $\mathbb{Z}(X)$ and the constant term of $\det M$ is a unit of $\mathbb{Z}$, i.e. it is $1$ or $-1$.

For a polynomial $p$ we abbreviate its degree by $\deg(p)$, and we also consider the value of its largest coefficient. The degree of a representation $\frac{p}{q}$ of an element of $\mathbb{Z}(X)$ is the maximum of the degrees of $p$ and $q$, and similarly for the largest coefficient. Degree and maximal coefficient are defined similarly for matrices over $\mathbb{Z}[X]$ and $\mathbb{Z}(X)$.

If $B$ is of the form $\mathbb{1} - XA$ for some matrix $A$ then $B$ is invertible over $\mathbb{Z}[[X]]$, as $\det B$ exists and is a normalized polynomial.
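This invertibility is witnessed by the geometric series $(\mathbb{1} - XA)^{-1} = \sum_{k \geq 0} X^k A^k$. A small Python check with all series truncated at degree $M$ (an illustration with helper names of our own):

```python
M = 4  # truncation degree

def pmul(p, q):
    """Product of two coefficient lists, truncated at degree M."""
    r = [0] * (M + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= M:
                r[i + j] += a * b
    return r

A = [[0, 1], [1, 0]]  # adjacency matrix of a single undirected edge
n = len(A)

# S = sum of X^k A^k for k = 0..M: coefficient k of entry (i,j) is (A^k)[i][j]
S = [[[0] * (M + 1) for _ in range(n)] for _ in range(n)]
P = [[int(i == j) for j in range(n)] for i in range(n)]
for k in range(M + 1):
    for i in range(n):
        for j in range(n):
            S[i][j][k] = P[i][j]
    P = [[sum(A[i][l] * P[l][j] for l in range(n)) for j in range(n)]
         for i in range(n)]

# B = 1 - X*A as a matrix of polynomials
B = [[[int(i == j), -A[i][j]] + [0] * (M - 1) for j in range(n)] for i in range(n)]

# B * S should equal the identity matrix, up to degree M
prod = [[[sum(x) for x in zip(*(pmul(B[i][l], S[l][j]) for l in range(n)))]
         for j in range(n)] for i in range(n)]
identity = [[[int(i == j)] + [0] * M for j in range(n)] for i in range(n)]
assert prod == identity
```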

8 Proofs of Section 5

Proof (of Lemma 5).

The first two parts of (a) are straightforward. For the last part suppose that $p_1 \approx_m q_1$ and $p_2 \approx_m q_2$. Further write $\frac{p_1}{p_2}$ and $\frac{q_1}{q_2}$ as power series $\sum_i c_i X^i$ and $\sum_i d_i X^i$, where $p_\ell = \sum_i a_{\ell,i} X^i$ and $q_\ell = \sum_i b_{\ell,i} X^i$ for $\ell \in \{1, 2\}$. Then it is easy to see that $a_{1,0} = a_{2,0} c_0$ and $a_{1,i} = \sum_{j \leq i} a_{2,j} c_{i-j}$ for all $i$. Similarly $b_{1,0} = b_{2,0} d_0$ and $b_{1,i} = \sum_{j \leq i} b_{2,j} d_{i-j}$. Solving both systems of equations yields $c_i = d_i$ for $i \leq m$.

The first two parts of (b) follow immediately from (a). For the third part, recall that $N_1$ is invertible over $\mathbb{Z}[[X]]$ if and only if it is invertible in $\mathbb{Z}(X)$ and the constant term of $\det N_1$ is a unit. This translates, via (a), to the matrix $M_1$. Furthermore