1.1 The RL vs. L Problem
One of the central problems in computational complexity is to understand the power of randomness for efficient computation. There is strong evidence that randomness does not provide substantial savings for standard algorithmic problems, where the input is static and the algorithm has time to access it in its entirety. Indeed, under widely believed complexity assumptions (e.g. that SAT has circuit complexity $2^{\Omega(n)}$), it is known that every randomized algorithm can be made deterministic with only a polynomial slowdown ($\mathbf{BPP} = \mathbf{P}$) and a constant-factor increase in space ($\mathbf{BPL} = \mathbf{L}$) [IW, KvM].
A major challenge is to prove such derandomization results unconditionally, without relying on unproven complexity assumptions. In the time-bounded case, it is known that proving $\mathbf{BPP} = \mathbf{P}$ requires progress on circuit lower bounds [IKW, KI]. But for the space-bounded case, there are no such barriers, and we can hope for an unconditional proof that $\mathbf{RL} = \mathbf{L}$. Indeed, Nisan [Nis] gave an unconditional construction of a pseudorandom generator for space-bounded computation with seed length $O(\log^2 n)$, which was used by Saks and Zhou [SZ1] to prove that $\mathbf{RL} \subseteq \mathbf{L}^{3/2}$. Unfortunately, despite much effort over the past two decades, this remains the best known upper bound on the deterministic space complexity of $\mathbf{RL}$.
For many years, the most prominent example of a problem in $\mathbf{RL}$ not known to be in $\mathbf{L}$ was Undirected S-T Connectivity, which can be solved in randomized logarithmic space by performing a polynomial-length random walk from the start vertex $s$ and accepting if the destination vertex $t$ is ever visited [AKL]. In 2005, Reingold [Rei] gave a deterministic logspace algorithm for this problem. Since Undirected S-T Connectivity is complete for $\mathbf{SL}$ (symmetric logspace) [LP], this also implies deterministic logspace algorithms for several other natural problems, such as Bipartiteness [JLL, AG].
Reingold’s algorithm provided hope for renewed progress on the general $\mathbf{RL}$ vs. $\mathbf{L}$ problem. Indeed, it was shown in [RTV] that solving S-T connectivity on directed graphs where the random walk is promised to have polynomial mixing time (and where the target vertex $t$ has noticeable stationary probability) is complete for $\mathbf{RL}$ (generalized to promise problems). While [RTV] also generalized Reingold’s methods to Eulerian directed graphs, the efforts to handle all of $\mathbf{RL}$ with this approach stalled. Thus, researchers turned back to constructing pseudorandom generators for more restricted models of space-bounded computation, such as various types of constant-width branching programs [BRRY, BV, KNP, SZ2, GMR, IMZ, Ste, RSV, SVW].
1.2 Our Work
In this paper, we restart the effort to obtain deterministic logspace algorithms for increasingly rich graph-theoretic problems beyond those known to be in $\mathbf{L}$. In particular, we provide a nearly logarithmic-space algorithm for approximately solving Undirected Laplacian Systems, which also implies nearly logarithmic-space algorithms for approximating hitting times, commute times, and escape probabilities for random walks on undirected graphs.
Our algorithms are obtained by combining the techniques from recent nearly linear-time Laplacian solvers [ST1, PS] with methods related to Reingold’s algorithm [RV]. The body of work on time-efficient Laplacian solvers has recently been extended to Eulerian directed graphs [CKP1], which in turn was used to obtain algorithms to approximate stationary probabilities for arbitrary directed graphs with polynomial mixing time through recent reductions [CKP2].
This raises the tantalizing possibility of extending our nearly logarithmic-space Laplacian solvers in a similar way to prove that $\mathbf{RL} = \mathbf{L}$, since approximating stationary probabilities on digraphs with polynomial mixing time suffices to solve all of $\mathbf{RL}$ [RTV, CRV], and Reingold’s algorithm has been extended to Eulerian digraphs [RTV, RV].
1.3 Laplacian system solvers
Given an undirected multigraph $G$ with adjacency matrix $A$ and diagonal degree matrix $D$, the Laplacian of $G$ is the matrix $L = D - A$. Solving systems of equations in the Laplacians of graphs (and in general symmetric diagonally dominant systems) arises in many applications throughout computer science. In a seminal paper, Spielman and Teng gave a nearly linear time algorithm for solving such linear systems [ST1], which was improved in a series of follow-up works [KMP1, KMP2, KOSZ, PS, LS1], including recent extensions to directed graphs [CKP2, CKP1]. These methods have sparked a large body of work using Laplacian solvers as an algorithmic primitive for problems such as max flow [CKM, KLOS], randomly sampling spanning trees [KM], sparsest cut [She], graph sparsification [SS], as well as problems in computer vision [KMT].
Recent works have started to study the space complexity of solving Laplacian systems. Ta-Shma [TS] gave a quantum logspace algorithm for approximately solving general linear systems that are suitably well-conditioned. Doron, Le Gall, and Ta-Shma [DLT] showed that there is a randomized logspace algorithm for approximately solving Laplacian systems on digraphs with polynomial mixing time.
1.4 Main Result
We give a nearly logarithmic-space algorithm for approximately solving undirected Laplacian systems:
Theorem 1.1. There is a deterministic algorithm that, given an undirected multigraph $G$ specified as a list of edges, a vector $b$ in the image of $G$'s Laplacian $L$, and an approximation parameter $\epsilon > 0$, finds a vector $x = \widetilde{L^+} b$, for some symmetric matrix $\widetilde{L^+}$ such that $\widetilde{L^+} \approx_\epsilon L^+$, in space
$$O\!\left(\log N \cdot \log\log\frac{N}{\epsilon}\right),$$
where $N$ is the bitlength of the input $(G, b, \epsilon)$.
In particular, even for $\epsilon = 1/\mathrm{poly}(N)$, the algorithm uses space $O(\log N \cdot \log\log N)$. Note that since the algorithm applies to multigraphs (represented as edge lists), we can handle polynomially large integer edge weights. Known reductions [CKP2] imply the same space bounds for estimating hitting times, commute times, and escape probabilities for random walks on undirected graphs. (See Section 8.)
The starting point for our algorithm is the undirected Laplacian solver of Peng and Spielman [PS], which is a randomized algorithm that uses polylogarithmic parallel time and a nearly linear amount of work. (Interestingly, concurrently and independently of [Rei], Trifonov [Tri] gave an $O(\log n \cdot \log\log n)$-space algorithm for Undirected S-T Connectivity by importing techniques from parallel algorithms, like we do.) It implicitly computes an approximate pseudoinverse of a graph Laplacian (formally defined in Section 2.2), which is equivalent to approximately solving Laplacian systems. (The pseudoinverse is computed only implicitly because nearly linear-time algorithms do not have enough time to write down a full approximate pseudoinverse, which may be dense; instead they compute the result of applying the approximate pseudoinverse to a given vector.) Here we will sketch their algorithm and how we obtain a space-efficient analogue of it.
By using the deterministic logspace algorithm for Undirected S-T Connectivity [Rei], we may assume without loss of generality that our input graph $G$ is connected (else we find the connected components and work on each separately). By adding self loops (which does not change the Laplacian), we may also assume that $G$ is regular and nonbipartite. For notational convenience, here and through most of the paper we will work with the normalized Laplacian, which in the case that $G$ is $d$-regular equals $L/d = I - M$, where $M = A/d$ is the transition matrix for the random walk on $G$, and now we are using $n$ for the number of vertices of $G$. (In an irregular graph, the normalized Laplacian is defined to be $D^{-1/2} L D^{-1/2}$, where $D$ is the diagonal matrix of vertex degrees. When $G$ is $d$-regular, we have $D = dI$, so $D^{-1/2} L D^{-1/2} = L/d$.) Since the uniform distribution is stationary for the random walk on a regular graph, the all-ones vector $\vec{1}$ is in the kernel of $I - M$. Because $G$ is connected, there is no other stationary distribution and $I - M$ is of rank $n - 1$. Thus computing $(I - M)^+$ amounts to inverting $I - M$ on the space orthogonal to $\vec{1}$.
The Peng–Spielman algorithm is based on the following identity:
$$(I - M)^+ = \frac{1}{2}\left(I - J + (I + M)\,\bigl(I - M^2\bigr)^+\,(I + M)\right), \qquad (1)$$
where $J$ is the matrix with all entries $1/n$. That is, $I - M^2$ is the normalized Laplacian of $G^2$, where $G^2$ is the multigraph whose edges correspond to walks of length 2 in $G$.
This gives rise to a natural algorithm for computing $(I - M)^+$, by recursively computing $(I - M^2)^+$ and applying Equation (1). After squaring the graph $k = O(\log n)$ times, we are considering all walks of length $2^k = \mathrm{poly}(n)$, which is beyond the mixing time of connected, regular, nonbipartite graphs, and hence $M^{2^k}$ is approximately $J$, and we can easily approximate the pseudoinverse of $I - M^{2^k}$ as $(I - J)^+ = I - J$, the normalized Laplacian of the complete graph with a self loop on every vertex.
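As a numerical sanity check of Equation (1) (ours, not from [PS]), we can verify the identity on a small regular, connected, aperiodic graph, using `numpy.linalg.pinv` as a stand-in for an exact pseudoinverse:

```python
import numpy as np

# Sanity check of (I-M)^+ = 1/2 (I - J + (I+M)(I - M^2)^+ (I+M))
# on a 5-cycle with a self loop on every vertex (degree d = 3):
# regular, connected, and aperiodic.
n, d = 5, 3
A = np.eye(n)  # self loops
for v in range(n):
    A[v, (v + 1) % n] += 1
    A[v, (v - 1) % n] += 1
M = A / d                      # transition matrix of the random walk
I = np.eye(n)
J = np.full((n, n), 1.0 / n)   # all entries 1/n
L = I - M

lhs = np.linalg.pinv(L)
rhs = 0.5 * (I - J + (I + M) @ np.linalg.pinv(I - M @ M) @ (I + M))
assert np.allclose(lhs, rhs)
```

The check passes because $M$ is symmetric, so both sides diagonalize in the same eigenbasis and agree eigenvalue by eigenvalue.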
However, each time we square the graph, the degree also squares, which is too costly in terms of computational complexity. Thus, to obtain nearly linear time, Peng and Spielman [PS] sparsify $G^2$ at each step of the recursion. Specifically, they carefully combine elements of randomized sparsification procedures from [ST2, OV, ST3] to obtain a sparsifier $\tilde{G}$ that retains only $O(n \cdot \mathrm{polylog}(n)/\epsilon^2)$ edges from $G^2$ and provides a spectral $\epsilon$-approximation to $G^2$, in the sense that for every vector $v$, we have $(1 - \epsilon) \cdot v^T L_{G^2} v \le v^T L_{\tilde{G}} v \le (1 + \epsilon) \cdot v^T L_{G^2} v$. The recursion of Equation (1) behaves nicely with respect to spectral approximation, so that if we replace $G^2$ with such an $\epsilon$-approximation at each step, we obtain a $(1 + \epsilon)^{O(k)}$-approximation over the $k = O(\log n)$ levels of recursion, which is a good approximation provided $\epsilon = O(1/k)$. Thus, we can take $\epsilon = \Theta(1/\log n)$ and obtain a good spectral approximation overall, using graphs with $O(n \cdot \mathrm{polylog}(n))$ edges throughout and thereby maintaining a nearly linear amount of work.
There are two issues with trying to obtain a deterministic, nearly logarithmic-space algorithm from this approach. First, we cannot perform random sampling to sparsify. Second, even if we derandomize the sampling step in deterministic logarithmic space, a direct recursive implementation would cost $O(\log n)$ space for each of the $O(\log n)$ levels of recursion, for a space bound of $O(\log^2 n)$.
For the first issue, we show that the sparsification can be done deterministically using the derandomized square of Rozenman and Vadhan [RV], which was developed in order to give a simpler proof that Undirected S-T Connectivity is in $\mathbf{L}$. If $G$ is a $d$-regular graph, a derandomized square of $G$ is obtained by using an expander $H$ on $d$ vertices to select a subset of the walks of length 2 in $G$. If $H$ has degree $c \ll d$, then the corresponding derandomized square of $G$ will have degree $d \cdot c$, much smaller than the degree $d^2$ of $G^2$. In [RV], it was shown that the derandomized square improves expansion (as measured by spectral gap) nearly as well as actual squaring, to within a factor of $1 - \mu$, where $\mu = \lambda(H)$ is the second largest eigenvalue of $H$ in absolute value. Here we show that derandomized squaring actually provides a spectral approximation to $G^2$, with approximation error governed by $\mu$. Consequently, we can use derandomized squaring with expanders of second eigenvalue $\mu = O(1/\log n)$ and hence degree $c = \mathrm{polylog}(n)$ in the Peng–Spielman algorithm.
Next, to obtain a space-efficient implementation, we consider what happens when we repeatedly apply Equation (1), but with true squaring replaced by derandomized squaring. We obtain a sequence of graphs $G_0 = G, G_1, \dots, G_k$, where $G_i$ is the derandomized square of $G_{i-1}$, a graph of degree $d \cdot c^i$. Writing $M_i$ for the transition matrix of $G_i$, we recursively compute spectral approximations of the pseudoinverses as follows:
$$\widetilde{L_i^+} = \frac{1}{2}\left(I - J + (I + M_i)\,\widetilde{L_{i+1}^+}\,(I + M_i)\right), \qquad \widetilde{L_k^+} = I - J.$$
Opening up all $k$ levels of this recursion, there is an explicit quadratic matrix polynomial, in which each of the (noncommuting) variables $I + M_i$ appears at most twice in each term, such that
$$\widetilde{L_0^+} = \frac{1}{2^k}\,W_k (I - J) W_k^T + \sum_{i=0}^{k-1} \frac{1}{2^{i+1}}\,W_i (I - J) W_i^T, \qquad W_i = (I + M_0)(I + M_1)\cdots(I + M_{i-1}).$$
Our task is to evaluate this polynomial in space $O(\log n \cdot \log\log n)$. First, we use the fact, following [Rei, RV], that a careful implementation of $k$-fold derandomized squaring using explicit expanders of degree $c$ can be evaluated in space $O(\log n + k \cdot \log c)$, which is $O(\log n \cdot \log\log n)$ for our setting of parameters. (A naive evaluation would take space $O(k \cdot \log n) = O(\log^2 n)$.) This means that we can construct each of the matrices $M_i$ in space $O(\log n \cdot \log\log n)$. (We follow the usual model of space-bounded computation, where algorithms can have output, written to write-only memory, that is larger than their space bound. We review the standard composition lemma for such algorithms in Section 6.1.) Next, we observe that each term in the multilinear polynomial multiplies at most $2k + 1 = O(\log n)$ of these matrices together, and hence can be computed recursively in space $O(\log n \cdot \log k) = O(\log n \cdot \log\log n)$. Since iterated addition is also in logspace, we can then sum to evaluate the entire matrix polynomial in space $O(\log n \cdot \log\log n)$.
To obtain an arbitrarily good final approximation error $\epsilon$, we could use expanders of degree $\mathrm{poly}(\log n, 1/\epsilon)$ for our derandomized squaring, but this would yield a space complexity of at least $\log n \cdot \log(1/\epsilon)$. To obtain the doubly-logarithmic dependence on $\epsilon$ claimed in Theorem 1.1, we follow the same approach as [PS], computing a constant-factor spectral approximation as above, and then using “Richardson iterations” at the end to improve the approximation factor. Interestingly, this doubly-logarithmic dependence on $\epsilon$ is even better than what is achieved by the randomized algorithm of [DLT], whose space grows logarithmically with $1/\epsilon$.
2.1 Graph Laplacians
Let $G$ be an undirected multigraph on $n$ vertices. The adjacency matrix $A$ is the $n \times n$ matrix such that entry $A_{uv}$ contains the number of edges from vertex $u$ to vertex $v$ in $G$. The degree matrix $D$ is the diagonal matrix such that $D_{vv}$ is the degree of vertex $v$ in $G$. The Laplacian of $G$ is defined to be $L = D - A$.
$L$ is a symmetric, positive semidefinite matrix with non-positive off-diagonal entries and diagonal entries equaling the sum of the absolute values of the off-diagonal entries in that row. Each row of a Laplacian sums to 0, so the all-ones vector is in the kernel of every Laplacian.
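These basic properties are easy to check numerically; a minimal sketch (ours, with an arbitrary example multigraph):

```python
import numpy as np

# Build the Laplacian L = D - A of a small multigraph from an edge list
# (parallel edges allowed) and check the properties stated above.
n = 4
edges = [(0, 1), (0, 1), (1, 2), (2, 3), (3, 0)]  # edge (0,1) is doubled
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] += 1
    A[v, u] += 1
D = np.diag(A.sum(axis=1))
L = D - A

assert np.allclose(L.sum(axis=1), 0)          # every row sums to 0
assert np.allclose(L @ np.ones(n), 0)         # all-ones vector is in the kernel
assert np.min(np.linalg.eigvalsh(L)) > -1e-9  # positive semidefinite
```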
We will also work with the normalized Laplacian of regular graphs, $L/d = I - M$, where $M = A/d$ is the transition matrix of the random walk on the graph $G$.
Let $G$ be a regular undirected graph with transition matrix $M$. Then we define
$$\lambda(G) = \max_{v \perp \vec{1},\, v \neq 0} \frac{\|M v\|}{\|v\|},$$
the second largest eigenvalue of $M$ in absolute value. The spectral gap of $G$ is $\gamma(G) = 1 - \lambda(G)$.
The spectral gap of a multigraph is a good measure of its expansion properties. It is shown in [RV] that if $G$ is a $d$-regular multigraph on $n$ vertices with a self loop on every vertex, then $\gamma(G) \ge 1/2d^2n^2$.
Throughout the paper we let $J$ denote the $n \times n$ matrix with $1/n$ in every entry. So $I - J$ is the normalized Laplacian of the complete graph with a self loop on every vertex. We use $\vec{1}$ to denote the all-ones vector.
2.2 Moore-Penrose pseudoinverse
Since a normalized Laplacian $L = I - M$ is not invertible (the all-ones vector is in its kernel), we will consider the Moore–Penrose pseudoinverse of $L$.
Let $A$ be a real-valued matrix. The Moore–Penrose pseudoinverse $A^+$ of $A$ is the unique matrix satisfying the following:
1. $A A^+ A = A$,
2. $A^+ A A^+ = A^+$,
3. $(A A^+)^T = A A^+$,
4. $(A^+ A)^T = A^+ A$.
For a real-valued matrix $A$, $A^+$ has the following properties:
1. If $c$ is a nonzero scalar, then $(cA)^+ = \frac{1}{c}\,A^+$.
2. If $A$ is symmetric with eigenvalues $\lambda_1, \dots, \lambda_n$, then $A^+$ is symmetric, has the same eigenvectors as $A$, and has eigenvalues $\lambda_1^+, \dots, \lambda_n^+$, where $\lambda_i^+ = 1/\lambda_i$ if $\lambda_i \neq 0$ and $\lambda_i^+ = 0$ otherwise.
Next we show that solving a linear system in $L$ can be reduced to computing the pseudoinverse of $L$ and applying it to a vector.
Proposition. If $Lx = b$ has a solution, then the vector $L^+ b$ is a solution.

Proof. Let $x^*$ be a solution. Multiplying both sides of the equation $L x^* = b$ by $L L^+$ gives
$$L L^+ L x^* = L L^+ b \quad\Longrightarrow\quad L x^* = L L^+ b \quad\Longrightarrow\quad b = L\,(L^+ b).$$
The final equality shows that $L^+ b$ is a solution to the system. ∎
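The proposition can be exercised directly with `numpy.linalg.pinv` (a numerical sketch of ours, on an arbitrary example graph):

```python
import numpy as np

# If Lx = b is solvable (b is in the image of L), then x = L^+ b solves it.
n = 6
A = np.zeros((n, n))
for v in range(n):                 # 6-cycle
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1
L = np.diag(A.sum(axis=1)) - A

y = np.arange(n, dtype=float)
b = L @ y                          # guarantees b is in the image of L
x = np.linalg.pinv(L) @ b
assert np.allclose(L @ x, b)
```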
2.3 Spectral approximation
We want to approximate a solution to $Lx = b$. Our algorithm works by computing an approximation $\widetilde{L^+}$ to the matrix $L^+$ and then outputting $x = \widetilde{L^+} b$ as an approximate solution to $Lx = b$. We use the notion of spectral approximation of matrices first introduced in [ST2].
Let $X, Y$ be symmetric real matrices. We say that $X \approx_\epsilon Y$ if
$$e^{-\epsilon}\, Y \preceq X \preceq e^{\epsilon}\, Y,$$
where for any two matrices $A, B$ we write $A \preceq B$ if $B - A$ is positive semidefinite.
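This definition is easy to operationalize; below is a small checker of ours (not from the paper) that tests the two Loewner-order inequalities via eigenvalues:

```python
import numpy as np

def psd(X, tol=1e-9):
    """Check that a symmetric matrix is positive semidefinite (up to tol)."""
    return np.min(np.linalg.eigvalsh(X)) >= -tol

def spectral_approx(X, Y, eps):
    """Check X ~_eps Y, i.e. e^{-eps} Y <= X <= e^{eps} Y in the PSD order."""
    return psd(X - np.exp(-eps) * Y) and psd(np.exp(eps) * Y - X)

# Example: scaling a PSD matrix by 1.05 is a spectral approximation with
# eps = ln(1.05), but not with a much smaller eps.
Y = np.array([[2.0, -1.0], [-1.0, 2.0]])
X = 1.05 * Y
assert spectral_approx(X, Y, np.log(1.05) + 1e-12)
assert not spectral_approx(X, Y, 0.01)
```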
Below are some additional useful facts about spectral approximation that we use throughout the paper.

Proposition 2.6 ([PS]). For symmetric positive semidefinite matrices $W, X, Y, Z$ and $\epsilon, \epsilon_1, \epsilon_2 \ge 0$:
1. If $X \approx_\epsilon Y$, then $Y \approx_\epsilon X$.
2. If $X \approx_{\epsilon_1} Y$ and $Y \approx_{\epsilon_2} Z$, then $X \approx_{\epsilon_1 + \epsilon_2} Z$.
3. If $X \approx_{\epsilon_1} Y$ and $W \approx_{\epsilon_2} Z$, then $X + W \approx_{\max(\epsilon_1, \epsilon_2)} Y + Z$.
4. If $X \approx_\epsilon Y$ and $c \ge 0$, then $cX \approx_\epsilon cY$.
5. If $X \approx_\epsilon Y$ and $X$ and $Y$ have the same kernel, then $X^+ \approx_\epsilon Y^+$.
6. If $X \approx_\epsilon Y$ and $M$ is any matrix, then $M X M^T \approx_\epsilon M Y M^T$.
7. If $X \approx_\epsilon Y$, then $X + W \approx_\epsilon Y + W$.
8. If $X \approx_\epsilon Y$, then $I_m \otimes X \approx_\epsilon I_m \otimes Y$ for every $m \ge 1$, where $\otimes$ denotes the Kronecker product.
3 Main Theorem and Approach
Our Laplacian solver works by approximating $L^+$. The main technical result is:
Theorem 3.1. Given an undirected, connected multigraph $G$ with Laplacian $L$ and $\epsilon > 0$, there is a deterministic algorithm that computes a matrix $\widetilde{L^+}$ such that $\widetilde{L^+} \approx_\epsilon L^+$, using space $O(\log N \cdot \log\log(N/\epsilon))$, where $N$ is the bitlength of the input $(G, \epsilon)$.
Approximating solutions to Laplacian systems follows as a corollary and is discussed in Section 8. To prove Theorem 3.1, we first reduce the problem to the task of approximating the pseudoinverse of the normalized Laplacian of a regular, aperiodic multigraph whose degree is a power of 2. Let $L = D - A$ be the Laplacian of an undirected multigraph $G$. We can make $G$ $d$-regular, with $d$ a power of 2, and aperiodic by adding an appropriate number of self loops to every vertex. Let $S$ be the diagonal matrix counting the self loops added to each vertex in $G$. Then $L = (D + S) - (A + S)$ is the Laplacian of the resulting regular, aperiodic multigraph (self loops do not change the unnormalized Laplacian), and $L/d = I - M$, with $M = (A + S)/d$, is its normalized Laplacian. Recalling that $(L/d)^+ = d \cdot L^+$ completes the reduction.
Our algorithm for computing the pseudoinverse of the normalized Laplacian of a regular, aperiodic multigraph is based on the Laplacian solver of Peng and Spielman [PS]. It works by using the following identity:

Proposition 3.2. If $I - M$ is the normalized Laplacian of an undirected, connected, regular, aperiodic multigraph $G$ on $n$ vertices, then
$$(I - M)^+ = \frac{1}{2}\left(I - J + (I + M)\,\bigl(I - M^2\bigr)^+\,(I + M)\right).$$
Recall that squaring the transition matrix $M$ of a regular multigraph $G$ yields the transition matrix of $G^2$, which is defined to be the graph on the same vertex set as $G$ whose edges correspond to walks of length 2 in $G$. So the identity from Proposition 3.2 reduces the problem of computing the pseudoinverse of the normalized Laplacian of $G$ to computing the pseudoinverse of the normalized Laplacian of $G^2$ (plus some additional matrix products). Repeatedly applying the identity from Proposition 3.2 and expanding the resulting expression, we see that for all integers $k \ge 0$,
$$(I - M)^+ = \frac{1}{2^k}\,W_k\,\bigl(I - M^{2^k}\bigr)^+\,W_k + \sum_{i=0}^{k-1} \frac{1}{2^{i+1}}\,W_i\,(I - J)\,W_i, \qquad (3)$$
where for all $0 \le i \le k$,
$$W_i = \prod_{j=0}^{i-1}\Bigl(I + M^{2^j}\Bigr),$$
with $W_0 = I$.
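The $k$-level expansion $(I-M)^+ = 2^{-k} W_k (I - M^{2^k})^+ W_k + \sum_{i<k} 2^{-(i+1)} W_i (I-J) W_i$, with $W_i = (I+M)(I+M^2)\cdots(I+M^{2^{i-1}})$, can be sanity-checked numerically (our check, not from the paper):

```python
import numpy as np

# Verify the k-level expansion obtained by repeatedly applying the identity
# of Proposition 3.2 with exact squaring, on a small regular aperiodic graph.
n, d = 5, 3
A = np.eye(n)                  # 5-cycle with a self loop on every vertex
for v in range(n):
    A[v, (v + 1) % n] += 1
    A[v, (v - 1) % n] += 1
M = A / d
I, J = np.eye(n), np.full((n, n), 1.0 / n)

k = 3
rhs = np.zeros((n, n))
W = I                          # W_i, built up incrementally
Mpow = M                       # M^{2^i}
for i in range(k):
    rhs += W @ (I - J) @ W / 2 ** (i + 1)
    W = W @ (I + Mpow)
    Mpow = Mpow @ Mpow
rhs += W @ np.linalg.pinv(I - Mpow) @ W / 2 ** k
assert np.allclose(np.linalg.pinv(I - M), rhs)
```

Here all the factors commute (they are polynomials in $M$), so the order of the products in $W_i$ is immaterial.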
Since $G$ is connected and aperiodic, as $k$ grows the term $M^{2^k}$ approaches $J$. This is because for all vertices $u, v$, entry $(u, v)$ of $M^{2^k}$ is the probability that a random walk of length $2^k$ from $u$ ends at $v$. The graph is regular, so its stationary distribution is uniform, and if $2^k$ is larger than the mixing time of the graph, then a random walk of length $2^k$ starting at $u$ is essentially equally likely to end at any vertex in the graph. The eigenvalues of $I - J$ are all 0 or 1, so $I - J$ is its own pseudoinverse. Undirected multigraphs have polynomial mixing time, so setting $k = O(\log n)$ and then replacing $(I - M^{2^k})^+$ in the final term of the expansion with $I - J$ should give a good approximation to $(I - M)^+$ without explicitly computing any pseudoinverses.
Using the identity directly to approximate $(I - M)^+$ in space $O(\log n \cdot \log\log n)$ is infeasible because raising a matrix to the $2^k$-th power by repeated squaring takes space $O(k \cdot \log n)$, which is $O(\log^2 n)$ for $k = \Theta(\log n)$.
To save space we use the derandomized square introduced in [RV] in place of true matrix squaring. The derandomized square improves the connectivity of a graph almost as much as true squaring does, but it does not square the degree and thereby avoids the space blow-up when iterated.
4 Approximation of the pseudoinverse
In this section we show how to approximate the pseudoinverse of a Laplacian using a method from Peng and Spielman [PS]. The Peng–Spielman solver was originally written to approximately solve symmetric diagonally dominant systems, which in turn can be used to approximate solutions to Laplacian systems. We adapt their algorithm to compute the pseudoinverse of a graph Laplacian directly. The approximation proceeds in two steps. The first achieves a constant spectral approximation to the pseudoinverse of a Laplacian matrix. Then the constant approximation is boosted to an $\epsilon$-approximation through $O(\log(1/\epsilon))$ rounds of Richardson iteration.
Theorem 4.1 (Adapted from [PS]).
Let $\epsilon > 0$ and let $k$ be a positive integer. Let $M_0, \dots, M_k$ be symmetric matrices such that $M_i \vec{1} = \vec{1}$ and $I + M_i$ and $I - M_i$ are positive semidefinite for all $i$, $I - M_i \approx_\epsilon I - M_{i-1}^2$ for all $i \in \{1, \dots, k\}$, and $I - J \approx_\epsilon I - M_k$. Then
$$(I - M_0)^+ \approx_{(k+1)\epsilon} \widetilde{L^+},$$
where
$$\widetilde{L^+} = \frac{1}{2^k}\,W_k (I - J) W_k^T + \sum_{i=0}^{k-1} \frac{1}{2^{i+1}}\,W_i (I - J) W_i^T,$$
and where for all $0 \le i \le k$, $W_i = (I + M_0)(I + M_1)\cdots(I + M_{i-1})$, with $W_0 = I$.
The proof of this theorem is adapted from [PS] and we include it here for completeness.
Proof. First we show that for all $i \in \{1, \dots, k\}$,
$$(I - M_i)^+ \approx_\epsilon (I - M_{i-1}^2)^+.$$
The approximations in the hypothesis force the matrices $I - M_i$, $I - M_{i-1}^2$, and $I - J$ to all have the same kernel, namely the span of $\vec{1}$, so this follows from our assumption that $I - M_i \approx_\epsilon I - M_{i-1}^2$ together with Part 5 of Proposition 2.6.

For $0 \le i \le k$, let
$$P_i = \frac{1}{2^{k-i}}\,V_{i,k}(I - J)V_{i,k}^T + \sum_{j=i}^{k-1} \frac{1}{2^{j-i+1}}\,V_{i,j}(I - J)V_{i,j}^T, \qquad V_{i,j} = (I + M_i)(I + M_{i+1})\cdots(I + M_{j-1}).$$
Notice that $P_0 = \widetilde{L^+}$, that $P_k = I - J$, and that for all $i < k$,
$$P_i = \frac{1}{2}\left(I - J + (I + M_i)\,P_{i+1}\,(I + M_i)\right).$$
So we want to show that
$$P_0 \approx_{(k+1)\epsilon} (I - M_0)^+.$$
We will prove by backwards induction on $i$ that
$$P_i \approx_{(k+1-i)\epsilon} (I - M_i)^+.$$
The base case of $i = k$ follows by assumption: since $I - J \approx_\epsilon I - M_k$ and the two matrices have the same kernel, Part 5 of Proposition 2.6 gives $P_k = I - J = (I - J)^+ \approx_\epsilon (I - M_k)^+$. Supposing the claim holds for $i + 1$, we show it also holds for $i$. By the inductive hypothesis and our assumption about $M_{i+1}$, we have
$$P_{i+1} \approx_{(k-i)\epsilon} (I - M_{i+1})^+ \approx_\epsilon (I - M_i^2)^+.$$
From Proposition 2.6 Parts 6, 2, and 3 we have
$$\frac{1}{2}\left(I - J + (I + M_i)\,P_{i+1}\,(I + M_i)\right) \approx_{(k+1-i)\epsilon} \frac{1}{2}\left(I - J + (I + M_i)\,(I - M_i^2)^+\,(I + M_i)\right).$$
Applying the identity of Proposition 3.2 to the right-hand side (its proof only uses that $M_i$ is symmetric, fixes $\vec{1}$, and that $I - M_i$ and $I - M_i^2$ have kernel exactly the span of $\vec{1}$) then gives
$$P_i \approx_{(k+1-i)\epsilon} (I - M_i)^+,$$
as desired. ∎
Our algorithm works by using Theorem 4.1 to compute a constant-factor approximation to $L^+$ and then boosting the approximation to an arbitrarily good $\epsilon$-approximation. Our main tool for this is the following lemma, which shows how an approximate pseudoinverse can be improved to a high quality approximate pseudoinverse. This is essentially a symmetric version of a well-known technique in linear system solving known as preconditioned Richardson iteration. It is a slight modification of Lemma 31 from [LS2].
Lemma 4.2. Let $A$ and $B$ be symmetric positive definite matrices such that, for some $0 < \alpha < 1$, it is the case that $(1 - \alpha) \cdot A^{-1} \preceq B \preceq (1 + \alpha) \cdot A^{-1}$. Then for all $m \ge 1$ we have that
$$A_m = \sum_{i=0}^{m-1} (I - BA)^i B$$
is a symmetric matrix satisfying $(1 - \alpha^m) \cdot A^{-1} \preceq A_m \preceq (1 + \alpha^m) \cdot A^{-1}$.
Proof. Let $\hat{B} = A^{1/2} B A^{1/2}$ and $\hat{A}_m = A^{1/2} A_m A^{1/2}$, so that
$$\hat{A}_m = \sum_{i=0}^{m-1} (I - \hat{B})^i\,\hat{B}.$$
Note that $(1 - \alpha) \cdot I \preceq \hat{B} \preceq (1 + \alpha) \cdot I$ and therefore $-\alpha \cdot I \preceq I - \hat{B} \preceq \alpha \cdot I$. As the sum telescopes, $\sum_{i=0}^{m-1} (I - \hat{B})^i\,\hat{B} = \sum_{i=0}^{m-1} \bigl((I - \hat{B})^i - (I - \hat{B})^{i+1}\bigr)$, so we have
$$\hat{A}_m = I - (I - \hat{B})^m.$$
Therefore, as $-\alpha^m \cdot I \preceq (I - \hat{B})^m \preceq \alpha^m \cdot I$, we have
$$(1 - \alpha^m) \cdot I \preceq \hat{A}_m \preceq (1 + \alpha^m) \cdot I.$$
This yields that
$$(1 - \alpha^m) \cdot A^{-1} \preceq A_m \preceq (1 + \alpha^m) \cdot A^{-1},$$
and combining with the fact that each term $(I - BA)^i B = B (I - AB)^i$ is symmetric (so $A_m$ is symmetric) yields the result. ∎
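The geometric error decay $\alpha^m$ is easy to observe numerically. The sketch below (ours; the matrices are an arbitrary example constructed to satisfy the hypothesis) builds $A$ and a crude approximate inverse $B$ sharing an eigenbasis, forms $A_m$, and checks the sandwich:

```python
import numpy as np

rng = np.random.default_rng(0)

# Preconditioned Richardson (Lemma 4.2 shape): if
# (1-a) A^{-1} <= B <= (1+a) A^{-1}, then A_m = sum_{i<m} (I - BA)^i B
# satisfies (1-a^m) A^{-1} <= A_m <= (1+a^m) A^{-1}.
n, a, m = 6, 0.5, 8
Qmat, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = rng.uniform(1.0, 3.0, size=n)
A = Qmat @ np.diag(lam) @ Qmat.T               # symmetric positive definite
c = rng.uniform(1 - a, 1 + a, size=n)          # per-eigenvalue distortion
B = Qmat @ np.diag(c / lam) @ Qmat.T           # crude approximate inverse

I = np.eye(n)
Am = np.zeros((n, n))
T = I                                          # holds (I - BA)^i
for _ in range(m):
    Am += T @ B
    T = T @ (I - B @ A)

Ainv = np.linalg.inv(A)
lo = Am - (1 - a ** m) * Ainv                  # should be PSD
hi = (1 + a ** m) * Ainv - Am                  # should be PSD
assert np.min(np.linalg.eigvalsh((lo + lo.T) / 2)) >= -1e-9
assert np.min(np.linalg.eigvalsh((hi + hi.T) / 2)) >= -1e-9
```

With $m = 8$ rounds the initial distortion $\alpha = 1/2$ shrinks to $\alpha^m \approx 0.004$, illustrating the doubly-logarithmic cost of reaching accuracy $\epsilon$.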
Note that we can apply the same iterative method to boost the approximation quality of a pseudoinverse (rather than true inverse) by carrying out all of the analysis in the space orthogonal to the kernel of the Laplacian.
Corollary 4.3. Let $L$ be the normalized Laplacian of a connected, regular, aperiodic multigraph, and let $P$ be a symmetric matrix such that $(1 - \alpha) \cdot L^+ \preceq P \preceq (1 + \alpha) \cdot L^+$ for some $0 < \alpha < 1$. Then for all $m \ge 1$,
$$P_m = (I - J)\left(\sum_{i=0}^{m-1} \bigl(I - (P + J)(L + J)\bigr)^i\,(P + J)\right)(I - J)$$
is a matrix satisfying $(1 - \alpha^m) \cdot L^+ \preceq P_m \preceq (1 + \alpha^m) \cdot L^+$.

Proof. We have that $(L + J)(L^+ + J) = L L^+ + L J + J L^+ + J^2 = (I - J) + 0 + 0 + J = I$. Multiplying by $(L + J)^{-1}$ gives $(L + J)^{-1} = L^+ + J$. Taking $A = L + J$ and $B = P + J$ in Lemma 4.2 (note that $(1 - \alpha)(L^+ + J) \preceq P + J \preceq (1 + \alpha)(L^+ + J)$), we get that for all $m \ge 1$,
$$(1 - \alpha^m)\,(L^+ + J) \preceq \sum_{i=0}^{m-1} \bigl(I - (P + J)(L + J)\bigr)^i\,(P + J) \preceq (1 + \alpha^m)\,(L^+ + J).$$
Multiplying by $(I - J)$ on the left and right completes the proof. ∎
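The two algebraic facts driving this proof, that $L + J$ is invertible with inverse $L^+ + J$ and that conjugating by $I - J$ strips the $J$ term, can be confirmed numerically (our check, on an example graph):

```python
import numpy as np

# For the normalized Laplacian L = I - M of a connected, regular, aperiodic
# graph: (L + J)^{-1} = L^+ + J, and (I - J)(L^+ + J)(I - J) = L^+.
n, d = 5, 3
A = np.eye(n)                  # 5-cycle with self loops
for v in range(n):
    A[v, (v + 1) % n] += 1
    A[v, (v - 1) % n] += 1
M = A / d
I, J = np.eye(n), np.full((n, n), 1.0 / n)
L = I - M
Lpinv = np.linalg.pinv(L)

assert np.allclose((L + J) @ (Lpinv + J), I)                # inverse pair
assert np.allclose((I - J) @ (Lpinv + J) @ (I - J), Lpinv)  # projection trick
```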
5 Derandomized Squaring
To approximate the pseudoinverse of a graph Laplacian we will use Theorem 4.1 to get a constant approximation and then use Corollary 4.3 to boost the approximation. In order to achieve a space efficient and deterministic implementation of Theorem 4.1, we will replace every instance of matrix squaring with the derandomized square of Rozenman and Vadhan [RV]. Before defining the derandomized square we define two-way labelings and rotation maps.
[RVW] A two-way labeling of a $d$-regular undirected multigraph $G$ is a labeling of the edges in $G$ such that:
1. Every edge $\{u, v\}$ has two labels in $[d]$, one as an edge incident to $u$ and one as an edge incident to $v$.
2. For every vertex $v$, the labels of the $d$ edges incident to $v$ are distinct.
In [RV], two-way labelings are referred to as undirected two-way labelings. In a two-way labeling, each vertex has its own labeling from $1$ to $d$ for the edges incident to it. Since every edge is incident to two vertices, each edge receives two labels, which may or may not be the same. It is convenient to specify a multigraph with a two-way labeling by a rotation map:
[RVW] Let $G$ be a $d$-regular multigraph on $n$ vertices with a two-way labeling. The rotation map $\mathrm{Rot}_G : [n] \times [d] \to [n] \times [d]$ is defined as follows: $\mathrm{Rot}_G(v, i) = (w, j)$ if the $i$th edge incident to vertex $v$ leads to vertex $w$ and this edge is the $j$th edge incident to $w$.
Note that $\mathrm{Rot}_G$ is its own inverse, and that any function $\mathrm{Rot} : [n] \times [d] \to [n] \times [d]$ that is its own inverse defines a $d$-regular multigraph on $n$ vertices with a two-way labeling. Recall that the edges in $G^2$ correspond to all of the walks of length 2 in $G$. This is equivalent to placing a $d$-clique with a self loop on every vertex on the neighbor set of every vertex in $G$. The derandomized square picks out a subset of the walks of length 2 by placing a small-degree expander on the neighbor set of every vertex rather than a clique.
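As a concrete example (ours, not from the paper), here is a rotation map for the $n$-cycle with $d = 2$, together with a check that it is an involution:

```python
# Rotation map for the n-cycle (d = 2): label 0 = "clockwise edge",
# label 1 = "counterclockwise edge". The edge from v to v+1 has label 0
# at v and label 1 at v+1, so Rot is its own inverse.
n = 7

def rot_cycle(v, i):
    if i == 0:
        return ((v + 1) % n, 1)
    return ((v - 1) % n, 0)

for v in range(n):
    for i in (0, 1):
        w, j = rot_cycle(v, i)
        assert rot_cycle(w, j) == (v, i)   # Rot is an involution
```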
Definition 5.3 ([RV]).
Let $G$ be a $d$-regular multigraph on $n$ vertices with a two-way labeling, and let $H$ be a $c$-regular multigraph on $d$ vertices with a two-way labeling. The derandomized square $G \textcircled{s} H$ is a $(d \cdot c)$-regular graph on $n$ vertices with rotation map $\mathrm{Rot}_{G \textcircled{s} H}$ defined as follows: For $v_0 \in [n]$, $i_0 \in [d]$, and $j_0 \in [c]$, we compute $\mathrm{Rot}_{G \textcircled{s} H}(v_0, (i_0, j_0))$ as
1. let $(v_1, i_1) = \mathrm{Rot}_G(v_0, i_0)$,
2. let $(i_2, j_1) = \mathrm{Rot}_H(i_1, j_0)$,
3. let $(v_2, i_3) = \mathrm{Rot}_G(v_1, i_2)$,
4. output $(v_2, (i_3, j_1))$.
It can be verified that $\mathrm{Rot}_{G \textcircled{s} H}$ is its own inverse and hence this indeed defines a $(d \cdot c)$-regular multigraph on $n$ vertices. The main idea behind the derandomized square is that it improves the connectivity of the graph (as measured by the second eigenvalue) without squaring the degree.
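The involution property can be checked mechanically. A minimal sketch of ours: $G$ is the $n$-cycle ($d = 2$) and $H$ is the 1-regular graph on 2 vertices (a single edge) — a toy choice used only to exercise the definition, not an expander:

```python
# Generic derandomized-square rotation map (Definition 5.3), with output
# flattened to (v, i, j), plus a check that it is an involution.
n = 7

def rot_g(v, i):                       # rotation map of the n-cycle
    return ((v + 1) % n, 1) if i == 0 else ((v - 1) % n, 0)

def rot_h(i, j):                       # rotation map of a single edge on {0,1}
    return (1 - i, 0)

def rot_ds(v0, i0, j0):
    """Two G-steps, with the intermediate edge label driven by one H-step."""
    v1, i1 = rot_g(v0, i0)
    i2, j1 = rot_h(i1, j0)
    v2, i3 = rot_g(v1, i2)
    return v2, i3, j1

for v in range(n):
    for i in (0, 1):
        w, i2, j2 = rot_ds(v, i, 0)
        assert rot_ds(w, i2, j2) == (v, i, 0)   # involution
```

Note that the involution argument is generic: it only uses that $\mathrm{Rot}_G$ and $\mathrm{Rot}_H$ are themselves involutions.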
Theorem 5.4 ([RV]).
Let $G$ be a $d$-regular undirected multigraph with a two-way labeling and $\lambda(G) \le \lambda$, and let $H$ be a $c$-regular graph on $d$ vertices with a two-way labeling and $\lambda(H) \le \mu$. Then $G \textcircled{s} H$ is a $(d \cdot c)$-regular undirected multigraph and
$$\lambda(G \textcircled{s} H) \le 1 - (1 - \lambda^2)(1 - \mu) \le \lambda^2 + \mu.$$
The output of the derandomized square is an undirected multigraph with a two-way labeling, so the operation can be repeated on the output. In our algorithm for computing an approximate pseudoinverse, we use Identity (3) but replace every instance of a squared transition matrix with the transition matrix of a derandomized square. To prove that this approach indeed yields an approximation to the pseudoinverse, we want to apply Theorem 4.1. For this we need to prove two properties of the derandomized square: that the derandomized square is a good spectral approximation of the true square, and that iterating the derandomized square yields an approximation of the complete graph. The latter property follows from the corollary and lemma below.
Corollary 5.5. Let $G_0$ be a $d$-regular undirected multigraph on $n$ vertices with a two-way labeling and a self loop on every vertex, and let $H_1, \dots, H_k$ be $c$-regular graphs with two-way labelings, where $H_i$ has $d \cdot c^{i-1}$ vertices and $\lambda(H_i) \le \mu$. For all $i \in [k]$ let $G_i = G_{i-1} \textcircled{s} H_i$. If $\mu \le 1/8$ and $k \ge \log_{5/4}(2 d^2 n^2)$, then $\lambda(G_k) \le 1/2$.
Proof. Let $\gamma_0 = \gamma(G_0)$ and for all $i \in [k]$, let $\gamma_i = \gamma(G_i)$. Theorem 5.4 gives
$$\gamma_i \ge \bigl(1 - \lambda(G_{i-1})^2\bigr)(1 - \mu) = \gamma_{i-1}\,(2 - \gamma_{i-1})(1 - \mu).$$
If $\mu \le 1/8$ then
$$\gamma_i \ge \min\left\{\frac{5}{4}\,\gamma_{i-1},\ \frac{1}{2}\right\}.$$
It follows that until $\gamma_i$ is driven above $1/2$, we have $\gamma_i \ge (5/4)^i \cdot \gamma_0$. Since $\gamma_0 \ge 1/2d^2n^2$ (as $G_0$ has a self loop on every vertex), setting $k \ge \log_{5/4}(2 d^2 n^2)$ will result in $\gamma_k \ge 1/2$. ∎
Lemma 5.6. If $L = I - M$ is the normalized Laplacian of a $d$-regular, undirected, 1/2-lazy multigraph $G$, and $\epsilon \ge 0$, then $L \approx_\epsilon I - J$ if and only if $\lambda(G) \le 1 - e^{-\epsilon}$.
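This equivalence can be probed numerically (our check, on a lazy cycle): the sandwich $e^{-\epsilon}(I - J) \preceq L \preceq e^{\epsilon}(I - J)$ holds exactly when $\epsilon \ge \ln(1/(1 - \lambda(G)))$:

```python
import numpy as np

# 1/2-lazy random walk on the n-cycle: M = I/2 + C/4, eigenvalues in [0, 1].
n = 8
C = np.zeros((n, n))
for v in range(n):
    C[v, (v + 1) % n] = C[(v + 1) % n, v] = 1
M = 0.5 * np.eye(n) + 0.25 * C
I, J = np.eye(n), np.full((n, n), 1.0 / n)
L = I - M

eigs = np.linalg.eigvalsh(M)
lam = max(abs(e) for e in eigs if abs(e) < 1 - 1e-9)   # second largest |eig|
eps = np.log(1.0 / (1.0 - lam))

def psd(X, tol=1e-9):
    return np.min(np.linalg.eigvalsh(X)) >= -tol

# The sandwich holds at eps = ln(1/(1 - lam)) ...
assert psd(L - np.exp(-eps) * (I - J))
assert psd(np.exp(eps) * (I - J) - L)
# ... and the lower bound fails for noticeably smaller eps.
assert not psd(L - np.exp(-eps / 2) * (I - J))
```

Laziness matters for the upper bound: it makes $M$ positive semidefinite, so the eigenvalues of $L$ never exceed 1.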
We prove in Theorem 5.7 that if $H$ is an expander, then the derandomized square of a multigraph $G$ with $H$ is a good spectral approximation to $G^2$, and thus can be used for the approximation in Theorem 4.1. The idea is that the square of a graph can be interpreted as putting a $d$-clique on the neighbors of each vertex in $G$, and the derandomized square can be interpreted as putting a copy of $H$ on the neighbors of each vertex. By Lemma 5.6, $H$ spectrally approximates the complete graph on its vertex set, and we can use this to show that the Laplacian of $G \textcircled{s} H$ spectrally approximates the Laplacian of $G^2$.
Theorem 5.7. Let $G$ be a $d$-regular, undirected, aperiodic multigraph on $n$ vertices, and let $H$ be a $c$-regular, 1/2-lazy multigraph on $d$ vertices with $\lambda(H) \le \mu$. Then
$$L_{G \textcircled{s} H} \approx_\epsilon L_{G^2} \qquad \text{for } \epsilon = \ln\!\left(\frac{1}{1 - \mu}\right),$$
where $L_{G \textcircled{s} H}$ and $L_{G^2}$ are the normalized Laplacians of $G \textcircled{s} H$ and $G^2$.
Proof. Following the proof from [RV] that the derandomized square improves connectivity, we can write the transition matrix for the random walk on $G \textcircled{s} H$ as $\tilde{M} = Q \dot{R} \tilde{B} \dot{R} P$, where each matrix corresponds to a step in the definition of the derandomized square:

$P$ is an $nd \times n$ matrix that maps any $n$-vector $v$ to $v \otimes u_d$ (where $u_d$ denotes the $d$-coordinate uniform distribution). This corresponds to “lifting” a probability distribution over $[n]$ to one over $[n] \times [d]$ where the mass on each coordinate is divided uniformly over $d$ coordinates in the distribution over $[n] \times [d]$. That is, $P_{(u,i),v} = 1/d$ if $u = v$ and 0 otherwise, where the rows of $P$ are ordered $(1,1), (1,2), \dots, (1,d), (2,1), \dots, (n,d)$.

$\dot{R}$ is the $nd \times nd$ permutation matrix corresponding to the rotation map for $G$. That is, $\dot{R}_{(u,i),(v,j)} = 1$ if $\mathrm{Rot}_G(v, j) = (u, i)$ and 0 otherwise.

$\tilde{B}$ is $I_n \otimes B$, where $I_n$ is the $n \times n$ identity matrix and $B$ is the transition matrix for $H$.

$Q$ is the $n \times nd$ matrix that maps any $nd$-vector to an $n$-vector by summing all the entries corresponding to the same vertex in $G$. This corresponds to projecting distributions on $[n] \times [d]$ back down to a distribution over $[n]$. That is, $Q_{v,(u,i)} = 1$ if $u = v$ and 0 otherwise, where the columns of $Q$ are ordered $(1,1), \dots, (n,d)$. Note that $Q = d \cdot P^T$.
Let $c$ be the degree of $H$ and let $K$ be the complete graph on $d$ vertices with a self loop on every vertex. Lemma 5.6 says that $L_H \approx_\epsilon L_K$ for $\epsilon = \ln(1/(1 - \mu))$, where $L_H = I_d - B$ and $L_K = I_d - J_d$ are the normalized Laplacians of $H$ and $K$, respectively (here $J_d$ is the $d \times d$ matrix with $1/d$ in every entry). It follows that $I_n \otimes L_H$ $\epsilon$-approximates $I_n \otimes L_K$ by Proposition 2.6 Part 8. $I_n \otimes L_H$ is the Laplacian for $n$ disjoint copies of $H$ and $I_n \otimes L_K$ is the Laplacian for $n$ disjoint copies of $K$. Applying the matrices $Q\dot{R}$ and $\dot{R}P$ on the left and right places these copies on the neighborhoods of each vertex of $G$.
Note that $\dot{R}$ is symmetric because $\mathrm{Rot}_G$ is an involution. In other words, for all $u, v \in [n]$ and $i, j \in [d]$, $\mathrm{Rot}_G(u, i) = (v, j)$ if and only if $\mathrm{Rot}_G(v, j) = (u, i)$. This also implies that $\dot{R}^2 = I_{nd}$. Also note that $Q P = I_n$ and $Q = d \cdot P^T$. Applying these observations along with Proposition 2.6 Part 6, we get
$$L_{G \textcircled{s} H} = I_n - Q\dot{R}\tilde{B}\dot{R}P = Q\dot{R}\,\bigl(I_{nd} - \tilde{B}\bigr)\,\dot{R}P = d \cdot P^T\dot{R}\,(I_n \otimes L_H)\,\dot{R}P \approx_\epsilon d \cdot P^T\dot{R}\,(I_n \otimes L_K)\,\dot{R}P = L_{G^2}.$$
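The decomposition can be verified numerically in the degenerate case where $H$ is the complete graph with self loops (so $B = J_d$ and the derandomized square coincides with the true square). This is our sanity check, not part of the proof:

```python
import numpy as np

# Check M = Q R B~ R P against M_G^2 when B = J_d (so the "derandomized"
# square is the true square). G is the n-cycle with d = 2.
n, d = 5, 2

def rot_g(v, i):                       # rotation map of the n-cycle
    return ((v + 1) % n, 1) if i == 0 else ((v - 1) % n, 0)

idx = lambda v, i: v * d + i           # order (1,1), (1,2), ..., (n,d)
P = np.zeros((n * d, n))               # lift: divide mass over the d labels
for v in range(n):
    for i in range(d):
        P[idx(v, i), v] = 1.0 / d
R = np.zeros((n * d, n * d))           # permutation matrix of Rot_G
for v in range(n):
    for i in range(d):
        w, j = rot_g(v, i)
        R[idx(w, j), idx(v, i)] = 1.0
Q = d * P.T                            # project: sum entries per vertex
Btilde = np.kron(np.eye(n), np.full((d, d), 1.0 / d))

Ag = np.zeros((n, n))                  # adjacency matrix of the cycle
for v in range(n):
    Ag[v, (v + 1) % n] += 1
    Ag[v, (v - 1) % n] += 1
Mg = Ag / d                            # transition matrix of G

assert np.allclose(R, R.T)             # R is symmetric: Rot_G is an involution
assert np.allclose(Q @ P, np.eye(n))   # Q P = I_n
assert np.allclose(Q @ R @ Btilde @ R @ P, Mg @ Mg)
```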