# Massively Parallel Dynamic Programming on Trees

Dynamic programming is a powerful technique that is, unfortunately, often inherently sequential: there exists no unified method to parallelize algorithms that use dynamic programming. In this paper, we attempt to address this issue in the Massively Parallel Computations (MPC) model, which is a popular abstraction of MapReduce-like paradigms. Our main result is an algorithmic framework to adapt a large family of dynamic programs defined over trees. We introduce two classes of graph problems that admit dynamic programming solutions on trees. We refer to them as "(polylog)-expressible" and "linear-expressible" problems. We show that both classes can be parallelized in O(log n) rounds using a sublinear number of machines and a sublinear memory per machine. To achieve this result, we introduce a series of techniques that can be plugged together. To illustrate the generality of our framework, we implement in O(log n) rounds of MPC the dynamic programming solution of graph problems such as minimum bisection, k-spanning tree, maximum independent set, longest path, etc., when the input graph is a tree.


## 1 Introduction

With the inevitable growth of the size of datasets to analyze, the rapid advance of distributed computing infrastructure and platforms (such as MapReduce, Spark [45], Hadoop [44], Flume [19], etc.), and more importantly the availability of such infrastructure to medium- and even small-scale enterprises via services at Amazon Cloud and Google Cloud, the need for developing better distributed algorithms is felt far and wide nowadays. The past decade has seen a lot of progress in studying important computer science problems in the large-scale setting, which led to either adapting the sequential algorithms to distributed settings or at times designing from scratch distributed algorithms for these problems [3, 4, 17, 21, 18].

Despite this trend, we still have limited theoretical understanding of the status of several fundamental problems when it comes to designing large-scale algorithms. In fact, even simple and widely used techniques such as the greedy approach or dynamic programming seem to suffer from an inherent sequentiality that makes them difficult to adapt in parallel or distributed settings on the aforementioned platforms. Finding methods to run generic greedy algorithms or dynamic programming algorithms on MapReduce, for instance, has broad applications. This is the main goal of this paper.

Model. We consider the most restrictive variant of the Massively Parallel Computations (MPC) model, which is a common abstraction of MapReduce-like frameworks [31, 26, 10]. Let n denote the input size and let m denote the number of available machines, which is given in the input. (As is standard, we assume there exists a small constant δ > 0 for which the number of machines is always greater than n^δ.) At each round, every machine can use a space of size s = Õ(n/m) and run an algorithm that is preferably linear time (but at most polynomial time) in the size of its memory. (We assume that the available space on each machine is more than the number of machines, i.e., m ≤ s and hence m = Õ(√n). It is argued in [4] that this is a realistic assumption since each machine needs to have at least the index of all the other machines to be able to communicate with them. Also, as argued in [29], in some cases it is natural to assume the total memory is Õ(n^{1+ε}) for a small ε > 0.) Machines may only communicate between the rounds, and no machine can receive or send more data than its memory.

Im, Moseley, and Sun [29] initiated a principled framework for simulating sequential dynamic programming solutions of Optimal Binary Search Tree, Longest Increasing Subsequence, and Weighted Interval Selection on this model. This is quite an exciting development; however, it is not clear whether similar ideas can be extended to other problems, and in particular to natural and well-known graph problems.

In this paper, we give an algorithmic framework that can be used to simulate many natural dynamic programs on trees. Indeed we formulate the properties that make a dynamic program (on trees) amenable to our techniques. These properties, we show, are natural and are satisfied by many known algorithms for fundamental optimization problems. To illustrate the generality of our framework, we design O(log n) round algorithms for well-studied graph problems on trees, such as minimum bisection, minimum k-spanning tree, maximum weighted matching, longest path, minimum vertex cover, maximum independent set, facility location, k-center, etc.

### 1.1 Related Work

Though so far several models for MapReduce have been introduced (see, e.g., [22, 4, 25, 26, 30, 38, 40]), Karloff, Suri, and Vassilvitskii [31] were the first to present a refined and simple theoretical model for MapReduce. In their model, Karloff et al. [31] extract only key features of MapReduce and bypass several message-passing systems and parameters. This model has been extended since then [10, 4, 40] (in this paper we mainly use the further refined model by Andoni, Nikolov, Onak, and Yaroslavtsev [4]) and several algorithms, both in theory and practice, have been developed in these settings, often using sketching [27, 30], coreset [9, 35] and sample-and-prune [32] techniques. Examples of such algorithms include k-means and k-center clustering [6, 9, 28], general submodular function optimization [20, 32, 36] and query optimization [10].

Thanks to their common application to data mining tasks, MapReduce algorithms on graphs have received a lot of attention. In [33], Lattanzi, Moseley, Suri and Vassilvitskii propose algorithms for several problems on dense graphs. Subsequently, other authors studied graph problems in the MapReduce model [3, 2, 5, 16, 34, 7, 11, 12], but almost all the solutions apply only to dense graphs. In contrast, in this paper we present a framework to solve optimization problems on trees, which are important special cases of sparse graphs.

Another related area of research is the design of parallel algorithms. Parallel, and in particular PRAM, algorithms, instead of distributing the workload onto different machines, assume a shared memory is accessible to several processors of the same machine. The MPC model is more powerful since it combines parallelism with sequentiality: the internal computation of the machines is free and we only optimize the rounds of communication. In this work, we use this advantage in a crucial way to design a unified and general framework to implement a large family of sequential dynamic programs on MPC. We are not aware of any similar framework to parallelize dynamic programs in the purely parallel settings.

##### Further developments.

A remarkable series of works subsequent to this paper [14, 24, 37, 15, 13] shows how to design even sublogarithmic round algorithms for graph problems such as maximal matching, maximal independent set, approximate vertex cover, etc., using strongly sublinear (i.e., n^δ) memory per machine. This is precisely the regime of parameters considered in this paper. However, these problems mostly enjoy a locality property (i.e., they admit fast LOCAL algorithms) that does not hold for the problems considered in this paper.

### 1.2 Organization

We start with an overview of our results and techniques in Section 2. We give a formalization of dynamic programming classes in Section 3. Then, we describe a main building block of our algorithms, the tree decomposition method, in Section 4. Next, in Section 5, we show how to solve a particular class of problems which we call (polylog)-expressible problems. In Section 6 we show how to solve a more general class of problems, namely, linear-expressible problems.

## 2 Main Results & Techniques

We introduce a class of dynamic programming problems which we call f-expressible problems. Here, f is a function; instantiating it yields classes such as (polylog)-expressible problems or linear-expressible problems. Hiding a number of technical details, the function f bounds the number of bits of information that each node of the tree passes to its parent during the dynamic program. Thus, linear-expressible problems are generally harder to adapt to the MPC model than (polylog)-expressible problems.

##### (polylog)-Expressible problems.

Many natural problems can be shown to be (polylog)-expressible. For example, the following graph problems are all (polylog)-expressible if defined on trees: maximum (weighted) matching, vertex cover, maximum independent set, dominating set, longest path, etc. Intuitively, the dynamic programming solution of each of these problems, for any vertex v, computes at most a constant number of values. Our first result is to show that every (polylog)-expressible problem can be efficiently solved in MPC. (Here and throughout the paper, we consider a polylogarithmic round algorithm efficient.) As a corollary, all the aforementioned problems can be solved efficiently on trees using the optimal total space of Õ(n).


###### Theorem 1.

For any given m, there exists an algorithm to solve any (polylog)-expressible problem in O(log n) rounds of MPC using m machines, such that with high probability each machine uses a space of size at most Õ(n/m) and runs an algorithm that is linear in its input size.

##### Proof sketch.

The first problem in solving dynamic programs on trees is that there is no guarantee on the depth of the tree. If the given tree has only logarithmic depth, one can obtain a logarithmic round algorithm by simulating a bottom-up dynamic program in parallel, where nodes at the same level are handled in the same round simultaneously. This is reminiscent of certain parallel algorithms whose number of rounds depends on the diameter of the graph.
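As a toy illustration of this level-by-level simulation, the following sequential sketch (our own naming; it uses maximum weight independent set, one of the problems treated in the paper) processes all nodes of equal depth together, mirroring one round of communication per level of the tree:

```python
from collections import defaultdict

def max_independent_set_by_levels(parent, weight):
    """Bottom-up DP where all nodes of equal depth are handled 'in parallel'.

    parent[v] is the parent of v (None for the root); weight[v] >= 0.
    Returns the weight of a maximum weight independent set of the tree.
    """
    n = len(parent)
    children = defaultdict(list)
    root = None
    for v, p in enumerate(parent):
        if p is None:
            root = v
        else:
            children[p].append(v)

    # Compute depths; nodes at the same depth form one simulated round.
    depth = [0] * n
    order = [root]
    for v in order:
        for c in children[v]:
            depth[c] = depth[v] + 1
            order.append(c)

    by_depth = defaultdict(list)
    for v in range(n):
        by_depth[depth[v]].append(v)

    take = [0] * n   # best value in v's subtree if v is in the set
    skip = [0] * n   # best value in v's subtree if v is not in the set
    for d in sorted(by_depth, reverse=True):      # deepest level first
        for v in by_depth[d]:                     # independent within a level
            take[v] = weight[v] + sum(skip[c] for c in children[v])
            skip[v] = sum(max(take[c], skip[c]) for c in children[v])
    return max(take[root], skip[root])
```

The number of "rounds" of the outer loop equals the depth of the tree, which is exactly why this naive simulation only works for shallow trees.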

Unfortunately, the input tree might be quite unbalanced, with superlogarithmic depth. An extreme case is a path of length n. In this case we can partition the path into m equal pieces (despite not knowing a priori the depth of each particular node), handle each piece independently, and then stitch the results together. Complications arise because the subproblems are not completely independent. Things become more nuanced when the input tree is not simply a path.

To resolve this issue, we adapt a celebrated tree contraction method to our model. The algorithm decomposes the tree into pieces of size at most Õ(n/m) (i.e., we can fit each component completely on one machine), with small interdependence. Omitting minor technical details, the latter property allows us to almost independently solve the subproblems on different machines. This results in a partial solution that is significantly smaller than the whole subtree; therefore, we can send all these partial solutions to a master machine in the next round and merge them.

##### Linear-Expressible problems.

Although many natural problems are indeed (polylog)-expressible, there are instances that are not. Consider for example the minimum bisection problem. In this problem, given an edge-weighted tree, the goal is to assign two colors (blue and red) to the vertices so as to minimize the total weight of the edges between blue vertices and red vertices, while half of the vertices are colored red and the other half blue. In the natural dynamic programming solution of this problem, for a subtree of size n_v, we store n_v + 1 different values. That is, for any i ∈ {0, …, n_v}, the dynamic program stores the weight of the optimal coloring that assigns blue to i vertices and red to the rest of the vertices. This problem is not necessarily (polylog)-expressible unless we find another, problem-specific dynamic programming solution for it. However, it can be shown that minimum bisection, as well as many other natural problems, including k-spanning-tree, k-center, k-median, etc., are linear-expressible.
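To make the size of this table concrete, the following sequential sketch (identifiers are ours, not the paper's) implements the natural dynamic program just described: dp[v][c][i] is the minimum cut weight inside v's subtree when v gets color c and exactly i subtree vertices get color 1, a Θ(n_v)-sized table per node:

```python
def min_bisection_tree(children, w_up, root, n):
    """Sequential DP for minimum bisection on a rooted tree (n even).

    children[v]: list of v's children; w_up[v]: weight of the edge from v
    to its parent (unused for the root).  dp[v][c][i] is the minimum total
    weight of bichromatic edges inside v's subtree when v has color c and
    exactly i vertices of the subtree have color 1.
    """
    INF = float("inf")
    size = [1] * n
    dp = [None] * n

    order, stack = [], [root]           # iterative post-order traversal
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])

    for v in reversed(order):
        tab = [[0, INF], [INF, 0]]      # subtree currently contains only v
        sz = 1
        for u in children[v]:           # knapsack-style merge of one child
            new = [[INF] * (sz + size[u] + 1) for _ in range(2)]
            for c in range(2):
                for i in range(sz + 1):
                    if tab[c][i] == INF:
                        continue
                    for cu in range(2):
                        cut = w_up[u] if c != cu else 0   # edge (v, u) cut?
                        for j in range(size[u] + 1):
                            val = tab[c][i] + dp[u][cu][j] + cut
                            if val < new[c][i + j]:
                                new[c][i + j] = val
            tab, sz = new, sz + size[u]
        dp[v], size[v] = tab, sz

    return min(dp[root][0][n // 2], dp[root][1][n // 2])
```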

It is notoriously more difficult to solve linear-expressible problems using a sublinear number of machines and a sublinear memory per machine. However, we show that it is still possible to obtain the same result using a more involved algorithm and a slightly more total memory.


###### Theorem 2 (Main Result).

For any given m, there exists an algorithm to solve any linear-expressible problem that is splittable, in O(log n) rounds of MPC using m machines, such that with high probability each machine uses a space of size at most Õ(n^{1+ε}/m) for a small constant ε > 0.

##### Proof sketch.

Recall that linear-expressibility implies that the dynamic programming data of each node can be as large as the size of its subtree (i.e., even up to Ω(n)). Therefore, even using the tree decomposition technique, the partial solution that is computed for each component of the tree can be linear in its size. This means that the idea of sending all these partial data to one master machine, which worked for (polylog)-expressible problems, does not work here, since when aggregated they take as much space as the original input. Therefore we have to distribute the merging step among the machines.

Assume for now that each component obtained by the tree decomposition algorithm is contracted into a node, and call the resulting tree the contracted tree. The tree decomposition algorithm has no guarantee on the depth of the contracted tree, which can be super-logarithmic; therefore a simple bottom-up merging procedure does not work. However, it is guaranteed that the contracted tree itself (i.e., when the components are contracted) can be stored on one machine. Using this, we send the contracted tree to a machine and design a merging schedule that informs each component about the round at which it has to be merged with each of its neighbours. The merging schedule ensures that after O(log n) phases, all the components are merged together. It also guarantees that the number of neighbours of the components, after any number of merging phases, remains constant. This is essential to allow (almost) independent merging for many linear-expressible problems such as minimum bisection.

Observe that after a few rounds, the merged components grow to have up to Ω(n) nodes, and even the partial data of one component cannot be stored on one machine. Therefore, even merging the partial data of two components has to be distributed among the machines. For this to be possible, we use a splitting technique of independent interest that requires a further splittability property of linear-expressible problems. Indeed we show that the aforementioned linear-expressible problems, such as minimum bisection and k-spanning tree, have the splittability property, and therefore Theorem 2 implies they can also be solved in O(log n) rounds of MPC.

## 3 Dynamic Programming Classes

To understand the complexity of different dynamic programs in the MPC model, we first attempt to classify these problems. The goal of this section is to introduce a class of problems which we call f-expressible problems.

We say a problem P is defined on trees if the input to P is a rooted tree T, where each vertex v of T may contain data of size up to Õ(1), denoted by v.data. (Consider weighted trees as an example of problems that define additional data on the nodes; in case the edges are weighted, each vertex stores the weight of its edge to its parent.)

Dynamic programming is effective when the solution to the problem can be computed recursively. On trees, the recursion is usually defined on the nodes of the tree, where each node computes its data using the data provided by its children. Let A denote a dynamic program that solves a problem P that is defined on trees. Also let I be an instance of P and denote its input tree by T. We use A(I) to denote the solution of A on I, and use A(v) to denote the dynamic data that A computes for a vertex v of T. The first property that we define is "binary adaptability".

To illustrate when this property holds, consider any vertex v of an input tree with children u_1, …, u_k in a given order, and let D_v denote the dynamic data of vertex v. As long as the operation to compute the dynamic data of v can be viewed as an aggregation of binary operations over the dynamic data provided by its children in the given order, i.e., D_v = g(… g(g(D_{u_1}, D_{u_2}), D_{u_3}) …, D_{u_k}) for some binary operation g, this property is satisfied. Most of the natural dynamic programs satisfy this property.
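For instance, for minimum vertex cover on trees (one of the problems listed above), the dynamic data of v is the pair (cost with v excluded, cost with v included), and folding the children's data in one at a time is exactly such an aggregation of binary operations. A sketch under our own naming:

```python
from functools import reduce

# Dynamic data for minimum vertex cover on a tree: D_v = (a, b), where
# a = min cover size of v's subtree with v excluded, b = with v included.

def base_data():
    """Dynamic data of a vertex considered by itself (no children yet)."""
    return (0, 1)

def absorb(acc, child):
    """The binary operation: fold one child's data into v's partial data."""
    a, b = acc
    ca, cb = child
    # If v is excluded, every child must be included; if v is included,
    # each child contributes its cheaper option.
    return (a + cb, b + min(ca, cb))

def vertex_data(children_data):
    """D_v as an aggregation of binary operations over the children."""
    return reduce(absorb, children_data, base_data())

def min_vertex_cover(children, root):
    def solve(v):
        return vertex_data([solve(c) for c in children[v]])
    return min(solve(root))
```

Because `absorb` handles one child at a time, the same computation can be carried out on any binary extension of the tree, with auxiliary vertices performing the intermediate folds.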

To formally state this property, we first define “binary extensions” of trees and then define a problem to be binary adaptable if there exists a dynamic programming solution that given any binary extension of an input, calculates the solution of the original tree.

###### Definition 3.1 (Binary Extension).

A binary tree T_b is a binary extension of a tree T if there is a one-to-one (and not necessarily surjective) mapping g : V(T) → V(T_b) such that g(u) is an ancestor of g(v) in T_b if u is an ancestor of v in T. We assume the data of each vertex v in T is also stored in its equivalent vertex g(v) in T_b.

Intuitively, a binary extension of a tree T is a binary tree that is obtained by adding auxiliary vertices to T in such a way that every ancestor of each vertex remains its ancestor.
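One concrete way to realize this sequentially (our own sketch, not the paper's distributed construction) is to hang each vertex's surplus children off a chain of fresh auxiliary vertices; every original vertex keeps all of its original descendants as descendants, and no vertex ends up with more than two children:

```python
import itertools

def binary_extension(children, root):
    """Return the child lists of a binary extension of the given rooted tree.

    Each vertex with more than two children hands all but the first to a
    fresh auxiliary vertex, repeatedly.  Ancestor/descendant relations among
    the original vertices are preserved.
    """
    fresh = itertools.count()
    bch = {}

    def attach(host, kids):
        if len(kids) <= 2:
            bch[host] = list(kids)
        else:
            aux = f"aux{next(fresh)}"     # auxiliary vertex with a fresh id
            bch[host] = [kids[0], aux]
            attach(aux, kids[1:])         # aux carries the remaining kids

    stack = [root]
    while stack:
        v = stack.pop()
        kids = children.get(v, [])
        attach(v, kids)
        stack.extend(kids)
    return bch
```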

Let P be a problem that is defined on trees. Problem P is binary adaptable if there exists a dynamic program A such that for any instance I of P with input tree T, and for any binary extension T_b of T, the output of A(T_b) is a solution of I. We say A is a binary adapted dynamic program for P.

We are now ready to define f-expressiveness. We will mainly consider (polylog)-expressible and linear-expressible problems in this paper; however, the definition is general.

##### f-Expressiveness.

This property is defined on problems that are binary adaptable. Roughly speaking, it specifies how large the data that we store for each subtree should be so that subtrees can be merged efficiently.

###### Definition 3.3 (f-Expressiveness).

Let P be a binary adaptable problem and let A be its binary adapted dynamic program. Moreover, let T_b be a binary extension of a given input tree T of P. For a function f, we say P is f-expressible if the following conditions hold.

1. The dynamic data A(v) of any given vertex v of T_b has size at most Õ(f(n_v)), where n_v denotes the number of descendants of v.

2. There exist two algorithms C (compressor) and M (merger) with the following properties. Consider any arbitrary connected subtree τ of T_b and let v denote the root vertex of τ. If for at most a constant number of the leaves of τ the dynamic data is unknown (call them the unknown leaves of τ), algorithm C returns a partial data of size at most Õ(f(|τ|)) for τ (without knowing the dynamic data of the unknown leaves) in time Õ(|τ|), such that if this partial data and the dynamic data of the unknown leaves are given to M, it returns A(v) in time Õ(f(|τ|)).

3. For two disjoint subtrees τ₁ and τ₂ that are connected to each other with one edge, and for the combined subtree τ = τ₁ ∪ τ₂, if there are at most a constant number of leaves of τ that are unknown, then M, given the partial data of τ₁ and τ₂, returns the partial data of τ in time Õ(f(|τ|)).

We remark that graph problems such as maximum (weighted) matching, vertex cover, maximum independent set, dominating set, longest path, etc., when defined on trees, are all (polylog)-expressible. Also, problems such as minimum bisection, k-spanning tree, k-center, and k-median are linear-expressible on trees.

## 4 Tree Decomposition

The goal of this section is to design an algorithm to decompose a given tree with n vertices into "components" of size at most Õ(n/m), such that each component has at most a constant number of "outer edges" to the other components. To do this, we combine a tree contraction method (see [39, 1, 23]) of traditional parallel algorithms with a number of new ideas.

##### Overview.

To motivate the need for an involved decomposition algorithm, we first give hints on why trivial approaches do not work. Perhaps the simplest algorithm that comes to mind for this task is random sampling: choose a set of randomly selected edges (or vertices) and temporarily remove them so that the tree is decomposed into a number of smaller components. Note that the main difficulty is that we need to bound the size of the largest component with high probability. It can be shown that this trivial algorithm fails even for the simple case of full binary trees: with constant probability, the component that contains the root is asymptotically larger than the desired Õ(n/m) bound. In general, naive random partitioning does not perform well.

In contrast, we employ the following algorithm. First, we convert the tree into a binary tree. To do this, we first have to compute the degree of each vertex and then add auxiliary vertices to the graph. After that, our algorithm proceeds in iterations (not rounds). In each iteration, we merge the vertices of the tree based on some local rules, and after O(log n) iterations, we achieve the desired decomposition. More precisely, in each iteration, we select a set of joint vertices according to a number of local rules (e.g., the degree of a vertex, the status of its parent, etc.) and merge each vertex into its closest selected ancestor. We prove that these local rules in fact guarantee that the size of the maximum component is bounded by Õ(n/m) with high probability.

The input, as mentioned before, is a rooted tree T with n vertices numbered from 1 to n (we call these numbers the indexes of the vertices). (The assumption that the vertices are numbered from 1 to n is just for simplicity of presentation; our algorithms can be adapted to any arbitrary naming of the vertices.) Each machine initially receives a subset of size Õ(n/m) of the vertices of T, where each vertex object v contains the index of v, denoted by v.index, and the index of its parent, denoted by v.parent.

In Section 4.1 we show how universal hash functions allow us to efficiently distribute and access objects in our model. In Section 4.2 we introduce an algorithm to convert the given tree into a “binary-extension” of it. Finally, in Section 4.3 we provide an algorithm to decompose the binary extension.

### 4.1 Load Balancing via Universal Hash Families

We start with the definition of universal hash families, which were first proposed in [43].

###### Definition 4.1.

A family H of hash functions h : U → [m] is k-universal if for any hash function h that is chosen uniformly at random from H and for any k distinct keys x₁, …, x_k ∈ U, the values h(x₁), …, h(x_k) are independent and uniformly distributed in [m]. A k-universal hash function is a hash function that is chosen uniformly at random from a k-universal hash family.

The following lemma shows that we can generate a k-universal hash function in constant rounds and store it on all the machines. Roughly speaking, the idea is to generate a small set of random coefficients on one machine, share it with all other machines, and then use these coefficients to evaluate the hash function for any given input.

###### Lemma 4.2.

For any given k and universe U with |U| = poly(n), there exists an algorithm that runs in O(1) rounds of MPC and generates the same hash function h on all machines, where h is chosen uniformly at random from a k-universal hash family, and calculating h(x) on a machine takes O(k) time and space.

We mainly use universal hash functions to distribute objects onto machines. Assuming that an object o has an integer index denoted by o.index, we store object o on machine h(o.index), where h is the universal hash function stored on all machines. This also allows other machines to know on which machine an object with a given index is stored.
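The classic construction behind such a lemma is a random polynomial over a prime field: k shared random coefficients give a k-universal family, and evaluation takes O(k) time by Horner's rule. A sketch (our own naming; `seed` stands in for the one-round broadcast of the coefficients, and the final reduction modulo the number of machines is only approximately uniform):

```python
import random

P = (1 << 61) - 1  # a Mersenne prime, assumed larger than any index we hash

def make_k_universal_hash(k, num_machines, seed=0):
    """Draw k random coefficients once (shared via `seed`) so that every
    machine evaluates the same hash h(x) = ((sum a_i * x^i) mod P) mod m."""
    rng = random.Random(seed)            # stands in for the shared broadcast
    coeffs = [rng.randrange(P) for _ in range(k)]

    def h(x):
        acc = 0                          # Horner evaluation: O(k) time/space
        for a in reversed(coeffs):
            acc = (acc * x + a) % P
        return acc % num_machines

    return h

def machine_of(obj_index, h):
    """Every machine agrees on where the object with this index lives."""
    return h(obj_index)
```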

We claim that even if the objects to be distributed by a universal hash function have different sizes (in terms of the memory they take to be stored), the maximum load on a machine will not be too large with high probability. To that end, we give the following definitions and then formally state the claim.

###### Definition 4.3.

A set S is a weighted set if each element e ∈ S has an associated non-negative integer weight, denoted by w(e). For any subset A of S, we extend the notation such that w(A) = Σ_{e∈A} w(e).

###### Definition 4.4.

A weighted set S is a distributable set if w(S) is O(n) and the maximum weight among its elements (i.e., max_{e∈S} w(e)) is Õ(n/m).

###### Definition 4.5.

We call a hash function h : S → [m] a distributer if S is a weighted set and [m] is the set of all machines. We define the load of h for a machine i to be ℓ(i) = w({e ∈ S | h(e) = i}). Moreover, we define the maximum load of h to be max_{i∈[m]} ℓ(i).

###### Lemma 4.6.

Let h : S → [m] be a hash function chosen uniformly at random from a k-universal hash family, where S is a distributable set and [m] is the set of machines (i.e., h is a distributer). The maximum load of h is Õ(n/m) with high probability.

The full proof of Lemma 4.6 is deferred to Appendix A. The general idea is to first partition the elements of S into subsets S₁, S₂, … such that the elements are grouped based on their weights (i.e., S₁ contains the elements of S with the lowest weights, S₂ contains the elements with the next lowest weights, and so on). Then we show that the maximum load incurred by the objects in any set S_i is Õ(w_i), where w_i is the weight of the maximum-weight object in S_i. Finally, using the fact that the objects are grouped based on their weights, we show that Σ_i w_i, which bounds the total load, is Õ(n/m).

A technique that we use in multiple parts of the algorithm is to distribute the vertices of T among the machines using a universal hash function.

###### Definition 4.7.

Let T be a given tree and let h be a mapping of the vertex indexes of T to the machines. We say T is distributed by h if every vertex v of T is stored on machine h(v.index).

As a corollary of Lemma 4.2:

###### Corollary 4.8.

One can distribute a tree T by a hash function h that is chosen uniformly at random from an O(log n)-universal hash family in O(1) rounds, in such a way that each machine can evaluate h in Õ(1) time and space.

### 4.2 Conversion of T to a Binary Tree

The first step towards finding a decomposition of T with the desired properties is to convert it into a binary tree that preserves the important characteristics of the original tree (e.g., the ancestors of each vertex must remain its ancestors in the binary tree too).

The definition of an extension of a tree is as follows. By the end of this section, we prove it is possible to find a binary extension of T in a constant number of rounds.

###### Definition 4.9.

A rooted tree T_e is an extension of a given rooted tree T if |V(T_e)| = O(|V(T)|) and there exists a one-to-one mapping g : V(T) → V(T_e) such that for any v ∈ V(T), g(v) stores v.data, and if u is an ancestor of v in T, then g(u) is also an ancestor of g(v) in T_e.

The following lemma proves it is possible to find the degree of all vertices in constant rounds.

###### Lemma 4.10.

There exists a randomized algorithm to find the degree of each vertex of a given rooted tree T in O(1) rounds of MPC using m machines, such that with high probability, each machine uses a space of size at most Õ(n/m) and runs an algorithm that is linear in its memory size.

###### Proof.

The first step is to distribute the vertices of T using a universal hash function h. By Corollary 4.8 this takes O(1) rounds. Define the local degree of a vertex v on a machine μ to be the number of children of v that are stored on μ, and denote it by d_μ(v). We know there are at most Õ(n/m) vertices with a non-zero local degree on each machine. Hence in one round, every machine can calculate and store the local degree of every such vertex. In the communication phase, every machine μ, for any vertex v such that d_μ(v) > 0, sends d_μ(v) to machine h(v.index). Then in the next round, for any vertex v, machine h(v.index) will receive the local degrees of v on all other machines and can calculate its total degree.

We claim that with high probability, no machine receives more than Õ(n/m) data in the communication phase.

Define the weight of a vertex v, denoted by w(v), to be min{deg(v), m}. The weight of a vertex v is an upper bound on the size of the communication that machine h(v.index) receives for vertex v. To see this, observe that a machine sends the local degree of v only if this local degree is non-zero; therefore the total communication size for v cannot exceed its degree. On the other hand, the total number of machines is m, hence machine h(v.index) cannot receive more than m different local degrees for vertex v.

We first prove that V(T) is a distributable set (Definition 4.4) with respect to the weight function w. To see this, note that first, by the definition of w, the maximum value that w takes is m, which indeed is Õ(n/m) (recall that in the MPC model we assumed m is less than the space on each machine and hence m = Õ(√n)); and second, the total weight of all vertices is O(n) since Σ_v deg(v) = O(n).

By Lemma 4.6, since V(T) is a distributable set with respect to w and since h is chosen from a k-universal hash family, the maximum load of h, i.e., the maximum communication size of any machine, is Õ(n/m) with high probability. ∎
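The two communication rounds above can be simulated sequentially as follows (a sketch with our own identifiers, not the distributed implementation; `owner` plays the role of the shared hash function h, and "degree" here counts children, as in the proof):

```python
from collections import Counter

def degrees_two_rounds(parent, num_machines, owner):
    """Simulate the two-round degree computation.

    parent[v] is v's parent (None for the root); owner(v) maps a vertex to
    the machine storing it.  Round 1: each machine counts local degrees for
    the parents of the vertices it holds.  Round 2: each vertex's owner
    sums the local degrees it receives.
    """
    n = len(parent)
    # Round 1: machine owner(v) holds vertex v, hence the edge (v, parent[v]).
    local = [Counter() for _ in range(num_machines)]
    for v, p in enumerate(parent):
        if p is not None:
            local[owner(v)][p] += 1      # local degree of p on machine owner(v)

    # Round 2 (communication): machine i sends (p, count) to owner(p),
    # which aggregates all local degrees of p into the total degree.
    degree = Counter()
    for i in range(num_machines):
        for p, cnt in local[i].items():
            degree[p] += cnt             # received and summed by owner(p)
    return [degree[v] for v in range(n)]
```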

We are now ready to give a binary extension of .

###### Lemma 4.11.

There exists a randomized algorithm to convert a given rooted tree T to a binary tree T_b, an extension of T, in O(1) rounds of MPC using m machines, such that with high probability, each machine uses a memory of size at most Õ(n/m) and runs an algorithm that is linear in its memory size.

###### Proof.

Our algorithm for converting T to a binary extension of it is done in two phases. In the first phase, we convert T to an extension T_d of it with maximum degree O(n/m). Then in the next phase, we convert T_d to a binary extension.

###### Claim 4.12.

There exists a randomized algorithm to convert a given rooted tree T to a tree T_d with maximum degree O(n/m) that is an extension of T, in O(1) rounds of MPC, such that with high probability, each machine uses a memory of size at most Õ(n/m) and runs an algorithm that is linear in its memory size.

The detailed proof of Claim 4.12 and the pseudo-code to implement it are given in Appendix A. Intuitively, after calculating the degree of each vertex, we send the indexes of all vertices with degree more than n/m (call them high degree vertices) to all machines. This is possible because there are at most m high degree vertices and m is assumed to be less than the space of each machine. In the next step, to any high degree vertex v, we add a set of auxiliary children and set the parent of any previous child of v to be one of these auxiliary vertices, chosen uniformly at random. The detailed proof of why no machine violates its memory size during this process, and of how we assign indexes to the auxiliary vertices, is deferred to Appendix A.

The next phase is to convert this bounded degree tree to a binary tree.

###### Claim 4.13.

There exists a randomized algorithm to convert a tree T_d with maximum degree O(n/m) to a binary extension T_b of it, in O(1) rounds of MPC using m machines, such that with high probability, each machine uses a memory of size at most Õ(n/m) and runs an algorithm that is linear in its memory size.

Again, the detailed proof of Claim 4.13 and a pseudo-code for implementing it in the desired setting are given in Appendix A. Roughly speaking, since the degree of each vertex is at most O(n/m), we can store all the children of any vertex on the same machine (the parent may be stored on another machine). Having all the children of a vertex on the same machine allows us to add auxiliary vertices and locally reduce the number of children of each vertex to at most 2. Figure 1 illustrates how we convert the tree to its binary extension. See Appendix A for more details.

By Claim 4.12 and Claim 4.13, for any given tree, there exists an algorithm to construct a binary extension of it in a constant number of rounds of MPC, which with high probability uses linear time and space per machine. ∎

### 4.3 Decomposing a Binary Tree

###### Definition 4.14.

We define a decomposition of a binary tree to be a set of components, where each component contains a subset of the vertices of the tree that are connected, and each vertex of the tree is in exactly one component. A component, in addition to its subset of vertices and their “inner edges” (i.e., the edges between two vertices of the component), also contains their “outer edges” (i.e., the edges between a vertex in the component and a vertex in another component). The data stored in each component, including the vertices and their inner and outer edges, should fit in the memory of a single machine.

Since the vertices in any component are connected, there is exactly one vertex in each component whose parent is not in that component; we call this vertex the root vertex of the component. We also define the index of a component to be the index of its root (recall that we assume every vertex of the tree has a unique index). Moreover, by contracting a component we mean contracting all of its vertices and keeping only their outer edges to other components. Since the vertices in the same component have to be connected by definition, the result of contraction is a rooted tree. We may use this fact to refer to the components as nodes, and even say that a component is the parent (resp. child) of another if, after contraction, it is indeed its parent (resp. child).
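The contraction just described can be sketched in a few lines, assuming the tree is given as a parent map and each vertex knows the index of its component (both representations are our own choice for illustration).

```python
def contract(parent, comp_of):
    """Contract each component to a single node. comp_of[v] maps a vertex
    to its component's index (the index of the component's root vertex).
    Returns the parent relation of the resulting component tree (sketch)."""
    comp_parent = {}
    for v, p in parent.items():
        if p is not None and comp_of[v] != comp_of[p]:
            # an outer edge becomes a parent-child relation of components
            comp_parent[comp_of[v]] = comp_of[p]
    return comp_parent
```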

###### Theorem 3.

There exists a randomized algorithm to find a decomposition of a given binary tree with n vertices using a sublinear number of machines, such that with high probability the algorithm terminates in O(log n) rounds of MPC, and each machine uses memory within its space budget and runs an algorithm that is linear in its memory size.

Algorithm 1 gives a sequential view of the decomposition algorithm we use. We first prove some properties of the algorithm, and then using these properties, we prove it can actually be implemented in the desired setting.

The algorithm starts with n components, each containing a single vertex of the tree. It then merges them in several iterations until sufficiently few components are left. At the end of each iteration, if the size of a component exceeds the completion threshold, we mark it as a completed component and never merge it with any other component. This enables us to prove that the components’ size never grows far beyond the threshold. In the merging process, at each iteration, we “select” some of the components and merge every unselected component into its closest selected ancestor. We select a component if at least one of the following conditions holds for it. 1. It is the root component. 2. It has exactly two children. 3. Its parent component is marked as completed. In addition to these conditions, if a component has exactly one child, we select it with an independent probability of 1/2. Figure 2 illustrates the iterations of Algorithm 1.

The intuition behind the selection conditions is as follows. The first condition ensures that every unselected component has at least one selected ancestor to be merged into. The second condition ensures that no component has more than two children after the merging step. (To see this, let c be a component with two children c_1 and c_2, and let c_1 in turn have two children c_3 and c_4. If c_1 were not selected and were merged into c, the resulting component would have three children: c_2, c_3, and c_4.) The third condition ensures that the path from a component to its closest selected ancestor does not contain any completed component; this is important for keeping the components connected. Finally, randomly selecting some of the components with exactly one child ensures that the size of no component explodes. (Otherwise, a long chain of components could be merged into one component in a single iteration.) We also prove that at most a constant fraction of the components are selected at each iteration. This proves that the total number of iterations, which directly impacts the number of rounds in the parallel implementation, is logarithmic in the number of vertices.
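The selection rule can be written as a standalone predicate. This is a sketch: the `Comp` record and all names are our own, and the 1/2 selection probability for one-child components is an assumption consistent with the Chernoff bound used later.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Comp:
    parent: "Comp | None" = None
    children: list = field(default_factory=list)
    complete: bool = False

def selected(comp, rng):
    """Selection rule of Algorithm 1 (sketch)."""
    if comp.parent is None:          # 1. the root component
        return True
    if len(comp.children) == 2:      # 2. exactly two children
        return True
    if comp.parent.complete:         # 3. parent marked as completed
        return True
    if len(comp.children) == 1:      # chain components: coin flip
        return rng.random() < 0.5
    return False
```

Note that a leaf component with an incomplete parent is never selected, which is what forces leaves to merge upward.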

#### 4.3.1 Properties of Algorithm 1

In this section, we prove some properties of Algorithm 1. These properties will be useful in designing and analyzing the parallel implementation of the algorithm.

Many of these properties are defined on the iterations of the while loop at line 6 of Algorithm 1. Any time we say an iteration of Algorithm 1, we refer to an iteration of this while loop. We start with the following definition.

###### Definition 4.15.

We denote the total number of iterations of the while loop of Algorithm 1 by t. Moreover, for any i, we use C_i and F_i to respectively denote the value of the variable that contains all components and the variable that contains the completed components at the start of the i-th iteration. Analogously, we use S_i to denote the selected components at the end of the i-th iteration. We also define T_i to be the rooted component tree given by contracting all components in C_i (e.g., T_1 is the input binary tree, with each vertex forming its own component).

###### Claim 4.16.

For any i > 1, each component in C_i is obtained by merging a connected subset of the components in C_{i−1}.

###### Proof.

Observe that if, in a given iteration, a component c is the closest selected ancestor of a component c′, then c is also the closest selected ancestor of every other component on the path from c′ to c (denote the set of these components by P). We claim that if c′ is merged into c, every component in P is also merged into c in the same iteration. To see this, note that if some component in P were selected or completed, the closest selected ancestor of c′ would not be c. Hence all of these components are merged into c, and the resulting component is obtained by merging a connected subset of the components of that iteration. ∎

The following is a rather general fact that is useful in proving some of the properties of Algorithm 1.

###### Fact 4.17.

For a given binary tree, let n_i denote the number of vertices with exactly i children. Then n_0 = n_2 + 1.
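The fact follows by counting edges (n_1 + 2·n_2 = n_0 + n_1 + n_2 − 1) and can be checked empirically on random binary trees; `random_binary_tree` below is our own helper, not part of the paper.

```python
import random

def child_counts(children):
    """n[i] = number of vertices with exactly i children."""
    n = [0, 0, 0]
    for cs in children.values():
        n[len(cs)] += 1
    return n

def random_binary_tree(size, seed):
    """Grow a binary tree by attaching each new vertex under a random
    vertex that still has an open child slot (sketch)."""
    rng = random.Random(seed)
    children, open_slots = {0: []}, [0]
    for v in range(1, size):
        p = rng.choice(open_slots)
        children[p].append(v)
        children[v] = []
        if len(children[p]) == 2:
            open_slots.remove(p)
        open_slots.append(v)
    return children
```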

We claim that the component tree is a binary tree in all iterations of the while loop.

###### Claim 4.18.

For any i, the component tree T_i is a binary tree.

###### Proof.

We use induction to prove this property. We know by Definition 4.15 that T_1 has the same shape as the input tree, and since the input tree is binary, the condition holds for the base case i = 1.

Assuming that T_i is a binary tree, we prove that T_{i+1} is also a binary tree. Fix an arbitrary component c in C_{i+1}; we prove that it cannot have more than two children. Let U be the set of components in C_i that were merged with each other to create c. For any component c_j in U, let child_U(c_j) denote the number of children of c_j that are in U, and let child(c_j) denote the total number of children of c_j (whether or not they are in U). For any child of c, there is a component in U with a child outside of U; therefore, to prove that c cannot have more than two children, it suffices to prove

 ∑_{c_j∈U} (child(c_j) − child_U(c_j)) ≤ 2. (1)

We first prove that ∑_{c_j∈U} child(c_j) ≤ |U| + 1. Note that at most one of the components in U has two children, since all components with two children are selected in line 12 of Algorithm 1, and if two components are merged, at least one of them is not selected (line 17 of Algorithm 1). Therefore, apart from at most one component of U which might have two children, each component has at most one child. Hence ∑_{c_j∈U} child(c_j) ≤ |U| + 1.

Next, we prove that ∑_{c_j∈U} child_U(c_j) ≥ |U| − 1. We know by Claim 4.16 that the components in U are connected to each other. Therefore there must be at least |U| − 1 edges among them, and each such edge indicates a parent-child relation within U. Hence ∑_{c_j∈U} child_U(c_j) ≥ |U| − 1.

Combining the two bounds, we obtain ∑_{c_j∈U} (child(c_j) − child_U(c_j)) ≤ (|U| + 1) − (|U| − 1) = 2, so Equation 1 always holds and T_{i+1} is a binary tree. ∎

The following property bounds the number of unselected ancestors of a component that is to be merged into its closest selected ancestor.

###### Claim 4.19.

With high probability, for any given i and any component c, where c ∈ C_i and c ∉ F_i, there are at most O(log n) components in the path from c to its closest selected ancestor.

###### Proof.

We first show that, for a fixed component c, with high probability there are at most O(log n) components in the path from c to its closest selected ancestor. To see this, let A denote the set of ancestors of c that have distance at most O(log n) to c. We prove that with high probability at least one component in A is selected. If some component in A satisfies one of the deterministic selection conditions (it is the root, has two children, or has a completed parent), we are done. Otherwise, every component in A has exactly one child and is selected independently with probability at least 1/2, so the probability that no component in A is selected is at most 2^{−|A|}, which is polynomially small for |A| = Θ(log n).

In addition, we prove that the total number of components appearing across all iterations is O(n), so that we can use a union bound over all of them. To prove this, it suffices to show that any vertex is the root vertex of at most one component that gets merged: let i denote the first iteration in which a vertex is the root vertex of a component that is merged; in this iteration the component is merged into one of its ancestors, so the vertex cannot be the root vertex of any other merged component afterwards. Therefore, using a union bound over all these components, we obtain that with high probability, for any given i and any c ∈ C_i with c ∉ F_i, there are at most O(log n) components in the path from c to its closest selected ancestor. ∎

The following property implies that w.h.p. at each round at most a constant fraction of the incomplete components are selected.

###### Claim 4.20.

There exists a constant number c ∈ (0, 1) such that for any i, with probability at least 1 − e^{−n/24}, |S_i| ≤ c · |C_i − F_i|.

###### Proof.

Let K_j(C_i) denote the set of components in C_i with exactly j children. Since each component in K_1(C_i) is selected independently with probability 1/2, by the Chernoff bound, in round i,

 Pr[ |K_1(C_i) ∩ S_i| > (3/4)|K_1(C_i)| ] ≤ e^{−n/24}.

The components that are selected in each round are as follows: with probability at least 1 − e^{−n/24}, fewer than three fourths of the components in K_1(C_i); the root component; all the components in K_2(C_i); and at most two components for each component in F_i (a completed component has at most two children, each of which is selected by the third condition). Therefore, with probability at least 1 − e^{−n/24},

 |S_i| < 1 + |K_2(C_i)| + (3/4)|K_1(C_i)| + 2|F_i|. (2)

By Fact 4.17, |K_0(C_i)| = |K_2(C_i)| + 1, and by Claim 4.18, T_i is a binary tree, so |C_i| = |K_0(C_i)| + |K_1(C_i)| + |K_2(C_i)|. Therefore,

 1 + |K_2(C_i)| + (3/4)|K_1(C_i)| ≤ (1/2)(|K_2(C_i)| + |K_0(C_i)| + 1) + (3/4)|K_1(C_i)| ≤ (3/4)|C_i|. (3)

By Equations 2 and 3, with probability at least 1 − e^{−n/24}, |S_i| < (3/4)|C_i| + 2|F_i|. Therefore, to complete the proof, it suffices to show that there exists a constant number c such that the inequality

 (3/4)|C_i| + 2|F_i| ≤ c · |C_i − F_i|,

which is equivalent to

 (2 + c)/(c − 3/4) ≤ |C_i|/|F_i|

holds. Since the size of any component in F_i is at least the completion threshold, |F_i| is at most n divided by that threshold. In addition, by line 3 of Algorithm 1, the while loop continues only while the number of remaining components is large enough; therefore |C_i|/|F_i| ≥ 14. It is easy to see that the inequality

 (2 + c)/(c − 3/4) ≤ 14 ≤ |C_i|/|F_i|

holds for c = 25/26, which is a constant number in (0, 1). ∎

The following property is a direct corollary of Claim 4.20.

###### Corollary 4.21.

With high probability, the while loop at line 6 of Algorithm 1 has at most O(log n) iterations.

We are now ready to bound the size of each component.

###### Claim 4.22.

With high probability, the size of each component at any iteration of Algorithm 1 stays within the memory of a single machine.

###### Proof.

By Claim 4.16, any component is obtained by merging a connected subset of the components of the previous iteration. Let U denote this subset, and let T_U denote the component tree given by contracting all components in U. We first bound |U|.

By Claim 4.18, T_U is a binary tree, and by line 16 of Algorithm 1, at most one component in U is selected. Note that any unselected component in U has at most one child in U, while the selected component has at most two children. In addition, by Claim 4.19, with high probability the distance between any component in U and the selected component is O(log n). Therefore, with high probability |U| = O(log n).

In addition, note that the size of the merged component is the sum of the sizes of the components in U, and any component in U has size at most the completion threshold; hence, with high probability, the merged component stays within the memory of a single machine. ∎

## 5 (polylog)-Expressible Problems

The goal of this section is to prove Theorem 1, which we restate below, and to give examples of its applications.

Theorem 1. There exists an algorithm to solve any (polylog)-expressible problem in O(log n) rounds of MPC using a sublinear number of machines, such that with high probability each machine uses a sublinear amount of space and runs an algorithm that is linear in its input size.

###### Proof.

Assume we are given a (polylog)-expressible problem. We first convert the given tree into a binary extension of it using Lemma 4.11, and then decompose the binary extension using Theorem 3 in O(log n) rounds. Note that since any component has at most two children (Claim 4.18), there are at most 2 leaves in any component whose DP values are unknown. Therefore, we can use the compressor algorithm, which is guaranteed to exist since the problem is (polylog)-expressible, and store partial data of polylogarithmic size for each component. In the next step, since there are relatively few components, we can send the partial data of all components to a single machine. Then, in only one round, we are able to merge all of this partial data: the merger algorithm (also guaranteed to exist by (polylog)-expressibility) takes polylogarithmic time and space to compute the dynamic-programming value for each component, so it suffices to start from the leaf components and compute the value of the dynamic program for each component, one by one. (All of this is done in one round because we have the data of all the components on one machine.) ∎

We first formally prove that Maximum Weighted Matching is a (polylog)-expressible problem, and therefore, as a corollary, that it can be solved in O(log n) rounds.

##### Maximum Weighted Matching.

An edge-weighted tree is given in the input. (Let w_e denote the weight of an edge e.) The problem is to find a subgraph M of the tree such that the degree of each vertex in M is at most one and the weight of M (the sum of the weights of its edges) is maximized.

At first, we explain the sequential DP that, given any binary extension (see Definition 4.9) of an input tree, finds a maximum matching of the tree, and then we prove that this DP is (polylog)-expressible.

Define a subset M of the edges of the binary extension to be an extended matching if it has the following properties. 1. For any original vertex, at most one of its edges is in M. 2. For any auxiliary vertex v, at most one of the edges between v and its children is in M. 3. The edge between an auxiliary vertex v and its parent is in M if and only if an edge between v and one of its children is in M.

Consider an edge between a vertex u and its parent in the binary extension. If u is an auxiliary vertex, we define the weight of this edge to be 0, and if u is an original vertex, we define it to be the weight of the edge between the vertex equivalent to u and its parent in the original tree. Furthermore, we define the weight of an extended matching to be the sum of the weights of its edges.

Let M′ be an extended matching of the binary extension and let M be a matching of the original tree; we say M′ and M are equivalent if, for any edge (u, v) ∈ M, every edge on the path between the equivalent vertices of u and v in the binary extension is in M′, and vice versa. Let u′ and v′ denote the equivalent vertices of u and v, respectively. If v is the parent of u in the original tree, all the other vertices on the path between u′ and v′ are auxiliary, by the definition of binary extensions. This means all the edges on this path, except for the edge between u′ and its parent (whose weight equals that of (u, v)), have weight 0. Therefore, the weight of any extended matching of the binary extension is equal to that of its equivalent matching of the original tree. Hence, to find the maximum weighted matching of the original tree, one can find the extended matching of its binary extension with the maximum weight.

Now, we give a sequential DP that finds the extended matching with the maximum weight. To do so, for any vertex v of the binary extension, define C(v) to be the maximum weight of an extended matching of the subtree of v in which at least one edge between v and its children is part of the extended matching (if no such extended matching exists for v, set C(v) = −∞). Also define C′(v) to be the maximum weight of an extended matching of the subtree of v in which no edge between v and its children is part of the extended matching.

The key observation in updating C(v) and C′(v) for a vertex v is that if v is matched to one of its original children u, then u must not be matched to any of its own children, while if v is matched to an auxiliary child u, then, by the definition of extended matchings, u must itself be matched to one of its children.

To update C(v) and C′(v): if v has no children, then C(v) = −∞ and C′(v) = 0; if v has only one child, the rules below apply with the terms for the missing child dropped. For the case where v has two children u_1 and u_2, we consider the following three cases (the case where u_1 is original and u_2 is auxiliary is symmetric, so the third case covers it w.l.o.g.).

• If u_1 and u_2 are both original vertices, then

 C(v) = max{ w_{v,u_1} + C′(u_1) + max{C(u_2), C′(u_2)}, w_{v,u_2} + C′(u_2) + max{C(u_1), C′(u_1)} },
 C′(v) = max{C(u_1), C′(u_1)} + max{C(u_2), C′(u_2)}.
• If u_1 and u_2 are both auxiliary vertices, then

 C(v) = max{ w_{v,u_1} + C(u_1) + C′(u_2), w_{v,u_2} + C(u_2) + C′(u_1) },
 C′(v) = C′(u_1) + C′(u_2).
• If u_1 is an auxiliary vertex and u_2 is an original vertex, then

 C(v) = max{ w_{v,u_1} + C(u_1) + max{C(u_2), C′(u_2)}, w_{v,u_2} + C′(u_2) + C′(u_1) },
 C′(v) = C′(u_1) + max{C(u_2), C′(u_2)}.
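Putting the cases together, the whole sequential DP fits in a short sketch. The representation (child lists, a weight map, an `is_aux` flag) and all names are our own choices for illustration, not the paper's pseudo-code.

```python
def solve(children, weight, is_aux, root):
    """Sequential sketch of the extended-matching DP on a binary extension.
    children: {v: list of at most 2 children}; weight[(v, u)]: weight of
    the edge between v and its child u (0 whenever u is auxiliary);
    is_aux[v]: whether v is an auxiliary vertex."""
    C, Cp = {}, {}       # C(v): v matched to a child; C'(v): v not matched
    order = [root]
    for v in order:      # BFS order; reversed => children before parents
        order.extend(children[v])
    for v in reversed(order):
        cs = children[v]
        if not cs:
            C[v], Cp[v] = float("-inf"), 0
            continue

        def match(u):
            # v takes the edge to u: an auxiliary u must pass the matching
            # down to its own children, an original u must not
            return weight[(v, u)] + (C[u] if is_aux[u] else Cp[u])

        def skip(u):
            # v does not take the edge to u: an unmatched auxiliary child
            # must also be unmatched below, an original child is free
            return Cp[u] if is_aux[u] else max(C[u], Cp[u])

        if len(cs) == 1:
            C[v], Cp[v] = match(cs[0]), skip(cs[0])
        else:
            u1, u2 = cs
            C[v] = max(match(u1) + skip(u2), match(u2) + skip(u1))
            Cp[v] = skip(u1) + skip(u2)
    return max(C[root], Cp[root])
```

Since the `match`/`skip` helpers encode the auxiliary-versus-original distinction once, the three displayed cases all collapse into the same two lines.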

Now we prove that the (polylog)-expressiveness property holds for the proposed DP. Consider a subtree of the binary extension, and let u_1 and u_2 denote its leaves with unknown DP values (expressibility is defined for any constant number of unknown leaves, not only two; two suffice here since the tree is binary). We apply the proposed DP rules and calculate the DP values as functions of C(u_1), C′(u_1), C(u_2), and C′(u_2) instead of plain numbers. We claim that for any vertex v of the subtree there exist functions f_v and f′_v such that

 C(v) = max_{a_i ∈ {0,1}} { a_0·C(u_1) + a_1·C′(u_1) + a_2·C(u_2) + a_3·C′(u_2) + f_v(a_0, a_1, a_2, a_3) },

and

 C′(v) = max_{a_i ∈ {0,1}} { a_0·C(u_1) + a_1·C′(u_1) + a_2·C(u_2) + a_3·C′(u_2) + f′_v(a_0, a_1, a_2, a_3) }.

Therefore, it suffices to store the values of the functions f_v and f′_v for the root vertex of the subtree to be able to evaluate C and C′ there. Since f_v and f′_v each take only 16 different input combinations, there are only 32 output values to be stored.
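Evaluating such a compressed value once the unknowns become known is a one-liner. In this sketch, `f` maps each tuple (a0, a1, a2, a3) to its offset, with −∞ marking infeasible combinations; the representation is our own.

```python
from itertools import product

NEG = float("-inf")

def evaluate(f, C_u1, Cp_u1, C_u2, Cp_u2):
    """Recover a DP value from its 16-entry compressed table (sketch)."""
    vals = (C_u1, Cp_u1, C_u2, Cp_u2)
    return max(sum(a_i * x for a_i, x in zip(a, vals)) + f[a]
               for a in product((0, 1), repeat=4))
```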

###### Corollary 5.1.

There exists an algorithm to solve the Maximum Weighted Matching problem on trees in O(log n) rounds of MPC using a sublinear number of machines, such that with high probability each machine uses a sublinear amount of space and runs an algorithm that is linear in its input size.

##### Other problems.

The dynamic programming solutions of Maximum Independent Set, Minimum Vertex Cover, Longest Path, and many other problems are similar to the proposed DP for Maximum Weighted Matching. For example, for the Maximum Independent Set problem, we only need to keep two dynamic values per subtree: the size of the maximum independent set in the subtree when its root vertex is part of the independent set, and when it is not. Roughly speaking, the only major difference in the DP is that some of the max operations become sums.
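For illustration, the Maximum Independent Set DP just mentioned can be sketched as follows (on a plain rooted tree; sums over children replace the matching DP's max-combinations).

```python
def max_independent_set(children, root):
    """in_[v]: best size with v included (children must be excluded);
    out[v]: best size with v excluded (children are free)."""
    in_, out = {}, {}
    order = [root]
    for v in order:      # BFS order; reversed => children before parents
        order.extend(children[v])
    for v in reversed(order):
        in_[v] = 1 + sum(out[u] for u in children[v])
        out[v] = sum(max(in_[u], out[u]) for u in children[v])
    return max(in_[root], out[root])
```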

###### Corollary 5.2.

There exist algorithms to solve the Maximum Independent Set, Minimum Vertex Cover, Longest Path, and Dominating Set problems on trees in O(log n) rounds of MPC using a sublinear number of machines, such that with high probability each machine uses a sublinear amount of space and runs an algorithm that is linear in its input size.

## 6 Solving Linear-Expressible Problems

### 6.1 The Splittability Property

The main goal of this section is to define a property called “splittability” for linear-expressible problems, which indeed holds for the aforementioned linear-expressible problems, and to prove that, using near-linear total memory, we are able to parallelize linear-expressible problems that are splittable in polylogarithmically many rounds of MPC. The main reason behind the need for this extra property is that, in contrast to (polylog)-expressible problems, the partial data of the subtrees in linear-expressible problems might have a relatively large size; hence, when merging two subtrees, we are not able to access their partial data on a single machine. As a result, we need to be able to distribute the merging step among different machines. It is worth mentioning that we prove this total memory requirement is in some sense tight, unless we limit the number of machines to be too small, which defeats the purpose of MapReduce: achieving the maximum possible parallelization.

Intuitively, a problem instance is splittable if it is possible to represent it with a set of objects such that the output of the problem depends only on the pairwise relations of these objects. In the DP context, the splittability property is useful for linear-expressible problems: it ensures that their large partial data can be effectively merged using the splitting technique.

###### Definition 6.1 (Splittability).

Consider a linear-expressible problem and a binary adapted dynamic program for it. Since the problem is linear-expressible, for any subtree of the tree, a compressor function returns partial data of size at most near-linear in the subtree's number of vertices, provided that at most a constant number of the subtree's leaves are unknown. We say the problem is splittable if, for any given connected subtree of the tree, this partial data

can be represented as a vector

of at most near-linearly many elements such that, for any connected subtrees T_1 and T_2 of the tree that are joined by an edge (with T being their union), the following condition holds. There exist two algorithms, a sub-unifier and a unifier, such that for any consecutive partitioning of the vector of T_1 and any consecutive partitioning of the vector of T_2,

1. Algorithm returns a vector of size at most in time such that ea