Linear-time Erasure List-decoding of Expander Codes

We give a linear-time erasure list-decoding algorithm for expander codes. More precisely, let r > 0 be any integer. Given an inner code C_0 of length d, and a d-regular bipartite expander graph G with n vertices on each side, we give an algorithm to list-decode the expander code C = C(G, C_0) of length nd from approximately δδ_r nd erasures in time n · poly(d·2^r/δ), where δ and δ_r are the relative distance and the r'th generalized relative distance of C_0, respectively. To the best of our knowledge, this is the first linear-time algorithm that can list-decode expander codes from erasures beyond their (designed) distance of approximately δ²nd. To obtain our results, we show that an approach similar to that of (Hemenway and Wootters, Information and Computation, 2018) can be used to obtain such an erasure list-decoding algorithm with an exponentially worse dependence of the running time on r and δ; then we show how to improve the dependence of the running time on these parameters.


1 Introduction

In coding theory, the problem of list-decoding is to return all codewords that are close to some received word z; in algorithmic list-decoding, the problem is to do so efficiently. While there has been a great deal of progress on algorithmic list-decoding in the past two decades [GS99, PV05, GR06b, GW17, GX12, GX13, Kop15, GK16, HRW19, KRSW18], most work has relied crucially on algebraic constructions, and thus it is interesting to develop combinatorial tools to construct efficiently list-decodable codes with good parameters.

In this work, we consider the question of list-decoding expander codes, introduced by Sipser and Spielman in [SS96]. We define expander codes formally in Section 2, but briefly, the expander code C = C(G, C_0) is a linear code constructed from a d-regular bipartite expander graph G and a linear inner code C_0 of length d. A codeword of C is a vector in F_2^{E(G)}, that is, a labeling of the edges of G. The constraints are that, for each vertex v of G, the labels on the edges incident to v form a codeword in C_0.

Expander codes are notable for their very efficient unique decoding algorithms [SS96, Zém01, LMSS01, SR03, AS05, BZ02, BZ05, BZ06, RS06, HOW15]. However, very little is known about the algorithmic list-decodability of expander codes, and it is an open problem to find a family of expander codes that admit fast linear-time list-decoding algorithms with good parameters. Motivated by this open problem, our main contribution is a linear-time algorithm for list decoding expander codes from erasures.

Erasure list-decoding.

Erasure-list-decoding is a variant of list-decoding where the received word z may have some symbols which are "⊥" (erasures), and the goal is to recover all codewords consistent with z. More formally, let C ⊆ F_2^N be a binary code of length N. For z ∈ (F_2 ∪ {⊥})^N, define

 List_C(z) := {c ∈ C : c_i = z_i whenever z_i ≠ ⊥}.

We say that C is erasure-list-decodable from t erasures with list size L if for any z ∈ (F_2 ∪ {⊥})^N with at most t symbols equal to ⊥, |List_C(z)| ≤ L.
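To make the definition concrete, here is a minimal brute-force sketch of List_C(z) in Python; the small [7,4] code and the received word below are illustrative assumptions, not taken from this paper, and '?' plays the role of the erasure symbol ⊥.

```python
from itertools import product

def span(generators, n):
    """All F_2-linear combinations of the given generator vectors."""
    words = set()
    for coeffs in product([0, 1], repeat=len(generators)):
        words.add(tuple(sum(a * g[i] for a, g in zip(coeffs, generators)) % 2
                        for i in range(n)))
    return words

def list_decode_erasures(code, z):
    """List_C(z): all codewords agreeing with z on the unerased positions."""
    return {c for c in code
            if all(zi == '?' or ci == zi for ci, zi in zip(c, z))}

# A toy [7,4] linear code, given by four generator vectors.
C = span([(1,0,0,0,0,1,1), (0,1,0,0,1,0,1),
          (0,0,1,0,1,1,0), (0,0,0,1,1,1,1)], 7)
z = (1, 0, '?', '?', '?', 1, '?')
L = list_decode_erasures(C, z)
assert len(L) == 2   # the four erasures leave exactly two candidate codewords
```

Since the code is linear, the list returned is (a coset of) an affine subspace, which is the structural fact the rest of the paper exploits.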

Erasure list-decoding has been studied before [Gur03, GI02, GI04, GR06a, DJX14, HW18, BDT18], motivated both by standard list-decoding and as an interesting combinatorial and algorithmic question in its own right. It is known that the erasure-list-decodability of a linear code is precisely captured by its generalized distances. The r'th (relative) generalized distance δ_r of a linear code C ⊆ F_2^N is the minimum fraction of coordinates which are not identically zero in an r-dimensional subspace of C, that is,

 δ_r = (1/N) · min_V |{i : ∃v ∈ V, v_i ≠ 0}|,

where the minimum is taken over all linear subspaces V ⊆ C of dimension r. (Throughout this paper, we work with the relative generalized distances, that is, distances measured as a fraction of coordinates; we omit the adjective "relative" in what follows.) Thus, δ_1 coincides with the traditional (relative) distance of the code, which for linear codes equals the minimum relative weight of any nonzero codeword. The generalized distances of a linear code characterize its erasure list-decodability:
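The definition above can be checked by brute force on a small code. The following sketch (the toy code is our own assumption) enumerates r-tuples of codewords that span an r-dimensional subspace and minimizes the size of the joint support.

```python
from itertools import product, combinations

def span(gens, n):
    """All F_2-linear combinations of the generator vectors."""
    out = set()
    for coeffs in product([0, 1], repeat=len(gens)):
        out.add(tuple(sum(a * g[i] for a, g in zip(coeffs, gens)) % 2
                      for i in range(n)))
    return out

def generalized_distance(code, r):
    """delta_r = (1/N) * min over r-dim subspaces V of C of |supp(V)|."""
    N = len(next(iter(code)))
    best = N
    nonzero = [c for c in code if any(c)]
    for gens in combinations(nonzero, r):
        V = span(gens, N)
        if len(V) != 2 ** r:       # the chosen generators were dependent
            continue
        support = {i for v in V for i in range(N) if v[i]}
        best = min(best, len(support))
    return best / N

C = span([(1, 1, 0, 0), (0, 0, 1, 1)], 4)
assert generalized_distance(C, 1) == 0.5   # delta_1 = relative distance
assert generalized_distance(C, 2) == 1.0   # the 2-dim subspace hits all 4 coords
```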

Lemma 1.1 ([Gur03]).

Let C ⊆ F_2^N be a linear code. Then C is erasure-list-decodable from t erasures with list size 2^{r−1} if and only if t < δ_r N.

If C is linear, then it can be erasure list-decoded in polynomial time by solving a linear system. Thus, the combinatorial result of Lemma 1.1 comes with a polynomial-time algorithm.
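As a concrete illustration of this straightforward algorithm, the sketch below erasure list-decodes a toy linear code (an illustrative assumption) by Gaussian elimination over F_2: the unerased coordinates impose linear constraints on the message, and the solution set is an affine subspace of message space.

```python
def solve_gf2(A, y):
    """Return (particular solution, nullspace basis) of A x = y over F_2,
    or (None, None) if the system is inconsistent.  A is a list of rows."""
    rows = [list(r) + [b] for r, b in zip(A, y)]   # augmented matrix
    ncols = len(A[0])
    pivots, rank = [], 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        pivots.append(col)
        rank += 1
    if any(r[-1] for r in rows[rank:]):            # 0 = 1: inconsistent
        return None, None
    x0 = [0] * ncols
    for r, c in zip(range(rank), pivots):
        x0[c] = rows[r][-1]
    basis = []
    for f in (c for c in range(ncols) if c not in pivots):
        v = [0] * ncols
        v[f] = 1
        for r, c in zip(range(rank), pivots):
            v[c] = rows[r][f]                      # over F_2, -x = x
        basis.append(v)
    return x0, basis

# Toy [4,2] code: rows of G are coordinates, columns are message bits.
G = [[1, 0], [0, 1], [1, 1], [1, 0]]       # c = G m for m in F_2^2
z = [1, '?', '?', '?']                     # only coordinate 0 is unerased
A = [G[i] for i in range(4) if z[i] != '?']
y = [z[i] for i in range(4) if z[i] != '?']
m0, basis = solve_gf2(A, y)
assert m0 is not None and len(basis) == 1  # list size is 2^1 = 2
```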

Our goal in this paper is twofold. First, we aim to develop algorithms to erasure list-decode expander codes beyond the minimum distance of the code with small list size. Second, we aim to do so in linear time, faster than the straightforward algorithm described above.

Our Results.

Our main result is a linear-time erasure list-decoding algorithm for expander codes beyond the (designed) minimum distance.

Theorem 1.2.

Let C_0 ⊆ F_2^d be a linear code with distance δ and r'th generalized distance δ_r. Let G be the double cover of a d-regular expander graph on n vertices with expansion λ. (The double cover of a graph H is the bipartite graph defined as follows: let L and R be two copies of V(H); there is an edge between u ∈ L and v ∈ R if and only if (u, v) ∈ E(H); see Section 2.) Let C = C(G, C_0) be the resulting expander code. Let ε > 0, and suppose that λ/d is a sufficiently small multiple of ε²δ²/2^r.

Then there is an algorithm ListDecode which, given a received word z with at most (1 − ε)δδ_r nd erasures, runs in time nd · poly(d·2^r/(εδ)) and returns a matrix A and a vector b so that List_C(z) = {Ax + b : x ∈ F_2^s}, where s satisfies s ≤ poly(2^r/(εδ)).

Because δ_r > δ for any non-trivial linear code (any code of dimension at least r), the radius (1 − ε)δδ_r nd that Theorem 1.2 achieves is beyond the (designed) minimum distance of C, which is approximately δ²nd. To the best of our knowledge, this is the first linear-time list-decoding algorithm for expander codes that achieves this with a non-trivial list size.

In light of Lemma 1.1, the ultimate result we can hope for is an algorithm that list-decodes up to a δ_r fraction of erasures with list size 2^{r−1} for any r. The quantity δδ_r in Theorem 1.2 may suggest that it plays the role of a 'designed' r'th generalized distance, especially since for r = 1 it does (up to a lower-order term) coincide with the designed distance of the expander code. However, we cannot expect δδ_r to be a general lower bound on the r'th generalized distance of an expander code, which implies in particular that the list size in Theorem 1.2 has to be larger than 2^{r−1}. Indeed, already in the special case of tensor codes (i.e., when the graph G is the complete bipartite graph, which has perfect expansion), the generalized distance has been shown [WY93, Sch00] to be a complicated quantity that can be lower than δδ_r; in the general expander case, finding a reasonable description of the worst-case behavior of generalized distances seems quite challenging.

Note however that our results do imply a weak bound on the generalized distances of an expander code: namely, δ_{s+1} of the expander code is approximately at least δδ_r, where 2^s is the list-size bound of Theorem 1.2. Moreover, for the special case of r = 2, we are able to show the following bound on the second generalized distance of an expander code.

Lemma 1.3.

Let C_0 ⊆ F_2^d be a linear code with distance δ and second generalized distance δ_2, and let G be the double cover of a d-regular expander graph with expansion λ. Let ε > 0, and suppose that λ/d is sufficiently small relative to εδ. Then the expander code C(G, C_0) has second generalized distance at least (1 − ε) · δ · min{δ_2, (3/2)δ}.

Note that under the mild assumption that δ_2 ≥ (3/2)δ (satisfied by any code that has two minimum-weight codewords with disjoint support), the above lemma gives a lower bound of approximately (3/2)δ² on the second generalized distance of expander codes.

Finally, note that while we do not know whether the list size returned by our algorithm can be improved in general, our algorithm can still list-decode an expander code from up to a Δ_ℓ fraction of erasures with the optimal list size 2^{ℓ−1} for some values of ℓ, where Δ_ℓ denotes the ℓ'th generalized distance of C: if C_0 is such that Δ_ℓ ≤ (1 − ε)δδ_r for some r, then our algorithm runs in linear time and, given a fraction of erasures just below Δ_ℓ, returns a list which by Lemma 1.1 has size at most 2^{ℓ−1}.

1.1 Technical Overview

In this section, we give a brief overview of our approach. The basic idea is similar to the approach in [HW18]; however, as we discuss more in Section 1.2 below, in that work the goal was list-recovery, a generalization of list-decoding. In this work we can do substantially better by restricting our attention to list-decoding, as well as by tightening the analysis of [HW18].

Let G be the double cover of a d-regular expander graph, and let C_0 be a linear code with distance δ and r'th generalized distance δ_r. Since the inner code C_0 is linear and has r'th generalized distance δ_r, there is a poly(d)-time algorithm to erasure list-decode C_0 from fewer than δ_r d erasures. Our first step will be to do this at every vertex that we can, to produce a list L_v at each such vertex v.

In order to “stitch together” these lists, we define a notion of equivalence between edges, similar to the notion in [HW18]. Suppose that e and e' are edges incident to a vertex v, and that there is some b ∈ F_2 so that c_e = c_{e'} + b for every c ∈ L_v. Then, even if we have not pinned down a symbol for e or e', we know that for any legitimate codeword, assigning a symbol to one of these edges implies an assignment for the other. In this case, we say that e ∼_v e'. Because the lists L_v are actually affine subspaces, there are not many equivalence classes at each vertex (and in particular substantially fewer equivalence classes than in the approach used in [HW18]).

With these equivalence classes defined, we actually give two algorithms, SlowListDecode and ListDecode. As the name suggests, SlowListDecode is a warm-up that has a worse dependence on and , but is easier to understand. We describe SlowListDecode (given in Section 3, Figure 2) here first, and then describe the changes that need to be made to arrive at our final algorithm, ListDecode (given in Section 4, Figure 4).

The main idea of SlowListDecode is to choose large equivalence classes and generate a list of all possible labelings for these equivalence classes. For each such labeling, we now hope to uniquely fill in the rest of the codeword, to arrive at a list whose size is exponential in the number of classes. One might hope that labeling these large equivalence classes would leave a fraction of unlabeled symbols less than the designed distance of C, allowing us to immediately use the known linear-time erasure unique decoding algorithm for the expander code. Unfortunately, this is not in general the case. However, we show that there are many vertices v so that the number of unlabeled edges incident to v is at most δd. Thus, we may run the unique decoder for C_0 (in time poly(d)) at each such vertex to generate yet more labels. It turns out that at this point, there are enough labels to run C's unique decoding algorithm and finish off the labeling.
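The local unique-decoding step used above can be sketched as follows, with a toy inner code of distance 2 (our own assumption): when the number of erasures at a vertex is below the distance, exactly one inner codeword remains consistent.

```python
from itertools import product

def span(gens, n):
    """All F_2-linear combinations of the generator vectors."""
    out = set()
    for coeffs in product([0, 1], repeat=len(gens)):
        out.add(tuple(sum(a * g[i] for a, g in zip(coeffs, gens)) % 2
                      for i in range(n)))
    return out

def unique_erasure_decode(code, w):
    """Return the unique codeword consistent with w, or None if not unique."""
    matches = [c for c in code
               if all(wi == '?' or ci == wi for ci, wi in zip(c, w))]
    return matches[0] if len(matches) == 1 else None

# Toy inner code of length 4 and distance 2: one erasure is always fixable.
C0 = span([(1, 1, 0, 0), (0, 0, 1, 1)], 4)
assert unique_erasure_decode(C0, (1, '?', 0, 0)) == (1, 1, 0, 0)
assert unique_erasure_decode(C0, ('?', '?', 0, 0)) is None  # 2 erasures >= distance
```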

Naively, the algorithm described above runs in time exponential in the number of equivalence classes, since we must loop over all possible labelings of them. This is exponential in 1/ε and 1/δ and doubly-exponential in r. The idea behind our final algorithm ListDecode is to take advantage of the linear structure of the lists to find a short description of all of the legitimate labelings. We will show in Section 4 how to do this in time nd · poly(d·2^r/(εδ)) by leveraging the sparsity of C's parity-check matrix.

1.2 Related Work

Work on list-decoding expander codes.

The work that is perhaps the most related to ours is [HW18], which seeks to list-recover expander codes in the presence of erasures in linear time. (We note that other works, such as [GI04], have also had this goal, but to the best of our knowledge [HW18] obtains the best known results, so we focus on that work here.) List-recovery is a variant of list-decoding which applies to codes over a large alphabet Σ: instead of receiving as input a vector z ∈ Σ^n, the decoder receives a collection of lists S_1, …, S_n ⊆ Σ, and the goal is to return all codewords c so that c_i ∈ S_i for all i. In the setting of erasures, some lists may have size |Σ|, in which case we may as well replace the whole list by a ⊥ symbol.

List-decoding from erasures is a special case of list-recovery with erasures, where the lists S_i that are not ⊥ have size one. However, existing list-recovery algorithms will not immediately work in our setting, as we consider binary codes: list-recovery is only possible for codes with large alphabets.

Our first observation is that the approach of [HW18] for erasure list-recovery can be used to obtain an algorithm for erasure list-decoding in linear time, even for binary codes. As described above, our first step is to erasure list-decode at each vertex, leaving us with lists that need to be “stitched together.” The approach of [HW18] does precisely this, although in their context the lists that they are stitching together come from list-recovering the inner code.

However, the results of [HW18] about stitching together lists do not immediately yield anything meaningful for erasure list-decoding. More precisely, applied to an expander code formed from a graph with expansion λ and an inner code with distance δ and r'th generalized distance δ_r, those results tolerate only a fraction of erasures that is smaller than the distance of the expander code, yielding only trivial results in this setting.

Thus, while we use the same ideas as [HW18], our analysis is different and significantly tighter. This allows us to obtain a meaningful result in our setting, corresponding to the algorithm SlowListDecode. Moreover, as described above, we are able to take advantage of the additional linear structure in our setting to improve the dependence on r and δ in the running time.

To the best of our knowledge, there is no algorithmic work on list-decoding expander codes from errors (rather than erasures) in linear time with good parameters. We note that [MRR19] recently showed that there are expander codes which are combinatorially near-optimally list-decodable from errors, but this work is non-constructive and does not provide efficient list decoding algorithms.

Work on erasure list-decoding more generally.

It is known that, non-constructively, there are erasure-list-decodable codes of rate Ω(ε) which can list-decode up to a (1 − ε) fraction of erasures, with list sizes O(1/ε) [Gur03]. However, this proof is non-constructive and does not provide efficient algorithms, and it has been a major open question to achieve these results efficiently. Recent progress has been made by [BDT18], who provided a construction (although no decoding algorithm) with parameters close to this for ε which is polynomially small in the block length.

Our work is somewhat orthogonal to this line of work on erasure list-decoding for several reasons. First, that line of work is mostly concerned with low-rate codes that are list-decodable from a large fraction of erasures (approaching 1), while expander codes tend to perform best at high rates. Second, we are less concerned with the trade-off between rate and erasure tolerance and more concerned with efficiently erasure-list-decoding an arbitrary expander code as far beyond its (designed) distance as possible. Finally, much of the line of work described above has focused on getting the list size down to O(1/ε), which is known to be impossible for linear codes, for which the best possible list size is necessarily much larger [Gur03]. Since the expander codes we consider are linear, we do not focus on that goal in our work.

Organization.

In Section 2, we formally introduce the notation and definitions we will need. In Section 3, we introduce our preliminary algorithm SlowListDecode, while in Section 4 we describe the final algorithm, which has a better dependence on r and δ in the running time. This proves our Main Theorem 1.2. We conclude in Section 5 with the proof of Lemma 1.3, showing a bound on the second generalized distance of expander codes.

2 Preliminaries

Expander Graphs.

Let G = (L ∪ R, E) be a bipartite graph. (In this paper we only consider undirected graphs.) For a vertex v, let Γ(v) denote the set of vertices adjacent to v. For S ⊆ L and T ⊆ R, let E(S, T) denote the set of edges with one endpoint in S and the other in T, and for a set U ⊆ L ∪ R of vertices, let E(U) = E(U ∩ L, U ∩ R).

Let H be a (not necessarily bipartite) d-regular graph on n vertices. The expansion of H is λ = max{|λ_2|, |λ_n|}, where λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n are the eigenvalues of the adjacency matrix of H. The double cover of H is the bipartite graph G defined as follows. Let L and R be two copies of V(H); there is an edge between u ∈ L and v ∈ R if and only if (u, v) ∈ E(H). If H is an expander graph, then G obeys the Expander Mixing Lemma:

Theorem 2.1 (Expander Mixing Lemma, see e.g. [HLW06]).

Suppose that G is the double cover of a d-regular expander graph on n vertices with expansion λ. Then for any S ⊆ L and T ⊆ R,

 | |E(S,T)| − (d/n)|S||T| | ≤ λ√(|S||T|).
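As a sanity check, the lemma can be verified exhaustively on a tiny instance. The sketch below (our own example, not from the paper) uses the double cover of the complete graph K_4, which is 3-regular on n = 4 vertices with expansion λ = 1 (its adjacency eigenvalues are 3, −1, −1, −1), and checks the bound for every pair of subsets S, T.

```python
from itertools import chain, combinations
from math import sqrt

n, d, lam = 4, 3, 1
V = range(n)

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

for S in subsets(V):
    for T in subsets(V):
        # In the double cover of K_n, (u, v) with u in L, v in R is an edge
        # iff u != v, so |E(S, T)| = |S||T| - |S ∩ T|.
        e = len(S) * len(T) - len(set(S) & set(T))
        bound = lam * sqrt(len(S) * len(T))
        assert abs(e - d * len(S) * len(T) / n) <= bound + 1e-9
```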

Expander Codes.

Let G be the double cover of a d-regular expander graph on n vertices, as above. Let C_0 ⊆ F_2^d be a linear code, called the inner code. Fix an order on the edges incident to each vertex of G, and let Γ_i(v) denote the i'th neighbor of v.

The expander code C(G, C_0) is defined as the set of all labelings of the edges of G that respect the inner code C_0. More precisely, we have the following definition.

Definition 2.2 (Expander Code).

Let C_0 ⊆ F_2^d be a linear code, and let G be the double cover of a d-regular expander graph on n vertices. The expander code C = C(G, C_0) is a linear code of length nd whose coordinates are indexed by the edges of G, so that for c ∈ F_2^{nd}, c ∈ C if and only if, for all v ∈ L ∪ R,

 (c_{(v,Γ_1(v))}, c_{(v,Γ_2(v))}, …, c_{(v,Γ_d(v))}) ∈ C_0.
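Definition 2.2 can be made concrete with a small membership test. The graph below is the double cover of K_4 and the inner code is the length-3 single-parity-check code; both are illustrative assumptions.

```python
n, d = 4, 3
# Double cover of K_4: L and R are copies of {0,1,2,3}; (u, v) with u in L,
# v in R is an edge iff u != v.
edges = [(u, v) for u in range(n) for v in range(n) if u != v]

def nbrs(v):
    """Ordered neighbor list Gamma_1(v), ..., Gamma_d(v)."""
    return [u for u in range(n) if u != v]

def in_inner_code(word):
    """Toy inner code C_0: the length-3 single-parity-check code."""
    return sum(word) % 2 == 0

def in_expander_code(c):
    """c maps each edge (u, v) to a bit; check the local constraint at
    every left vertex and every right vertex."""
    for v in range(n):
        if not in_inner_code([c[(v, u)] for u in nbrs(v)]):   # left copy of v
            return False
        if not in_inner_code([c[(u, v)] for u in nbrs(v)]):   # right copy of v
            return False
    return True

ones = {(0, 1), (2, 1), (2, 3), (0, 3)}    # a 4-cycle in the double cover
c_good = {e: int(e in ones) for e in edges}
c_bad = dict(c_good)
c_bad[(1, 2)] ^= 1                          # one flipped label breaks parity
assert in_expander_code(c_good) and not in_expander_code(c_bad)
```

The indicator vector of any even-degree subgraph (here, a 4-cycle) satisfies every local parity check, which is exactly the edge-labeling view of Definition 2.2.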

By counting constraints, it is not hard to see that if C_0 is a linear code of rate R_0, then C is a linear code of rate at least 2R_0 − 1. Moreover, it is known that expander codes have good distance:

Lemma 2.3 ([SS96, Zém01]).

Let C_0 be a linear code with distance δ, and let G be the double cover of a d-regular expander graph with expansion λ. Then the expander code C(G, C_0) has distance at least δ(δ − λ/d).

Moreover, C(G, C_0) can be uniquely decoded from up to this fraction of erasures in linear time.

Lemma 2.4.

Let C_0 be a linear code with distance δ, and let G be the double cover of a d-regular expander graph on n vertices with expansion λ. Let ε > 0, and suppose that λ/d is sufficiently small relative to εδ. Then there is an algorithm UniqueDecode which uniquely decodes the expander code C(G, C_0) from up to (1 − ε)δ(δ − λ/d)nd erasures in time O(nd).

The above lemma is by now folklore, but for completeness, we include a proof in Appendix A.

3 A preliminary algorithm

For clarity of exposition, we begin the proof of our Main Theorem 1.2 by proving the following weaker theorem.

Theorem 3.1.

Let C_0 ⊆ F_2^d be a linear code with distance δ and r'th generalized distance δ_r. Let G be the double cover of a d-regular expander graph on n vertices with expansion λ, and let C = C(G, C_0) be the resulting expander code. Let ε > 0, and suppose that λ/d is a sufficiently small multiple of ε²δ²/2^r. Let M denote the number of global equivalence classes defined below, so that M ≤ poly(2^r/(εδ)). Then there is an algorithm SlowListDecode which erasure-list-decodes C from (1 − ε)δδ_r nd erasures with list size at most 2^M in time 2^M · nd · poly(d).

Theorem 3.1 still provides a linear-time algorithm (provided d, r, 1/ε, and 1/δ are all constant), but the dependence on these parameters is not very good. We will prove Theorem 3.1 in this section to illustrate the main ideas, and then in Section 4, we will show how to adapt the algorithm to achieve the running times advertised in Theorem 1.2.

A formal description of our algorithm SlowListDecode is given in Figure 2. Roughly, the first step is to list-decode the inner codes to obtain an inner list L_v at each vertex v. The second and main step is then to label large equivalence classes by iterating over all possible assignments to representatives from these classes. In the third and final step we complete any such possible assignment, by first uniquely decoding the inner codes at vertices where a sufficient number of edges are already labeled, followed by global unique decoding to recover the rest of the unlabeled edges. Below we elaborate on each of these steps.

In what follows, suppose that z ∈ (F_2 ∪ {⊥})^{nd} is a received word with at most (1 − ε)δδ_r nd symbols that are ⊥, and let List_C(z) be the set of codewords of C that are consistent with z.

3.1 List decoding inner codes

The first step is to list-decode all inner codes with not too many erasures. Specifically, let B be the set of bad vertices v so that z has more than δ_r d erasures incident to v:

 B = {v ∈ L ∪ R : z_{(v,u)} = ⊥ for more than δ_r d vertices u}. (1)

Then by our assumption on the number of erasures in z,

 |B ∩ L| · δ_r d ≤ (1 − ε)δδ_r nd,

and the same for |B ∩ R|, which implies that

 |B ∩ L|, |B ∩ R| ≤ (1 − ε)δn. (2)

The first step of the algorithm will be to list-decode the inner code at every vertex v ∉ B. For all such v, let

 L_v := List_{C_0}((z_{(v,Γ_1(v))}, z_{(v,Γ_2(v))}, …, z_{(v,Γ_d(v))})). (3)

Next we shall use the following notion of a local equivalence relation to assign labels to many of the edges. To define this notion, note first that since C_0 has r'th generalized distance δ_r, for any v ∉ B, L_v is an affine subspace of F_2^d of dimension r_v ≤ r − 1. Let G_v ∈ F_2^{d×r_v} and b_v ∈ F_2^d be such that

 L_v = {G_v x + b_v : x ∈ F_2^{r_v}}.

Notice that each row of G_v corresponds to an edge adjacent to v.

Next we define, for any vertex v ∉ B, a local equivalence relation at the vertex v.

Definition 3.2 (Local equivalence relation).

Suppose that v ∉ B. For edges e, e' incident to v, say that e ∼_v e' if the row of G_v corresponding to e is the same as the row of G_v corresponding to e'.

Notice that Definition 3.2 depends on both v and the choice of G_v; we suppress the dependence on G_v in the notation. We make the following observations.

Observation 3.3.

Suppose that v ∉ B.

1. If e ∼_v e', then for any c ∈ L_v, c_{e'} is determined by c_e.

2. There are at most 2^{r−1} local equivalence classes at v, because there are at most 2^{r_v} ≤ 2^{r−1} possible vectors in F_2^{r_v} that could appear as rows of the matrix G_v.
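The local equivalence relation can be computed directly from the list L_v rather than from G_v: two coordinates are equivalent exactly when their values differ by a constant across the whole affine list, which is the same as the corresponding rows of G_v being equal. The list below is a toy assumption.

```python
def local_classes(Lv):
    """Partition the d coordinates of an affine list Lv into the local
    equivalence classes of Definition 3.2."""
    d = len(Lv[0])
    classes = {}
    for e in range(d):
        # Signature: the column of values at e, relative to the first word,
        # so that coordinates with identical rows of G_v collide regardless
        # of the affine shift b_v.
        sig = tuple(c[e] ^ Lv[0][e] for c in Lv)
        classes.setdefault(sig, []).append(e)
    return list(classes.values())

# An affine list of dimension 1 on d = 4 edges: coordinates 0 and 1 move
# together, while coordinates 2 and 3 are fixed.
Lv = [(0, 1, 1, 0), (1, 0, 1, 0)]
assert sorted(map(sorted, local_classes(Lv))) == [[0, 1], [2, 3]]
```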

3.2 Labeling large equivalence classes

The next step is to assign labels to large global equivalence classes, defined below. For this, we first define a new edge set E' ⊆ E by first throwing out all edges touching B, and then repeatedly throwing out edges whose local equivalence classes are too small. Specifically, define E' to be the output of the following algorithm FindHeavyEdges, given in Figure 1.

Next we define a global equivalence relation on the edges in E' as follows.

Definition 3.4 (Global equivalence relation).

Suppose that e, e' ∈ E'. We say that e ∼ e' if there is a path e = e_1, e_2, …, e_k = e' of edges in E' so that any pair of adjacent edges e_i, e_{i+1} on the path share a vertex v with e_i ∼_v e_{i+1}.
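The global relation is the transitive closure of the local ones, so it can be sketched with a standard union-find structure; the edge names and local classes below are toy assumptions.

```python
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def global_classes(edges, local_classes_per_vertex):
    """Merge local equivalence classes across vertices into global ones."""
    parent = {e: e for e in edges}
    for classes in local_classes_per_vertex:
        for cls in classes:
            for e in cls[1:]:           # union everything in a local class
                parent[find(parent, e)] = find(parent, cls[0])
    groups = {}
    for e in edges:
        groups.setdefault(find(parent, e), set()).add(e)
    return list(groups.values())

edges = ['a', 'b', 'c', 'd']
# One vertex sees {a, b} equivalent; another sees {b, c} equivalent, so
# a, b, c are chained into one global class.
locals_ = [[['a', 'b'], ['c'], ['d']], [['b', 'c'], ['d']]]
out = global_classes(edges, locals_)
assert sorted(map(sorted, out)) == [['a', 'b', 'c'], ['d']]
```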

The following lemma shows that E' is partitioned into a small number of large global equivalence classes. Consequently, one can assign labels to all edges in E' by iterating over all possible assignments for a small number of representatives from these classes.

Lemma 3.5.

Any global equivalence class in E' has size at least (ε²δ²/2^{r+3}) · (ε²δ²/2^{r+3} − λ/d) · dn. In particular, E' is partitioned into at most M different equivalence classes, where M := ((ε²δ²/2^{r+3}) · (ε²δ²/2^{r+3} − λ/d))^{−1}.

Proof.

Let F be a global equivalence class in E', and let S and T denote the sets of left and right vertices touching F, respectively. By the definition of E', any vertex of S ∪ T is incident to at least (ε²δ²/2^{r+3})d edges in F. Thus by the Expander Mixing Lemma (Theorem 2.1),

 (ε²δ²/2^{r+3}) · d · √(|S||T|) ≤ |F| ≤ (d/n)|S||T| + λ√(|S||T|),

and rearranging,

 n(ε²δ²/2^{r+3} − λ/d) ≤ √(|S||T|).

This implies in turn that

 |F| ≥ (ε²δ²/2^{r+3}) · d · √(|S||T|) ≥ (ε²δ²/2^{r+3}) · (ε²δ²/2^{r+3} − λ/d) · dn,

which gives the final claim by our choice of λ. ∎

Finally, by Item 1 of Observation 3.3, choosing a symbol on an edge determines all the symbols in that edge's equivalence class. Thus, we will exhaust over all choices of symbols for the equivalence classes in E'; this leads to at most 2^M possibilities, where M is the number of global equivalence classes (Lemma 3.5). Next we show that any such choice determines a unique codeword in C.

3.3 Completing the assignment

To complete the assignment we first show that most vertices have at least (1 − δ)d incident edges in E'. For any such vertex, the inner codeword at this vertex is completely determined by the assignment to edges in E', and so can be recovered by uniquely decoding locally at this vertex. We then recover the small number of remaining edges using global unique decoding. Specifically, let

 B' = {v ∈ L ∪ R : (v,u) ∉ E' for more than δd vertices u}. (4)

The next lemma bounds the size of B', and the number of edges in E(B').

Lemma 3.6.

The following hold:

1. |B' ∩ L|, |B' ∩ R| ≤ (1 − ε/2)δn.

2. |E(B')| ≤ (1 − ε/4)(δ − λ/d)δnd.

Proof.

For the first item, let B_1 be the subset of vertices not in B so that more than (1 − ε/2)δd edges incident to them are removed in Step 1 of FindHeavyEdges, and let B_2 be the subset of vertices so that more than (ε/2)δd edges incident to them are removed in Step 2 of FindHeavyEdges. Note that B' ⊆ B ∪ B_1 ∪ B_2, so it suffices to bound |B_1 ∩ L| and |B_2 ∩ L|, and similarly for R. By (2), |B ∩ L| ≤ (1 − ε)δn. Claims 3.7 and 3.8 below show that each of the sets B_1, B_2 intersects each side in at most (ε/4)δn vertices, which gives the desired conclusion.

For the second item, note that by the first item and the Expander Mixing Lemma,

 |E(B')| ≤ (d/n)((1 − ε/2)δn)² + λ(1 − ε/2)δn ≤ (1 − ε/2)(δ + λ/d)δnd ≤ (1 − ε/4)(δ − λ/d)δnd,

where the last inequality follows by our choice of λ. ∎

Claim 3.7.

|B_1 ∩ L|, |B_1 ∩ R| ≤ (ε/8)δn.

Proof.

By the description of FindHeavyEdges, B_1 is the set of all vertices outside B that are incident to more than (1 − ε/2)δd vertices of B. Thus by the Expander Mixing Lemma,

 |B_1 ∩ L| · (1 − ε/2)δd ≤ |E(B_1 ∩ L, B ∩ R)| ≤ (d/n)|B_1 ∩ L||B ∩ R| + λ√(|B_1 ∩ L||B ∩ R|) ≤ (d/n)|B_1 ∩ L| · (1 − ε)δn + λ√(|B_1 ∩ L| · (1 − ε)δn),

where the last inequality follows by (2).

Rearranging, we have

 √(|B_1 ∩ L|) ≤ λ√((1 − ε)δn) / (dδε/2),

and

 |B_1 ∩ L| ≤ 4n(λ/d)² · 1/(δε²) ≤ (ε/8)δn,

where the last inequality follows by our choice of λ. As the same holds for |B_1 ∩ R|, the claim follows. ∎

Claim 3.8.

|B_2 ∩ L|, |B_2 ∩ R| ≤ (ε/4)δn.

Proof.

Since there are at most 2^{r−1} local equivalence classes at each vertex v ∉ B, the algorithm FindHeavyEdges performs at most 2^r n iterations at Step 2. At each such iteration, at most (ε²δ²/2^{r+3})d edges are removed, and so the total number of edges removed at Step 2 of FindHeavyEdges is at most 2^r n · (ε²δ²/2^{r+3})d = ε²δ²nd/8. Finally, by averaging this implies that there are at most (ε/4)δn vertices on each side so that more than (ε/2)δd edges incident to them are removed at this step. ∎

Next observe that for any vertex v ∉ B ∪ B', the choices for symbols on E' uniquely determine the codeword of C_0 that belongs at the vertex v. This is because C_0 has distance δ, and at least (1 − δ)d edges incident to v have been labeled. Note that since C_0 is a linear code of length d, this unique codeword can be found in time poly(d) by solving a system of linear equations. Once this is done, the only edges that do not have labels are those in E(B'). By Item 2 of Lemma 3.6, there are at most (1 − ε/4)(δ − λ/d)δnd such edges. By Lemma 2.4, these edges can be recovered using global unique decoding in time O(nd). In this way, we can recover the entire list List_C(z).

The algorithm described above is given as SlowListDecode in Figure 2. This algorithm runs in time 2^M · nd · poly(d), where M is the number of global equivalence classes, which proves Theorem 3.1. We will show how to speed it up in Section 4, where we will conclude the proof of Theorem 1.2.

4 Final algorithm

The algorithm SlowListDecode runs in time 2^M · nd · poly(d), where the number M of global equivalence classes is exponential in r and polynomial in 1/(εδ), since we exhaust over all possible assignments to representatives from these classes. In this section, we will show how to do significantly better and obtain a running time that depends polynomially on 2^r/(εδ), finishing the proof of Theorem 1.2. The basic idea is as follows. Instead of exhausting over all possible ways to assign values to the edges of E', we will set up and solve a linear system to find a description of the assignments that lead to legitimate codewords. Specifically, we prove the following lemma.

Lemma 4.1.

There is an algorithm FindList which, given the state of SlowListDecode at the end of Step 2, runs in time nd · poly(d·2^r/(εδ)) and returns A ∈ F_2^{nd×s}, b ∈ F_2^{nd}, Â ∈ F_2^{s×a}, and b̂ ∈ F_2^s so that

 List_C(z) = {Ax + b : x = Âx̂ + b̂ for some x̂ ∈ F_2^a},

where a satisfies a ≤ s.

The above lemma immediately implies Theorem 1.2: We first run Steps 1 and 2 of SlowListDecode in order to find the set E' and its partition into equivalence classes. As before, this takes time nd · poly(d). Next, we run FindList in order to find a linear-algebraic description of the list List_C(z), which we return. This second step takes time nd · poly(d·2^r/(εδ)), for a total running time of nd · poly(d·2^r/(εδ)). Plugging in our bound on the number of equivalence classes proves Theorem 1.2. The formal description of the final algorithm ListDecode is given in Figure 4. The rest of this section is devoted to the proof of Lemma 4.1.

First, note that every value determined by SlowListDecode is some affine function of the labels on the class representatives e^{(1)}, …, e^{(s)}. That is, there is some matrix A ∈ F_2^{nd×s} and some vector b ∈ F_2^{nd} so that the list generated by SlowListDecode is

 {Ax + b : x ∈ F_2^s},

where x records the labels on the representatives. Our goal in FindList will thus be to find this A and b efficiently, as well as to find a description of the x's so that Ax + b is actually a codeword in C. An overview of the algorithm FindList is given in Figure 3, and the steps are described below.

4.1 Finding A and b

The first step of the algorithm will be to find A and b. To do so efficiently, we will mirror the decoding algorithm in SlowListDecode, except that we will keep the choices of x as variables. As we will see below, this can be done in time nd · poly(d). For this, we shall find a series of matrices A^{(t)} and vectors b^{(t)} for t = 0, 1, …, T, where A = A^{(T)} and b = b^{(T)}, as follows.

Finding A^{(0)} and b^{(0)}.

First, let E_0 = E', and let A^{(0)} ∈ F_2^{|E_0|×s} and b^{(0)} ∈ F_2^{|E_0|} be such that

 (A^{(0)}x + b^{(0)})_e = y_e,

where e ∈ E_0 and y is as in Step (3a) of Algorithm SlowListDecode. Note that A^{(0)} has rows which are 1-sparse, and that it can be created in time O(nd) given the matrices G_v and vectors b_v. Further note that, for any c ∈ List_C(z), c|_{E_0} = A^{(0)}x + b^{(0)} for some x ∈ F_2^s.

Finding A^{(1)} and b^{(1)}.

Recalling the definition of B' from (4), let E_1 be the union of E_0 with the sets of edges incident to vertices outside B ∪ B'. Note that for each e ∈ E_1 ∖ E_0, the label on e can be determined in an affine way from the labels on edges in E_0. More precisely, there is some vector f^{(e)} ∈ F_2^{|E_0|} of weight at most d and some h^{(e)} ∈ F_2 so that for any c ∈ List_C(z),

 c_e = (f^{(e)})^T · c|_{E_0} + h^{(e)},

and moreover f^{(e)} and h^{(e)} can be found in time poly(d) by inverting a submatrix of a generator matrix of C_0.

Let F be the matrix with the f^{(e)} as rows, let h be the vector with entries h^{(e)}, and let

 A^{(1)} := [ A^{(0)} ; F·A^{(0)} ]  and  b^{(1)} := ( b^{(0)} ; F·b^{(0)} + h ),

where [ · ; · ] denotes stacking. Note that (A^{(1)}, b^{(1)}) can be created in time nd · poly(d) given A^{(0)}, b^{(0)}, F, and h. Further note that for any c ∈ List_C(z), c|_{E_1} = A^{(1)}x + b^{(1)}.
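The stacking step above is just composition of affine maps over F_2, which can be sketched with small toy matrices (our own assumption): if c|_{E_0} = A_0 x + b_0 and a new edge satisfies c_e = f·c|_{E_0} + h, then A_1 = [A_0; F·A_0] and b_1 = (b_0; F·b_0 + h).

```python
def matmul(A, B):
    """Matrix product over F_2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def matvec(A, v):
    """Matrix-vector product over F_2."""
    return [sum(a * x for a, x in zip(row, v)) % 2 for row in A]

A0 = [[1, 0], [0, 1], [1, 1]]   # labels on E_0 as affine functions of x
b0 = [0, 1, 0]
F  = [[1, 1, 0]]                # one new edge: c_e = c_0 + c_1 + h
h  = [1]

A1 = A0 + matmul(F, A0)
b1 = b0 + [(y + w) % 2 for y, w in zip(matvec(F, b0), h)]

# Consistency check: the stacked map agrees with evaluating the two stages
# separately, for every x in F_2^2.
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    cE0 = [(y + w) % 2 for y, w in zip(matvec(A0, x), b0)]
    ce  = [(y + w) % 2 for y, w in zip(matvec(F, cE0), h)]
    assert [(y + w) % 2 for y, w in zip(matvec(A1, x), b1)] == cE0 + ce
```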

Finding A^{(t)} and b^{(t)} for t = 2, …, T.

At this point, by the analysis above (following from Lemma 3.6), we know that there are at most (1 − ε/4)(δ − λ/d)δnd edges which are not in E_1. If we had labels for the edges in E_1, then by Lemma 2.4 we could use the algorithm UniqueDecode to recover the rest.

The algorithm UniqueDecode is given in Appendix A in Figure 5. The basic idea is to iteratively decode at vertices with few unlabeled incident edges, then at the vertices this unlocks, and so on, to arrive at a unique assignment for all of the edges. In order to do this with matrices, we will continue as above, creating A^{(t)}, b^{(t)} from A^{(t−1)}, b^{(t−1)} for larger t just as we did for t = 1. Note that the sets E_t play the same role that they do in UniqueDecode; E_t represents the set of edges for which a label can be assigned in step t.

More precisely, suppose that at step t − 1, UniqueDecode has assigned labels to E_{t−1}, and suppose that we have A^{(t−1)} and b^{(t−1)} so that for any c ∈ List_C(z),

 c|_{E_{t−1}} = A^{(t−1)} · c|_{{e^{(1)},…,e^{(s)}}} + b^{(t−1)}.

At the next step, UniqueDecode would have assigned labels to the edges in E_t. We note that the total amount of time (over all iterations) needed to determine the edges in the sets E_t is the same as in UniqueDecode, which, with the right bookkeeping, is O(nd).

Then as above, for every , there is some vector