PURE: Scalable Phase Unwrapping with Spatial Redundant Arcs

04/19/2018 ∙ by Ravi Lanka, et al.

Phase unwrapping is a key problem in many coherent imaging systems, such as synthetic aperture radar (SAR) interferometry. A general formulation for redundant integration of finite differences for phase unwrapping (Costantini et al., 2010) was shown to produce a more reliable solution by exploiting redundant differential estimates. However, this technique requires a commercial linear programming solver for large-scale problems. For a linear cost function, we propose a method based on Dual Decomposition that breaks the given problem, defined over a non-planar graph, into tractable sub-problems over planar subgraphs. We also propose a decomposition technique that exploits the underlying graph structure to solve the sub-problems efficiently and guarantees asymptotic convergence to the globally optimal solution. The experimental results demonstrate that the proposed approach is comparable to existing state-of-the-art methods in terms of estimation quality, with a better runtime and memory footprint.


1 Introduction

Phase unwrapping is the process of recovering unambiguous phase values from multi-dimensional phase data that are measured modulo $2\pi$ (wrapped data) and affected by random noise and systematic disturbances. The difference between the measured and the actual phase is ambiguous up to integer multiples of $2\pi$. For unwrapping purposes it is usually assumed that the sampling rate is adequate over most of the data set so that aliasing is avoided; in other words, the true absolute phase difference between two neighboring data points is generally less than $\pi$. This reduces the phase unwrapping problem to the integration of phase differences between neighboring data points along suitable paths (in noisy conditions). This problem arises in a class of important imaging techniques such as interferometric synthetic aperture radar (Allen, 2008), optical interferometry (Pandit et al., 1994), magnetic resonance imaging (Lan et al., 2008), and diffraction tomography (Devaney, 1984). The advent of synthetic aperture radar interferometry (InSAR) (Rosen et al., 2000) and numerous other applications spurred interest in developing reliable two-dimensional (2D) phase unwrapping algorithms. The most widely used technique for 2D phase unwrapping relies on network programming, which formulates the problem as a Minimum Cost Flow (MCF) (Costantini, 1998) and is applicable only to planar graphs.

Some of the factors that influence the quality of phase unwrapping for SAR interferometry include (1) signal interference due to atmospheric conditions; (2) temporal decorrelation due to changes in the scattering characteristics at different times; and (3) geometric decorrelation due to the different imaging geometries arising from the long distances between repeated orbits of the same satellite. To address these factors, several techniques (Hooper and Zebker, 2007; Costantini et al., 2010; Shanker and Zebker, 2010) were developed to process multiple conventional interferograms collected at different times over the same area. A general formulation for the redundant integration of finite differences for multi-temporal phase unwrapping (Costantini et al., 2010) exploits redundant information obtained from arbitrary pairs of points (typically close together but not necessarily nearest neighbors), making it possible to obtain a solution robust to outliers and noise in both 2D and multidimensional cases. The general formulation includes standard phase unwrapping and finite difference integration techniques as special cases.

The Edgelist algorithm (Shanker and Zebker, 2010) utilizes information from the temporal dimension by replacing the MCF formulation with its dual formulation, where the closed loops are replaced by reliable edges as the base construct. This variant formulation enables the inclusion of external geodetic measurements as constraints. A step-wise 3D algorithm (Hooper and Zebker, 2007) solves the multi-temporal unwrapping in multiple stages, first temporally, and subsequently uses the relaxed temporal solution to constrain the solution in the spatial dimensions.

Similar to the MCF formulation (Costantini, 1998), the Edgelist algorithm and the Linear Programming (LP) based redundant arcs formulation (Costantini et al., 2010) exhibit total unimodularity, which enables the integer programming problem to be solved efficiently. However, for large-scale interferograms, even with commercial solvers these formulations do not scale as well as the MCF formulation (section 3). Moreover, publicly available LP solvers perform poorly compared to commercial solvers on large datasets (Meindl and Templ, 2012). In the rest of the paper, we refer to the LP based redundant arcs formulation as LPRA.

Decomposition methods employ message-passing between sub-problems to iteratively improve the solution of large-scale optimization problems (Santos, 2009). However, to the best of our knowledge, their use for phase unwrapping is unexplored. Our contributions are: (1) we show how decomposition methods (Rush and Collins, 2012), which break a given problem into tractable sub-problems, can be employed to solve LPRA efficiently; (2) we propose a decomposition that asymptotically converges to a globally optimal solution and shows significant runtime improvements over LPRA. Furthermore, the decomposed sub-problems only require an MCF solver, for which numerous efficient implementations are publicly available (Kiraly and Kovacs, 2012). In this paper, we focus on 2D interferograms with spatial redundant arcs.

The remainder of the paper is organized as follows. The background and related work are discussed in section (2). An introduction to the decomposition method based on Lagrangian relaxation and the necessary condition for asymptotic optimality are described in sections (2.4) and (2.6), respectively. The validation methods and results are described in section (3). Conclusions and future directions are discussed in section (4).

2 Background and Related work

The general phase unwrapping formulation is reviewed in section (2.1). The conditions necessary for solving phase unwrapping as an MCF are briefly described in section (2.2). The LPRA construction as a totally unimodular formulation is described in section (2.3).

2.1 Phase Unwrapping Formulation

Consider a set of points in 2D and let the wrapped phase measured at point $i$ be $\psi_i$. Phase unwrapping involves extraction of the unwrapped phase value, $\phi_i$, from $\psi_i$, which are related by

$\phi_i = \psi_i + 2\pi k_i, \qquad k_i \in \mathbb{Z}$   (1)

where $k_i$ represents the integer number of cycles that must be added to $\psi_i$ to obtain the corresponding $\phi_i$. In addition, it is convenient to define a directed graph $G = (V, E)$, whose nodes are the set of grid points, with a directed edge $(i, j) \in E$ connecting the grid points $i$ and $j$. Let $\delta_{ij}$ denote the preliminary (wrapped) estimate of the phase difference along edge $(i, j)$. We define a new variable $k_{ij}$ to represent the integer flow along the directed edge $(i, j)$ such that

$\phi_j - \phi_i = \delta_{ij} + 2\pi k_{ij}$   (2)

The phase unwrapping can then be stated as

$\min_{\{\phi_i\}, \{k_{ij}\}} \; \sum_{(i,j) \in E} c_{ij} \, |k_{ij}|^m$   (3)
subject to:
$\phi_j - \phi_i = \delta_{ij} + 2\pi k_{ij}, \qquad k_{ij} \in \mathbb{Z}, \quad \forall (i, j) \in E$   (4)

where the positive exponent $m$ defines the selected metric, and $c_{ij} \geq 0$ represents the reliability of the preliminary estimates. When $m = 1$ ($m = 2$), the above problem reduces to a linear integer (quadratic integer) programming problem. The values of $\phi_i$ are determined only up to an additive constant, because an additive constant leaves constraint (4) unaltered.
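As a quick illustration of the relation in equations (1)-(2) and of the no-aliasing assumption from the introduction, the following short NumPy sketch (the synthetic 1D ramp and the helper name `wrap` are illustrative assumptions, not part of the paper) recovers the integer cycle counts and the preliminary difference estimates from wrapped data:

```python
import numpy as np

def wrap(phase):
    """Wrap phase values to the interval (-pi, pi]."""
    return np.angle(np.exp(1j * phase))

# Synthetic 1D ramp (assumed example): the true phase exceeds 2*pi, so the
# measured (wrapped) phase differs from it by integer cycle counts k_i, eq. (1).
true_phase = np.linspace(0.0, 6.0 * np.pi, 50)        # phi_i
wrapped = wrap(true_phase)                             # psi_i
k = np.round((true_phase - wrapped) / (2.0 * np.pi))   # integer cycles k_i

# Under the no-aliasing assumption |phi_j - phi_i| < pi for neighbors, the
# wrapped difference of neighbors recovers the true gradient exactly, so the
# preliminary estimates delta_ij of eq. (2) are exact here.
delta = wrap(np.diff(wrapped))
assert np.allclose(delta, np.diff(true_phase), atol=1e-9)
```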

2.2 Phase Unwrapping as Network Programming

When the graph $G$ considered in equation (3) is planar, the constraint matrix is totally unimodular. In addition, the problem can be transformed into an equivalent MCF formulation on the dual network. Let $F$ be the set of faces in the planar embedding of $G$. The constraints of the dual network are then obtained from $F$ as follows: for each face $f \in F$, the edges constituting $f$ are traversed according to a uniformly chosen orientation, and the constraints corresponding to the traversed edges are integrated together. As an example, consider a triangular face comprised of the vertices $1$, $2$, and $3$, and let the orientation be counter-clockwise.

The constraints of the edges are

$\phi_2 - \phi_1 = \delta_{12} + 2\pi k_{12}, \qquad \phi_3 - \phi_2 = \delta_{23} + 2\pi k_{23}, \qquad \phi_1 - \phi_3 = \delta_{31} + 2\pi k_{31}$   (5)

Traversing the edges in the order $(1,2)$, $(2,3)$, $(3,1)$ and integrating (summing) the constraints, the unwrapped phases cancel and we get the reformulated constraint for MCF as

$k_{12} + k_{23} + k_{31} = -\frac{1}{2\pi}\left(\delta_{12} + \delta_{23} + \delta_{31}\right)$   (6)

In general, integrating along the boundary of any face $f \in F$ yields

$\sum_{(i,j) \in f} k_{ij} = -\frac{1}{2\pi} \sum_{(i,j) \in f} \delta_{ij}$   (7)

Many different strategies exist to solve the MCF formulation (Kiraly and Kovacs, 2012). Supplementary constraints can be used to reflect a priori considerations, or to initialize the algorithm for improved runtime.
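The right-hand side of equation (6) is the integer residue of the wrapped data around a face. A minimal NumPy sketch, specialized to the elementary faces of a regular grid (the helper names and the random synthetic input are illustrative assumptions, not the paper's data), is:

```python
import numpy as np

def wrap(x):
    """Wrap values to (-pi, pi]."""
    return np.angle(np.exp(1j * x))

def face_residues(psi):
    """Integrate the per-edge constraints around each elementary face of a
    wrapped-phase image psi, in a fixed orientation, giving the integer
    right-hand side of eq. (6): the value the k_ij around that face must sum to."""
    d_rows = wrap(np.diff(psi, axis=0))   # delta along vertical edges
    d_cols = wrap(np.diff(psi, axis=1))   # delta along horizontal edges
    loop = d_cols[:-1, :] + d_rows[:, 1:] - d_cols[1:, :] - d_rows[:, :-1]
    return -np.rint(loop / (2.0 * np.pi)).astype(int)

# Usage on a small synthetic interferogram (assumed example):
rng = np.random.default_rng(0)
psi = wrap(rng.uniform(-np.pi, np.pi, size=(6, 6)))
print(face_residues(psi))   # nonzero entries mark faces with nonzero residue
```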

2.3 Redundant arcs construction as LP

The MCF formulation is applicable only to planar graphs. In the case of a general graph $G$, Costantini et al. (2010) showed that it is sufficient to choose a basis of the cycle space of $G$ and to sum the constraints of each cycle (in a uniformly chosen orientation) to transform the constraint matrix into a totally unimodular matrix. A numerically efficient way to construct this constraint space is by using a spanning tree. If $T$ is a spanning tree of $G$, the edges not in $T$ are called backedges, and each backedge, together with the tree path between its endpoints, forms a unique cycle in $G$ called a fundamental cycle.

This construction enables an optimal relaxation to the linear domain (for $m = 1$) through the total unimodularity property. Computationally efficient algorithms exist (Vanderbei, 2001) to solve the resulting LP, although they are slower than MCF. Our contribution in this work builds on this construction to develop an iterative technique that is computationally more effective.
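The spanning-tree construction of fundamental cycles can be sketched in a few lines with networkx; the toy grid graph with one added long-range (redundant) arc is an illustrative assumption, not one of the paper's datasets:

```python
import networkx as nx

def fundamental_cycles(graph):
    """Enumerate the fundamental cycles of a graph: pick a spanning tree and,
    for each backedge (u, v), close the unique tree path v -> u into a cycle.
    Together these cycles form a basis of the cycle space used by LPRA."""
    tree = nx.minimum_spanning_tree(graph)
    backedges = [e for e in graph.edges() if not tree.has_edge(*e)]
    cycles = []
    for u, v in backedges:
        path = nx.shortest_path(tree, v, u)   # unique path in the tree
        cycles.append(path + [v])             # close the loop with the backedge
    return cycles

# Usage on a small grid graph with one redundant long-range arc (assumed example).
G = nx.grid_2d_graph(3, 3)
G.add_edge((0, 0), (2, 2))
for cycle in fundamental_cycles(G):
    print(cycle)
```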

2.4 Our Approach

This section is organized as follows. A brief review of the Dual Decomposition technique is presented in section (2.5). In sections (2.6) and (2.7), we discuss our method together with the properties of the sub-problems required for runtime improvement and asymptotic optimality, respectively.

2.5 Dual Decomposition

We first describe the general framework to illustrate the methodology behind dual decomposition, and then describe how this technique leads to an efficient way to solve phase unwrapping on a non-planar graph. Consider the following problem

$\min_{x \in Q} \; f(x)$   (8)

where $Q$ is a closed convex set, $x$ is a vector, and $f$ is any function of $x$. Suppose the objective function is linearly separable (as in equation (3)), that is, $f(x) = \sum_{s=1}^{S} f^s(x)$. Then, the above problem can be transformed using an auxiliary public variable $z$ as

$\min_{\{x^s\}, z} \; \sum_{s=1}^{S} f^s(x^s) \quad \text{subject to} \quad x^s \in Q^s, \;\; x^s = A^s z, \quad s = 1, \dots, S$   (9)

where $Q^s$ is a closed convex set associated with sub-problem $s$. The vector $x^s$ corresponding to each sub-problem (indexed by $s$) is the local copy of the public vector $z$, and $A^s$ is a binary matrix that maps the public vector into the variables corresponding to sub-problem $s$. We can decouple the objective functions by relaxing the coupling constraints using Lagrangian multipliers ($\lambda^s$) to form the following dual function:

$g(\lambda) = \min_{z, \; x^s \in Q^s} \; \sum_{s=1}^{S} \left[ f^s(x^s) + (\lambda^s)^\top (x^s - A^s z) \right]$   (10)

Differentiating with respect to $z$ yields the condition $\sum_{s} (A^s)^\top \lambda^s = 0$. The dual function can now be evaluated independently for each sub-problem, given $\lambda^s$:

$g^s(\lambda^s) = \min_{x^s \in Q^s} \; f^s(x^s) + (\lambda^s)^\top x^s$   (11)

The dual of equation (8) then becomes

$\max_{\lambda} \; \sum_{s=1}^{S} g^s(\lambda^s) \quad \text{subject to} \quad \sum_{s=1}^{S} (A^s)^\top \lambda^s = 0$   (12)

Equation (12) is a convex optimization problem and can be solved with the projected subgradient method (Bertsekas, 2010). We refer to equation (12) as the dual decomposition master problem (DDMP). The projected subgradient algorithm solves the DDMP iteratively: at each iteration we first perform the decentralized optimization of the subproblems (equation (11)), followed by a subgradient update of the Lagrangian multipliers ($\lambda^s$) as follows:

$\lambda^s \leftarrow \lambda^s + \alpha_t \, \hat{x}^s$   (13)

Here, $\hat{x}^s$ is the subgradient of the objective function with respect to $\lambda^s$, and $\alpha_t$ denotes the step size along the direction of the positive subgradient. In our case, the subgradient of $g^s(\lambda^s)$ is simply $\hat{x}^s$, the optimal point of equation (11). The multipliers are then projected onto the constraint space of equation (12):

$\lambda \leftarrow P_{\Lambda}(\lambda), \qquad \Lambda = \Big\{ \lambda : \sum_{s=1}^{S} (A^s)^\top \lambda^s = 0 \Big\}$   (14)

where $P_{\Lambda}(\cdot)$ denotes the projection onto the set $\Lambda$. This process is repeated until convergence of the DDMP objective.
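A minimal numerical sketch of the projected-subgradient mechanics in equations (10)-(13), on a toy consensus problem with closed-form subproblems (the quadratic terms and the step size are illustrative assumptions, not the paper's problem):

```python
# Toy consensus problem: minimize (x-1)^2 + (x+1)^2 over x, split into two
# local copies x1, x2 coupled by x1 = x2 and a scalar multiplier lam.

def solve_sub1(lam):
    # argmin_x (x - 1)^2 + lam * x   (closed form)
    return 1.0 - lam / 2.0

def solve_sub2(lam):
    # argmin_x (x + 1)^2 - lam * x   (closed form)
    return -1.0 + lam / 2.0

lam, step = 0.0, 0.5
for _ in range(100):
    x1, x2 = solve_sub1(lam), solve_sub2(lam)  # decentralized subproblems, eq. (11)
    lam += step * (x1 - x2)                    # subgradient ascent on the dual, eq. (13)
    if abs(x1 - x2) < 1e-8:                    # copies agree -> consensus reached
        break

print(f"x1={x1:.6f}, x2={x2:.6f}, lam={lam:.6f}")  # both copies converge to 0
```

The dual variable drives the two local copies toward agreement; at convergence they coincide with the minimizer of the original coupled objective.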

2.6 Definitions

In this section, we define more formally the transformed phase unwrapping problem following the LPRA construction, and the corresponding Lagrangian relaxation for the phase unwrapping problem.

Without loss of generality, assume that the graph $G$ is connected. Let $C$ be a sequence of arcs forming a closed path, and let $\mathcal{C}$ be a set of independent closed paths that span the whole cycle space. LPRA (Costantini et al., 2010) then provides a numerically efficient way to solve the original phase unwrapping problem: for each $C \in \mathcal{C}$, the instances of equation (4) corresponding to the arcs belonging to $C$ are summed, which transforms the constraint space into the following set of equations:

$\sum_{(i,j) \in C} k_{ij} = -\frac{1}{2\pi} \sum_{(i,j) \in C} \delta_{ij}, \qquad \forall C \in \mathcal{C}$   (15)

Definition 1.

The transformed phase unwrapping problem on $G$ with cycle space $\mathcal{C}$ is the minimization of the objective (3) over integer flows $k_{ij} \in \mathbb{Z}$ subject to the constraints (15).

The linear minimization problem is obtained by linear relaxation of the integer flows, i.e., by allowing $k_{ij} \in \mathbb{R}$.

Definition 2.

With a slight abuse of exponent notation, let $\{G^s\}_{s=1}^{S}$ be a decomposition of the graph $G$, where $S$ is the cardinality of the set of subgraphs covering the graph, such that $\bigcup_{s} V^s = V$ and $\bigcup_{s} E^s = E$. Let $S(e)$ denote the set of all subgraphs that contain edge $e$. The constraint space defined on each subgraph $G^s$ is given by its cycle space $\mathcal{C}^s$ (as in equation (15)).

Assume that we introduce local flow variables $k^s_{ij}$ and a corresponding cost function $c^s_{ij}$ for each subproblem $s$, with the additional consistency constraints $k^s_{ij} = k^{s'}_{ij}$ for all $s, s' \in S(e)$ and $\sum_{s \in S(e)} c^s_{ij} = c_{ij}$. We can then define the Lagrangian relaxation of the transformed problem by introducing a vector of Lagrangian variables for every consistency constraint, exactly as in equations (10)-(12), with one multiplier per copy of each shared edge.

We can now state Theorem 1, which underlies the LPRA construction and appears as Theorem 3.4c in (Kavitha et al., 2009).

Theorem 1.

(Tight Relaxation): The constraint matrix defined on the cycle space $\mathcal{C}$ is totally unimodular. Thus, if either the integer problem or its linear relaxation has a finite optimal value, then so does the other, and their optimal values coincide.

Theorem 1 enables us to solve the original phase unwrapping problem on a non-planar graph as a linear program; in other words, the linear relaxation is tight. We now develop an iterative algorithm for phase unwrapping.
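As a small, self-contained illustration of why total unimodularity makes the relaxation tight, the LP below is a toy flow instance (not one of the paper's problems, and assuming SciPy is available): its node-arc incidence constraint matrix is totally unimodular, so the continuous solver returns an integral optimum.

```python
import numpy as np
from scipy.optimize import linprog

# Nodes 0..3, arcs (0,1), (0,2), (1,3), (2,3), (1,2); send 2 units from node 0
# to node 3 with unit capacities. The incidence matrix is totally unimodular.
arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
cost = np.array([1.0, 2.0, 1.0, 1.0, 1.0])
A_eq = np.zeros((4, len(arcs)))
for a, (u, v) in enumerate(arcs):
    A_eq[u, a] += 1.0   # flow leaving u
    A_eq[v, a] -= 1.0   # flow entering v
b_eq = np.array([2.0, 0.0, 0.0, -2.0])   # supply at node 0, demand at node 3

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(arcs))
print(res.x)   # integral (0/1) flows despite solving a continuous LP
```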

2.7 Phase Unwrapping via Dual Decomposition

In our approach, we exploit the strong duality property of linear programs to develop an iterative algorithm for the transformed phase unwrapping problem. Assume that the constraints of a linear program can be divided into "easy" and "hard" constraints; in the context of phase unwrapping, the constraints that break the planarity of the underlying graph are classified as "hard". We can then use Lagrangian relaxation to transfer the hard constraints into the objective function, so that the easy constraints define the feasible sets of the subproblems. This leads us to the following theorem:

Theorem 2.

(Cycle Decomposition): For any graph decomposition that covers $G$, the optimal value of the Lagrangian dual of the decomposed problem is a lower bound on the optimal value of the transformed phase unwrapping problem. Further, equality holds when the cycle spaces of the subgraphs together span the cycle space of $G$.

We refer the reader to Appendix A for a proof. Theorem 2 allows us to break the transformed problem into planar subproblems, thereby enabling us to exploit the underlying planarity structure while retaining asymptotic optimality. The asymptotic nature of the algorithm stems from the fact that the Lagrangian variables are iteratively updated through subgradient ascent (Rush and Collins, 2012).

In other words, to guarantee asymptotic optimality, we have to ensure that for each vertex pair belonging to an edge, there exists a path along a chosen spanning tree in at least one of the subgraphs. We show an example decomposition of a non-planar graph in figure 1, in which the chosen spanning tree is shown in red. This ensures that the decomposition works on the formulation of the original problem (Costantini et al., 2010), thus inheriting asymptotic optimality. In addition, the planarity of each subgraph enables each of them to be solved efficiently as an MCF.

Figure 1: An example of phase unwrapping on a non-planar graph. The nodes and the edges of the graph correspond to the measured wrapped phases and the constraints, respectively. The sub-graphs show an example decomposition of the non-planar graph; the red edges in the original graph and in each subgraph correspond to the chosen spanning tree.
1:  Initialize: dual variables λ satisfying the constraint of equation (12) (e.g., λ = 0)
2:  while stopping criterion not met do
        // Optimize each subproblem separately using MCF
3:      for s = 1 to S do
4:          solve subproblem s (equation (11)) as an MCF with the current multipliers λ^s
5:      end for
6:      // Update the dual variables
7:      for s = 1 to S do
8:          for each edge (i, j) of G^s shared with another subgraph do
9:              update λ^s_ij via the projected subgradient step of equations (13)-(14)
10:         end for
11:     end for
12: end while
Algorithm 1: Phase Unwrapping with Redundant Arcs via Dual Decomposition

The constraint on the cost function (in definition 2) can be satisfied by distributing the cost $c_{ij}$ uniformly among the subgraphs containing edge $(i, j)$. Algorithm 1 describes the pseudo-code for Dual Decomposition based phase unwrapping. It can easily be shown that Algorithm 1 corresponds to a projected subgradient ascent algorithm for solving the dual problem.

Algorithm 1 consists of two main steps:

  • Line 4: In the first step, the Lagrangian variables are fixed, and each subproblem (in definition 2) is optimized independently as an MCF.

  • Lines 8-9: In the second step, the dual variables are updated using subgradient ascent, so as to enforce the consistency constraints (as in definition 2) on the local flow variables belonging to each subproblem.

These steps are repeated until convergence. We use the relative change in the dual objective value as the convergence criterion for Algorithm 1, since it is expected to plateau near the dual optimum.
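To make the control flow of Algorithm 1 concrete, the following Python sketch mirrors its outer loop. The callable `solve_mcf_subproblem`, the subgraph/edge data structures, and the stopping rule (consistency violation instead of the paper's relative change of the dual objective) are illustrative assumptions rather than the authors' implementation.

```python
def unwrap_dual_decomposition(subgraphs, solve_mcf_subproblem,
                              step0=1.0, iters=200, tol=1e-3):
    """subgraphs: {subgraph id s: list of edges}.
    solve_mcf_subproblem(s, lam_s): hypothetical black-box MCF solve returning
    the optimal local flows {edge: k} of subproblem s for multipliers lam_s."""
    lam = {s: {e: 0.0 for e in edges} for s, edges in subgraphs.items()}   # line 1
    shared = {}                                        # edge -> subgraphs containing it
    for s, edges in subgraphs.items():
        for e in edges:
            shared.setdefault(e, []).append(s)

    for t in range(1, iters + 1):                                          # line 2
        flows = {s: solve_mcf_subproblem(s, lam[s]) for s in subgraphs}    # lines 3-5
        max_gap = 0.0
        for e, owners in shared.items():                                   # lines 7-11
            if len(owners) < 2:
                continue
            avg = sum(flows[s][e] for s in owners) / len(owners)
            for s in owners:
                lam[s][e] += (step0 / t) * (flows[s][e] - avg)   # projected subgradient step
            max_gap = max(max_gap, max(abs(flows[s][e] - avg) for s in owners))
        if max_gap < tol:                              # local copies agree on shared edges
            break
    return flows, lam
```

Subtracting the per-edge average before the update keeps the multipliers of each shared edge summing to zero, which is one standard way to realize the projection step of equation (14).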

3 Results and Discussions

Figure 14: The first column shows the wrapped phase images (A, B, C, and D) without noise used in all our experiments. The plots in the second and third columns show the running time (in seconds) comparison between PURE and LPRA for variable noise levels in the input. The value r corresponds to the redundant arcs {1, 2}; in all of the plots, the dotted and thick lines correspond to the running time with redundant arcs 1 and 2, respectively.

Methods Compared: We first evaluate the performance of our method (henceforth referred to as PURE) using two different MCF algorithms, namely cost-scaling (Goldberg and Kennedy, 1995) and the network simplex algorithm (Orlin, 1997). The cost-scaling algorithm is a primal-dual method that applies a successive approximation scheme by scaling the cost. The network simplex algorithm is a specialized version of the simplex algorithm for linear programming that exploits the network structure of the MCF problem and performs the basic operations directly on the graph representation. RelaxIV (Bertsekas, 1994), a dual ascent based algorithm, is another notable technique for solving MCF; however, RelaxIV is not directly applicable to PURE, since it assumes integral costs.

Our primary empirical result is the comparison of PURE against LPRA with variable redundant arcs. To make the evaluation implementation independent, we report the running time of each algorithm as the time taken by the respective solver. For completeness, we also present the quality improvements obtained by using redundant arcs, compared to LPRA (with redundant arc distance = 2).

Experimental Setup: We used the commercial solver Gurobi (version 6.5.1) to solve the LPRA formulation. For solving MCF, we used the cost-scaling code of Goldberg and Cherkassky, an efficient, authoritative implementation of the algorithm. In addition, we used the network simplex algorithm available in the MCFClass project (Frangioni and Manca, 2006), a common C++ interface for several MCF solver implementations.

The cost function (in definition 2) is used to direct the placement of phase cycle jumps using quality measures derived from the data itself. The cost function is a crucial parameter that directly influences the quality of the solution. For our experiments, we used the smooth cost function as defined in SNAPHU (Chen and Zebker, 2001). In all our experiments on PURE, we scaled the costs in the objective function to [0, 1] and used a constant learning rate ($\alpha_t$ in Algorithm 1) until the change in objective value fell below a threshold, and a decaying learning rate from there on, with a convergence threshold on the relative change of the dual objective.
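The constant-then-decaying step-size schedule described above can be encapsulated as follows; the decay form ($\alpha_0/\sqrt{n}$) and the placeholder parameters are assumptions, since the paper's exact values are not reproduced here:

```python
def make_step_schedule(alpha0, switch_tol):
    """Two-phase step-size schedule: keep a constant step alpha0 until the
    improvement of the dual objective drops below switch_tol, then switch to
    a decaying step alpha0 / sqrt(n). The decay form is an assumed choice."""
    state = {"decaying": False, "n": 0}

    def step(improvement):
        if not state["decaying"] and improvement < switch_tol:
            state["decaying"] = True
        if state["decaying"]:
            state["n"] += 1
            return alpha0 / (state["n"] ** 0.5)
        return alpha0

    return step

# Usage: step = make_step_schedule(alpha0=1.0, switch_tol=1e-2); alpha_t = step(delta_dual)
```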

Figure 17: (a) Running time (in seconds) comparison of PURE with the cost-scaling and network simplex MCF algorithms. (b) Comparison of percentage inconsistency with unit noise variance, using LPRA (with redundant arcs 2) as reference, on setX. The value r in the legend corresponds to the redundant arc length.

Datasets: The experiments are conducted on a set of four SAR interferograms. For each interferogram, we generated multiple instances at every noise level over a range of noise variances, which allows us to study the running time of the algorithm as the noise level varies. We conduct the same set of experiments at two different resolutions (henceforth referred to as setX and setY, respectively) to study the effect of the constraint set size on the running time. The number of constraints, which depends on the redundant arcs, reaches several million for setY, which can be quite challenging to solve even for a commercial solver, as we show later.

Main Results: In the first set of experiments, we conduct an empirical evaluation of the running time of PURE using the cost-scaling and network simplex algorithms. Figure 17(a) depicts the running time comparison as we vary the noise level for both redundant arcs {1, 2} on setX; this plot is representative of the typical performance on the setY instances. We observed that the cost-scaling algorithm was faster than the network simplex algorithm for all our instances on setX. We are not aware of a theoretical explanation of this performance difference at the time of writing. For all subsequent experiments, we use PURE with the cost-scaling algorithm for solving the MCF subproblems.

In the second set of experiments, we validate the scalability of PURE by comparing its running time with LPRA. For this experiment, we predefined the planar decomposition for each redundant arc level {1, 2} so that it satisfies the condition for asymptotic optimality in Theorem 2. First, we note that although all the problem instances in setX were solved by both algorithms, PURE was several orders of magnitude faster than LPRA. Second, LPRA failed to solve any of the instances in setY due to the large constraint set. To the best of our knowledge, PURE is the first algorithm to have solved constraint sets of this scale for phase unwrapping with redundant arcs.

Figure 18: Evolution of dual objective value for an instance of wrapped phase B in setX.

To further quantitatively evaluate the performance of the different phase unwrapping methods, we also present the comparison of the inconsistency measure in Figure 17(b), with LPRA (redundant arc distance = 2) as reference. The inconsistency measure is defined in terms of the percentage of reconstructed phase cycle counts that match the ground-truth phase cycles. As expected, LPRA and PURE show a significant improvement over the MCF based approach. In addition, PURE is comparable in performance to LPRA with significantly improved running time.
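For reference, the cycle-count agreement underlying this measure can be computed as below; the exact normalization used in the paper is an assumption here:

```python
import numpy as np

def cycle_match_percentage(k_est, k_true):
    """Percentage of samples whose reconstructed integer cycle count matches
    the ground truth (the complement of the inconsistency discussed above)."""
    k_est, k_true = np.asarray(k_est), np.asarray(k_true)
    return 100.0 * np.mean(k_est == k_true)
```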

4 Conclusion and Future Work

We proposed a scalable algorithm for phase unwrapping with redundant arcs and provided an empirical evaluation of its performance on simulated SAR interferograms. In the future, we aim to extend this work to the temporal dimension of the interferograms. The rate of convergence and its analysis (Rush and Collins, 2012) depend on the update schedule used by the algorithm; we would also like to analyze the rate of convergence under different variants of subgradient ascent.

Acknowledgments

We would like to thank Piyush Agram for introducing us to phase unwrapping and the redundant arcs technique, and for the numerous discussions that shaped the problem definition. In addition, the authors would also like to thank BalaSanjeevi and Arun Viswanathan for reviewing the manuscript. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. All rights reserved.

References

  • Allen [2008] C T Allen. Interferometric Synthetic Aperture Radar. IEEE Geoscience and Remote Sensing Society Newsletter, (96):6–13, 2008.
  • Bertsekas [1994] Dimitri P Bertsekas. RELAX-IV: A faster version of the RELAX code for solving minimum cost flow problems. pages 1–18, 1994.
  • Bertsekas [2010] Dimitri P Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. 2010:1–38, 2010.
  • Chen and Zebker [2001] Curtis W Chen and Howard A Zebker. Two-dimensional phase unwrapping with use of statistical models for cost functions in nonlinear optimization. 18(2):338–351, 2001.
  • Costantini [1998] M Costantini. A novel phase unwrapping method based on network programming. IEEE Transactions on Geoscience and Remote Sensing, 36(3):813–821, 1998.
  • Costantini et al. [2010] Mario Costantini, Fabio Malvarosa, and Federico Minati. A general formulation for robust and efficient integration of finite differences and phase unwrapping on sparse multidimensional domains. Proceedings of European Space Agency Fringe 2009 Workshop, 2009(3):758–768, 2010.
  • Devaney [1984] A. J. Devaney. Geophysical diffraction tomography. IEEE Transactions on Geoscience and Remote Sensing, GE-22(1):3–13, 1984.
  • Frangioni and Manca [2006] Antonio Frangioni and Antonio Manca. A Computational Study of Cost Reoptimization for Min-Cost Flow Problems. INFORMS Journal on Computing, 18(1):61–70, 2006.
  • Goldberg and Kennedy [1995] Andrew V. Goldberg and Robert Kennedy. An efficient cost scaling algorithm for the assignment problem. Mathematical Programming, 71(2):153–177, 1995.
  • Hooper and Zebker [2007] Andrew Hooper and Howard A. Zebker. Phase unwrapping in three dimensions with application to InSAR time series. Journal of the Optical Society of America, 24(9):2737–2747, 2007.
  • Kavitha et al. [2009] Telikepalli Kavitha, Christian Liebchen, Kurt Mehlhorn, Dimitrios Michail, Romeo Rizzi, Torsten Ueckerdt, and Katharina A. Zweig. Cycle bases in graphs characterization, algorithms, complexity, and applications. Computer Science Review, 3(4):199–243, 2009.
  • Kiraly and Kovacs [2012] Z Kiraly and P Kovacs. Efficient implementations of minimum-cost flow algorithms. Acta Universitatis Sapientiae, Informatica 4, 1:67–118, 2012.
  • Lan et al. [2008] T. Lan, D. Erdogmus, S. J. Hayflick, and J. U. Szumowski. Phase unwrapping and background correction in MRI. In 2008 IEEE Workshop on Machine Learning for Signal Processing, pages 239–243, 2008.
  • Meindl and Templ [2012] B Meindl and M Templ. Analysis of commercial and free and open source solvers for linear optimization problems. 2012.
  • Orlin [1997] James B. Orlin. A polynomial time primal network simplex algorithm for minimum cost flows. Mathematical Programming, 78(2):109–129, 1997.
  • Pandit et al. [1994] S. M. Pandit, N. Jordache, and G. A. Joshi. Data-dependent systems methodology for noise-insensitive phase unwrapping in laser interferometric surface characterization. J. Opt. Soc. Am. A, 11(10):2584–2592, Oct 1994.
  • Rosen et al. [2000] P. A. Rosen, S. Hensley, I. R. Joughin, F. K. Li, S. N. Madsen, E. Rodriguez, and R. M. Goldstein. Synthetic aperture radar interferometry. Proceedings of the IEEE, 88(3):333–382, March 2000.
  • Rush and Collins [2012] Alexander M Rush and Michael Collins. A tutorial on dual decomposition and Lagrangian relaxation for inference in natural language processing. 45:305–362, 2012.
  • Santos [2009] Dos Santos. Finding Approximate Solutions for Large Scale Linear Programs. 2009.
  • Shanker and Zebker [2010] A. Piyush Shanker and Howard A. Zebker. Edgelist phase unwrapping algorithm for time series InSAR analysis. Journal of the Optical Society of America. A, Optics, image science, and vision, 27(3):605–612, 2010.
  • Vanderbei [2001] Robert J. Vanderbei. Linear programming : foundations and extensions. International series in operations research & management science. Kluwer Academic, 2001.

Appendix

Appendix A Proofs for Theoretical Results

Theorem.

(Cycle Decomposition): For any graph decomposition that covers $G$, the optimal value of the Lagrangian dual of the decomposed problem is a lower bound on the optimal value of the transformed phase unwrapping problem. Further, equality holds when the cycle spaces of the subgraphs together span the cycle space of $G$.

Proof.

Let us define to facilitate the proof. From definition 2, which implies . Let denote the additional set of constraints in such that, . Then can be transformed to the following form:

We can then define as follows:

We note that follows from the fact that the constraint space of the former is contained in the constraint space of the latter. The equality strictly holds when , which implies . For ease of notation, we use to denote . Let us now consider the dual of , denoted dual-:

Upon further simplifying, we have

Using linear programs strong duality property, . Let be the optimal solution of . Then evaluating at , denoted , we have

Thus, . We can also easily show that . Thus , when and it follows from theorem 1, .

A variant of this proof was presented in Santos [2009]; however, to the best of our knowledge, this is the first extension to totally unimodular (TUM) constraint matrices.