# Convex combination of alternating projection and Douglas-Rachford operators for phase retrieval

We present the convergence analysis of a convex combination of the alternating projection and Douglas–Rachford operators for solving the phase retrieval problem. New convergence criteria for iterations generated by the algorithm are established by applying various schemes of numerical analysis and exploring both physical and mathematical characteristics of the phase retrieval problem. Numerical results demonstrate the advantages of the algorithm over other widely known projection methods in practically relevant simulations.


## 1 Introduction

Phase retrieval is an inverse problem of recovering the phase of a complex signal from its measured amplitude. It appears in various modifications in many scientific and engineering fields, including astronomical imaging DaiFie87 ; HarTho00 , X-ray crystallography Mil90 ; Har93 , microscopy Arr99 ; KimZhoGodPop16 and adaptive optics Mugnier2006 ; AntVer15 ; VisVer13 ; VisBruVer16 . An important application of phase retrieval in optics is to quantify the properties of an imaging system via its generalized pupil function Jan02 ; BraDirJan02 ; DoeThaVer18 ; PisGupSolVer18 . The fundamental advantage of this approach compared to those using intensity point spread functions (PSFs) or intensity optical transfer functions is that it is modifiable and automatically takes the specific characteristics of the imaging system under investigation into account. In adaptive optics, one needs to know the phase of the optical field in the system aperture to be able to compensate for an optical aberration, and phase retrieval is the basis for a wide class of focal-plane based wavefront sensors.

Since the fundamental work Say52 of Sayre in 1952, which revealed that the phase of a scattered wave can be recovered from the recorded images at and between the Bragg peaks of a diffracted wavefront, a wide variety of solution methods for phase retrieval has been proposed and developed. For an overview of phase retrieval algorithms, we refer the reader to the papers Fie13 ; SheEldCohChaMiaSeg15 ; Luk17 ; LukSabTeb19 . Direct methods usually require insights about the crystallographic structure to recover the missing phase Hau86 . Such structural information is not only costly in terms of computational complexity but also sensitive to noise and approximation, for example, due to physical limitation or model deviation. As a consequence, this approach lacks practicability and has become less popular in practice. The second class of solution algorithms relies on the fact that phase retrieval problems can be reformulated as linear equations with rank and positive semidefinite constraints in higher dimensional spaces. Well-known examples of this algorithm class are MaxCut GoeWil95 , PhaseCut WalAspMal15 and PhaseLift CanEldStrVor13 ; CanStrVor13 . This convex relaxation approach requires the matrix lifting step, which is computationally demanding and hence not suitable for large-scale problems. The most popular class of phase retrieval methods is based on projections and was pioneered by the work of Gerchberg and Saxton GerSax72 , which deals with phase retrieval given a single PSF image and the amplitude of the complex signal; the latter will in the sequel be referred to as the amplitude constraint in order to clearly differentiate it from the intensity constraints determined by data images. The need to deal with more and more phase retrieval models, for example, incorporating various types of a priori constraint Fie82 , being given multiple images and involving regularization schemes, has given rise to a wide range of solution methods in this class.
It was recently observed by Luke et al. LukSabTeb19 that this class of methods actually outperforms the other classes of phase retrieval algorithms.

In light of LevSta84 ; BauComLuk02 ; LukBurLyo02 , phase retrieval can be interpreted as a mathematical feasibility problem and, as a consequence, all algorithmic schemes for set feasibility can be adapted for phase retrieval. The current research is devoted to that topic. The alternating projection (AP) and the Douglas–Rachford (DR) algorithms are perhaps the most widely known solution methods for set feasibility and have served as a basis for a wide range of modifications and regularizations, see, for example, BauMou17 ; KruLukNgu18 . It has been observed that AP is stable, always convergent and to some extent able to suppress noise, but it may get stuck at undesired local minima and its convergence speed can be very slow Fie82 . In contrast, DR can be faster in convergence and better at escaping from bad local minima, but it is less robust against noise and model deviation Luk08 . As a result, this algorithm cannot be naively applied to practical problems, which intrinsically involve noise and model approximation. This fact has motivated a number of efficient relaxation schemes such as the Krasnoselski–Mann relaxation, Fienup's hybrid input-output (HIO) algorithm Fie82 , the relaxed averaged alternating reflections (RAAR) algorithm Luk05 ; Luk08 and the DRAP algorithm Tha18 .

In this paper, we analyze the DRAP algorithm for solving the phase retrieval problem for the first time, after having observed that it appears to be the most efficient algorithm for the problem setting under consideration, see Section 5. Interestingly, DRAP mathematically coincides with the convex combination of the AP and DR operators in the phase retrieval setting. As a result, DRAP admits two mathematically equivalent descriptions (see (19) and (20) in Section 3). The first one ensures that its computational complexity is comparable to that of each of the constituent operators, and thus it is used for numerical implementation. The second description, as a convex combination of the AP and the DR operators, exhibits a concrete connection to the fundamental projection algorithms and hence is intuitively better situated on the map of projection methods (see Remark 6).

The main contribution of this paper is the convergence analysis of the DRAP algorithm for solving the phase retrieval problem. First, using the analysis approach initiated by Chen and Fannjiang CheFan18 , we establish a convergence criterion for DRAP (Theorem 4.1), which extends the convergence result of the DR algorithm formulated in that paper. It is worth mentioning here that extending a convergence criterion for DR to a corresponding one for its relaxations such as the HIO, RAAR and DRAP algorithms is not trivial (for example, a similar criterion for RAAR was proved in LiZho17 , while the one for HIO remains unknown). Proposition 2 extends the applicable scope of this type of convergence results (including the criteria for the DR and RAAR algorithms formulated in CheFan18 and LiZho17 , respectively) to cover also phase retrieval problems with an amplitude constraint. Second, applying the analysis scheme developed by Luke et al. LukNguTam18 , we establish another convergence criterion for the DRAP algorithm (Theorem 4.2) by integrating the physical properties of the phase retrieval problem Luk08 into the earlier known results for DRAP Tha18 . Recall that the analysis of the latter article involves only abstract mathematical notions in the general setting of set feasibility. As a comparison, we make an attempt at connecting the two convergence criteria by linking their key mathematical assumptions to a single physical condition on the phase diversities, which are almost the only adjustable parameters of the phase retrieval problem (see Remark 15).

The paper is organized as follows. In the last part of this introductory section, we introduce the mathematical notation used in the paper. Section 2 is devoted to formulating the phase retrieval problem and addressing in detail the key steps towards its solution using projection algorithms. A discussion on projection methods for phase retrieval is presented in Section 3. In Section 4, convergence results of the DRAP algorithm are established using two different analysis approaches: 1) spectral analysis in Section 4.1 and 2) variational analysis in Section 4.2. Numerical simulation is presented in Section 5.

Mathematical notation. The underlying space in this paper is a finite dimensional Hilbert space denoted by $\mathcal{H}$. The element-wise multiplication is denoted by $\odot$. The element-wise division, absolute value, square and square root operations are also frequently used but without need for extra notation. $\mathrm{Re}$ and $\mathrm{Im}$ denote the real and the imaginary parts of a complex object in the brackets, respectively. The imaginary unit is $j$. $\mathrm{Id}$ denotes the identity mapping while $I_N$ denotes the identity matrix of size $N$. The distance to a set $\Omega \subset \mathcal{H}$ is defined by

$$\mathrm{dist}(\cdot,\Omega) : \mathcal{H} \to \mathbb{R}_+ : x \mapsto \inf_{w\in\Omega} \|x - w\|$$

and the set-valued mapping

$$P_\Omega : \mathcal{H} \rightrightarrows \Omega : x \mapsto \left\{ w \in \Omega \mid \|x - w\| = \mathrm{dist}(x,\Omega) \right\} \tag{1}$$

is the projector on $\Omega$. A selection $w \in P_\Omega(x)$ is called a projection of $x$ on $\Omega$. The reflection operator associated with $\Omega$ is accordingly defined by $R_\Omega := 2P_\Omega - \mathrm{Id}$. Given a subset $\Omega \subset \mathcal{H}$, the Fréchet and limiting normal cones to $\Omega$ at a point $\hat{x} \in \Omega$ are defined, respectively, as follows:

$$\hat{N}_\Omega(\hat{x}) := \left\{ v \in \mathcal{H} \;\middle|\; \limsup_{x \overset{\Omega}{\to} \hat{x},\, x \neq \hat{x}} \frac{\langle v, x - \hat{x} \rangle}{\|x - \hat{x}\|} \le 0 \right\},$$

$$N_\Omega(\hat{x}) := \mathop{\mathrm{Limsup}}_{x \overset{\Omega}{\to} \hat{x}} \hat{N}_\Omega(x) := \left\{ v = \lim_{k\to\infty} v^k \;\middle|\; v^k \in \hat{N}_\Omega(x^k),\ x^k \overset{\Omega}{\to} \hat{x} \right\},$$

where $x \overset{\Omega}{\to} \hat{x}$ means that $x \to \hat{x}$ and $x \in \Omega$. The set of fixed points of an operator $T$ is defined by $\mathrm{Fix}\, T := \{x \in \mathcal{H} \mid x \in T(x)\}$. Our other basic notation is standard; cf. Mor06.1 ; VA . $\mathbb{B}_\delta(x)$ stands for the open ball with radius $\delta > 0$ and center $x$. For a linear subspace $V$ of $\mathcal{H}$,

$$V^\perp := \left\{ u \in \mathcal{H} \mid \langle u, v \rangle = 0 \text{ for all } v \in V \right\}$$

is the orthogonal complement subspace of $V$.

## 2 Problem formulation

### 2.1 Phase retrieval

Phase diversities and the Fourier transform are key ingredients of the phase retrieval problem studied in this paper. Recall that adding a phase diversity to the phase of a complex signal is a unitary transform and the (discrete) Fourier transform is also a unitary operator. Since unitary transforms are one-to-one represented as unitary matrices, the phase retrieval problem can be formulated in the form of matrix-vector multiplication as follows. For an unknown complex object $x \in \mathbb{C}^n$, let $M \in \mathbb{C}^{N \times n}$ be the propagation matrix, which is normalized to be isometric, and $r \in \mathbb{R}_+^N$ be the measured data of $|Mx|^2$. The phase retrieval problem is to find an (approximate) solution to the equation:

$$r = |Mx|^2 + w, \quad x \in \mathbb{C}^n, \tag{2}$$

where $w \in \mathbb{R}^N$ represents unknown noise (the dimension $n$ corresponds to the pixel totality of one image).

###### Remark 1

To formulate the phase retrieval problem in the matrix-vector-multiplication form (2) or any feasibility model in Section 2.2, we need to vectorize all array objects in a consistent manner and rewrite all linear mappings as matrix multiplication operations in higher dimensional spaces, see, for example, (DoeThaVer18, section 2A). This one-to-one conversion allows us to do the theoretical analysis in the simple matrix-vector-multiplication formulation without loss of generality.

In this paper, we study the phase retrieval setting with several phase diversities, and the propagation matrix takes the following form:

$$M = \frac{1}{\sqrt{m}} \begin{pmatrix} F D_1 \\ F D_2 \\ \vdots \\ F D_m \end{pmatrix} \in \mathbb{C}^{N \times n}, \tag{3}$$

where $m$ is the number of data images, $F \in \mathbb{C}^{n \times n}$ is the unitary matrix representing the discrete Fourier transform, and $D_d \in \mathbb{C}^{n \times n}$ $(1 \le d \le m)$ are the unitary matrices representing the phase diversities, which will be denoted by $\phi_d$ in the sequel. Note that $N = mn$.
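As a quick numerical illustration, the propagation matrix (3) and the forward model (2) can be sketched in a few lines of NumPy. The sizes, the random phase diversities and all variable names below are our own illustrative choices, not quantities from the paper.

```python
import numpy as np

# Illustrative sketch of the propagation matrix (3) and the noiseless
# forward model (2); sizes and diversities are arbitrary choices.
rng = np.random.default_rng(0)
n, m = 8, 3                                       # pixels per image, number of images
F = np.fft.fft(np.eye(n), norm="ortho")           # unitary DFT matrix
D = [np.diag(np.exp(1j * rng.uniform(0, 2*np.pi, n))) for _ in range(m)]
M = np.vstack([F @ Dd for Dd in D]) / np.sqrt(m)  # stacked blocks of (3); N = m*n rows

# M is isometric by construction: M^* M = I_n
assert np.allclose(M.conj().T @ M, np.eye(n))

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
r = np.abs(M @ x)**2                              # noiseless data, w = 0 in (2)
```

Since $M$ is isometric, the data energy equals the signal energy, $\sum_i r_i = \|x\|^2$, which is a cheap sanity check for any implementation of (3).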

###### Remark 2 (phase modulators versus out-of-focus measurements)

There are two widely used techniques of acquiring the PSF images for phase-diversity phase retrieval. First, a phase modulator is used for introducing phase diversities in the pupil plane, corresponding to which the images are measured in the focal plane. Second, the images are registered in out-of-focus planes along the optical axis (i.e., parallel to the focal plane at some known distances) without the use of a phase modulator. It is well known that the two techniques are mathematically equivalent Goodman05 .

When a priori knowledge of the solutions is available, that is, $x \in \chi$ for some known subset $\chi \subset \mathbb{C}^n$, one can expect more accurate phase retrieval. The formulation (2) is naturally modified as follows:

$$r = |Mx|^2 + w, \quad x \in \chi. \tag{4}$$

Following the background developed in GerSax72 ; Fie82 ; BauComLuk02 ; LukBurLyo02 , we are going to address the problem (4) using projection algorithms. The main steps for this solution process will be detailed next.

### 2.2 Feasibility models

Several feasibility models of phase retrieval have been formulated in either the physical domain (the unknown variable is the signal in the pupil plane) LukBurLyo02 ; LevSta84 or the Fourier domain (the unknown variable relates to the signal in the pupil plane via the Fourier transform) CheFan18 . Viewing the Fourier transform and phase-diversity addition as unitary transforms, we clarify the relationship between various feasibility models of the phase retrieval problem.

In the physical domain, for each $d$ $(1 \le d \le m)$, let us denote by $r_d$ the measurement of the $d$th PSF image. Define the intensity constraint sets as follows BauComLuk02 ; LukBurLyo02 :

$$\Omega_d := \left\{ x \in \mathbb{C}^n \mid (1/m)\,|F D_d(x)|^2 = r_d \right\} \quad (1 \le d \le m). \tag{5}$$

Then, the problem (4) can be approached via the following feasibility problem involving multiple sets:

$$\text{find } x \in \bigcap_{d=0}^{m} \Omega_d, \tag{6}$$

where $\Omega_0 := \chi$ captures a priori knowledge of the solutions.

###### Remark 3 (nonconvex feasibility)

All the problem models appearing in this paper are nonconvex due to the nonconvexity of the intensity constraints defined in (5).

When addressing the phase retrieval problem with noise and model deviation, an appropriate averaging process is essential for suppressing noise. For this, we consider the following feasibility model in the product space:

$$\text{find } u \in D \cap \Omega, \tag{7}$$

where

$$D := \left\{ (x, x, \dots, x) \in \mathbb{C}^{nm} \mid x \in \chi \right\} \quad\text{and}\quad \Omega := \Omega_1 \times \Omega_2 \times \dots \times \Omega_m. \tag{8}$$

The equivalence between (6) and (7) in the general setting of set feasibility finds its root in Pie84 . Without a priori constraint, i.e., $\chi = \mathbb{C}^n$, the set $D$ is the ($n$-dimensional subspace) diagonal of the product space $\mathbb{C}^{nm}$. The counterpart of (7) in the Fourier domain is as follows:

$$\text{find } y \in A \cap B, \tag{9}$$

where

$$A := M(\chi) \quad\text{and}\quad B := \left\{ y \in \mathbb{C}^N \mid |y|^2 = r \right\}. \tag{10}$$

The 2-set feasibility models (7) and (9) allow us to adapt various algorithmic schemes including flexible relaxation and regularization for the phase retrieval problem.

The relationships between models (6), (7) and (9) in the noiseless setting are as follows.

###### Proposition 1 (equivalences of feasibility models)

Let $x \in \mathbb{C}^n$ and $y = Mx$. The following statements are equivalent:

1. $x$ is a solution to (6);

2. $(x, x, \dots, x)$ is a solution to (7);

3. $y$ is a solution to (9).

###### Proof

The equivalence between 1 and 2 is widely known Pie84 , while the equivalence between 1 and 3 follows from the isometry property of the matrix $M$ given in (3), that is, $M^* M = I_n$. ∎

###### Remark 4 (inconsistent feasibility)

In practical circumstances, for example, due to the presence of noise and model deviation, the intersection in (6), (7) and (9) is likely to be empty. There are natural interpretations of inconsistent feasibility in terms of minimization involving indicator and distance functions. For example, let us interpret the AP method for solving the (possibly inconsistent) feasibility (9) in terms of classical algorithms for minimization. The worrisome issue regarding the emptiness of the intersection would be eased when one associates (9) with the following minimization problem:

$$\min_{y \in B} f(y) := \frac{1}{2}\,\mathrm{dist}^2(y, A). \tag{11}$$

In view of Proposition 2 (which is proved later in Section 4.1), the set $A$ defined in (10) can be assumed to be convex, and hence the objective function $f$ in (11) is differentiable with the gradient given by $\nabla f(y) = y - P_A(y)$ for every point $y$ PolRocThi00 . Then, alternating projection for solving (9) is precisely the projected gradient method for solving (11).

### 2.3 Projectors

The decisive step of solving the feasibility problem (9) by projection algorithms is to calculate the two projectors on the sets $A$ and $B$ defined in (10). Since $B$ is geometrically the product of a number of circles of the complex plane, an explicit form of the projector $P_B$, which is in general a set-valued mapping, is available BauComLuk02 ; LukBurLyo02 :

$$P_B(y) = \sqrt{r} \odot \frac{y}{|y|}, \quad \forall y \in \mathbb{C}^N, \tag{12}$$

with the convention that $y_i/|y_i| := \mathbb{S}$ whenever $y_i = 0$, where $\mathbb{S}$ denotes the complex unit circle and the subscript $i$ indicates the $i$th entry of the object. In numerical computation, the (single-valued) selection of $P_B$ corresponding to $y_i/|y_i| := 1$ whenever $y_i = 0$ is sufficient.
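The single-valued selection of (12) used in numerical computation can be sketched as follows; the function name is ours, and the zero-entry convention (phase set to 1) is the selection just described.

```python
import numpy as np

# Minimal numerical sketch of the projector (12): pointwise magnitude
# projection onto {|y|^2 = r}. Zero entries get phase 1 (the
# single-valued selection used for computation).
def P_B(y, r):
    mag = np.abs(y)
    phase = np.ones_like(y)        # convention y_i/|y_i| := 1 at y_i = 0
    nz = mag > 0
    phase[nz] = y[nz] / mag[nz]
    return np.sqrt(r) * phase
```

By construction the output satisfies the intensity constraint exactly: $|P_B(y)|^2 = r$ entrywise.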

###### Remark 5 (projector on regularized sets)

In view of Remark 4, the set $A$ can have no common point with the set $B$. For ways of handling such a feasibility gap, one can think of regularizing or approximating the set $B$. For example, Luke Luk12 proposed to enlarge the set $B$ to

$$B_\varepsilon := \left\{ y \in \mathbb{C}^N \mid \mathrm{dist}_\phi(y, b) \le \varepsilon,\ \forall b \in B \right\},$$

where $\varepsilon \ge 0$ can be viewed as the radius of enlargement and $\mathrm{dist}_\phi$ is the Bregman distance, associated with a strictly convex function $\phi$ which is differentiable on the interior of its domain, given by

$$\mathrm{dist}_\phi(y, z) := \phi(|y|) - \phi(|z|) - \left\langle \nabla\phi(|y|),\ |y| - |z| \right\rangle, \quad \forall y, z \in \mathbb{C}^N.$$

The function $\phi$ should be chosen in accordance with the statistical model of the noise $w$ in (4). More specifically, let us consider the Gaussian and Poisson models of noise, which are perhaps the most relevant to phase retrieval. The Bregman distance associated with the half energy kernel $\phi = \frac{1}{2}\|\cdot\|^2$ is simply the Euclidean norm, and it is appropriate for Gaussian noise. Let us define the function $\phi$ by

$$\phi(v) := \sum_{i=1}^{N} f(v_i), \quad \forall v \in \mathbb{R}^N, \quad\text{where}\quad f(t) := \begin{cases} t\log t - t & \text{if } t > 0, \\ 0 & \text{if } t = 0, \\ \infty & \text{if } t < 0. \end{cases} \tag{13}$$

The Bregman distance associated with the function $\phi$ given by (13) is the Kullback–Leibler divergence, and it is appropriate for Poisson noise. The projector on the regularized set $B_\varepsilon$ can be viewed as an approximation of the projector on $B$, and hence it can be used in the framework of projection methods. The cyclic projection algorithm using approximate projectors of this type has been analyzed by Luke Luk12 , and in fact his idea can also be extended to other projection methods. However, since the projector on a regularized set is often much more complicated to compute than the one on the original set, we can instead treat the latter one as an approximation of the former one (LukSabTeb19, page 22). This insight about approximate projectors for inconsistent feasibility allows us to simply use the formula (12) for both analytical and numerical purposes without any worrisome issue.
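For concreteness, the scalar kernel $f$ from (13) and the resulting Kullback–Leibler divergence between amplitude vectors can be sketched as below. The function names are ours, and since argument-order conventions for Bregman distances vary across references, this is only one plausible reading of the Poisson-matched distance.

```python
import numpy as np

# Sketch of the kernel f from (13) and the Kullback-Leibler divergence
# it induces between nonnegative vectors (with 0*log 0 := 0). Names and
# the argument-order convention are our own illustrative choices.
def f(t):
    if t < 0:
        return np.inf
    if t == 0:
        return 0.0
    return t * np.log(t) - t

def kl_div(a, b):
    # sum_i a_i log(a_i/b_i) - a_i + b_i, assuming b > 0 wherever a > 0
    a, b = np.asarray(a, float), np.asarray(b, float)
    t = np.where(a > 0, a * np.log(np.where(a > 0, a, 1.0) / b), 0.0)
    return float(np.sum(t - a + b))
```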

The projector on the set $A$ can also be explicitly described (note that convexity of $\chi$ is not required in Lemma 1). We make use of the following notation:

$$[\chi]_m := \left\{ [x]_m \mid x \in \chi \right\} \quad\text{where}\quad [x]_m := \underbrace{(x, x, \dots, x)}_{m\ \text{times}}.$$
###### Lemma 1

For the propagation matrix $M$ given in (3), it holds that

$$P_A(y) = M P_\chi\left(M^* y\right), \quad \forall y \in \mathbb{C}^N. \tag{14}$$
###### Proof

Let us first define the unitary matrix $U$ based on the matrix $M$ as follows:

$$U := \begin{pmatrix} F D_1 & 0 & \cdots & 0 \\ 0 & F D_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & F D_m \end{pmatrix} \in \mathbb{C}^{N\times N}. \tag{15}$$

This block diagonal matrix is unitary since all of its constituent blocks are so. By the structure of $M$ and $U$, we have that

$$A = M(\chi) = \frac{1}{\sqrt{m}}\, U\left([\chi]_m\right).$$

Since $U$ is unitary, it holds that

$$P_A(y) = P_{\frac{1}{\sqrt{m}} U([\chi]_m)}(y) = P_{U\left(\frac{1}{\sqrt{m}}[\chi]_m\right)}(y) = U\left( P_{\frac{1}{\sqrt{m}}[\chi]_m}\left(U^* y\right) \right). \tag{16}$$

Since $[\mathbb{C}^n]_m$ is a subspace containing $\frac{1}{\sqrt{m}}[\chi]_m$, by the properties of the metric projection, we have that

$$P_{\frac{1}{\sqrt{m}}[\chi]_m} = P_{\frac{1}{\sqrt{m}}[\chi]_m} \circ P_{[\mathbb{C}^n]_m}. \tag{17}$$

We next calculate $P_{[\mathbb{C}^n]_m}(U^* y)$. Note that $U^*$ is also a block diagonal matrix whose blocks are the conjugate transposes of the corresponding blocks of $U$. Let us denote by $c_d$ $(1 \le d \le m)$ the column vector whose entries taken from $y$ correspond to the block $F D_d$ of $U$. We have that

$$U^* y = \begin{pmatrix} (F D_1)^* c_1 \\ (F D_2)^* c_2 \\ \vdots \\ (F D_m)^* c_m \end{pmatrix}.$$

Since $[\mathbb{C}^n]_m$ is the $n$-dimensional diagonal of the product space $\mathbb{C}^{nm}$, we obtain by solving the minimization problem (1) that

$$P_{[\mathbb{C}^n]_m}(U^* y) = \frac{1}{m}\left[ \sum_{k=1}^{m} (F D_k)^* c_k \right]_m = \frac{1}{\sqrt{m}}\left[ M^* y \right]_m. \tag{18}$$

Plugging (18) and (17) into (16) yields that

$$P_A(y) = U\left( P_{\frac{1}{\sqrt{m}}[\chi]_m}\left( \frac{1}{\sqrt{m}}\left[M^* y\right]_m \right) \right) = U\left( \frac{1}{\sqrt{m}}\, P_{[\chi]_m}\left( \left[M^* y\right]_m \right) \right) = \frac{1}{\sqrt{m}}\, U\left( \left[P_\chi(M^* y)\right]_m \right) = M P_\chi\left(M^* y\right).$$

The proof is complete. ∎

The formula (14) shows that the complexity of $P_A$ heavily depends on that of $P_\chi$.
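A minimal sketch of (14) in the unconstrained case $\chi = \mathbb{C}^n$, where $P_\chi = \mathrm{Id}$ and (14) reduces to $P_A = MM^*$; the setup (sizes, seed, diversities) is an illustrative choice of ours.

```python
import numpy as np

# Sketch of the projector (14) with chi = C^n, so P_chi = Id and
# P_A(y) = M M^* y. Sizes and diversities are illustrative.
rng = np.random.default_rng(1)
n, m = 6, 2
F = np.fft.fft(np.eye(n), norm="ortho")
D = [np.diag(np.exp(1j * rng.uniform(0, 2*np.pi, n))) for _ in range(m)]
M = np.vstack([F @ Dd for Dd in D]) / np.sqrt(m)

def P_A(y):
    return M @ (M.conj().T @ y)    # M P_chi(M^* y) with P_chi = Id

y = rng.standard_normal(m*n) + 1j * rng.standard_normal(m*n)
p = P_A(y)
assert np.allclose(P_A(p), p)      # a projector is idempotent
```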

## 3 Projection algorithms

Projection algorithms for phase retrieval can be considered as descendants of the well known Gerchberg–Saxton (GS) algorithm GerSax72 , which deals with phase retrieval given the amplitude constraint and a single PSF image. Their introduction has been motivated by the rapidly growing application of phase retrieval originating from a wide variety of physical settings. For example, the famous input-output, output-output and hybrid input-output algorithms Fie82 arose when dealing with the support and the real and nonnegative constraints instead of the amplitude constraint as in the GS method. Extensions for solving problems given multiple images and for obtaining better restoration have been among the main objectives of this class of phase retrieval algorithms. In light of the groundwork BauComLuk02 , in Section 2.2 we have interpreted the phase retrieval problem (4) as a feasibility problem in one of the equivalent forms (6), (7) and (9). Having calculated the projectors $P_A$ and $P_B$ in Section 2.3, we are now ready to discuss algorithmic schemes for the solutions. From now on, we analyze the feasibility model (9).

AP and DR are perhaps the most widely known solution methods for feasibility and have been the basis for a wide variety of modification and regularization schemes. We refer the reader to, for example, KruLukNgu18 ; BauMou17 for an overview of these basic methods in the setting of set feasibility. For an early discussion in the context of phase retrieval, we refer the reader to the surveys LukBurLyo02 ; BauComLuk02 . It has been observed that AP is stable, always convergent and to some extent able to suppress noise, but the convergence speed can be very slow Fie82 . In contrast, DR can be fast in convergence, but sensitive to noise and model deviation Luk08 . Indeed, only relaxations of DR can be used for problems in the presence of noise and model mismatch.

The use of the Krasnoselski–Mann relaxation is perhaps the most widely known. Mathematically, it is the convex combination of the DR operator and the identity mapping:

$$T_{\mathrm{KMDR}} := \beta T_{\mathrm{DR}} + (1-\beta)\,\mathrm{Id},$$

where $\beta$ is the relaxation parameter. Fienup's hybrid input-output (HIO) method Fie82 can be viewed as a relaxation of DR:

$$T_{\mathrm{HIO}} := P_A\left((1+\beta)P_B - \mathrm{Id}\right) - \left(\beta P_B - \mathrm{Id}\right),$$

where $\beta$ is the relaxation parameter. Another relaxation of DR known as the relaxed averaged alternating reflections (RAAR) algorithm was proposed and analyzed in Luk05 ; Luk08 for phase retrieval. It is the convex combination of the DR operator and one of the projectors:

$$T_{\mathrm{RAAR}} := \beta T_{\mathrm{DR}} + (1-\beta) P_B,$$

where $\beta$ is the relaxation parameter. Inexact versions of RAAR were also proposed and analyzed in Luk08 . The DRAP algorithm Tha18 is another relaxation of DR:

$$T_{\mathrm{DRAP}} := P_A\left((1+\lambda)P_B - \lambda\,\mathrm{Id}\right) - \lambda\left(P_B - \mathrm{Id}\right), \tag{19}$$

where $\lambda$ is the relaxation parameter (relaxation parameter zero is not allowed for KMDR, HIO and RAAR).

Interestingly, in the phase retrieval setting (9), $T_{\mathrm{DRAP}}$ coincides with the convex combination of the AP and DR operators provided that $\chi$ is an affine set. The latter condition implies that the set $A$ given by (10) is affine. Hence, the projector $P_A$ is linear and we obtain the following expression:

$$\begin{aligned} T_{\mathrm{DRAP}} &= P_A\left((1-\lambda)P_B + \lambda(2P_B - \mathrm{Id})\right) - \lambda\left(P_B - \mathrm{Id}\right) \\ &= \lambda\left(\mathrm{Id} + P_A(2P_B - \mathrm{Id}) - P_B\right) + (1-\lambda)\, P_A P_B \\ &= \lambda T_{\mathrm{DR}} + (1-\lambda)\, T_{\mathrm{AP}}, \end{aligned} \tag{20}$$

where $T_{\mathrm{AP}} := P_A P_B$ is the AP operator.

###### Remark 6

The two expressions (19) and (20) each play their own role in explaining interesting features of DRAP (they do differ in general settings). On the one hand, only two projections are required for computing an iteration of (19) ($P_B$ once and $P_A$ once) compared to three projections for (20) ($P_B$ once and $P_A$ twice). This means that the computational complexity of DRAP is at the same level as that of the other projection methods if (19) is used in numerical implementation. On the other hand, the expression (20) as a convex combination of $T_{\mathrm{AP}}$ and $T_{\mathrm{DR}}$ explains better the idea leading to the introduction of DRAP as a relaxation of DR compared to the less intuitive form (19).

Plugging the two projectors (12) and (14) into (19), we come up with the following explicit form of DRAP for addressing the feasibility problem (9):

$$y^+ \in T_{\mathrm{DRAP}}(y) = M P_\chi\!\left( M^*\!\left( (1+\lambda)\sqrt{r}\odot\frac{y}{|y|} - \lambda y \right) \right) - \lambda\left( \sqrt{r}\odot\frac{y}{|y|} - y \right), \tag{21}$$

where $y$ and $y^+$ stand for two consecutive iterates of DRAP. In the case $\chi = \mathbb{C}^n$, (21) further reduces to

$$T_{\mathrm{DRAP}}(y) = \lambda\left(I_N - MM^*\right)(y) + \left((1+\lambda)MM^* - \lambda I_N\right)\left(\sqrt{r}\odot\frac{y}{|y|}\right). \tag{22}$$

In the remainder of this paper, we analyze the DRAP algorithm in the phase retrieval setting (9) and demonstrate its advantages over the other algorithms.

## 4 Convergence analysis

In this section, we study convergence properties of DRAP using two different analysis schemes. Since the problem (9) is nonconvex, we can only obtain local convergence criteria, though it is observed from numerical results that the quality of phase retrieval is not affected by the starting point of the algorithm.

### 4.1 A result from spectral analysis

The analysis in this section is based on the observation that the projector $P_A$ given by (14) is linear, and the projector $P_B$ given by (12) also has a good first order approximation around any solution of (9). We follow the analysis approach initiated by Chen and Fannjiang CheFan18 , where they established a local linear convergence result for the DR algorithm. The mentioned result of CheFan18 was later extended to the RAAR algorithm in LiZho17 . We will show that DRAP also enjoys that kind of convergence result (similar results for the HIO algorithm are unknown). In this section, we assume that the lowest intensity of the images is strictly positive:

$$\min_{1\le i\le N} r_i > 0. \tag{23}$$
###### Remark 7

When the phase diversities are assumed to be continuous random variables, condition (23) is satisfied almost surely CheFan18 .

We first analyze DRAP in the form (22) for solving (9) with $\chi = \mathbb{C}^n$. Let us denote

$$Y := \mathrm{diag}\left( \frac{\hat{y}}{|\hat{y}|} \right) \in \mathbb{C}^{N\times N}, \qquad L := Y^* M \in \mathbb{C}^{N\times n},$$

where $\hat{y}$ is a solution to (9) and $\mathrm{diag}(\cdot)$ denotes the diagonal matrix with elements on its diagonal taken from the vector in the brackets. Since $\hat{y}$ vanishes nowhere by (23) (recall that the square amplitude is taken element-wise), every $y$ sufficiently close to $\hat{y}$ also vanishes nowhere. In particular, for a fixed vector $v \in \mathbb{C}^N$, the vector $\hat{y} + \varepsilon v$ vanishes nowhere provided that $\varepsilon > 0$ is sufficiently small. The next lemma establishes the first order approximation of $T_{\mathrm{DRAP}}$ as a complex vector valued function around $\hat{y}$ in a given direction.

###### Lemma 2 (first order approximation of TDRAP)

For a vector $v \in \mathbb{C}^N$ and a sufficiently small number $\varepsilon > 0$, we have

$$T_{\mathrm{DRAP}}(\hat{y} + \varepsilon v) - T_{\mathrm{DRAP}}(\hat{y}) = \varepsilon\, Y \nabla(\mu) + o(\varepsilon), \tag{24}$$

where $\mu := Y^* v$ and $\nabla(\mu) := \lambda\left(I_N - LL^*\right)\mu + j\left((1+\lambda)LL^* - \lambda I_N\right)\mathrm{Im}(\mu)$.

###### Proof

Let us first denote

$$w_\varepsilon := \frac{\hat{y} + \varepsilon v}{|\hat{y} + \varepsilon v|} \quad\text{and}\quad Y_\varepsilon := \mathrm{diag}(w_\varepsilon).$$

In view of (22), we have that

$$\begin{aligned} T_{\mathrm{DRAP}}(\hat{y}) &= \hat{y} = \left((1+\lambda)MM^* - \lambda I_N\right) Y \sqrt{r}, \\ T_{\mathrm{DRAP}}(\hat{y} + \varepsilon v) &= \lambda\left(I_N - MM^*\right)(\hat{y} + \varepsilon v) + \left((1+\lambda)MM^* - \lambda I_N\right) Y_\varepsilon \sqrt{r} \\ &= \varepsilon\lambda\left(I_N - MM^*\right)(v) + \left((1+\lambda)MM^* - \lambda I_N\right) Y_\varepsilon \sqrt{r}. \end{aligned}$$

Then

$$T_{\mathrm{DRAP}}(\hat{y} + \varepsilon v) - T_{\mathrm{DRAP}}(\hat{y}) = \varepsilon\lambda\left(I_N - MM^*\right)v + \left((1+\lambda)MM^* - \lambda I_N\right)\left(Y_\varepsilon - Y\right)\sqrt{r}. \tag{25}$$

The following formula for the first order approximation of $(Y_\varepsilon - Y)\sqrt{r}$ can be calculated directly:

$$(Y_\varepsilon - Y)\sqrt{r} = \varepsilon\, j\, Y\, \mathrm{Im}(Y^* v) + o(\varepsilon). \tag{26}$$

Substituting (26) into (25) yields

$$\begin{aligned} T_{\mathrm{DRAP}}(\hat{y} + \varepsilon v) - T_{\mathrm{DRAP}}(\hat{y}) &= \varepsilon\lambda\left(I_N - MM^*\right)v + \varepsilon j\left((1+\lambda)MM^* - \lambda I_N\right) Y\, \mathrm{Im}(Y^* v) + o(\varepsilon) \\ &= \varepsilon\lambda\, Y\left(I_N - LL^*\right)\mu + \varepsilon j\, Y\left((1+\lambda)LL^* - \lambda I_N\right)\mathrm{Im}(\mu) + o(\varepsilon). \end{aligned}$$

The proof is complete. ∎

The next step is to analyze the spectrum of the real decomposition $\mathcal{L}$ of the complex matrix $L$ defined as follows:

$$\mathcal{L} := \begin{pmatrix} \mathrm{Re}(L) & -\mathrm{Im}(L) \end{pmatrix} \in \mathbb{R}^{N\times 2n}.$$

Note that $L$ is isometric since $M$ is so. Define also the real decomposition of a complex vector by

$$G(x) := \begin{pmatrix} \mathrm{Re}(x) \\ \mathrm{Im}(x) \end{pmatrix} \in \mathbb{R}^{2n}, \quad \forall x \in \mathbb{C}^n.$$

Let $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_{2n}$ be the singular values of $\mathcal{L}$ with the corresponding right singular vectors $v_k$ and left singular vectors $u_k$ $(1 \le k \le 2n)$. We have by the definition of the singular value decomposition (SVD) that

$$\mathrm{Re}\left(L\, G^{-1}(v_k)\right) = \mathcal{L} v_k = \sigma_k u_k, \qquad \sigma_k G^{-1}(v_k) = G^{-1}(\sigma_k v_k) = G^{-1}\left(\mathcal{L}^T u_k\right) = \mathrm{Re}(L)^T u_k - j\,\mathrm{Im}(L)^T u_k = L^* u_k.$$

The next technical result regarding the spectrum of $\mathcal{L}$ is crucial.

###### Lemma 3

(CheFan18, Proposition 5.6) There holds that $\sigma_1 = 1$, $v_1 = G(\hat{x})/\|\hat{x}\|$, $u_1 = |\hat{y}|/\|\hat{x}\|$, and $\sigma_{2n} = 0$ with $v_{2n} = G(j\hat{x})/\|\hat{x}\|$.

Thanks to Lemma 3 and the definition of the SVD, one has the following expression of the second largest singular value of $\mathcal{L}$:

$$\begin{aligned} \sigma_2 &= \max\left\{ \left\|\mathcal{L}^T u\right\| : u \in \mathbb{R}^N,\ u \perp u_1,\ \|u\| = 1 \right\} \\ &= \max\left\{ \|\mathcal{L} v\| : v \in \mathbb{R}^{2n},\ v \perp v_1,\ \|v\| = 1 \right\} \\ &= \max\left\{ \|\mathrm{Im}(Lx)\| : x \in \mathbb{C}^n,\ x \perp j\hat{x},\ \|x\| = 1 \right\}. \end{aligned} \tag{27}$$
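The quantity $\sigma_2$ in (27) is directly computable via an SVD of the real decomposition $[\mathrm{Re}(L), -\mathrm{Im}(L)]$. The sketch below builds a small random instance (all sizes, the seed and the diversities are our illustrative choices) and checks the boundary facts of Lemma 3 ($\sigma_1 = 1$ and $\sigma_{2n} = 0$) numerically.

```python
import numpy as np

# Numerical sketch of (27): sigma_2 is the second largest singular value
# of the real decomposition of L = Y^* M. Setup is illustrative.
rng = np.random.default_rng(3)
n, m = 6, 3
F = np.fft.fft(np.eye(n), norm="ortho")
D = [np.diag(np.exp(1j * rng.uniform(0, 2*np.pi, n))) for _ in range(m)]
M = np.vstack([F @ Dd for Dd in D]) / np.sqrt(m)

x_hat = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y_hat = M @ x_hat                                  # a (synthetic) solution of (9)
Y = np.diag(y_hat / np.abs(y_hat))
L = Y.conj().T @ M
Lreal = np.hstack([L.real, -L.imag])               # real decomposition in R^{N x 2n}
sigma = np.linalg.svd(Lreal, compute_uv=False)     # sigma[1] plays the role of sigma_2
```

Indeed, $\mathcal{L}G(\hat{x}) = \mathrm{Re}(L\hat{x}) = |\hat{y}|$ gives the top singular value $1$, and $\mathcal{L}G(j\hat{x}) = \mathrm{Re}(j|\hat{y}|) = 0$ gives the zero singular value.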

The following theorem establishes linear convergence of the DRAP algorithm for solving (9). Since phase retrieval is ambiguous (at least) up to a global phase shift (that is, the first element of the orthogonal basis of Zernike polynomials), the following distance between two complex vectors is of interest:

$$\mathrm{dist_{opt}}(x, u) := \min_{\alpha\in\mathbb{C},\ |\alpha|=1} \|\alpha x - u\|, \quad \forall x, u \in \mathcal{H}. \tag{28}$$
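The distance (28) admits a closed-form evaluation, since the minimizing unimodular $\alpha$ is $x^* u / |x^* u|$ (the same optimal phase used later in (30)); when $x^* u = 0$, every unimodular $\alpha$ is optimal and we take $\alpha = 1$ by convention. A sketch, with the function name ours:

```python
import numpy as np

# Sketch of dist_opt from (28) via the closed-form optimal phase
# alpha = x^* u / |x^* u| (cf. (30)); alpha := 1 when x^* u = 0.
def dist_opt(x, u):
    c = np.vdot(x, u)                  # x^* u (conjugates the first argument)
    alpha = c / abs(c) if abs(c) > 0 else 1.0
    return np.linalg.norm(alpha * x - u)
```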
###### Theorem 4.1 (linear convergence of DRAP)

In the setting of (9) with $\chi = \mathbb{C}^n$, suppose that

$$\sigma_2 := \max\left\{ \|\mathrm{Im}(Lx)\| : x \in \mathbb{C}^n,\ \|x\| = 1,\ x \perp j\hat{x} \right\} < 1. \tag{29}$$

Let $(y^{(k)})$ be a sequence generated by $T_{\mathrm{DRAP}}$ in the form of (22), with $x^{(k)} := M^* y^{(k)}$ for some starting point $y^{(0)}$. If $y^{(0)}$ is sufficiently close to $\hat{y}$, then there exists a number $c \in (\sigma_2, 1)$ such that

$$\mathrm{dist_{opt}}\left(x^{(k)}, \hat{x}\right) \le c^k\, \mathrm{dist_{opt}}\left(x^{(0)}, \hat{x}\right) \quad (\forall k \in \mathbb{N}),$$

where $\hat{x} := M^* \hat{y}$.

###### Proof

First, the optimal global phase shift defined by (28) is given by LiZho17 :

$$\alpha^{(k)} = \mathop{\mathrm{argmin}}_{\alpha} \left\{ \left\|\alpha x^{(k)} - \hat{x}\right\| : |\alpha| = 1,\ \alpha \in \mathbb{C} \right\} = \frac{x^{(k)*}\hat{x}}{\left| x^{(k)*}\hat{x} \right|} = \frac{y^{(k)*}\hat{y}}{\left| y^{(k)*}\hat{y} \right|}. \tag{30}$$

Let us denote $\eta^{(k)} := Y^*\left(\alpha^{(k)} y^{(k)} - \hat{y}\right)$. Thanks to Lemma 2, we have that

$$\begin{aligned} Y^*\left(\alpha^{(k)} y^{(k+1)} - \hat{y}\right) &= Y^*\left(\alpha^{(k)} T_{\mathrm{DRAP}}(y^{(k)}) - T_{\mathrm{DRAP}}(\hat{y})\right) \\ &= Y^*\left(T_{\mathrm{DRAP}}(\alpha^{(k)} y^{(k)}) - T_{\mathrm{DRAP}}(\hat{y})\right) \\ &= \nabla(\eta^{(k)}) + o(\|\eta^{(k)}\|). \end{aligned}$$

Multiplying both sides of the above equality by $L^*$ and taking the isometry property of $L$ into account, we obtain that

$$\begin{aligned} \alpha^{(k)} x^{(k+1)} - \hat{x} &= L^* Y^*\left(\alpha^{(k)} y^{(k+1)} - \hat{y}\right) = L^* \nabla(\eta^{(k)}) + o(\|\eta^{(k)}\|) \\ &= \lambda L^*\left(I_N - LL^*\right)\eta^{(k)} + j\, L^*\left((1+\lambda)LL^* - \lambda I_N\right)\mathrm{Im}(\eta^{(k)}) + o(\|\eta^{(k)}\|) \\ &= j\, L^*\,\mathrm{Im}(\eta^{(k)}) + o(\|\eta^{(k)}\|), \end{aligned} \tag{31}$$

where the last equality uses $L^* L = I_n$.

Due to (30) and the fact that $Y^* \hat{y} = |\hat{y}|$, we have

$$\left\langle \eta^{(k)}, j|\hat{y}| \right\rangle = \left\langle Y^* \alpha^{(k)} y^{(k)},\ j|\hat{y}| \right\rangle - \left\langle |\hat{y}|,\ j|\hat{y}| \right\rangle = -j\,\alpha^{(k)}\,\overline{y^{(k)*}\hat{y}} + j\,\|\hat{y}\|^2 = -j\left( \left|y^{(k)*}\hat{y}\right| - \|\hat{y}\|^2 \right).$$

In other words, $\langle \eta^{(k)}, j|\hat{y}| \rangle$ is purely imaginary. By basic properties of the Hermitian inner product, one has $\langle \mathrm{Im}(\eta^{(k)}), |\hat{y}| \rangle = \mathrm{Re}\langle \eta^{(k)}, j|\hat{y}| \rangle$. As a result, $\mathrm{Im}(\eta^{(k)}) \perp |\hat{y}|$. Taking Lemma 3 into account, we have just shown that $\mathrm{Im}(\eta^{(k)})$ is orthogonal to $|\hat{y}|$, which is (up to normalization) the first left singular vector $u_1$ of $\mathcal{L}$. This together with the expression (27) of $\sigma_2$ implies that

$$\left\| \mathcal{L}^T \mathrm{Im}(\eta^{(k)}) \right\| \le \sigma_2 \left\| \mathrm{Im}(\eta^{(k)}) \right\|. \tag{32}$$

Combining (28), (31) and (32) yields that

$$\begin{aligned} \mathrm{dist_{opt}}\left(x^{(k+1)}, \hat{x}\right) &= \min_{\alpha\in\mathbb{C},\,|\alpha|=1} \left\| \alpha x^{(k+1)} - \hat{x} \right\| \le \left\| \alpha^{(k)} x^{(k+1)} - \hat{x} \right\| \\ &= \left\| L^*\,\mathrm{Im}(\eta^{(k)}) \right\| + o(\|\eta^{(k)}\|) = \left\| \mathcal{L}^T\,\mathrm{Im}(\eta^{(k)}) \right\| + o(\|\eta^{(k)}\|) \\ &\le \sigma_2 \left\| \mathrm{Im}(\eta^{(k)}) \right\| + o(\|\eta^{(k)}\|) \le \sigma_2 \left\| \eta^{(k)} \right\| + o(\|\eta^{(k)}\|). \end{aligned} \tag{33}$$

Since $\sigma_2 < 1$ by assumption (29), there exists a number $c \in (\sigma_2, 1)$ such that for all $k$ with $\|\eta^{(k)}\|$ sufficiently small, it holds that

$$\sigma_2 \left\| \eta^{(k)} \right\| + o(\|\eta^{(k)}\|) \le c \left\| \eta^{(k)} \right\|. \tag{34}$$

Combining (33), (34) and the definition of $\eta^{(k)}$ yields

$$\mathrm{dist_{opt}}\left(x^{(k+1)}, \hat{x}\right) \le c \left\| \eta^{(k)} \right\| = c\, \mathrm{dist_{opt}}\left(x^{(k)}, \hat{x}\right), \quad (k = 1, 2, \dots).$$

The proof is complete. ∎

###### Remark 8

In view of (CheFan18, Proposition 6.2), the assumption (29) of Theorem 4.1 is satisfied almost surely.

###### Remark 9 (region of convergence)

Since the algorithm operates in the underlying space $\mathbb{C}^N$, for the sake of brevity, let us speak of a region around $\hat{y}$ instead of $\hat{x}$. In view of Theorem 4.1, such a convergence region, if it exists, is mutually dependent on the constant $c$. More specifically, given a number $c \in (\sigma_2, 1)$, it is the region in which the first order approximation (24) of $T_{\mathrm{DRAP}}$ around $\hat{y}$ is valid and condition (34) is satisfied for all $k$. Note that the latter involves not only $c$ and $\sigma_2$ but also the sequence itself. The intersection of the regions over all possible sequences complying with the theorem can be taken as the region of convergence. Obviously, such a statement is not informative and hence it has never been an objective of local convergence analysis.

###### Remark 10 (influence of λ on convergence)

In view of Theorem 4.1, the relaxation parameter $\lambda$ obviously has influence on the region in which the first order approximation of $T_{\mathrm{DRAP}}$ (Lemma 2) is valid and condition (34) is satisfied; however, its influence on the convergence speed of DRAP is unclear (of course, the influence is clearly observed in numerical computation).

We have analyzed the DRAP algorithm in the phase retrieval setting (9) with $\chi = \mathbb{C}^n$. The latter condition limits the effectiveness of Theorem 4.1 to phase retrieval without a priori constraint. In the remainder of this section, we will show that the convergence criterion can also be applicable to phase retrieval problems with an amplitude constraint, which is helpful prior information and often available in practice (this is because the light distribution in the pupil plane is often known; for example, it can be uniform or a truncated Gaussian).

The amplitude constraint is described by

$$\chi = \left\{ x \in \mathbb{C}^n \mid |x| = a \right\}, \tag{35}$$

where $a \in \mathbb{R}^n_+$ is the known amplitude of the complex signal. The next result shows that the problem (9) with an amplitude constraint can equivalently be reformulated as a problem without a priori constraint in a higher dimensional space.

###### Proposition 2

The problem (9) with the amplitude constraint (35) can equivalently be reformulated as:

$$\text{find } y \in A \cap B, \tag{36}$$

where