 # Efficient Superimposition Recovering Algorithm

In this article, we address the problem of recovering latent transparent layers from superimposed images. Here we assume we have the estimated transformations and the extracted gradients of the latent layers. To rapidly recover high-quality image layers, we propose an Efficient Superimposition Recovering Algorithm (ESRA) by extending the framework of the accelerated gradient method. In addition, a key building block (in each iteration) of our proposed method is the calculation of the proximal operator. Here we propose to employ a dual approach and present our Parallel Algorithm with Constrained Total Variation (PACTV) method. Our recovering method not only reconstructs high-quality layers without the color-bias problem, but also theoretically guarantees good convergence performance.


## 1 Efficient Superimposition Recovering Algorithm

With estimated transformation parameters $\{f_i\}_{i=1}^m$, we align the transmitted layers by warping the mixtures with $f_i^{-1}$. Then our mixing model is rewritten as:

$$I_i(f_i^{-1}(x)) = a_{i1}L_t(x) + a_{i2}L_r^{(i)}(f_i^{-1}(x)), \quad i = 1, \cdots, m. \tag{1}$$

Here $L_t$ is the latent transmitted layer, $L_r^{(i)}$ is the reflected layer in the $i$-th mixture, and $a_{i1}, a_{i2}$ are the mixing coefficients. With this new mixing model, the influence of the parametric transformations can be ignored in the intermediate recovering process. For simplicity, we use $I_i(x)$ to represent $I_i(f_i^{-1}(x))$; $L_1(x)$ and $L_{i+1}(x)$ denote $L_t(x)$ and $a_{i2}L_r^{(i)}(f_i^{-1}(x))$, respectively. Let $E_i$ stand for the extracted gradients from $L_i$. To recover high-quality latent image layers, we propose to employ an $\ell_1$ penalty on the extracted gradients and nonnegative constraints on the layers' intensities, along with the loss of the mixing model. Thus our recovering objective function is written as:
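As a concrete illustration, the aligned mixing model can be simulated in a few lines of numpy (a sketch with made-up layers; the coefficients `a1`, `a2` and the layer contents are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent layers (grayscale, intensities in [0, 1]).
h, w = 64, 64
L_t = rng.random((h, w))   # transmitted layer L_t
L_r = rng.random((h, w))   # reflected layer L_r^(i), already warped into alignment

# Assumed mixing coefficients a_i1, a_i2 for one mixture i.
a1, a2 = 0.7, 0.3

# Aligned mixing model (1): I_i(x) = a_i1 * L_t(x) + a_i2 * L_r^(i)(x).
I = a1 * L_t + a2 * L_r
```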

$$\min_{0 \le l_{vec} \le 1} F(l_{vec}) = \lambda \sum_{x} \sum_{i=1}^{m+1} |\nabla L_i(x) - E_i(x)| + \sum_{x} \sum_{i=1}^{m} \frac{1}{2}\big(I_i(x) - a_{i1}L_1(x) - L_{i+1}(x)\big)^2, \tag{2}$$

where $l_{vec}$ is a large vector containing all pixel values in all latent layers. The first term enforces the agreement between the reconstructed layer gradients and the extracted layer gradients, while the second term tends to satisfy our mixing model. Since the extracted gradients are nonzero at very few coordinates, the $\ell_1$-norm term not only prefers layers with sparse gradients but also avoids over-smoothed results. $\lambda$ is a trade-off coefficient.
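For concreteness, (2) can be evaluated with forward differences standing in for $\nabla$ (a numpy sketch; the function name, argument layout, and the per-layer gradient pairs `E[i] = (E1_i, E2_i)` are our own conventions):

```python
import numpy as np

def objective_F(layers, mixtures, a, E, lam):
    """Evaluate the recovering objective (2).

    layers   : [L_1, ..., L_{m+1}], each an (h, w) array with values in [0, 1]
    mixtures : [I_1, ..., I_m], the aligned input mixtures
    a        : mixing coefficients a_i1, shape (m,)
    E        : extracted layer gradients, one (vertical, horizontal) pair per layer
    lam      : trade-off coefficient lambda
    """
    grad_term = 0.0
    for Li, (E1, E2) in zip(layers, E):
        # |grad L_i - E_i| with forward differences in both directions
        grad_term += np.abs(Li[1:, :] - Li[:-1, :] - E1).sum()
        grad_term += np.abs(Li[:, 1:] - Li[:, :-1] - E2).sum()
    # 0.5 * (I_i - a_i1 * L_1 - L_{i+1})^2 summed over pixels and mixtures
    data_term = sum(0.5 * ((Ii - ai * layers[0] - Lip) ** 2).sum()
                    for ai, Lip, Ii in zip(a, layers[1:], mixtures))
    return lam * grad_term + data_term
```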

To solve the nonsmooth convex optimization model (2) efficiently, we denote

$$f(l_{vec}) = \sum_{x} \sum_{i=1}^{m} \frac{1}{2}\big(I_i(x) - a_{i1}L_1(x) - L_{i+1}(x)\big)^2, \ \ \text{s.t. } 0 \le l_{vec} \le 1, \tag{3}$$

$$g(l_{vec}) = \lambda \sum_{x} \sum_{i=1}^{m+1} |\nabla L_i(x) - E_i(x)|.$$

Here $g(l_{vec})$ is the $\ell_1$ penalty on the extracted gradients, and $f(l_{vec})$ corresponds to the loss and nonnegative constraints. $f(l_{vec})$ can be formulated in the following matrix form:

$$f(l_{vec}) = \frac{1}{2}\|Al_{vec} - b\|^2, \ \ \text{s.t. } 0 \le l_{vec} \le 1, \quad \text{where } A = \begin{bmatrix} a_{11}I & I & & \\ \vdots & & \ddots & \\ a_{m1}I & & & I \end{bmatrix}, \ \ b = \begin{bmatrix} \mathrm{vec}(I_1) \\ \vdots \\ \mathrm{vec}(I_m) \end{bmatrix}, \tag{4}$$

where $f$ is continuously differentiable with $\nabla f(l_{vec}) = A^\top(Al_{vec} - b)$, of which the Lipschitz constant is $L_s = \lambda_{\max}(A^\top A)$, and $I$ is the unit matrix. We note that the objective function in (2) is a composite function of a differentiable term $f$ and a non-differentiable term $g$. Denote
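Since $A$ is sparse and block-structured, $f$ and $\nabla f$ can be evaluated layer-wise without ever materializing $A$ (a numpy sketch with our own naming):

```python
import numpy as np

def f_and_grad(layers, mixtures, a):
    """Evaluate f = 0.5 * sum_i ||a_i1 * L_1 + L_{i+1} - I_i||^2 and its
    gradient with respect to each layer, exploiting the block structure of A.

    layers   : [L_1, L_2, ..., L_{m+1}] of (h, w) arrays
    mixtures : [I_1, ..., I_m] of (h, w) arrays
    a        : mixing coefficients a_i1, shape (m,)
    """
    L1, rest = layers[0], layers[1:]
    grads = [np.zeros_like(L) for L in layers]
    loss = 0.0
    for i, (Li, Ii) in enumerate(zip(rest, mixtures)):
        r = a[i] * L1 + Li - Ii      # residual of mixture i
        loss += 0.5 * np.sum(r ** 2)
        grads[0] += a[i] * r         # df/dL_1 accumulates over all mixtures
        grads[i + 1] = r             # df/dL_{i+1} sees only mixture i
    return loss, grads
```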

$$P_{L_s, l_{vec}^{k-1}}(l_{vec}) = f(l_{vec}^{k-1}) + \langle \nabla f(l_{vec}^{k-1}), l_{vec} - l_{vec}^{k-1} \rangle + \frac{L_s}{2}\|l_{vec} - l_{vec}^{k-1}\|^2, \tag{5}$$

which is the first-order Taylor expansion of $f$ at $l_{vec}^{k-1}$, with the squared Euclidean distance between $l_{vec}$ and $l_{vec}^{k-1}$ as the regularization term. The traditional gradient descent algorithm obtains the solution at the $k$-th iteration by $l_{vec}^k = l_{vec}^{k-1} - \frac{1}{L_s}\nabla f(l_{vec}^{k-1})$, with a proper parameter $L_s$ (no smaller than the Lipschitz constant of $\nabla f$). Here we propose to employ accelerated gradient descent [1, 2] to solve the reconstruction problem, named the Efficient Superimposition Recovering Algorithm (ESRA). We generate a solution at the $k$-th iteration by computing the following proximal operator:

$$l_{vec}^k = \arg\min_{0 \le l_{vec} \le 1} P_{L_s, Y^k}(l_{vec}) + g(l_{vec}), \tag{6}$$

where $Y^1 = l_{vec}^0$ and $Y^k = l_{vec}^{k-1} + \frac{t_{k-1}-1}{t_k}(l_{vec}^{k-1} - l_{vec}^{k-2})$ for $k \ge 2$. We note that $Y^k$ is a linear combination of $l_{vec}^{k-1}$ and $l_{vec}^{k-2}$. The combination coefficient $\frac{t_{k-1}-1}{t_k}$ plays an important role in the convergence of the algorithm. As suggested by [1, 2], we set $t_1 = 1$ and $t_k = \frac{1 + \sqrt{1 + 4t_{k-1}^2}}{2}$ for $k \ge 2$. According to the theoretical analysis in [1, 2], this accelerated gradient descent method gets within $O(L_s/k^2)$ of the optimal objective value after $k$ steps. Since solving problem (6) is still very challenging, we propose a Parallel Algorithm with Constrained Total Variation (PACTV) method to find the optimal solution, which is presented in the sequel.
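The ESRA iteration above can be sketched as a generic accelerated proximal-gradient loop (our illustrative sketch, not the authors' code: `grad_f` and `prox` are abstract callables, and in the paper `prox` would be the PACTV subproblem):

```python
import numpy as np

def esra_accelerated(grad_f, prox, x0, L_s, iters=100):
    """Accelerated proximal-gradient skeleton in the spirit of ESRA.

    grad_f : gradient of the smooth term f
    prox   : solves argmin_x g(x) + (L_s / 2) * ||x - v||^2 over the feasible
             set (in the paper, the constrained-TV subproblem (6)/(10))
    x0     : initial point (the paper initializes with zeros)
    L_s    : stepsize parameter, no smaller than the Lipschitz constant of grad_f
    """
    x_prev = x0.copy()
    y = x0.copy()
    t_prev = 1.0                                     # t_1 = 1
    for _ in range(iters):
        x = prox(y - grad_f(y) / L_s)                # gradient step, then prox
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)  # momentum combination Y^k
        x_prev, t_prev = x, t
    return x_prev
```

For instance, with `grad_f = lambda y: y - b` and `prox = lambda v: np.clip(v, 0.0, 1.0)`, the loop reduces to projected accelerated gradient descent on $\frac{1}{2}\|x-b\|^2$ over $[0,1]$.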

## 2 PACTV via dual approach

Given problem (6), we observe that it can be solved in a block-separable way. If we denote $d = Y^k - \frac{1}{L_s}\nabla f(Y^k)$, we can split $d$ into $m+1$ separable parts $\{d_i\}$, one per layer. Then, by employing the definition of $g$ in (3), we transform (6) into the following form:

$$l_{vec}^k = \arg\min_{0 \le l_{vec} \le 1} \left\{ \sum_{i=1}^{m+1} \sum_{x} \Big( \lambda |\nabla L_i(x) - E_i(x)| + \frac{L_s}{2}\|L_i(x) - d_i(x)\|^2 \Big) \right\}. \tag{10}$$

As illustrated in (10), finding $l_{vec}^k$ amounts to solving the following $m+1$ separable problems with constrained total variation in parallel:

$$\min_{0 \le L \le 1} \sum_{x} \Big( \frac{1}{2}\|L(x) - d(x)\|^2 + \beta|\nabla L(x) - E(x)| \Big). \tag{11}$$

Here $L$, $d$, and $E$ represent $L_i$, $d_i$, and $E_i$, respectively, and $\beta = \lambda / L_s$. Similar to the image denoising problem [4, 3], we propose a dual approach to solve (11), and first give some notation in order:

• $\mathcal{P}$ is the set of matrix-pairs $(p, q)$, where $p \in \mathbb{R}^{(h-1)\times w}$ and $q \in \mathbb{R}^{h\times(w-1)}$, that satisfy $|p_{i,j}| \le 1$ and $|q_{i,j}| \le 1$. And we assume $p_{0,j} = p_{h,j} = 0$ and $q_{i,0} = q_{i,w} = 0$, for every $i, j$.

• The linear operation $\mathcal{L}: \mathbb{R}^{(h-1)\times w} \times \mathbb{R}^{h\times(w-1)} \to \mathbb{R}^{h\times w}$ is defined by the formula $\mathcal{L}(p,q)_{i,j} = p_{i-1,j} + q_{i,j-1} - p_{i,j} - q_{i,j}$.

• The operator $\mathcal{L}^\top$, which is adjoint to $\mathcal{L}$, is given by $\mathcal{L}^\top(L) = (p, q)$, where $p_{i,j} = L_{i+1,j} - L_{i,j}$ and $q_{i,j} = L_{i,j+1} - L_{i,j}$.

• $P_C$ is the orthogonal projection operator onto the convex closed set $C = \{L : 0 \le L_{i,j} \le 1\}$.
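These operators have direct numpy realizations under the zero-padding conventions above (a sketch; the adjoint identity $\langle\mathcal{L}(p,q), X\rangle = \langle (p,q), \mathcal{L}^\top X\rangle$ can be verified numerically on random inputs):

```python
import numpy as np

def L_op(p, q):
    """Linear operator L(p, q) -> h x w image; p is (h-1) x w, q is h x (w-1).
    Out-of-range entries p_{0,j}, p_{h,j}, q_{i,0}, q_{i,w} are taken as zero."""
    h, w = p.shape[0] + 1, p.shape[1]
    out = np.zeros((h, w))
    out[1:, :] += p    # + p_{i-1,j}
    out[:-1, :] -= p   # - p_{i,j}
    out[:, 1:] += q    # + q_{i,j-1}
    out[:, :-1] -= q   # - q_{i,j}
    return out

def L_adj(X):
    """Adjoint operator L^T(X): forward differences along rows and columns."""
    return X[1:, :] - X[:-1, :], X[:, 1:] - X[:, :-1]

def proj_C(X):
    """Orthogonal projection P_C onto the box C = [0, 1]^{h x w}."""
    return np.clip(X, 0.0, 1.0)
```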

Equipped with this notation, we derive a dual problem of (11) and give the following proposition to state the relation between the primal and dual optimal solutions.

###### Proposition 1.

Let $(p^*, q^*)$ be the optimal solution of the problem

$$\min_{(p,q)\in\mathcal{P}} \Big\{ H(p,q) \equiv \frac{1}{2}\big( -\|H_C(d - \beta\mathcal{L}(p,q))\|^2 + \|d - \beta\mathcal{L}(p,q)\|^2 \big) + \beta\big[\mathrm{Tr}(p^\top E^1) + \mathrm{Tr}(q^\top E^2)\big] \Big\}, \tag{12}$$

where $H_C(L) = L - P_C(L)$ for every $L$. Then the optimal solution of (11) is given by $L^* = P_C(d - \beta\mathcal{L}(p^*, q^*))$.

###### Proof.

First note the following relation holds true:

$$|x| = \max_{p}\{px : |p| \le 1\}. \tag{13}$$

Hence, we can write

$$\sum_{x} |\nabla L(x) - E(x)| = \max_{(p,q)\in\mathcal{P}} T(L, p, q), \tag{14}$$

where,

$$\begin{aligned} T(L,p,q) = {} & \sum_{i=1}^{h-1}\sum_{j=1}^{w-1}\big[ p_{i,j}(L_{i+1,j} - L_{i,j} - E^1_{i,j}) + q_{i,j}(L_{i,j+1} - L_{i,j} - E^2_{i,j}) \big] \\ & + \sum_{i=1}^{h-1} p_{i,w}(L_{i+1,w} - L_{i,w} - E^1_{i,w}) + \sum_{j=1}^{w-1} q_{h,j}(L_{h,j+1} - L_{h,j} - E^2_{h,j}). \end{aligned} \tag{15}$$

With this notation we have

$$T(L,p,q) = \mathrm{Tr}\big(\mathcal{L}(p,q)^\top L\big) - \mathrm{Tr}(p^\top E^1) - \mathrm{Tr}(q^\top E^2). \tag{16}$$

Thus the original problem (11) becomes

$$\min_{0 \le L \le 1} \max_{(p,q)\in\mathcal{P}} \Big\{ \frac{1}{2}\|L - d\|^2 + \beta\big[\mathrm{Tr}(\mathcal{L}(p,q)^\top L) - \mathrm{Tr}(p^\top E^1) - \mathrm{Tr}(q^\top E^2)\big] \Big\}. \tag{17}$$

Since the objective function is convex in $L$ and concave in $(p,q)$, we can exchange the order of the minimum and maximum and get

$$\max_{(p,q)\in\mathcal{P}} \min_{0 \le L \le 1} \Big\{ \frac{1}{2}\|L - d\|^2 + \beta\big[\mathrm{Tr}(\mathcal{L}(p,q)^\top L) - \mathrm{Tr}(p^\top E^1) - \mathrm{Tr}(q^\top E^2)\big] \Big\}, \tag{18}$$

which can be written as

$$\max_{(p,q)\in\mathcal{P}} \min_{0 \le L \le 1} \Big\{ \frac{1}{2}\big[\|L - (d - \beta\mathcal{L}(p,q))\|^2 - \|d - \beta\mathcal{L}(p,q)\|^2 + \|d\|^2\big] - \beta\big[\mathrm{Tr}(p^\top E^1) + \mathrm{Tr}(q^\top E^2)\big] \Big\}. \tag{19}$$

Thus the optimal solution of the inner minimization problem is

$$L = P_{\{0 \le L \le 1\}}\big(d - \beta\mathcal{L}(p,q)\big). \tag{20}$$

Finally, plugging the above expression for $L$ back into (19) and ignoring the constant term $\|d\|^2$, we obtain the dual problem

$$\min_{(p,q)\in\mathcal{P}} \Big\{ H(p,q) \equiv \frac{1}{2}\big( -\|H_C(d - \beta\mathcal{L}(p,q))\|^2 + \|d - \beta\mathcal{L}(p,q)\|^2 \big) + \beta\big[\mathrm{Tr}(p^\top E^1) + \mathrm{Tr}(q^\top E^2)\big] \Big\},$$

which is the same as (12). ∎

Moreover, given (12), we can easily obtain the following lemma.

###### Lemma 1.

The objective function of (12) is continuously differentiable, and its gradient is given by

$$\nabla H(p,q) = -\beta\mathcal{L}^\top P_C\big(d - \beta\mathcal{L}(p,q)\big) + \beta(E^1, E^2). \tag{21}$$

And let $L(H)$ be the Lipschitz constant of $\nabla H$; then $L(H) \le 8\beta^2$.

###### Proof.

Consider the function $s$ defined by

$$s(L) = \|H_C(L)\|^2. \tag{22}$$

Then the dual function (12) can be written as:

$$H(p,q) = \frac{1}{2}\big( -s(d - \beta\mathcal{L}(p,q)) + \|d - \beta\mathcal{L}(p,q)\|^2 \big) + \beta\big[\mathrm{Tr}(p^\top E^1) + \mathrm{Tr}(q^\top E^2)\big]. \tag{23}$$

Obviously, $s$ is continuously differentiable, and its gradient is given by

$$\nabla s(L) = 2\big(L - P_C(L)\big). \tag{24}$$

Therefore,

$$\begin{aligned} \nabla H(p,q) &= \frac{1}{2}\nabla\big( -s(d - \beta\mathcal{L}(p,q)) + \|d - \beta\mathcal{L}(p,q)\|^2 \big) + \beta(E^1, E^2) \\ &= \frac{1}{2}\beta\mathcal{L}^\top\big( \nabla s(d - \beta\mathcal{L}(p,q)) - 2(d - \beta\mathcal{L}(p,q)) \big) + \beta(E^1, E^2) \\ &= -\beta\mathcal{L}^\top P_C\big(d - \beta\mathcal{L}(p,q)\big) + \beta(E^1, E^2). \end{aligned} \tag{25}$$

Then, for every two pairs of matrices $(p_1, q_1), (p_2, q_2) \in \mathcal{P}$, we have

$$\begin{aligned} \|\nabla H(p_1,q_1) - \nabla H(p_2,q_2)\| &= \beta\big\|\mathcal{L}^\top\big[P_C(d - \beta\mathcal{L}(p_1,q_1))\big] - \mathcal{L}^\top\big[P_C(d - \beta\mathcal{L}(p_2,q_2))\big]\big\| \\ &\le \beta\|\mathcal{L}^\top\|\,\big\|P_C(d - \beta\mathcal{L}(p_1,q_1)) - P_C(d - \beta\mathcal{L}(p_2,q_2))\big\| \\ &\le \beta^2\|\mathcal{L}^\top\|\,\|\mathcal{L}(p_1,q_1) - \mathcal{L}(p_2,q_2)\| \\ &\le \beta^2\|\mathcal{L}^\top\|\,\|\mathcal{L}\|\,\|(p_1,q_1) - (p_2,q_2)\| \\ &= \beta^2\|\mathcal{L}\|^2\,\|(p_1,q_1) - (p_2,q_2)\|, \end{aligned} \tag{26}$$

where the inequalities follow from the nonexpansiveness property of the orthogonal projection operator $P_C$ and the operator-norm property of the linear operator $\mathcal{L}$. From [4], we have $\|\mathcal{L}\|^2 \le 8$. Therefore, $\|\nabla H(p_1,q_1) - \nabla H(p_2,q_2)\| \le 8\beta^2\|(p_1,q_1) - (p_2,q_2)\|$, implying that $\nabla H$ is Lipschitz continuous, and hence $L(H) \le 8\beta^2$. ∎
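The operator-norm bound $\|\mathcal{L}\|^2 \le 8$ can be checked numerically by power iteration on $\mathcal{L}^\top\mathcal{L}$ (a sketch assuming the standard forward-difference realization of $\mathcal{L}$; the grid size is arbitrary):

```python
import numpy as np

def L_op(p, q):
    # L(p, q): zero-padded combination of vertical (p) and horizontal (q) parts
    h, w = p.shape[0] + 1, p.shape[1]
    out = np.zeros((h, w))
    out[1:, :] += p; out[:-1, :] -= p
    out[:, 1:] += q; out[:, :-1] -= q
    return out

def L_adj(X):
    # L^T(X): forward differences
    return X[1:, :] - X[:-1, :], X[:, 1:] - X[:, :-1]

# Power iteration on L^T L estimates its largest eigenvalue, i.e. ||L||^2.
h, w = 32, 32
rng = np.random.default_rng(0)
p, q = rng.standard_normal((h - 1, w)), rng.standard_normal((h, w - 1))
for _ in range(200):
    p, q = L_adj(L_op(p, q))
    n = np.sqrt((p ** 2).sum() + (q ** 2).sum())
    p, q = p / n, q / n
sq_norm = (L_op(p, q) ** 2).sum()   # Rayleigh quotient, approaches ||L||^2 < 8
```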

With $\nabla H$ and $L(H)$ in hand, fast gradient projection (FGP) is applied to the dual problem (12); the complexity of each FGP iteration is linear in the number of pixels. Above all, our proposed Parallel Algorithm with Constrained Total Variation (PACTV) uses FGP to solve the $m+1$ dual problems (12) in parallel. Then we concatenate the optimal layers $L_i^*$ and reshape them into vector form to obtain $l_{vec}^k$.

Given the above proposition and lemma, we can use fast gradient projection (FGP) on the dual problem (12); FGP is outlined in Algorithm 2. Here $P_{\mathcal{P}}$ means projecting the matrix-pair $(p,q)$ onto the set $\mathcal{P}$. We finally obtain the optimal solution $L^*$ of (11). Our recovering method ESRA is outlined in Algorithm 1.
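A self-contained sketch of FGP on the dual problem (12) for one layer (our illustrative implementation, not the authors' exact Algorithm 2; it assumes anisotropic TV, entrywise projection onto $\mathcal{P}$ by clipping to $[-1, 1]$, and the constant stepsize $1/(8\beta^2)$ from the Lipschitz bound):

```python
import numpy as np

def fgp_ctv(d, E1, E2, beta, iters=100):
    """Fast gradient projection on the dual (12) of the constrained-TV
    subproblem (11); returns the primal solution P_C(d - beta * L(p, q)).
    d is the h x w input; (E1, E2) are the vertical/horizontal extracted
    gradients with shapes (h-1, w) and (h, w-1)."""
    h, w = d.shape
    proj_C = lambda X: np.clip(X, 0.0, 1.0)

    def L_op(p, q):                        # L(p, q) with zero padding
        out = np.zeros((h, w))
        out[1:, :] += p; out[:-1, :] -= p
        out[:, 1:] += q; out[:, :-1] -= q
        return out

    def L_adj(X):                          # adjoint: forward differences
        return X[1:, :] - X[:-1, :], X[:, 1:] - X[:, :-1]

    p = np.zeros((h - 1, w)); q = np.zeros((h, w - 1))
    r, s = p.copy(), q.copy()              # extrapolated (momentum) variables
    t = 1.0
    step = 1.0 / (8.0 * beta ** 2 + 1e-12)  # 1 / L(H), with L(H) <= 8 beta^2
    for _ in range(iters):
        gp, gq = L_adj(proj_C(d - beta * L_op(r, s)))
        # descend along -grad H (eq. (21)), then project onto P by clipping
        p_new = np.clip(r + step * beta * (gp - E1), -1.0, 1.0)
        q_new = np.clip(s + step * beta * (gq - E2), -1.0, 1.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        r = p_new + ((t - 1.0) / t_new) * (p_new - p)
        s = q_new + ((t - 1.0) / t_new) * (q_new - q)
        p, q, t = p_new, q_new, t_new
    return proj_C(d - beta * L_op(p, q))   # primal recovery, eq. (20)
```

Since the $m+1$ subproblems share no state, `fgp_ctv` can be mapped over the layers (e.g. with `concurrent.futures`) to realize the parallelism in PACTV.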

In our implementation, we set the total iteration number of ESRA to 100 with a small FGP tolerance, and we fix $L_s$ to ensure a constant stepsize. The initial value of $l_{vec}$ is zero. The final recovered reflected layers of (2) should be warped with the estimated transformations, and their intensity enhanced by a factor of 2 to be visible. Our recovering method launches a general optimization framework and can be extended to solve other reconstruction problems in [5, 6].

## References

1. A. Nemirovski, “Efficient methods in convex programming,” 2005.
2. Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol. 87, Springer, 2004.
3. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
4. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” TIP, vol. 18, no. 11, pp. 2419–2434, 2009.
5. K. Gai, Z. Shi, and C. Zhang, “Blind separation of superimposed images with unknown motions,” in Proc. CVPR, 2009, pp. 1881–1888.
6. K. Gai, Z. Shi, and C. Zhang, “Blind separation of superimposed moving images using image statistics,” TPAMI, no. 99, pp. 1–1.