# Dictionary and Image Recovery from Incomplete and Random Measurements

This paper tackles algorithmic and theoretical aspects of dictionary learning from incomplete and random block-wise image measurements, and the performance of the adaptive dictionary for sparse image recovery. This problem is related to blind compressed sensing, in which the sparsifying dictionary or basis is viewed as an unknown variable and subject to estimation during sparse recovery. However, unlike existing guarantees for successful blind compressed sensing, our results do not rely on additional structural constraints on the learned dictionary or the measured signal. In particular, we rely on the spatial diversity of compressive measurements to guarantee that the solution is unique with a high probability. Moreover, our distinguishing goal is to measure and reduce the estimation error with respect to the ideal dictionary that is based on the complete image. Using recent results from random matrix theory, we show that applying a slightly modified dictionary learning algorithm over compressive measurements results in accurate estimation of the ideal dictionary for large-scale images. Empirically, we experiment with both space-invariant and space-varying sensing matrices and demonstrate the critical role of spatial diversity in measurements. Simulation results confirm that the presented algorithm outperforms the typical non-adaptive sparse recovery based on offline-learned universal dictionaries.


## I Introduction

The theory of Compressed Sensing (CS) establishes that the combinatorial problem of recovering the sparsest vector from a limited number of linear measurements can be solved in polynomial time, given that the measurements satisfy certain isometry conditions [20]. CS can be directly applied to recover signals that are naturally sparse in the standard basis. Meanwhile, CS has been extended to work with many other types of natural signals that can be represented by a sparse vector using a dictionary [1]. As an alternative to model-based dictionaries such as wavelets [2], Dictionary Learning (DL) [3] is a data-driven algorithmic approach to building sparse representations for natural signals.

Learning dictionaries over large-scale databases of training images is a time- and memory-intensive process which results in a universal dictionary that works for most types of natural images. Meanwhile, several variations of DL algorithms have been proposed for real-time applications to make the sparse representations more adaptive (we must emphasize the difference between adapting and learning dictionaries, although the two terms are sometimes used interchangeably: in this paper, by dictionary learning we refer to its typical usage, i.e. the process of applying the DL algorithm to a large database of training images, which produces a universal dictionary; adapting a dictionary is the process of applying the DL algorithm to only one or a small number of possibly corrupted images to produce a dictionary that specifically performs well for those images) in applications such as image denoising [4], [5] and, most recently, compressed sensing [6]. Particularly, the last application has been termed Blind Compressed Sensing (BCS) to differentiate it from normal CS, where the dictionary is assumed to be known and fixed. Clearly, one would expect BCS to improve CS recovery when the optimal sparsity basis is unknown, which is the case in most real-world applications. Unfortunately, the existing work on BCS for imaging is lacking in two directions: (i) empirical evaluations and (ii) mathematical justification for the general case. These issues are discussed further below.

• Empirical evaluations: In existing BCS works, such as [7], empirical evaluations on images are mainly limited to the image inpainting problem, which can be viewed as a CS problem where the compressive measurements are in the standard basis. In [8], the generic CS problem is only tested on artificially generated sparse vectors. The tested images and running scenarios for the algorithms in [7, 8, 9, 10] are rather limited and arguably inadequate for indicating the strengths and weaknesses of BCS in real-world imaging applications. Finally, and most importantly, existing studies fail to compare the adaptive BCS recovery with the non-adaptive CS recovery based on universally learned dictionaries.

• Mathematical justification: The original BCS effort [6] identifies the general unconstrained BCS problem as ill-posed. Subsequently, various structural constraints were proposed for the learned dictionary and were shown to ensure uniqueness at the cost of decreased flexibility. In a following effort [7], a different strategy was used to ensure uniqueness without enforcing structural constraints on the dictionary. However, uniqueness was only justified for the class of sparse signals that admit the block-sparse model, by exploiting recent results from the area of low-rank matrix completion [11]. Finally, [8] and [9] take empirical approaches toward unconstrained DL based on compressive measurements but do not provide any justification for the uniqueness, convergence, or accuracy of the proposed DL algorithms.

The present work is different from existing efforts in BCS, both in terms of goals and methodology. In addition to the goal of having an objective function with a unique optimum, we would like the learned dictionary to be as close as possible to the ideal dictionary that is based on running the DL algorithm over the complete image. In other words, our goals include both convergence to a unique solution and high accuracy of the solution. Since no prior information is available about the structure of the ideal dictionary or the underlying signal, our method does not impose extra structural constraints on the learned dictionary or the sparse coefficients. (The constraint of having a bounded column norm or Frobenius-norm of the dictionary, which is used in virtually every dictionary learning algorithm, does not constrain the dictionary structure other than bounding it or its columns inside the unit sphere. Some examples of structural constraints, used in [6] and [10] respectively, are block-diagonal dictionaries and sparse dictionaries.)

Similar to most efforts in the area of compressive imaging, including the BCS framework, we employ a block compressed sensing or block-CS scheme for measurement and recovery of images [12, 13]. Unlike dense-CS, where the image is recovered as a single vector using a dense sampling matrix, block-CS breaks down the high-dimensional dense-CS problem into many small-sized CS problems, one for each non-overlapping block of the image. Some advantages of block-CS are: (i) block-CS results in a block-diagonal sampling matrix, which significantly reduces the amount of memory required for storing large-scale sampling matrices, (ii) decoding in extremely high dimensions is computationally challenging in dense-CS (a typical consumer image has millions of pixels, which would make it impractical to recover as a single vector using existing sparse recovery methods with cubic or quadratic time complexities), and (iii) sparse modeling, or learning sparsifying dictionaries, for high-dimensional global image characteristics is challenging and not well studied. Specifically, we study a block-CS scheme where each block is sampled using a distinct sampling matrix and show that it is superior to using a fixed block sampling matrix for BCS. One of our goals in this paper is to outperform non-adaptive sparse image recovery based on universal dictionaries from well-known DL algorithms, such as online-DL [17] and K-SVD [4], while overcoming challenges such as overfitting. Rather than focusing on new DL algorithms for BCS, we focus on the relationship between the block-CS measurements and the BCS performance in an unconstrained setup.

This paper is organized as follows. In Section II we review the dictionary learning problem under the settings of complete and compressive data. Before describing the details of our algorithm in Section IV, we present our main contributions regarding the uniqueness conditions and the DL accuracy in the presence of partial data in Section III. Simulation results are presented in Section V. Finally, we present the conclusion and a discussion of future directions in Section VI.

### I-A Notation

Throughout the paper, we use the following rules. Upper-case letters are used for matrices and lower-case letters are used for vectors and scalars.

$I_n$ denotes the identity matrix of size $n$. We reserve the following notation: $N$ is the total number of blocks in an image, $n$ is the size of each block (e.g. a $b\times b$ block has size $n=b^2$), $m$ is the number of compressive measurements per block (usually $m<n$), $K$ is the number of atoms in a dictionary, $t$ is the iteration count, $D\in\mathbb{R}^{n\times K}$ denotes a dictionary, $x_j\in\mathbb{R}^n$ represents the vectorized image block (column-major) number $j$, $\alpha_j\in\mathbb{R}^K$ is the representation of $x_j$ (i.e. $x_j\approx D\alpha_j$), $\Phi_j\in\mathbb{R}^{m\times n}$ denotes the measurement matrix for block number $j$, and $y_j=\Phi_jx_j$ denotes the vector of compressive measurements for block $j$. For simplicity, we drop the block index subscripts in $x_j$, $\alpha_j$, $\Phi_j$ and $y_j$ when a single block is under consideration. Similarly, we omit the iteration superscript in $D^{(t)}$ and $\alpha_j^{(t)}$ when a single iteration is under study and it does not create confusion.

The vector $\ell_p$ norm is defined as $\|x\|_p=\big(\sum_i|x_i|^p\big)^{1/p}$. The matrix operators $\otimes$ and $\circ$ respectively represent the Kronecker and the Hadamard (or element-wise) products. The operator $\mathrm{vec}(\cdot)$ reshapes a matrix to its column-major vectorized format. The matrix inner product is defined as $\langle A,B\rangle=\mathrm{Tr}(A^TB)$, with $\mathrm{Tr}(\cdot)$ denoting the matrix trace, i.e. the sum of diagonal entries of its argument. The Frobenius-norm of $A$ is defined as $\|A\|_F=\sqrt{\langle A,A\rangle}$.

Finally, due to the frequent usage of Lasso regression [14] in this paper, we use the following abstractions:

$$\begin{aligned}
L(x,D,\lambda,\alpha) &= \tfrac{1}{2}\|x-D\alpha\|_2^2+\lambda\|\alpha\|_1\\
L_{\min}(x,D,\lambda) &= \min_{\alpha}\ \tfrac{1}{2}\|x-D\alpha\|_2^2+\lambda\|\alpha\|_1\\
L_{\operatorname{argmin}}(x,D,\lambda) &= \operatorname*{argmin}_{\alpha}\ \tfrac{1}{2}\|x-D\alpha\|_2^2+\lambda\|\alpha\|_1
\end{aligned}$$

In words, $L$ represents the regularized model misfit and $\alpha$ denotes the sparse coefficient vector. Also, note the obvious relationship:

$$L\big(x,D,\lambda,L_{\operatorname{argmin}}(x,D,\lambda)\big)=L_{\min}(x,D,\lambda)$$
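As a concrete reference for these abstractions, the following sketch implements $L$ and $L_{\operatorname{argmin}}$ with a plain ISTA (proximal gradient) loop; the function names and the solver choice are ours, not part of the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def largmin(x, D, lam, n_iter=1000):
    # L_argmin(x, D, lam): solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 via ISTA
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - D.T @ (D @ a - x) / L, lam / L)
    return a

def lasso_objective(x, D, lam, a):
    # L(x, D, lam, a): quadratic misfit plus the l1 penalty
    return 0.5 * np.sum((x - D @ a) ** 2) + lam * np.sum(np.abs(a))

# L_min(x, D, lam) is then lasso_objective(x, D, lam, largmin(x, D, lam)).
```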

## II Problem Statement and Prior Art

### II-A An overview of the dictionary learning problem

The Dictionary Learning (DL) problem can be compactly expressed as:

$$D^*=\operatorname*{argmin}_D\,\psi(D)\tag{P1}$$

where $\psi(D)$ represents the collective model misfit (there are related, alternative ways of expressing the DL problem; for example, the authors in [15] propose to minimize the sum of norms of the coefficient vectors subject to a fixed (zero) representation error for each sample $x_j$):

$$\psi(D)=\sum_{j=1}^{N}L_{\min}(x_j,D,\lambda)$$

As noted in [3] for a similar formulation of DL, (P1) represents a bi-level optimization problem:

• The inner layer (also known as the lower level) problem consists of solving $N$ Lasso problems to obtain the $\alpha_j$'s.

• The outer layer (or upper level) problem consists of finding a $D$ that minimizes $\psi(D)$.

Note that even for large-scale images (large $N$) the lower level optimization can be handled efficiently by parallel programming, because each block is processed independently. However, in a batch-DL algorithm (batch processing refers to the processing of all blocks at once, while online processing refers to the one-by-one processing of blocks in a streaming fashion), in contrast to the online-DL [17], the upper level problem is centralized and combines the information collected from all blocks. In this paper, we use the batch-DL approach to stay consistent with the mathematical analysis. However, in Section IV, an efficient algorithm is described for solving the batch problem. Similar to [17], the batch algorithm and its analysis can be extended to online-DL for the best efficiency. Hereafter, we omit the prefix ‘batch-’ in batch-DL for simplicity.

The typical DL strategy that is used in most works [4, 5, 17, 18, 22, 23, 21] is to iterate between the inner and outer optimization problems until convergence to a local minimum. Expressed formally, the iterative procedure is:

$$D^{(t+1)}=\operatorname*{argmin}_D\,\psi^{(t)}(D)\tag{1}$$

with

$$\psi^{(t)}(D)=\sum_{j=1}^{N}L\big(x_j,D,\lambda,\alpha_j^{(t)}\big),\qquad
\alpha_j^{(t)}=L_{\operatorname{argmin}}\big(x_j,D^{(t)},\lambda\big)$$

The algorithm starts from an initial dictionary $D^{(0)}$ that can be selected to be, for example, the overcomplete discrete cosine frame [4].
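The alternating procedure of (1) can be sketched as follows on complete data. The inner layer reuses an ISTA Lasso solver; for the outer layer we substitute a ridge-stabilized least-squares update (a convenient illustrative choice; the paper's own update for the compressive case is described in Section IV), followed by the Frobenius-norm projection discussed below:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam, n_iter=200):
    # Inner (lower-level) problem: one Lasso per block, solved by ISTA.
    L = max(np.linalg.norm(D, 2) ** 2, 1e-12)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a - D.T @ (D @ a - x) / L, lam / L)
    return a

def batch_dl(X, K, lam=0.1, n_outer=5, seed=0):
    # X: n x N matrix whose columns are vectorized blocks; K: number of atoms.
    rng = np.random.default_rng(seed)
    n, N = X.shape
    D = rng.standard_normal((n, K))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm initialization
    A = np.zeros((K, N))
    for _ in range(n_outer):
        # Inner layer: sparse-code every block with the current dictionary.
        A = np.stack([sparse_code(X[:, j], D, lam) for j in range(N)], axis=1)
        # Outer layer: minimize the summed misfit over D (small ridge term
        # added only for numerical stability of the inverse).
        D = X @ A.T @ np.linalg.inv(A @ A.T + 1e-8 * np.eye(K))
        # Keep the dictionary inside the Frobenius ball ||D||_F <= sqrt(K).
        nrm = np.linalg.norm(D, 'fro')
        if nrm > np.sqrt(K):
            D *= np.sqrt(K) / nrm
    return D, A
```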

Perhaps surprisingly, the solution of (P1) is trivial without the additional constraint of having a bounded dictionary norm. To explain more, one can always reduce the objective by multiplying $D$ with a scalar $c>1$ and multiplying each $\alpha_j$ with $1/c$, thus reducing the $\ell_1$ norm of $\alpha_j$ while keeping $D\alpha_j$ fixed, leading to $\|D\|_F\to\infty$ and $\|\alpha_j\|_1\to 0$. There are two typical bounding methods to solve this issue that are reviewed in e.g. [24, 25]: (i) bounding the norm of each dictionary column or (ii) bounding $\|D\|_F$. In this work, we use the second approach, i.e. the bound $\|D\|_F\le c$, since it does not enforce a uniform distribution of column norms, which makes the sparse representation more adaptive. As pointed out in [25], using a Frobenius-norm bound results in a weighted sparse representation problem (at the inner level) where some coefficients can have more priority over others in taking non-zero values. Additionally, having bounded column norms is a stronger constraint which makes the analysis more difficult when the dictionary is treated in its vectorized format (this becomes clearer in Section III). The typical method for bounding the dictionary is to project the updated dictionary (at the end of each iteration) back inside the constraint set. More details are provided in Section IV where we describe the DL algorithm.

### II-B Dictionary learning from compressive measurements

The problem of CS is to recover a sparse signal $x\in\mathbb{R}^n$, or a signal that can be approximated by a sparse vector, from a set of linear measurements $y=\Phi x\in\mathbb{R}^m$. When $m<n$, the linear system is under-determined and the solution set is infinite. However, it is not difficult to show that for a sufficiently sparse $x$ the solution to $y=\Phi x$ is unique [26]. Unfortunately, the problem of searching for the sparsest $x$ subject to $y=\Phi x$ is NP-hard and impractical to solve for a high-dimensional $x$. Meanwhile, the CS theory indicates that this problem can be solved in polynomial time, using sparsity-promoting solvers such as Lasso [14], given that $\Phi$ satisfies the Restricted Isometry Property (RIP) [20].

CS also applies to a dense $x$ when it has a sparse representation of the form $x=D\alpha$ (with a sparse $\alpha$). Measurements can be expressed as $y=\Phi x=\Phi D\alpha$, where $\Phi D$ is called the projection matrix. It has been shown that most random designs of $\Phi$ would yield RIP with high probabilities [27]. The compressive imaging problem can be expressed as:

$$\hat{x}=D\,L_{\operatorname{argmin}}(y,\Phi D,\lambda)\tag{2}$$

The well-known basis pursuit signal recovery corresponds to the following asymptotic solution [31]:

$$\hat{x}=\lim_{\lambda\to 0^+}D\,L_{\operatorname{argmin}}(y,\Phi D,\lambda)\tag{3}$$

Hereafter, we focus on the block-CS framework where each $x_j$ represents an image block and $y_j=\Phi_jx_j$ represents the vector of compressive measurements for that block. The iterative DL procedure based on block-CS measurements can be written as:

$$D^{(t+1)}=\operatorname*{argmin}_D\,\hat{\psi}^{(t)}(D)\tag{4}$$

with

$$\hat{\psi}^{(t)}(D)=\sum_{j=1}^{N}L\big(y_j,\Phi_jD,\lambda,\alpha_j^{(t)}\big),\qquad
\alpha_j^{(t)}=L_{\operatorname{argmin}}\big(y_j,\Phi_jD^{(t)},\lambda\big)$$

To distinguish (4) from the normal DL in (1) and other BCS formulations [6, 7, 8], we refer to this problem as Dictionary Learning from linear Measurements or simply DL-M.

We could also arrange the block-wise measurements into a single system of linear equations:

$$\begin{bmatrix}y_1\\ y_2\\ \vdots\\ y_N\end{bmatrix}=\Phi\,(I_N\otimes D)\begin{bmatrix}\alpha_1\\ \alpha_2\\ \vdots\\ \alpha_N\end{bmatrix}\tag{5}$$

where

$$\Phi=\begin{bmatrix}\Phi_1 & & &\\ & \Phi_2 & &\\ & & \ddots &\\ & & & \Phi_N\end{bmatrix}\tag{6}$$

represents the block-diagonal measurement matrix. Our results can be easily extended to dense-CS, i.e. CS with a dense $\Phi$. However, dense-CS would not allow sequential processing of blocks as required by an online-DL framework, while a batch-DL framework remains compatible with dense-CS.
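The equivalence between the block-wise measurements and the stacked system (5)-(6) can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, m, K = 4, 16, 8, 24          # blocks, block size, measurements, atoms
D = rng.standard_normal((n, K))
alphas = [rng.standard_normal(K) for _ in range(N)]
Phis = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(N)]

# Block-wise measurements y_j = Phi_j D alpha_j ...
y_blocks = np.concatenate([P @ D @ a for P, a in zip(Phis, alphas)])

# ... equal the stacked system (5): y = Phi (I_N kron D) alpha
Phi = np.zeros((N * m, N * n))
for j, P in enumerate(Phis):       # assemble the block-diagonal Phi of (6)
    Phi[j*m:(j+1)*m, j*n:(j+1)*n] = P
y_stacked = Phi @ np.kron(np.eye(N), D) @ np.concatenate(alphas)

print(np.allclose(y_blocks, y_stacked))  # True
```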

## III Mathematical Analysis

The benefits of using a distinct $\Phi_j$ for each block can be understood intuitively [9]. However, it is important to study the asymptotic behavior of DL when $N\to\infty$, as well as the non-asymptotic bounds for a finite $N$.

In the first part of this section, we prove that the iterative DL-M algorithm returns a unique solution with a probability that approaches one for large $N$. Specifically, we show that the outer problem, known as the ‘dictionary update’ stage,

$$D^{(t+1)}=\operatorname*{argmin}_D\,\hat{\psi}^{(t)}(D)$$

is unique for fixed $\alpha_j^{(t)}$'s, and also that every inner problem

$$\forall j:\ \alpha_j^{(t)}=L_{\operatorname{argmin}}\big(y_j,\Phi_jD^{(t)},\lambda\big)$$

is unique for a fixed $D^{(t)}$. Therefore, starting from an initial point $D^{(0)}$, the sequence of DL-M iterations forms a unique path.

To specify the accuracy of the DL-M algorithm, we measure the expectation

$$\mathbb{E}_{\Phi}\left\{\Big\|\operatorname*{argmin}_D\,\hat{\psi}^{(t)}(D)-\operatorname*{argmin}_D\,\psi^{(t)}(D)\Big\|_F^2\right\}\tag{7}$$

starting from the same (fixed) $\alpha_j$'s. Meanwhile, the inner problem is precisely a noisy CS problem and its accuracy has been thoroughly studied [1]. Specifically, when $m=O(k\log n)$, where $k$ denotes the sparsity of $\alpha_j$, the inner CS problem for block $j$ can be solved exactly. The presented error analysis for a finite $N$ is limited to a single iteration of dictionary update. Nevertheless, the asymptotic conclusion, as we present, is that the DL-M and DL solutions converge as $N$ approaches infinity.

Based on the above remarks, our analysis is focused on a single iteration of DL-M. Therefore, for simplicity, we drop the iteration superscript in the rest of this section unless required.

First, we write $\hat{\psi}(D)$ in the standard quadratic format:

$$\begin{aligned}
\hat{\psi}(D) &= \frac{1}{2}\sum_{j=1}^{N}\|y_j-\Phi_jD\alpha_j\|_2^2+\lambda\sum_{j=1}^{N}\|\alpha_j\|_1\\
&= \frac{1}{2}\sum_{j=1}^{N}y_j^Ty_j+\lambda\sum_{j=1}^{N}\|\alpha_j\|_1+\frac{1}{2}\sum_{j=1}^{N}\alpha_j^TD^T\Phi_j^T\Phi_jD\alpha_j-\sum_{j=1}^{N}y_j^T\Phi_jD\alpha_j
\end{aligned}$$

We can further write:

$$\begin{aligned}
\alpha_j^TD^T\Phi_j^T\Phi_jD\alpha_j &= \mathrm{Tr}\big(\alpha_j^TD^T\Phi_j^T\Phi_jD\alpha_j\big)\\
&= \mathrm{Tr}\big(D^T\Phi_j^T\Phi_jD\alpha_j\alpha_j^T\big)\\
&= \big\langle D,\ \Phi_j^T\Phi_jD\alpha_j\alpha_j^T\big\rangle\\
&= \mathrm{vec}(D)^T\,\mathrm{vec}\big(\Phi_j^T\Phi_jD\alpha_j\alpha_j^T\big)\\
&= \mathrm{vec}(D)^T\big(\alpha_j\alpha_j^T\otimes\Phi_j^T\Phi_j\big)\mathrm{vec}(D)
\end{aligned}$$

and

$$\begin{aligned}
y_j^T\Phi_jD\alpha_j &= \mathrm{Tr}\big(y_j^T\Phi_jD\alpha_j\big)=\mathrm{Tr}\big(\alpha_jy_j^T\Phi_jD\big)\\
&= \big\langle \Phi_j^Ty_j\alpha_j^T,\ D\big\rangle=\mathrm{vec}\big(\Phi_j^Ty_j\alpha_j^T\big)^T\mathrm{vec}(D)
\end{aligned}$$

Letting $d=\mathrm{vec}(D)$ and $g(d)=\hat{\psi}(D)$, the standard quadratic form of $\hat{\psi}$ can be written as:

$$g(d)=\frac{1}{2}d^TQd+f^Td+c\tag{8}$$

with

$$Q=\sum_{j=1}^{N}\alpha_j\alpha_j^T\otimes\Phi_j^T\Phi_j,\qquad
f=-\sum_{j=1}^{N}\mathrm{vec}\big(\Phi_j^Ty_j\alpha_j^T\big),\qquad
c=\frac{1}{2}\sum_{j=1}^{N}y_j^Ty_j+\lambda\sum_{j=1}^{N}\|\alpha_j\|_1$$
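These vectorization identities can be verified numerically (using column-major `vec`, as assumed above):

```python
import numpy as np

# Check that the quadratic form (8) matches a direct evaluation of the
# compressive objective psi-hat, for random data.
rng = np.random.default_rng(0)
n, m, K, N, lam = 5, 3, 7, 4, 0.1
D = rng.standard_normal((n, K)); d = D.flatten(order='F')   # column-major vec(D)
Phis = [rng.standard_normal((m, n)) for _ in range(N)]
Ys = [rng.standard_normal(m) for _ in range(N)]
Alphas = [rng.standard_normal(K) for _ in range(N)]

Q = sum(np.kron(np.outer(a, a), P.T @ P) for P, a in zip(Phis, Alphas))
f = -sum((P.T @ np.outer(y, a)).flatten(order='F') for P, y, a in zip(Phis, Ys, Alphas))
c = sum(0.5 * y @ y + lam * np.abs(a).sum() for y, a in zip(Ys, Alphas))

g = 0.5 * d @ Q @ d + f @ d + c
psi_hat = sum(0.5 * np.sum((y - P @ D @ a) ** 2) + lam * np.abs(a).sum()
              for P, y, a in zip(Phis, Ys, Alphas))
print(np.isclose(g, psi_hat))  # True
```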

Next, we shall specify the stochastic construction of block-CS measurements that we term the BIG measurement scheme.

###### Definition 1.

(BIG measurement) In a Block-based Independent Gaussian or BIG measurement scheme, each entry of each block measurement matrix $\Phi_j\in\mathbb{R}^{m\times n}$ is independently drawn from a zero-mean Gaussian distribution with variance $\frac{1}{m}$.

The variance $\frac{1}{m}$ guarantees that $\mathbb{E}\{\Phi_j^T\Phi_j\}=I_n$. Note that although our analysis focuses on Gaussian measurements, it is straightforward to extend it to the larger class of sub-Gaussian measurements, which includes the Rademacher and the general class of (centered) bounded random variables [27].
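Generating BIG measurements and checking $\mathbb{E}\{\Phi_j^T\Phi_j\}=I_n$ empirically is straightforward (the helper name is ours):

```python
import numpy as np

def big_measurements(N, m, n, rng):
    # One independent Gaussian matrix per block, entries ~ N(0, 1/m),
    # so that E[Phi_j^T Phi_j] = I_n.
    return rng.standard_normal((N, m, n)) / np.sqrt(m)

rng = np.random.default_rng(0)
Phis = big_measurements(5000, 16, 8, rng)
# Average of Phi_j^T Phi_j over many independent blocks approaches I_n.
avg = np.einsum('jki,jkl->il', Phis, Phis) / 5000
print(np.max(np.abs(avg - np.eye(8))))   # close to 0
```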

### III-A Uniqueness

Before presenting the uniqueness results, we review the matrix extension of the Chernoff inequality [28] that is summarized in the following lemma.

###### Lemma 2.

(Matrix Chernoff, Theorem 5.1.1 in [28]). Consider a finite sequence $\{X_j\}$ of independent, random, positive semidefinite Hermitian matrices of dimension $n$ that satisfy $\lambda_{\max}(X_j)\le R$. Define the random matrix $Y=\sum_j X_j$. Compute the expectation parameters $\mu_{\max}=\lambda_{\max}(\mathbb{E}\,Y)$ and $\mu_{\min}=\lambda_{\min}(\mathbb{E}\,Y)$. Then, for $\theta>0$,

$$\mathbb{E}\,\lambda_{\max}(Y)\le\frac{e^{\theta}-1}{\theta}\,\mu_{\max}+\frac{1}{\theta}R\log n\tag{9}$$

and

$$\mathbb{E}\,\lambda_{\min}(Y)\ge\frac{1-e^{-\theta}}{\theta}\,\mu_{\min}-\frac{1}{\theta}R\log n\tag{10}$$

Furthermore,

$$\mathbb{P}\{\lambda_{\max}(Y)\ge(1+\delta)\mu_{\max}\}\le n\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu_{\max}/R}\tag{11}$$

for $\delta\ge 0$, and

$$\mathbb{P}\{\lambda_{\min}(Y)\le(1-\delta)\mu_{\min}\}\le n\left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu_{\min}/R}\tag{12}$$

for $\delta\in[0,1)$.

This lemma will be used to show that, with a high probability, the Hessian matrix of $g$, which is a sum of random independent matrices, is full rank and invertible.
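The mechanism behind the argument can be illustrated with a small simulation of our own: each summand of $Q$ has rank at most $m<nK$, yet accumulating independent blocks makes $Q$ positive definite:

```python
import numpy as np

# Each summand of Q = sum_j (alpha_j alpha_j^T kron Phi_j^T Phi_j) has
# rank at most m, so Q is singular for few blocks; with many independent
# blocks (and a full-rank code covariance) it becomes invertible.
rng = np.random.default_rng(0)
n, m, K = 6, 3, 4

def hessian(N):
    Q = np.zeros((n * K, n * K))
    for _ in range(N):
        # sparse-ish code: each coefficient active with probability 1/2
        a = np.where(rng.random(K) < 0.5, rng.standard_normal(K), 0.0)
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        Q += np.kron(np.outer(a, a), Phi.T @ Phi)
    return Q

print(np.linalg.eigvalsh(hessian(2)).min())    # ~0: singular with few blocks
print(np.linalg.eigvalsh(hessian(200)).min())  # strictly positive
```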

In the following theorem, let $\mu_0$ denote a lower bound on the smallest eigenvalue of the covariance matrix $\mathbb{E}\{\alpha_j\alpha_j^T\}$. Note that this covariance matrix must be full rank, or equivalently $\mu_0>0$; otherwise, even the original dictionary learning problem (based on the complete data) would not result in a unique solution. On top of that, the magnitude of $\mu_0$ has a direct impact on the condition number of the Hessian matrix and the numerical stability of DL-M, as well as DL.

###### Theorem 3.

In a BIG measurement scheme with $N=\Omega\big(\frac{R}{\mu_0}\log(nK)\big)$, $\hat{\psi}$ has a unique minimum with a high probability.

###### Proof.

Taking the derivative of $g(d)$ with respect to $d$ and setting it equal to zero results in the linear equation $Qd=-f$. Thus, to prove that the solution is unique, we must show that the Hessian matrix $Q$ is invertible (with a high probability) for large $N$. Equivalently, we must show that the probability $\mathbb{P}\{\lambda_{\min}(Q)>0\}$ is close to one for large $N$. Since $Q$ is a sum of independent random matrices, we may use the matrix Chernoff inequality from Lemma 2. Hence, we must compute the following quantities:

$$R=\sup_j\ \lambda_{\max}\big(\alpha_j\alpha_j^T\otimes\Phi_j^T\Phi_j\big)$$

and

$$\mu_{\min}=\lambda_{\min}\Big(\mathbb{E}\sum_j\alpha_j\alpha_j^T\otimes\Phi_j^T\Phi_j\Big)=\lambda_{\min}\Big(\sum_j\alpha_j\alpha_j^T\otimes I_n\Big)$$

Using properties of the Kronecker product,

$$\lambda_{\max}\big(\alpha_j\alpha_j^T\otimes\Phi_j^T\Phi_j\big)=\lambda_{\max}\big(\alpha_j\alpha_j^T\big)\,\lambda_{\max}\big(\Phi_j^T\Phi_j\big)\le\|\alpha_j\|_2^2\,(1+\delta)$$

where the inequality holds with a high probability for a random Gaussian measurement matrix [27]. Suppose the energy of every $\alpha_j$ is bounded, i.e. $\|\alpha_j\|_2^2\le E_0$ for some constant $E_0$, given that every $x_j$ has bounded energy. Therefore, with a high probability, $R\le E_0(1+\delta)$. Roughly speaking, assuming that pixel intensities are in the range $[0,1]$, $E_0=O(n)$.

Again, using properties of the Kronecker product,

$$\mu_{\min}=\lambda_{\min}\Big(\sum_{j=1}^{N}\alpha_j\alpha_j^T\Big)\approx N\lambda_{\min}\big(\mathbb{E}\{\alpha_j\alpha_j^T\}\big)\ge N\mu_0$$

Based on Lemma 2, specifically using (12) with $\delta\to 1$,

$$\mathbb{P}\{\lambda_{\min}(Q)\le 0\}\le nKe^{-\mu_{\min}/R}\le nKe^{-N\mu_0/R}\tag{13}$$

Requiring this failure probability to be at most a small constant $\delta_0$, i.e. $nKe^{-N\mu_0/R}\le\delta_0$, is equivalent to $N\ge\frac{R}{\mu_0}\log\frac{nK}{\delta_0}$. ∎

We have established that the upper level problem results in a unique solution with a high probability. (Clearly, projecting the resultant dictionary onto the space of matrices with a constant Frobenius-norm preserves the uniqueness, since there is only a single point on the sphere of constant-norm matrices that is closest to the current dictionary.) To complete this subsection, we use the following result from [29], which implies that the lower level problem is unique.

###### Lemma 4.

[29] If the entries of the design matrix are drawn from a continuous probability distribution, then, for any response vector and any $\lambda>0$, the Lasso solution is unique with probability one.

Since each $\Phi_j$ is drawn from a continuous probability distribution in the BIG measurement scheme, $\Phi_jD^{(t)}$ is also continuously distributed, and this establishes that each $\alpha_j^{(t)}$ is unique.

### III-B Accuracy

In this subsection, we compute stochastic upper bounds for the distance between the DL-M solution and the DL solution for a single iteration of dictionary update and for fixed $\alpha_j$'s. The extension of these results to multiple iterations is left as future work. As before, in the following results, we omit the iteration superscript for simplicity.

Let us define the corresponding standard quadratic form for the upper-level DL problem:

$$\bar{g}(d)=\frac{1}{2}d^T\bar{Q}d+\bar{f}^Td+\bar{c}\tag{14}$$

where

$$\bar{Q}=\sum_{j=1}^{N}\alpha_j\alpha_j^T\otimes I_n,\qquad
\bar{f}=-\sum_{j=1}^{N}\mathrm{vec}\big(x_j\alpha_j^T\big),\qquad
\bar{c}=\frac{1}{2}\sum_{j=1}^{N}x_j^Tx_j+\lambda\sum_{j=1}^{N}\|\alpha_j\|_1$$

For BIG measurements, $\mathbb{E}\{\Phi_j^T\Phi_j\}=I_n$. Therefore, it is easy to verify that $\mathbb{E}\{Q\}=\bar{Q}$, $\mathbb{E}\{f\}=\bar{f}$ and $\mathbb{E}\{c\}=\bar{c}$, where it is assumed that the data and coefficients are fixed. This leads to the following lemma, which points out the unbiasedness of the compressive objective function.

###### Lemma 5.

$g(d)$ is an unbiased estimator of $\bar{g}(d)$ for BIG measurements:

$$\mathbb{E}\{g(d)\}=\bar{g}(d)$$
###### Proof.
$$\mathbb{E}\{g(d)\}=\mathbb{E}\Big\{\tfrac{1}{2}d^TQd+f^Td+c\Big\}=\tfrac{1}{2}d^T\mathbb{E}\{Q\}\,d+\mathbb{E}\{f\}^Td+\mathbb{E}\{c\}=\tfrac{1}{2}d^T\bar{Q}d+\bar{f}^Td+\bar{c}=\bar{g}(d)$$

∎
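Lemma 5 can also be checked by a Monte Carlo experiment of our own, averaging the compressive objective over fresh BIG draws for fixed data, codes, and dictionary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K, N, lam = 8, 4, 12, 5, 0.1
D = rng.standard_normal((n, K))
X = rng.standard_normal((n, N))    # fixed blocks
A = rng.standard_normal((K, N))    # fixed codes

def g_bar():
    # Complete-data objective (14) evaluated directly
    return 0.5 * np.sum((X - D @ A) ** 2) + lam * np.abs(A).sum()

def g_compressive():
    # Compressive objective (8) for a fresh BIG draw of the Phi_j's
    tot = lam * np.abs(A).sum()
    for j in range(N):
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        r = Phi @ (X[:, j] - D @ A[:, j])
        tot += 0.5 * np.sum(r ** 2)
    return tot

vals = [g_compressive() for _ in range(5000)]
print(abs(np.mean(vals) - g_bar()) / g_bar())   # small relative gap
```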

The following crucial lemma implies that the two objective functions $g(d)$ and $\bar{g}(d)$ get arbitrarily close as the number of blocks ($N$) approaches infinity.

###### Lemma 6.

([30], based on Theorem III.1) In a BIG scheme,

$$\mathbb{P}\left\{\frac{\Big|\sum_{j=1}^{N}\|\Phi_j(x_j-D\alpha_j)\|_2^2-\sum_{j=1}^{N}\|x_j-D\alpha_j\|_2^2\Big|}{\sum_{j=1}^{N}\|x_j-D\alpha_j\|_2^2}>\epsilon\right\}\le 2e^{-C\epsilon^2m\gamma}$$

where $\gamma$ can be computed as:

$$\gamma=\frac{\sum_{j=1}^{N}\|x_j-D\alpha_j\|_2^2}{\max_j\|x_j-D\alpha_j\|_2^2}$$

We have simplified and customized Theorem III.1 of [30] for our problem here. Specifically, the bounds in [30] are tighter but more difficult to interpret. The above lemma states that, for a fixed (high) probability, the relative deviation $\epsilon$ shrinks as $m\gamma$ grows; moreover, $\gamma$ attains its maximum value $N$ when the signal energy is evenly distributed among the blocks.

Lemma 6 can be further customized by noticing that

$$\sum_{j=1}^{N}\|\Phi_j(x_j-D\alpha_j)\|_2^2-\sum_{j=1}^{N}\|x_j-D\alpha_j\|_2^2=2\left[g(d)-\bar{g}(d)\right]$$

$$\mathbb{P}\{|g(d)-\bar{g}(d)|>\epsilon\,\bar{g}(d)\}\le 2e^{-C\epsilon^2m\gamma}\tag{15}$$

Hereafter, to simplify the notation, let

$$\hat{d}=\operatorname*{argmin}_d\,g(d)\tag{16}$$

and

$$d^*=\operatorname*{argmin}_d\,\bar{g}(d)\tag{17}$$

The following theorem provides upper bounds for the expectation $\mathbb{E}\{\|\hat{d}-d^*\|_2^2\}$. Suppose that there exists a positive constant $\mu_1$ such that $\lambda_{\min}(Q)\ge\mu_1$. Clearly, this is a reasonable assumption for large $N$ according to Theorem 3. More specifically, according to Lemma 2, $\mu_1$ grows linearly with $N$ (because $\mu_{\min}$ grows linearly with $N$).

###### Theorem 7.

$\hat{d}$ and $d^*$ converge as $N$ approaches infinity. Specifically,

$$\mathbb{E}\{\|\hat{d}-d^*\|_2^2\}\le\frac{2\epsilon}{\mu_1}\,\bar{g}(d^*)$$
###### Proof.

We start by writing the Taylor expansion of the quadratic function $g$ around $\hat{d}$:

$$g(d^*)=g(\hat{d})+\left.\frac{\partial g(d)}{\partial d^T}\right|_{d=\hat{d}}(d^*-\hat{d})+\frac{1}{2}(d^*-\hat{d})^TQ(d^*-\hat{d})$$

Since $\hat{d}$ minimizes $g$,

$$\left.\frac{\partial g(d)}{\partial d^T}\right|_{d=\hat{d}}=0$$

and we can write

$$g(d^*)-g(\hat{d})=\frac{1}{2}(d^*-\hat{d})^TQ(d^*-\hat{d})\ \ge\ \frac{\lambda_{\min}(Q)}{2}\|d^*-\hat{d}\|_2^2\ \ge\ \frac{\mu_1}{2}\|d^*-\hat{d}\|_2^2$$

Taking the expected value of both sides

$$\mathbb{E}\{\|d^*-\hat{d}\|_2^2\}\le\frac{2}{\mu_1}\,\mathbb{E}\{g(d^*)-g(\hat{d})\}\tag{18}$$

From Lemma 6, we know that with a high probability the following inequalities hold:

$$(1-\epsilon)\,\bar{g}(\hat{d})\le g(\hat{d})\le(1+\epsilon)\,\bar{g}(\hat{d})\tag{19}$$

$$(1-\epsilon)\,\bar{g}(d^*)\le g(d^*)\le(1+\epsilon)\,\bar{g}(d^*)\tag{20}$$

On the other hand, we have the following inequalities at the optimum points of $g$ and $\bar{g}$:

$$\bar{g}(d^*)\le\bar{g}(\hat{d})\tag{21}$$

and

$$g(\hat{d})\le g(d^*)\tag{22}$$

It is easy to check that, by combining (19), (20), (21) and (22), we can arrive at the following inequality:

$$(1-\epsilon)\,\bar{g}(d^*)\le g(\hat{d})\le(1+\epsilon)\,\bar{g}(d^*)$$

or equivalently,

$$-\epsilon\,\bar{g}(d^*)\le\bar{g}(d^*)-g(\hat{d})\le\epsilon\,\bar{g}(d^*)\tag{23}$$

Taking the expected value, we get

$$-\epsilon\,\bar{g}(d^*)\le\mathbb{E}\{\bar{g}(d^*)-g(\hat{d})\}\le\epsilon\,\bar{g}(d^*)\tag{24}$$

From Lemma 5 we know that

$$\mathbb{E}\{g(d^*)\}=\bar{g}(d^*)=\mathbb{E}\{\bar{g}(d^*)\}$$

Therefore,

$$0\le\mathbb{E}\{g(d^*)-g(\hat{d})\}=\mathbb{E}\{\bar{g}(d^*)-g(\hat{d})\}\tag{25}$$

Using (24) and (25), we arrive at

$$\mathbb{E}\{g(d^*)-g(\hat{d})\}\le\epsilon\,\bar{g}(d^*)\tag{26}$$

which, along with (18), completes the proof. ∎

Note that, at a fixed probability level in (15), $\epsilon$ is inversely proportional to $\sqrt{m\gamma}$, while $\bar{g}(d^*)$ grows linearly with $N$. Furthermore, using (12) for a constant probability, it can be shown that $\mu_1$ increases linearly with $N$, making the ratio $\epsilon\,\bar{g}(d^*)/\mu_1$ arbitrarily small as $N$ (and hence $\gamma$) grows.

Finally, we show that after projection onto $\|D\|_F=c$, where $c$ is a positive constant, the upper bound of the estimation error still approaches zero for large $N$. Noticing that $\|D\|_F=\|\mathrm{vec}(D)\|_2=\|d\|_2$, this projection can be written as

$$\hat{d}\leftarrow c\,\frac{\hat{d}}{\|\hat{d}\|_2},\qquad d^*\leftarrow c\,\frac{d^*}{\|d^*\|_2}$$

It is easy to show that

$$\left\|c\,\frac{\hat{d}}{\|\hat{d}\|_2}-c\,\frac{d^*}{\|d^*\|_2}\right\|_2\le c\,\max\left\{\frac{1}{\|\hat{d}\|_2},\frac{1}{\|d^*\|_2}\right\}\|\hat{d}-d^*\|_2$$

Using $Q\hat{d}=-f$ and $\bar{Q}d^*=-\bar{f}$, lower bounds for $\|\hat{d}\|_2$ and $\|d^*\|_2$ can be computed as

$$\|\hat{d}\|_2\ge\frac{\|f\|_2}{\lambda_{\max}(Q)},\qquad \|d^*\|_2\ge\frac{\|\bar{f}\|_2}{\lambda_{\max}(\bar{Q})}$$

Therefore

$$\max\left\{\frac{1}{\|\hat{d}\|_2},\frac{1}{\|d^*\|_2}\right\}\le\max\left\{\frac{\lambda_{\max}(Q)}{\|f\|_2},\frac{\lambda_{\max}(\bar{Q})}{\|\bar{f}\|_2}\right\}$$

Using Lemma 2 and other well-established concentration inequalities, one can find stochastic upper bounds for the quantities above. However, given that $\lambda_{\max}(Q)$, $\lambda_{\max}(\bar{Q})$, $\|f\|_2$ and $\|\bar{f}\|_2$ scale linearly with $N$, we can safely conclude that the estimation error remains bounded by an arbitrarily small number as $N$ approaches infinity. Moreover, intuitively speaking, the norms of $\hat{d}$ and $d^*$ tend to increase before projection (which is the reason for bounding the dictionary in the first place) and the ratios $c/\|\hat{d}\|_2$ and $c/\|d^*\|_2$ are likely to be smaller than one, resulting in a decrease in the estimation error.

## IV The Main Algorithm

The employed algorithm for DL-M is based on (1). However, as explained below, we introduce several modifications to decrease the computational complexity and speed up the convergence. Similar to other DL algorithms, such as [4], the proposed algorithm consists of two stages that are called the sparse coding stage and the dictionary update stage. We describe them individually in the following subsections.

### IV-A The sparse coding stage

As we mentioned in Section II-B, the basis pursuit (exact) CS recovery is the limit of the Lasso solution as $\lambda$ approaches zero [31]. However, a truly sparse and exact representation is usually not possible using any dictionary of finite size. As a result, in sparse recovery of natural images, $\lambda$ is usually selected to be a small number rather than zero, even in noiseless scenarios [17, 16]. Our algorithm starts from a coarse and overly sparse representation, by selecting the initial $\lambda$ to be large, and gradually reduces $\lambda$ until the desired balance between the total error sum of squares and the sparsity is achieved. The idea behind this modification is that the initial dictionary is suboptimal and incapable of giving an exact sparse representation. However, as the iterations pass, the dictionary gets closer to the optimal dictionary and $\lambda$ must be decreased to obtain a sparse representation that closely adheres to the measurements.

Initializing the counter at $t=0$ and starting from an initial dictionary $D^{(0)}$, the sparse coding stage consists of performing the following optimization:

$$\forall j:\ \alpha_j^{(t)}=\operatorname*{argmin}_{\alpha}\ \frac{1}{2}\|y_j-\Phi_jD^{(t)}\alpha\|_2^2+\lambda^{(t)}\|\alpha\|_1\tag{27}$$

We deploy an exponential decay for $\lambda^{(t)}$:

$$\lambda^{(t)}=\max\left\{\lambda_0\,e^{-\frac{t}{T^*}\log\frac{\lambda_0}{\lambda^*}},\ \lambda^*\right\}\tag{28}$$

According to (28), $\lambda^{(t)}$ is decreased from $\lambda_0$ to $\lambda^*$ during the first $T^*$ iterations and stays fixed at $\lambda^*$ henceforth. For an exact recovery, $\lambda^*=0$ is seemingly a plausible choice. However, for the reasons mentioned earlier, we set $\lambda^*$ to a very small but non-zero value that is specified in the simulations section.
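The schedule (28) translates directly into code; `lam0`, `lam_star`, and `T_star` stand for $\lambda_0$, $\lambda^*$, and $T^*$:

```python
import numpy as np

def lam_schedule(t, lam0, lam_star, T_star):
    # Exponential decay of Eq. (28): lam0 -> lam_star over T_star
    # iterations, constant at lam_star afterwards.
    return max(lam0 * np.exp(-(t / T_star) * np.log(lam0 / lam_star)), lam_star)
```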

### IV-B The dictionary update stage

The quadratic optimization problem of (16) can be computationally inefficient to solve. More specifically, solving (16) in one step requires computing the inverse of $Q$ (if it exists), which has roughly a time complexity of $O(n^3K^3)$ and is a memory-intensive operation. The strategy that we employ in this paper, similar to what was proposed in [3, 24, 18], is to perform a gradient descent step:

$$D^{(t+1)}=D^{(t)}-\mu^{(t)}\nabla_D\hat{\psi}^{(t)}(D)\tag{29}$$

where the gradient $\nabla_D\hat{\psi}^{(t)}(D)$ can be computed efficiently:

$$\nabla_D\hat{\psi}^{(t)}(D)=-\sum_{j=1}^{N}\Phi_j^T\big(y_j-\Phi_jD^{(t)}\alpha_j^{(t)}\big)\alpha_j^{(t)T}\tag{30}$$

The step size $\mu^{(t)}$ can be iteratively decreased with $t$ [18], or it can be optimized in a steepest descent fashion as described below. (If $Q$ is well-conditioned, a single step of steepest descent can give a close approximation of the solution of (16).)

The optimal value of the step size can be computed using a simple line search [19]. However, for a quadratic objective function, we can derive $\mu^{(t)}_*$ in closed form as shown below. Let $G^{(t)}=-\nabla_D\hat{\psi}^{(t)}(D)$. Then,

$$\mu^{(t)}_*=\operatorname*{argmin}_{\mu}\sum_{j=1}^{N}\|y_j-\Phi_jD^{(t+1)}\alpha_j^{(t)}\|_2^2=\operatorname*{argmin}_{\mu}\sum_{j=1}^{N}\|y_j-\Phi_j\big(D^{(t)}+\mu G^{(t)}\big)\alpha_j^{(t)}\|_2^2$$

Writing the optimality conditions for the objective function above, we arrive at the following solution for $\mu^{(t)}_*$:

$$\mu^{(t)}_*=\frac{\|G^{(t)}\|_F^2}{\sum_{j=1}^{N}\|\Phi_jG^{(t)}\alpha_j^{(t)}\|_2^2}\tag{31}$$

Since the initial dictionary consists of unit-norm columns, $\|D^{(0)}\|_F=\sqrt{K}$. As discussed in Section II, DL results in an unbounded dictionary if no constraint is put on the dictionary norm or the norms of its columns. Here, we employ a Frobenius bound on $D$ because it lets different dictionary columns have distinct norms. Specifically, after each update (29), we enforce the constraint $\|D\|_F=\sqrt{K}$ by multiplying $D$ with $\frac{\sqrt{K}}{\|D\|_F}$. Algorithm 1 gives a summary of these steps.
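A sketch of one pass of this stage, combining the gradient (30), the closed-form step size (31), and the Frobenius projection (variable names are ours; a nonzero gradient is assumed):

```python
import numpy as np

def dictionary_update(D, Phis, Ys, Alphas):
    # One dictionary-update step: gradient (30), exact steepest-descent
    # step size (31), and projection back onto ||D||_F = sqrt(K).
    n, K = D.shape
    G = np.zeros_like(D)                       # G = -gradient of psi-hat
    for Phi, y, a in zip(Phis, Ys, Alphas):
        G += np.outer(Phi.T @ (y - Phi @ D @ a), a)
    denom = sum(np.sum((Phi @ G @ a) ** 2) for Phi, a in zip(Phis, Alphas))
    mu = np.sum(G ** 2) / denom                # Eq. (31); assumes G != 0
    D = D + mu * G                             # Eq. (29) with G = -gradient
    return D * (np.sqrt(K) / np.linalg.norm(D, 'fro'))
```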