1 Introduction
Sparsity has been shown to be a driving force in a myriad of applications in computer vision [42, 43, 30], statistics [38, 39] and machine learning [26, 23, 24]. Most often, sparsity is enforced not on a particular signal but rather on its representation in a transform domain. Formally, a signal $x$ admits a sparse representation in terms of a dictionary $D$ if $x = D\gamma$ and $\gamma$ is sparse. In its simplest form, the problem of seeking a sparse representation for a signal, possibly contaminated with noise as $y = x + v$, can be posed in terms of the following pursuit problem:
$$\min_{\gamma} \|\gamma\|_0 \ \ \text{s.t.} \ \ \|y - D\gamma\|_2 \le \varepsilon, \quad (1)$$
where the $\ell_0$ pseudo-norm $\|\gamma\|_0$ counts the number of nonzero elements in $\gamma$. The choice of the (typically overcomplete) dictionary $D$ is far from trivial, and has motivated the design of different analytic transforms [9, 16, 27] and the development of dictionary learning methods [2, 36, 30]. The above problem, which is NP-hard in general, is often relaxed by employing the $\ell_1$ penalty as a surrogate for the nonconvex $\ell_0$ measure, resulting in the celebrated Basis Pursuit DeNoising (BPDN) problem (also known in the statistical learning community as the Least Absolute Shrinkage and Selection Operator, LASSO [38], where the matrix $D$ is given by a set of measurements or descriptors in the context of a sparse regression problem):
$$\min_{\gamma} \ \tfrac{1}{2}\|y - D\gamma\|_2^2 + \lambda\|\gamma\|_1. \quad (2)$$
The transition from the $\ell_0$ to the relaxed $\ell_1$ case is by now well understood: the solutions to both problems coincide under sparsity assumptions on the underlying representation (in the noiseless case), and have been shown to be close enough in more general settings [17, 41].
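For intuition, the BPDN problem in (2) can be minimized with simple iterative shrinkage steps (the ISTA scheme discussed in Section 2.1). The following is a minimal NumPy sketch; the dimensions, the random dictionary and the value of $\lambda$ are illustrative choices of ours, not taken from the experiments in this paper.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise shrinkage: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bpdn_ista(y, D, lam, n_iter=2000):
    # Minimize 0.5 * ||y - D g||_2^2 + lam * ||g||_1 by alternating a
    # gradient step on the quadratic term with soft-thresholding.
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    g = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = soft_threshold(g - D.T @ (D @ g - y) / L, lam / L)
    return g
```

With a random dictionary and a 3-sparse code, a few thousand iterations recover the support up to the small bias induced by the $\ell_1$ penalty.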
This traditional model was recently extended to a multi-layer setting [34, 1], where a signal $x$ is expressed as $x = D_1\gamma_1$, for a sparse $\gamma_1$ and a (possibly convolutional) matrix $D_1$, while also assuming that this representation satisfies $\gamma_1 = D_2\gamma_2$, for yet another dictionary $D_2$ and sparse $\gamma_2$. Such a construction can be cascaded for a number of layers^2. Under this framework, given the measurement $y$, this multi-layer pursuit problem (or Deep Coding Problem, as first coined in [34]) can be expressed as
$$\min_{\{\gamma_i\}} \ \tfrac{1}{2}\|y - D_1\gamma_1\|_2^2 \ \ \text{s.t.} \ \ \gamma_1 = D_2\gamma_2, \ \ \|\gamma_i\|_0 \le s_i, \quad (3)$$
with $s_i$ denoting the corresponding sparsity levels. In this manner, one searches for the closest signal to $y$ while satisfying the model assumptions. This can be understood and analyzed as a projection problem [37], providing an estimate $\hat{x} = D_1\hat\gamma_1$ close to $y$ while forcing all intermediate representations to be sparse. Note the notation $D_{(i,j)} = D_i D_{i+1}\cdots D_j$ for brevity. Remarkably, the forward pass of neural networks (whose weights at each layer are set as the transpose of each dictionary $D_i$) yields stable estimations for the intermediate features or representations, provided these are sparse enough [34]. Other generative models have also been recently proposed (such as the closely related probabilistic framework in [patel2016probabilistic, nguyen2016semi]), but the multi-layer sparse model provides a convenient way to study deep learning architectures in terms of pursuit algorithms [Papyan18].

^2 In the convolutional setting [35, 37], the notion of sparsity is better characterized by the $\ell_{0,\infty}$ pseudo-norm, which quantifies the density of nonzeros in the convolutional representations in a local sense. Importantly, however, the BPDN formulation (i.e., employing an $\ell_1$ penalty) still serves as a proxy for this norm. We refer the reader to [35] for a thorough analysis of convolutional sparse representations.

As an alternative to the forward pass, one can address the problem in Equation (3) by adopting a projection interpretation and developing an algorithm based on a global pursuit, as in [37]. More recently, the work in [1] showed that this problem can be cast as imposing an analysis prior on the signal's deepest sparse representation. Indeed, the problem in (3) can be written concisely as:
$$\min_{\gamma} \ \tfrac{1}{2}\|y - D_{(1,2)}\gamma\|_2^2 \ \ \text{s.t.} \ \ \|D_2\gamma\|_0 \le s_1, \quad (4)$$
$$\qquad\qquad\qquad\qquad\qquad \|\gamma\|_0 \le s_2. \quad (5)$$
This formulation explicitly shows that the intermediate dictionaries play the role of analysis operators, resulting in a representation $\gamma$ which should be orthogonal to as many rows from $D_2$ as possible, so as to produce zeros in $D_2\gamma$. Interestingly, this analysis also allows for less sparse representations in shallower layers while still being consistent with the multi-layer sparse model. While a pursuit algorithm addressing (4) was presented in [1], it is greedy in nature and does not scale to high-dimensional signals. In other words, there are currently no efficient pursuit algorithms for signals in this multi-layer model that leverage these symbiotic analysis-synthesis priors. More importantly, it is still unclear how the dictionaries could be trained from real data under this scheme. These questions are fundamental if one is to bridge the theoretical benefits of the multi-layer sparse model with practical deep learning algorithms.
In this work we propose a relaxation of the problem in Equation (4), turning this seemingly complex pursuit into a convex multi-layer generalization of the Basis Pursuit (BP) problem^3. Such a formulation, to the best of our knowledge, has never before been proposed nor studied, though we will comment on a few particular and related cases that have been of interest to the image processing and compressed sensing communities. We explore different algorithms to solve this multi-layer problem, such as variable splitting with the Alternating Direction Method of Multipliers (ADMM) [6, 8] and the Smooth-FISTA from [5], and we present and analyze two new generalizations of Iterative Soft-Thresholding Algorithms (ISTA). We further show that these algorithms generalize feed-forward neural networks (NNs), both fully-connected and convolutional (CNNs), in a natural way. More precisely: the first iteration of such algorithms implements a traditional CNN, while a new recurrent architecture emerges with subsequent iterations. In this manner, the proposed algorithms provide a principled framework for the design of recurrent architectures. While other works have indeed explored the unrolling of iterative algorithms in terms of CNNs (e.g., [44, 33]), we are not aware of any work that has attempted or studied the unrolling of a global pursuit with convergence guarantees. Lastly, we demonstrate the performance of these networks in practice by training our models for image classification, consistently improving on the classical feed-forward architectures without introducing filters nor any other extra parameters in the model.

^3 In an abuse of terminology, and for the sake of simplicity, we will refer to the BPDN problem in Equation (2) as BP.

2 Multi-Layer Basis Pursuit
In this work we propose a convex relaxation of the problem in Equation (4), resulting in a multilayer BP problem. For the sake of clarity, we will limit our formulations to two layers, but these can be naturally extended to multiple layers – as we will effectively do in the experimental section. This work is centered around the following problem:
$$\min_{\gamma} \ \tfrac{1}{2}\|y - D_{(1,2)}\gamma\|_2^2 + \lambda_1\|D_2\gamma\|_1 + \lambda_2\|\gamma\|_1. \quad (6)$$
This formulation imposes a particular mixture of synthesis and analysis priors. Indeed, if $\lambda_2 > 0$ and $\lambda_1 = 0$, one recovers a traditional Basis Pursuit formulation with a factorized global dictionary. If $\lambda_1 > 0$, however, an analysis prior is enforced on the representation by means of $D_2$, resulting in a more regularized solution. Note that if $\lambda_2 = 0$ and the kernel of $D_2$ is nontrivial, the problem above becomes ill-posed, without a unique solution^4, since any element of this kernel can be added to a solution without affecting the objective. In addition, unlike previous interpretations of the multi-layer sparse model ([34, 37, 1]), our formulation stresses the fact that there is one unknown variable, $\gamma$, with different priors enforced on it. Clearly, one may also define and introduce the intermediate representation $\gamma_1 = D_2\gamma$, but this should be interpreted merely as the introduction of auxiliary variables to aid the derivation and interpretation of the respective algorithms. We will expand on this point in later sections.

^4 It is true that also in Basis Pursuit one can potentially obtain infinitely many solutions, as the problem is not strongly convex.
Other optimization problems similar to (6) have indeed been proposed, such as the Analysis-LASSO [10, 28], though their observation matrix and analysis operator ($D_{(1,2)}$ and $D_2$, in our case) must be independent, and the latter is further required to be a tight frame [10, 28]. The Generalized Lasso problem [40] is also related to our multi-layer BP formulation, as we will see in the following section. On the other hand, and in the context of image restoration, the work in [7] imposes a Total Variation and a sparse prior on the unknown image, as does the work in [21], thus being closely related to the general expression in (6).
2.1 Algorithms
From an optimization perspective, our multi-layer BP problem can be expressed more generally as
$$\min_{\gamma} \ F(\gamma) = f(\gamma) + g_1(D_2\gamma) + g_2(\gamma), \quad (7)$$
where $f$ is convex and smooth, and $g_1$ and $g_2$ are convex but nonsmooth. For the specific problem in (6), $f(\gamma) = \tfrac{1}{2}\|y - D_{(1,2)}\gamma\|_2^2$, $g_1(\cdot) = \lambda_1\|\cdot\|_1$ and $g_2(\cdot) = \lambda_2\|\cdot\|_1$. Since this problem is convex, the choice of available algorithms is extensive. We are interested in high-dimensional settings, however, where interior-point methods and other solvers that depend on second-order information might have a prohibitive computational complexity. In this context, the Iterative Soft-Thresholding Algorithm (ISTA) and its fast version (FISTA) are appealing, as they only require matrix-vector multiplications and entrywise operations. The former, originally introduced in [15], provides convergence (in function value) of order $O(1/k)$, while the latter provides an improved convergence rate of order $O(1/k^2)$ [4].

Iterative shrinkage algorithms decompose the total loss into two terms: $f$, convex and smooth (with an $L$-Lipschitz continuous gradient), and $g$, convex and possibly nonsmooth. The central idea of ISTA, as a proximal gradient method for finding a minimizer of $f + g$, is to iterate the updates given by the proximal operator of $g$ at the forward-step point:
$$x_{k+1} = \text{prox}_{\frac{1}{L}g}\left(x_k - \tfrac{1}{L}\nabla f(x_k)\right). \quad (8)$$
Clearly, the appeal of ISTA depends on how effectively the proximal operator can be computed. When $g(x) = \lambda\|x\|_1$ (as in the original BP formulation), such a proximal mapping becomes separable, resulting in the elementwise shrinkage or soft-thresholding operator. However, this family of methods cannot be readily applied to problem (7), where the nonsmooth term $g_1$ is composed with the linear operator $D_2$. Indeed, the proximal operator of a sum of composite terms is no longer directly separable, and one must resort to iterative approaches to compute it, making ISTA lose its appeal.
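The separability claim is easy to check numerically: the prox of the $\ell_1$ norm splits into independent scalar problems, each solved exactly by soft-thresholding, whereas no such closed form exists once a general linear operator enters the nonsmooth term. A small self-contained check (the numerical values are our own):

```python
import numpy as np

def soft_threshold(z, t):
    # Closed-form prox of t*||.||_1: acts on every coordinate independently.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_l1_scalar_brute(v, t):
    # Brute-force the scalar prox problem: argmin_x 0.5*(x - v)^2 + t*|x|.
    grid = np.linspace(-4.0, 4.0, 80001)
    return grid[np.argmin(0.5 * (grid - v) ** 2 + t * np.abs(grid))]
```

For any input, the brute-force scalar minimizer and the closed-form shrinkage coincide up to the grid resolution.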
The problem (6) is also related to the Generalized Lasso formulation [40] from the compressed sensing community, which reads
$$\min_{\beta} \ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|A\beta\|_1. \quad (9)$$
Certainly, the multi-layer BP problem we study can be seen as a particular case of this formulation^5. With this insight, one might consider solving (6) through the solution of the Generalized Lasso [40]. However, such an approach also becomes computationally demanding, as it boils down to an iterative algorithm that includes the inversion of linear operators. Other possible solvers might rely on iteratively reweighted approaches [11], but these also require iterative matrix inversions.

^5 One can rewrite problem (6) as in (9) by setting $X = D_{(1,2)}$, $\lambda = \lambda_1$, and stacking $A = [D_2;\ \tfrac{\lambda_2}{\lambda_1} I]$.
A simple way of tackling problem (6) is the popular Alternating Direction Method of Multipliers (ADMM), which provides a natural way to address this kind of problem through variable splitting and auxiliary variables. For a two-layer model, one can rewrite the multi-layer BP as a constrained minimization problem:
$$\min_{\gamma, \gamma_1} \ \tfrac{1}{2}\|y - D_{(1,2)}\gamma\|_2^2 + \lambda_1\|\gamma_1\|_1 + \lambda_2\|\gamma\|_1 \quad (10)$$
$$\text{s.t.} \ \ \gamma_1 = D_2\gamma. \quad (11)$$
ADMM minimizes this constrained loss by constructing an augmented Lagrangian (in normalized form) as
$$\mathcal{L}(\gamma, \gamma_1, u) = \tfrac{1}{2}\|y - D_{(1,2)}\gamma\|_2^2 + \lambda_1\|\gamma_1\|_1 + \lambda_2\|\gamma\|_1 + \tfrac{\rho}{2}\left(\|\gamma_1 - D_2\gamma + u\|_2^2 - \|u\|_2^2\right), \quad (12)$$
which can be minimized iteratively by repeating the updates in Algorithm 1. This way, and after merging both quadratic terms, the pursuit of the innermost representation ($\gamma$ in this case) is carried out in terms of a regular BP formulation that can be tackled with a variety of convex methods, including ISTA or FISTA. The algorithm then updates the intermediate representation ($\gamma_1$) by a simple shrinkage operation, followed by the update of the dual variable $u$. Note that this algorithm is guaranteed to converge (at least in the sequence sense) to a global optimum of (6) due to the convexity of the function being minimized [6, 8].
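A sketch of this splitting in NumPy follows. The inner BP step is solved with FISTA on the two stacked quadratic terms, and the intermediate variable is updated by plain shrinkage; the dimensions, penalties and value of $\rho$ in the usage below are illustrative choices of ours.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_bp(A, b, lam, n_iter=200, x0=None):
    # Inner solver for min_x 0.5*||b - A x||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

def ml_bp_admm(y, D1, D2, lam1, lam2, rho=1.0, n_iter=50):
    # ADMM for min 0.5||y - D1 D2 g||^2 + lam1*||D2 g||_1 + lam2*||g||_1,
    # with the splitting g1 = D2 g (g1 is the intermediate representation).
    g = np.zeros(D2.shape[1])
    g1 = np.zeros(D2.shape[0])
    u = np.zeros_like(g1)
    for _ in range(n_iter):
        # Innermost update: a regular BP with the two quadratics stacked.
        A = np.vstack([D1 @ D2, np.sqrt(rho) * D2])
        b = np.concatenate([y, np.sqrt(rho) * (g1 + u)])
        g = fista_bp(A, b, lam2, x0=g)          # warm start
        # Intermediate update: plain soft-thresholding.
        g1 = soft_threshold(D2 @ g - u, lam1 / rho)
        # Dual ascent on the constraint g1 = D2 g.
        u = u + g1 - D2 @ g
    return g, g1
```

On a small random two-layer model the primal residual $\gamma_1 - D_2\gamma$ shrinks with the outer iterations, while the objective of (6) decreases well below its value at zero.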
A third alternative, which incurs neither an additional inner iteration nor inversions, is the Smooth-FISTA approach from [5]. S-FISTA addresses cost functions of the same form as problem (7) by replacing one of the nonsmooth functions, $g_1$ in Eq. (7), with a smoothed version in terms of its Moreau envelope. In this way, S-FISTA converges to an estimate that is $\epsilon$-away from the solution of the original problem in terms of function value. We will revisit this method in the following sections.
Before moving on, we make a short intermission to note that other Basis Pursuit schemes have been proposed in the context of multi-layer sparse models and deep learning. Already in [34] the authors proposed the Layered Basis Pursuit, which addresses the sequence of pursuits given by
$$\hat\gamma_i = \arg\min_{\gamma_i} \ \tfrac{1}{2}\|\hat\gamma_{i-1} - D_i\gamma_i\|_2^2 + \lambda_i\|\gamma_i\|_1, \quad (13)$$
from $i = 1$ to $L$, where $\hat\gamma_0 = y$. Clearly, each of these can be solved with any BP solver just as well. A related idea was also recently proposed in [sun2018supervised], showing that cascading basis pursuit problems can lead to competitive deep learning constructions. However, the Layered Basis Pursuit formulation, and other similar variations that attempt to unfold neural network architectures [33, 44], do not minimize (6), and thus their solutions only represent suboptimal and heuristic approximations to the minimizer of the multi-layer BP. More importantly, such a series of steps never provides estimates that can generate a signal according to the multi-layer sparse model: one cannot reconstruct the signal from the obtained representations, because each representation is required to explain the next layer only approximately, so that $\hat\gamma_{i-1} \approx D_i\hat\gamma_i$.

2.2 Towards Multi-Layer ISTA
We now derive the proposed approach to efficiently tackle (6), relying on the concept of the gradient mapping (see, for example, [3, Chapter 10]), which we briefly review next. Given a function $F(x) = f(x) + g(x)$, where $f$ is convex and smooth with an $L$-Lipschitz continuous gradient and $g$ is convex, the gradient mapping is the operator given by
$$G_L^{f,g}(x) = L\left(x - \text{prox}_{\frac{1}{L}g}\left(x - \tfrac{1}{L}\nabla f(x)\right)\right). \quad (14)$$
Naturally, the ISTA update step in Equation (8) can be seen as a "gradient-mapping descent" step, since it can be rewritten as $x_{k+1} = x_k - \tfrac{1}{L}G_L^{f,g}(x_k)$. Moreover, $G_L^{f,g}$ provides a sort of generalization of the gradient of $f$, since

- $G_L^{f,g}(x) = \nabla f(x)$ if $g \equiv 0$, and

- $G_L^{f,g}(x) = 0$ if and only if $x$ is a minimizer of $f + g$.

We refer the reader to [3, Chapter 10] for further details on gradient mapping operators.
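These two properties can be verified numerically for the $\ell_1$ case, where the prox is soft-thresholding; the quadratic $f$ below is an arbitrary example of ours:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def grad_mapping(x, grad_f, L, lam):
    # G_L(x) = L * (x - prox_{(lam/L)||.||_1}(x - grad_f(x) / L)),
    # the gradient mapping of f + lam*||.||_1.
    return L * (x - soft_threshold(x - grad_f(x) / L, lam / L))
```

With `lam = 0` the operator returns exactly `grad_f(x)`, and at a minimizer of $f + \lambda\|\cdot\|_1$ (found here by running ISTA to convergence) it vanishes.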
Returning to the problem in (7), our first attempt to minimize $F$ is a proximal gradient-mapping method, taking an update of the following form:
$$\gamma_{k+1} = \text{prox}_{\mu g_2}\left(\gamma_k - \mu\, G_{1/\mu}^{f,\ g_1\circ D_2}(\gamma_k)\right), \quad (15)$$
for constants $\mu$ and $t$ that will be specified shortly. This expression, however, requires the computation of the prox of $g_1\circ D_2$, which is problematic as it involves a composite term^6. To circumvent this difficulty, we propose the following approximation in terms of $G_{1/t}^{\tilde f, g_1}$, where $\tilde f(z) = \tfrac{1}{2}\|y - D_1 z\|_2^2$. In the spirit of the chain rule^7, we modify the previous update to
$$\gamma_{k+1} = \text{prox}_{\mu g_2}\left(\gamma_k - \mu\, D_2^\top\, G_{1/t}^{\tilde f, g_1}(D_2\gamma_k)\right). \quad (16)$$

^6 The proximal operator of a composition with an affine map is only available in closed form for unitary linear transformations. See [3, Chapter 10] and [13] for further details.
^7 The step taken to arrive at Equation (16) is not actually the chain rule, as the gradient mapping is not necessarily the gradient of a smooth function.

Importantly, the above update step now involves the prox of $g_1$ as opposed to that of $g_1\circ D_2$. This way, in the case of $g_2(\gamma) = \lambda_2\|\gamma\|_1$, the proximal mapping of $\mu g_2$ becomes the soft-thresholding operator with parameter $\mu\lambda_2$, i.e., $S_{\mu\lambda_2}(\cdot)$. An analogous operator is obtained for $g_1$ just as well. Therefore, the proposed Multi-Layer ISTA update can be concisely written as
$$\gamma_{k+1} = S_{\mu\lambda_2}\left(\gamma_k - \tfrac{\mu}{t}\, D_2^\top\left(D_2\gamma_k - S_{t\lambda_1}\left(D_2\gamma_k - t\, D_1^\top\left(D_1 D_2\gamma_k - y\right)\right)\right)\right). \quad (17)$$
A few comments are in order. First, this algorithm results in a nested series of shrinkage operators, involving only matrix-vector multiplications and entrywise nonlinear operations. Note that if $D_2 = I$, i.e., in the case of a traditional Basis Pursuit problem, the update above reduces to that of ISTA. Second, though seemingly complicated at first sight, the resulting operator in (17) can be decomposed into simple recursive layer-wise operations, as presented in Algorithm 2. Lastly, because the above update provides a multi-layer extension of ISTA, one can naturally suggest a "fast version" of it by including a momentum term, just as done by FISTA. In other words, ML-FISTA will be given by the iterations
$$\gamma_{k+1} = S_{\mu\lambda_2}\left(z_k - \tfrac{\mu}{t}\, D_2^\top\left(D_2 z_k - S_{t\lambda_1}\left(D_2 z_k - t\, D_1^\top\left(D_1 D_2 z_k - y\right)\right)\right)\right), \quad (18)$$
$$z_{k+1} = \gamma_{k+1} + \tfrac{t_k - 1}{t_{k+1}}\left(\gamma_{k+1} - \gamma_k\right), \quad (19)$$
where $z_0 = \gamma_0$, and the momentum parameter is updated according to $t_{k+1} = \tfrac{1 + \sqrt{1 + 4t_k^2}}{2}$. Clearly, one can also write this algorithm in terms of layer-wise operations, as described in Algorithm 3.
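In NumPy, the two-layer update (17) amounts to a handful of matrix-vector products and two shrinkages per iteration. The following is a sketch under our reading of the update; the step sizes $t = 1/\|D_1\|_2^2$ and $\mu = t/\|D_2\|_2^2$ used in the example are conservative choices of ours, not prescribed by the paper.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ml_ista(y, D1, D2, lam1, lam2, mu, t, n_iter=1000):
    # Two-layer ML-ISTA for
    #   min 0.5*||y - D1 D2 g||^2 + lam1*||D2 g||_1 + lam2*||g||_1.
    g = np.zeros(D2.shape[1])
    for _ in range(n_iter):
        g1 = D2 @ g
        # Shrinkage at the forward-step point of the first layer.
        s = soft_threshold(g1 - t * D1.T @ (D1 @ g1 - y), t * lam1)
        # Gradient mapping of the first layer, then the outer shrinkage.
        g = soft_threshold(g - (mu / t) * D2.T @ (g1 - s), mu * lam2)
    return g
```

With both dictionaries normalized to unit spectral norm, unit step sizes are admissible and the data term decreases steadily; with the penalties set to zero the iteration collapses to plain gradient descent on the factorized quadratic.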
2.3 Convergence Analysis of ML-ISTA
One can then inquire: does the update in Equation (17) provide a convergent algorithm? Do the successive iterations minimize the original loss function? Though these questions have been extensively studied for proximal gradient methods [4, 13], algorithms based on a proximal gradient mapping have never been proposed, let alone analyzed. Herein we intend to provide a first theoretical analysis of the resulting multi-layer thresholding approaches.

Let us formalize the problem assumptions, recalling that we are interested in
$$\min_{\gamma} \ F(\gamma) = f(\gamma) + g_1(D_2\gamma) + g_2(\gamma), \quad (20)$$
where $f$ is a quadratic convex function, $g_1$ is a convex and Lipschitz continuous function with constant $\ell_{g_1}$, and $g_2$ is a proper closed and convex function that is Lipschitz continuous over its domain. Naturally, we will assume that both $g_1$ and $g_2$ are proximable, in the sense that their proximal operators can be efficiently computed at any point. We will further require that $g_2$ has a bounded domain^8; more precisely, $\text{dom}(g_2) \subseteq B[0, R]$ for some $R > 0$.

^8 This can be easily accommodated by adding to $g_2$ a norm bound constraint in the form of an indicator function $\delta_{B[0,R]}$, for some large enough $R$.

Note also that the convex and smooth function $f$ can be expressed as $f(\gamma) = \tfrac{1}{2}\gamma^\top A\gamma + b^\top\gamma + c$, for a positive semidefinite matrix $A$. The gradient of $f$ can then easily be bounded by
$$\|\nabla f(\gamma)\|_2 = \|A\gamma + b\|_2 \le \|A\|_2 R + \|b\|_2, \quad (21)$$
for any $\gamma$ with $\|\gamma\|_2 \le R$.
It is easy to see, by studying its fixed points, that the algorithm in Equation (16) does not converge to the minimizer of problem (20). Writing $\tilde f(z) = \tfrac{1}{2}\|y - D_1 z\|_2^2$, a point $\gamma^*$ is a fixed point of ML-ISTA if
$$D_2^\top\left(\nabla\tilde f(D_2\gamma^*) + v\right) + u = 0, \quad (22)$$
$$v \in \partial g_1(z^*), \ \ u \in \partial g_2(\gamma^*), \quad (23)$$
where $z^* = \text{prox}_{t g_1}\left(D_2\gamma^* - t\,\nabla\tilde f(D_2\gamma^*)\right)$. We expand on the derivation of this condition in Section 5.1. This is clearly different from the optimality condition for problem (20), which reads
$$D_2^\top\left(\nabla\tilde f(D_2\gamma^*) + v\right) + u = 0, \quad (24)$$
$$v \in \partial g_1(D_2\gamma^*), \ \ u \in \partial g_2(\gamma^*), \quad (25)$$
the difference being that the subgradient of $g_1$ is evaluated at $z^*$ in the former and at $D_2\gamma^*$ in the latter.
Nonetheless, we will show that the parameter $t$ controls the proximity of "near"-fixed points to the optimal solution in the following sense: as $t$ gets smaller, the objective value at a fixed point of the ML-ISTA method gets closer to $F_{\text{opt}}$, the minimal value of $F$. In addition, for points that satisfy the fixed-point equation up to some tolerance $\epsilon$, the distance to optimality in terms of the objective function is controlled by both $t$ and $\epsilon$.
We first present the following lemma, stating that the norm of the gradient mapping operator is bounded. For clarity, we defer the proof of this lemma, as well as that of the following theorem, to Section 5.
Lemma 2.1.
For any $z$ and $t > 0$,
$$\left\|G_{1/t}^{\tilde f, g_1}(z)\right\|_2 \le \|\nabla\tilde f(z)\|_2 + \ell_{g_1}. \quad (26)$$
We now move to our main convergence result. In a nutshell, it states that the distance from optimality, in terms of the objective value at fixed points, is bounded by constants multiplying $\mu$ and $t$.
Theorem 2.2.
Let $\mu > 0$ and $t > 0$, and suppose $\gamma^*$ satisfies the fixed-point condition of ML-ISTA up to a tolerance $\epsilon$ (here $\|M\|_2$ denotes the spectral norm of a matrix $M$, $\|M\|_2 = \sqrt{\lambda_{\max}(M^\top M)}$, with $\lambda_{\max}$ standing for the maximal eigenvalue of its argument). If
(27)
then
(28) 
where ,
(29) 
and
(30)  
(31)  
(32)  
(33) 
A consequence of this result is the following.
Corollary 2.2.1.
Suppose $\{\gamma_k\}$ is the sequence generated by ML-ISTA with step sizes $\mu$ and $t$. If $\gamma_k \to \gamma^*$, then
(34)
where the constants are those given in (30)–(33).
An additional consequence of Theorem 2.2 is an analogous result for ML-FISTA. Recall that ML-FISTA introduces a momentum term into the update provided by ML-ISTA, and can be written as
$$\gamma_{k+1} = S_{\mu\lambda_2}\left(z_k - \tfrac{\mu}{t}\, D_2^\top\left(D_2 z_k - S_{t\lambda_1}\left(D_2 z_k - t\, D_1^\top\left(D_1 D_2 z_k - y\right)\right)\right)\right), \quad (35)$$
$$z_{k+1} = \gamma_{k+1} + \tfrac{t_k - 1}{t_{k+1}}\left(\gamma_{k+1} - \gamma_k\right). \quad (36)$$
We have the following result.
Corollary 2.2.2.
Before moving on, let us comment on the significance of these results. On the one hand, we are unaware of any prior results for proximal gradient-mapping algorithms, and in this sense, the analysis above presents a first result of its kind. On the other hand, the analysis does not provide a convergence rate, and so it does not reflect any benefits of ML-FISTA over ML-ISTA. As we will see shortly, the empirical convergence of these methods differs significantly in practice.
2.4 Synthetic Experiments
We now carry out a series of synthetic experiments to demonstrate the effectiveness of the multi-layer Basis Pursuit problem, as well as of the proposed iterative shrinkage methods.
First, we would like to illustrate the benefit of the proposed multi-layer BP formulation when compared to the traditional sparse regression problem; in other words, to explore the benefit of having $\lambda_1 > 0$. To this end, we construct a two-layer model with Gaussian matrices $D_1$ and $D_2$. We construct our signals by sampling sparse representations following the procedure described in [1], so as to be consistent with the multi-layer sparse model. Lastly, we contaminate the signals with Gaussian i.i.d. noise, creating the measurements $y$ with SNR = 10. We compare minimizing (6) with $\lambda_1 = 0$ (which amounts to solving a classical BP problem) against the case $\lambda_1 > 0$, as a function of $\lambda_2$, when solved with ISTA and ML-ISTA. As can be seen from the results in Figure 1, enforcing the additional analysis penalty on the intermediate representation can indeed prove advantageous and provide a lower recovery error in both representations. Clearly, this is not true for any value of $\lambda_1$: the larger this parameter becomes, the larger the bias of the resulting estimate. For the sake of this demonstration we have set $\lambda_1$ to the optimal value for each $\lambda_2$ (with grid search). The theoretical study of the conditions (in terms of the model parameters) under which $\lambda_1 > 0$ provides a better recovery, and how to determine this parameter in practice, are interesting questions that we defer to future work.
We also employ this synthetic setup to illustrate the convergence properties of the main algorithms presented above: ADMM (employing either ISTA or FISTA for the inner BP problem), S-FISTA [5], and the proposed Multi-Layer ISTA and Multi-Layer FISTA. Once again, we run these algorithms with the optimal choice of $\lambda_1$ and $\lambda_2$ from the previous experiment, and present the results in Figure 2. We measure convergence in terms of function value for all methods, as well as the distance to the solution found by running ADMM until convergence.
ADMM converges in relatively few iterations, though each iteration takes a relatively long time due to the inner BP solver. This time is reduced when using FISTA rather than ISTA (as the inner solver converges faster), but it is still significantly slower than any of the other alternatives, even when using a warm start at every iteration, which we employ in these experiments.
Both S-FISTA and our multi-layer solvers depend on a parameter that controls the accuracy of their solution and affects their convergence speed: the smoothing parameter $\epsilon$ for the former, and $t$ for the latter approaches. We set these parameters so as to obtain roughly the same accuracy in terms of recovery error, and compare their convergence behavior. We can see that ML-FISTA is clearly faster than ML-ISTA, and slightly faster than S-FISTA. Lastly, in order to demonstrate the effect of $t$ in ML-ISTA and ML-FISTA, we run the same algorithms for different values of this parameter (in decreasing order, equispaced in logarithmic scale up to 1) in the same setting, and present the results in Figure 3 for ML-ISTA (top) and ML-FISTA (bottom). These numerical results illustrate the theoretical analysis of Theorem 2.2: the smaller $t$, the more accurate the solution becomes, albeit requiring more iterations to converge. These results also reflect the limitation of our current theoretical analysis, which is incapable of providing insights into the convergence rate.
3 Principled Recurrent Neural Networks
As seen above, the ML-ISTA and ML-FISTA schemes provide efficient solvers for problem (6). Interestingly, if one considers the first iteration of either algorithm (with $\gamma_0 = 0$ and unit step sizes), the update of the innermost representation results in
$$\hat\gamma_2 = S_{\lambda_2}\left(D_2^\top\, S_{\lambda_1}\left(D_1^\top y\right)\right) \quad (39)$$
for a two-layer model, for instance. If one further imposes a non-negativity assumption on the representation coefficients, the thresholding operators become non-negative projections shifted by a bias. Therefore, the above soft-thresholding operation can be equivalently written as
$$\hat\gamma_2 = \text{ReLU}\left(D_2^\top\, \text{ReLU}\left(D_1^\top y + b_1\right) + b_2\right), \quad (40)$$
where the bias vectors $b_1$ and $b_2$ account for the corresponding thresholds^10. Just as pointed out in [34], this is simply the forward pass of a neural network. Moreover, all the analysis presented above also holds in the case of convolutional dictionaries, where the dictionary atoms are nothing but (transposed) convolutional filters in a convolutional neural network. Could we benefit from this observation to improve on the performance of CNNs?

^10 Note that this expression is more general in that it allows for a different threshold per atom, as opposed to the expression in (39). The latter can be recovered by setting every entry of the bias vector $b_i$ to $-\lambda_i$.

In this section, we intend to demonstrate how, by interpreting neural networks as approximation algorithms for the solution of a Multi-Layer BP problem, one can boost the performance of typical CNNs without introducing any extra parameters in the model. To this end, we first impose a generative model on the features in terms of multi-layer sparse convolutional representations; i.e., we assume that the features follow the multi-layer model for convolutional dictionaries $D_i$. Furthermore, we adopt a supervised learning setting in which we attempt to minimize an empirical risk over training samples of signals $y_j$ with labels $h_j$. A classifier $\zeta_\theta$, with parameters $\theta$, will be trained on estimates of said features obtained as the solution of the ML-BP problem; i.e.,
$$\min_{\theta,\, \{D_i, \lambda_i\}} \ \sum_j \mathcal{L}\left(h_j, \zeta_\theta(\gamma_j^*)\right) \ \ \text{s.t.} \ \ \gamma_j^* = \arg\min_{\gamma} \ \tfrac{1}{2}\|y_j - D_{(1,2)}\gamma\|_2^2 + \lambda_1\|D_2\gamma\|_1 + \lambda_2\|\gamma\|_1. \quad (41)$$
The function $\mathcal{L}$ is a loss or cost function to be minimized during training, such as the cross-entropy, which we employ for classification. Our approach to this bilevel optimization problem is to approximate the solution of the lower-level problem by iterations of the ML-ISTA approaches, effectively implemented as layers of unfolded recurrent neural networks. This way, $\gamma_j^*$ becomes a straightforward function of $y_j$ and the model parameters ($D_i$ and $\lambda_i$), which can be plugged into the loss function $\mathcal{L}$. A similar approach is employed by Task-Driven Dictionary Learning [29], in which the constraint is a single-layer BP that is solved with LARS [18] until convergence, resulting in a more involved algorithm.
Importantly, if only one iteration is employed for ML-ISTA, and a linear classifier^11 is chosen for $\zeta_\theta$, the problem in (41) boils down exactly to training a CNN to minimize the classification loss $\mathcal{L}$. Naturally, when considering further iterations of the multi-layer pursuit, one is effectively implementing a recurrent neural network with "skip connections", as depicted in Figure 4 for a two-layer model. These extended networks, which can become very deep, have exactly as many parameters as their traditional forward-pass counterparts, namely the dictionaries $D_i$, biases $b_i$ and classifier parameters $\theta$. Notably, and unlike other popular constructions in the deep learning community (e.g., Residual Neural Networks [22], DenseNet [25], and other similar constructions), these recurrent components and connections follow a precise optimization justification.

^11 Or, in fact, any other neural-network-based classifier acting on the obtained features.
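The equivalence between one (non-negative) ML-ISTA iteration and a forward pass is easy to state in code. The sketch below is our own illustration: the non-negative soft-threshold is exactly a ReLU shifted by a bias, so the first iteration from $\gamma^0 = 0$ with unit step sizes is a standard two-layer forward pass with weights $W_i = D_i^\top$:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def nonneg_soft_threshold(z, b):
    # Prox of the l1 norm restricted to the non-negative orthant:
    # identical to a ReLU shifted by the (per-atom) threshold b.
    return np.maximum(z - b, 0.0)

def forward_pass(y, D1, D2, b1, b2):
    # First ML-ISTA iteration from gamma = 0 with unit step sizes.
    g1 = relu(D1.T @ y - b1)    # layer 1: weights D1^T, bias -b1
    g2 = relu(D2.T @ g1 - b2)   # layer 2: weights D2^T, bias -b2
    return g2
```

Subsequent iterations reuse exactly the same operators $D_i$ and biases, which is why the unfolded network adds depth but no parameters.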
The concept of unfolding an iterative sparse coding algorithm is clearly not new. The first instance of this idea was formalized by the Learned ISTA (LISTA) approach [20]. LISTA decomposes the linear operations of ISTA in terms of two matrices, replacing the computation of $S_{\lambda/L}\left(\gamma_k - \tfrac{1}{L}D^\top(D\gamma_k - y)\right)$ by
$$\gamma_{k+1} = S_{\theta}\left(W y + S\gamma_k\right), \quad (42)$$
following the equivalences $W = \tfrac{1}{L}D^\top$ and $S = I - \tfrac{1}{L}D^\top D$. Then, it adaptively learns these new operators instead of the initial dictionary, in order to provide estimates that approximate the solution of ISTA. Interestingly, such a decomposition allows for the acceleration of ISTA [32], providing an accurate estimate in very few iterations. A natural question is, then: could we propose an analogous multi-layer Learned ISTA?
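In code, the LISTA recurrence coincides exactly with ISTA when the two operators are initialized from the dictionary; LISTA then frees them to be learned. A minimal check of this equivalence (random data of our choosing):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lista(y, W, S, theta, n_iter=50):
    # LISTA recurrence: x_{k+1} = S_theta(W @ y + S @ x_k).
    # With W = D^T / L and S = I - D^T D / L this reproduces ISTA exactly;
    # LISTA instead learns W, S and theta from data.
    x = np.zeros(S.shape[0])
    for _ in range(n_iter):
        x = soft_threshold(W @ y + S @ x, theta)
    return x
```

Initializing `W` and `S` from a dictionary and running plain ISTA side by side produces the same iterates, which is the starting point for learning these operators.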
There are two main issues to resolve if one is to propose a LISTA-like decomposition in the framework of our multi-layer pursuits. The first is that the decomposition in (42) has been proposed and analyzed for general matrices (i.e., fully-connected layers in a CNN context), but not for convolutional dictionaries. If one were to naively learn such an (unconstrained) operator $S$, this would result in an enormous number of added parameters. To resolve this point, in the case where $D$ is a convolutional dictionary (as in CNNs), we propose a decomposition of the form
(43)
where the new factors are also constrained to be convolutional, thus controlling the number of parameters^12. In fact, a layer of this ML-LISTA has simply twice as many parameters as the conventional case, since the number of convolutional filters in the learned factors (and their dimensions) are equal to those in $D$.

^12 For completeness, we have also tested the traditional decomposition proposed in Equation (42), resulting in worse performance than that of ML-ISTA – likely due to the significant increase in the number of parameters discussed above.
The second issue is that LISTA was proposed as a relaxation of ISTA – an algorithm tackling a single-layer pursuit problem. To accommodate a similar decomposition in our multi-layer setting, we naturally extend the update to employ the learned factors at every layer:
(44)
(45)
for a two-layer model, for simplicity. In the context of the supervised classification setting, the learning of the dictionaries is replaced by the learning of the factors of each layer. Note that this decomposition prevents us from recovering the dictionaries $D_i$ explicitly, and so we use the corresponding learned factors in Equation (44)^13.

^13 An alternative is to employ the remaining factors, but this choice was shown to perform slightly worse in practice.
4 Experiments
In this final section, we show how the presented algorithms can be used for image classification on three common datasets – MNIST, SVHN and CIFAR-10 – improving the performance of CNNs without introducing any extra parameters in the model. Recalling the learning formulation in Equation (41), we compare different architectures resulting from different solvers for the features $\gamma^*$. As employing only one iteration of the proposed algorithms recovers a traditional feed-forward network, we use such a basic architecture as our baseline and compare it with Multi-Layer ISTA and FISTA for different numbers of iterations, or unfoldings. Also for this reason, we deliberately avoid training "tricks" popular in the deep learning community (batch normalization, dropout, etc.), so as to provide clear experimental setups that facilitate the understanding and demonstration of the presented ideas.

For the MNIST case, we construct a standard (LeNet-style) CNN with 3 convolutional layers (i.e., dictionaries) with 32, 64 and 512 filters, respectively^14, and a final fully-connected layer as the classifier $\zeta_\theta$. We also enforce non-negativity constraints on the representations, resulting in the application of ReLUs with biases as shrinkage operators. For SVHN we use an analogous model, though with three input channels and slightly larger filters to accommodate the larger input size. For CIFAR, we define an ML-CSC model with 3 convolutional layers, and the classifier function $\zeta_\theta$ as a three-layer CNN. This effectively results in a 6-layer architecture, out of which the first three layers are unfolded in the context of the multi-layer pursuits. All models are trained with SGD with momentum, decreasing the learning rate every so many iterations. In particular, we make use of a PyTorch implementation, and the training code is made available online^15.

^14 With a stride of 2 in the first two layers.
^15 Available through the first author's website.

In order to demonstrate the effect of the ML-ISTA iterations (or unfoldings), we first depict the test error as a function of the training epochs for different numbers of such iterations in Figure 5. Recall that the case of 0 unfoldings corresponds to the typical feed-forward CNN, while the case of 6 unfoldings effectively implements an 18-layers-deep architecture, albeit with the same number of parameters. As can be seen, further unfoldings improve the resulting performance.

Moving to a more complete comparison, we evaluate the ML-ISTA and ML-FISTA architectures against some of the models mentioned above; namely:

ML-LISTA: replacing the learning of the convolutional dictionaries (or filters) by the learning of the pairs of (convolutional) factors, as indicated in Equation (44).

Layered Basis Pursuit: the approach proposed in [34], which unrolls the iterations of ISTA for a single-layer BP problem at each layer. In contrast, the proposed ML-ISTA/FISTA unrolls the iterations of the entire Multi-Layer BP problem.

An "All-Free" model: what if one ignores the generative model (and the corresponding pursuit interpretation) and simply frees all the filters to be adaptively learned? In order to study this question, we train a model with the same depth and an analogous recurrent architecture as the unfolded ML-ISTA/FISTA networks, but where all the filters of the different layers are free to be learned so as to provide the best possible performance.
It is worth stressing that ML-ISTA, ML-FISTA and Layered BP all have the same number of parameters as the feed-forward CNN. The ML-LISTA version has twice as many parameters, while the All-Free version has many times more, growing with both the number of layers and the number of unfoldings.
The accuracy as a function of the iterations for all models is presented in Figure 6, and the final results are detailed in Table I. A first observation is that most "unrolled" networks provide an improvement over the baseline feed-forward architecture. Second, while the Layered BP performs very well on MNIST, it falls behind on the other two, more challenging, datasets. Recall that while this approach unfolds the iterations of a pursuit, it does so one layer at a time, and does not address a global pursuit problem like the one we explore in this work.
Third, the performances of ML-ISTA, ML-FISTA and ML-LISTA are comparable. This is interesting, as the LISTA-type decomposition does not seem to provide an important advantage over the unrolled multi-layer pursuits. Fourth, and most important of all, freeing all the parameters in the architecture does not provide significant improvements over the ML-ISTA networks. Limited training data is not likely to be the cause, as ML-ISTA/FISTA outperforms the larger model even on CIFAR, which enjoys rich variability in the data, and while using data augmentation. This is noteworthy, and seems to indicate that the consideration of the multi-layer sparse model, and the resulting pursuit, does indeed provide an (approximate) solution to the problem behind CNNs.
Model        | MNIST   | SVHN    | CIFAR-10
Feed-Forward | 98.78 % | 92.44 % | 79.00 %
Layered BP   | 99.19 % | 93.42 % | 80.73 %
ML-ISTA      | 99.10 % | 93.52 % | 82.93 %
ML-FISTA     | 99.16 % | 93.79 % | 82.79 %
ML-LISTA     | 98.81 % | 93.71 % | 82.68 %
All-Free     | 98.89 % | 94.06 % | 81.48 %
5 Proofs of Main Theorems
5.1 Fixed Point Analysis
A vector $\gamma^*$ is a fixed point of the ML-ISTA update in Equation (16) iff
$$\gamma^* = \text{prox}_{\mu g_2}\left(\gamma^* - \mu\, D_2^\top\, G_{1/t}^{\tilde f, g_1}(D_2\gamma^*)\right), \quad (46)$$
where $\tilde f(z) = \tfrac{1}{2}\|y - D_1 z\|_2^2$. By the second prox theorem [3, Theorem 6.39], we have that
$$-\mu\, D_2^\top\, G_{1/t}^{\tilde f, g_1}(D_2\gamma^*) \in \mu\, \partial g_2(\gamma^*), \quad (47)$$
or, equivalently, there exists $u \in \partial g_2(\gamma^*)$ so that
$$D_2^\top\, G_{1/t}^{\tilde f, g_1}(D_2\gamma^*) + u = 0. \quad (48)$$
Employing the definition of the gradient mapping,
$$\tfrac{1}{t}\, D_2^\top\left(D_2\gamma^* - \text{prox}_{t g_1}\left(D_2\gamma^* - t\,\nabla\tilde f(D_2\gamma^*)\right)\right) + u = 0. \quad (49)$$
Next, denote
$$z^* = \text{prox}_{t g_1}\left(D_2\gamma^* - t\,\nabla\tilde f(D_2\gamma^*)\right). \quad (50)$$
Employing the second prox theorem on (50), the above is equivalent to the existence of $v \in \partial g_1(z^*)$ for which $z^* = D_2\gamma^* - t\,\nabla\tilde f(D_2\gamma^*) - t\, v$. Thus, (49) amounts to
$$D_2^\top\left(\nabla\tilde f(D_2\gamma^*) + v\right) + u = 0, \quad (51)$$
for some $v \in \partial g_1(z^*)$ and $u \in \partial g_2(\gamma^*)$.