# Learning Stable Multilevel Dictionaries for Sparse Representations

Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The availability of abundant training data necessitates the development of efficient, robust and provably good dictionary learning algorithms. Algorithmic stability and generalization are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, in order to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.


## I Introduction

### I-A Dictionary Learning for Sparse Representations

Several types of naturally occurring data have most of their energy concentrated in a small number of features when represented using a linear model. In particular, it has been shown that the statistical structure of naturally occurring signals and images allows for their efficient representation as a sparse linear combination of elementary features [1]. A finite collection of normalized features is referred to as a dictionary. The linear model used for general sparse coding is given by

 y = Ψa + n, (1)

where y ∈ R^M is the data vector and Ψ ∈ R^{M×K} is the dictionary. Each column of the dictionary, referred to as an atom, is a representative pattern normalized to unit norm. a ∈ R^K is the sparse coefficient vector, and n ∈ R^M is a noise vector whose elements are independent realizations from the Gaussian distribution N(0, σ²).

The sparse coding problem is usually solved as

 â = argmin_a ∥a∥_0 subject to ∥y − Ψa∥_2^2 ≤ ϵ, (2)

where ∥·∥_0 indicates the sparsity measure which counts the number of non-zero elements, ∥·∥_2 denotes the ℓ2 norm, and ϵ is the error goal for the representation. The ℓ1 norm, denoted by ∥·∥_1, can be used instead of the ∥·∥_0 measure to convexify (2). A variety of methods can be found in the literature to obtain sparse representations efficiently [2, 3, 4, 5]. The sparse coding model has been successfully used for inverse problems in images [6], and also in machine learning applications such as classification, clustering, and subspace learning, to name a few [7, 8, 9, 10, 11, 12, 13, 14, 15, 16].

The dictionary used in (2) can be obtained from predefined bases, designed from a union of orthonormal bases [17], or structured as an overcomplete set of individual vectors optimized to the data [18]. A wide range of batch and online dictionary learning algorithms have been proposed in the literature [19, 20, 21, 22, 23, 24, 25, 26, 27], some of which are tailored for specific tasks. The conditions under which a dictionary can be identified from the training data using an ℓ1 minimization approach are derived in [28]. The joint optimization problem for dictionary learning and sparse coding can be expressed as [6]

 min_{Ψ,A} ∥Y − ΨA∥_F^2 subject to ∥a_i∥_0 ≤ S, ∀i, ∥ψ_j∥_2 = 1, ∀j, (3)

where Y = [y_1 … y_T] is the matrix of T training vectors, A = [a_1 … a_T] is the coefficient matrix, S is the sparsity of each coefficient vector, and ∥·∥_F denotes the Frobenius norm.
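The sparse coding step in (3), with the dictionary held fixed, is commonly solved greedily. The following is a minimal NumPy sketch of orthogonal matching pursuit with a fixed target sparsity `S`; the function name and stopping rule are our own simplifications, not notation from the paper.

```python
import numpy as np

def omp(y, Psi, S):
    """Greedy S-sparse coding of y in the dictionary Psi (columns unit norm).

    Illustrative sketch only; production code would use an optimized solver.
    """
    M, K = Psi.shape
    residual = y.copy()
    support = []
    a = np.zeros(K)
    for _ in range(S):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(Psi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the currently selected atoms
        coeffs, *_ = np.linalg.lstsq(Psi[:, support], y, rcond=None)
        a[:] = 0.0
        a[support] = coeffs
        residual = y - Psi @ a
    return a
```

With an orthonormal dictionary the procedure recovers an S-sparse vector exactly, which makes it easy to sanity-check.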

### I-B Multilevel Learning

In this paper, we propose a hierarchical multilevel dictionary learning algorithm that is implicitly regularized to aid in sparse approximation of data. The proposed multilevel dictionary (MLD) learning algorithm is geared towards obtaining global dictionaries for the entire probability space of the data, which are provably stable and generalizable to novel test data. In addition, our algorithm involves simple schemes for learning and representation: a 1-D subspace clustering algorithm (K-hyperline clustering [29]) is used to infer atoms in each level, and sparse representations are obtained in each level using a pursuit scheme that employs just correlate-and-max operations. In summary, the algorithm creates a sub-dictionary for each level and obtains a residual which is used as the training data for the next level, and this process is continued until a pre-defined stopping criterion is reached.

The primary utility of sparse models with learned dictionaries in data processing and machine learning applications stems from the fact that the dictionary atoms serve as predictive features, capable of providing a good representation for some aspect of the test data. From the viewpoint of statistical learning theory [30], a good predictive model is one that is stable and generalizable, and MLD learning satisfies both these properties. To the best of our knowledge, there is no other dictionary learning method which has been proven to satisfy these properties. Generalization ensures that the learned dictionary can successfully represent test data drawn from the same probability space as the training data, and stability guarantees that it is possible to reliably learn such a dictionary from an arbitrary training set. In other words, the asymptotic stability and generalization of MLD provide theoretical justification for the uniformly good performance of global multilevel dictionaries. We can minimize the risk of overfitting further by choosing a proper model order. We propose a method based on the minimum description length (MDL) principle [31] to choose the optimal model order, which in our case corresponds to the number of dictionary elements in each level. Recently, other approaches have been proposed to choose the best order for a given sparse model using MDL [27], so that the generalization error is minimized. However, the difference in our case is that, in addition to optimizing the model order for a given training set using MDL, we prove that any dictionary learned using MLD is generalizable and stable. Since both generalization and stability are asymptotic properties, we also propose a robust variant of our MLD algorithm using randomized ensemble methods, to obtain an improved performance with test data. Note that our goal is not to obtain dictionaries optimized for a specific task [24], but to propose a general predictive sparse modeling framework that can be suitably adapted for any task.

The dictionary atoms in MLD are structurally regularized, and therefore the hierarchy in representation is imposed implicitly for the novel test data, leading to improved recovery in ill-posed and noise-corrupted problems. Considering dictionary learning with image patches as an example, in MLD the predominant atoms in the first few levels (see Figure 1) always contribute the highest energy to the representation. For natural image data, it is known that the patches are comprised of geometric patterns or stochastic textures or a combination of both [32]. Since the geometric patterns usually are of higher energy when compared to stochastic textures in images, MLD learns the geometric patterns in the first few levels and stochastic textures in the last few levels, thereby adhering to the natural hierarchy in image data. The hierarchical multistage vector quantization (MVQ) [33] is related to MLD learning. The important difference, however, is that dictionaries obtained for sparse representations must assume that the data lies in a union-of-subspaces, and the MVQ does not incorporate this assumption. Note that multilevel learning is also different from the work in [34], where multiple sub-dictionaries are designed and one of them is chosen for representing a group of patches.

### I-C Stability and Generalization in Learning

A learning algorithm is a map from the space of training examples to the hypothesis space of functional solutions. In clustering, the learned function is completely characterized by the cluster centers. Stability of a clustering algorithm implies that the cluster centroids learned by the algorithm are not significantly different when different sets of i.i.d. samples from the same probability space are used for training [35]. When there is a unique minimizer to the clustering objective with respect to the underlying data distribution, stability of a clustering algorithm is guaranteed [36], and this analysis has been extended to characterize the stability of K-means clustering in terms of the number of minimizers [37]. In [38], the stability properties of the K-hyperline clustering algorithm have been analyzed and shown to be similar to those of K-means clustering. Note that all the stability characterizations depend only on the underlying data distribution and the number of clusters, and not on the actual training data itself. Generalization implies that the average empirical training error becomes asymptotically close to the expected error with respect to the probability space of data. In [39], the generalization bound for sparse coding in terms of the number of samples T, also referred to as sample complexity, is derived, and in [40] the bound is improved by assuming a class of dictionaries that are nearly orthogonal. Clustering algorithms such as K-means and K-hyperline can be obtained by constraining the desired sparsity in (3) to be 1. Since the stability characteristics of clustering algorithms are well understood, employing similar tools to analyze a general dictionary learning framework such as MLD can be beneficial.

### I-D Contributions

In this paper, we propose the MLD learning algorithm to design global representative dictionaries for image patches. We show that, for a sufficient number of levels, the proposed algorithm converges, and also demonstrate that a multilevel dictionary with a sufficient number of atoms per level exhibits energy hierarchy (Section III-B). Furthermore, in order to estimate the number of atoms in each level of MLD, we provide an information-theoretic approach based on the MDL principle (Section III-C). In order to compute sparse codes for test data using the proposed dictionary, we develop the simple Multilevel Pursuit (MulP) procedure and quantify its computational complexity (Section III-D). We also propose a method to obtain robust dictionaries with limited training data using ensemble methods (Section III-E). Some preliminary algorithmic details and results obtained using MLD have been reported in [41].

Using the fact that the K-hyperline clustering algorithm is stable, we perform stability analysis of the MLD algorithm. For any two sets of i.i.d. training samples from the same probability space, we show that the learned dictionaries become close to each other asymptotically, as the number of training samples T → ∞. When there is a unique minimizer to the objective in each level of learning, this holds true even if the training sets are completely disjoint. However, when there are multiple minimizers for the objective in at least one level, we prove that the learned dictionaries are asymptotically close when the difference between their corresponding training sets is o(√T). Instability of the algorithm when the difference between two training sets is Ω(√T) is also shown for the case of multiple minimizers (Section IV-C). Furthermore, we prove the asymptotic generalization of the learning algorithm (Section IV-D).

In addition to demonstrating the stability and the generalization behavior of MLD learning with image data (Sections V-A and V-B), we evaluate its performance in compressed recovery of images (Section V-C). Due to its theoretical guarantees, the proposed MLD effectively recovers novel test images from severe degradation (random projections). Interestingly, the proposed greedy pursuit with robust multilevel dictionaries results in improved recovery performance when compared to ℓ1 minimization with online dictionaries, particularly at a reduced number of measurements and in the presence of noise. Furthermore, we perform subspace learning with graphs constructed using sparse codes from MLD and evaluate its performance in classification (Section V-D). We show that the proposed approach outperforms subspace learning with neighborhood graphs as well as graphs based on sparse codes from conventional dictionaries.

## II Background

In this section, we describe the K-hyperline clustering, a 1-D subspace clustering procedure proposed in [29], which forms a building block of the proposed dictionary learning algorithm. Furthermore, we briefly discuss the results for the stability analysis of the K-means and K-hyperline algorithms reported in [35] and [38], respectively. The ideas described in this section will be used in Section IV to study the stability characteristics of the proposed dictionary learning procedure.

### II-A K-hyperline Clustering Algorithm

The K-hyperline clustering algorithm is an iterative procedure that performs a least squares fit of K 1-D linear subspaces to the training data [29]. Note that K-hyperline clustering is a special case of the general subspace clustering methods proposed in [42, 43], when the subspaces are 1-dimensional and constrained to pass through the origin. In contrast with K-means, K-hyperline clustering allows each data sample to have an arbitrary coefficient value corresponding to the centroid of the cluster it belongs to. Furthermore, the cluster centroids are normalized to unit norm. Given the set of data samples {y_i}_{i=1}^T and the number of clusters K, K-hyperline clustering proceeds in two stages after initialization: cluster assignment and cluster centroid update. In the cluster assignment stage, training vector y_i is assigned to the cluster j that yields the minimum distortion, where the distortion measure is

 d(y, ψ) = ∥y − ψ(y^T ψ)∥_2^2. (4)

In the cluster centroid update stage, we perform a singular value decomposition (SVD) of Y_j, the matrix of training vectors assigned to cluster j. The cluster centroid ψ_j is updated as the left singular vector corresponding to the largest singular value of the decomposition. This can also be computed using a linear iterative procedure. At iteration t+1, the cluster centroid is given by

 ψ_j^{(t+1)} = Y_j Y_j^T ψ_j^{(t)} / ∥Y_j Y_j^T ψ_j^{(t)}∥_2. (5)

Usually a few iterations are sufficient to obtain the centroids with good accuracy.
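The two stages above can be sketched in NumPy as follows. The assignment uses the equivalence between minimizing the distortion (4) and maximizing |y^T ψ|, and the update uses a few power iterations as in (5). The deterministic initialization from the first K samples is an illustrative choice of ours, not part of the algorithm.

```python
import numpy as np

def k_hyperline(Y, K, n_iters=20):
    """Sketch of K-hyperline clustering on data matrix Y (M x T)."""
    # illustrative deterministic initialization: first K samples, normalized
    Psi = Y[:, :K] / np.linalg.norm(Y[:, :K], axis=0)
    for _ in range(n_iters):
        # cluster assignment: each sample goes to its best-aligned line
        labels = np.argmax(np.abs(Psi.T @ Y), axis=0)
        for j in range(K):
            Yj = Y[:, labels == j]
            if Yj.size == 0:
                continue
            # centroid update (5): power iterations converging to the
            # principal left singular vector of Yj
            psi = Psi[:, j]
            for _ in range(3):
                psi = Yj @ (Yj.T @ psi)
                psi = psi / np.linalg.norm(psi)
            Psi[:, j] = psi
    return Psi
```

On data lying exactly on two coordinate lines, the learned centroids align with those lines up to sign, reflecting the 1-D subspace (rather than point) geometry of the clusters.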

### II-B Stability Analysis of Clustering Algorithms

Analyzing the stability of unsupervised clustering algorithms can be valuable in terms of understanding their behavior with respect to perturbations in the training set. These algorithms extract the underlying structure in the training data, and the quality of clustering is determined by an accompanying cost function. As a result, any clustering algorithm can be posed as an Empirical Risk Minimization (ERM) procedure, by defining a hypothesis class of loss functions to evaluate the possible cluster configurations and to measure their quality [44]. For example, K-hyperline clustering can be posed as an ERM problem over the distortion function class

 G_K = { g_Ψ(y) = d(y, ψ_j), j = argmax_{l ∈ {1,…,K}} |y^T ψ_l| }. (6)

The class G_K is constructed with functions corresponding to all possible combinations of K unit length vectors from R^M for the set Ψ. Let us define the probability space for the data in R^M as (Y, Σ, P), where Y is the sample space and Σ is a sigma-algebra on Y, i.e., the collection of subsets of Y over which the probability measure P is defined. The training samples, {y_i}_{i=1}^T, are i.i.d. realizations from this space.

Ideally, we are interested in computing the cluster centroids that minimize the expected distortion E[g_Ψ] with respect to the probability measure P. However, the underlying distribution of the data samples is not known, and hence we resort to minimizing the average empirical distortion with respect to the training samples as

 ĝ_Ψ = argmin_{g_Ψ ∈ G_K} (1/T) Σ_{i=1}^T g_Ψ(y_i). (7)

When the empirical averages of the distortion functions in G_K uniformly converge to the expected values over all probability measures P,

 lim_{T→∞} sup_P P( sup_{g_Ψ ∈ G_K} | E[g_Ψ] − (1/T) Σ_{i=1}^T g_Ψ(y_i) | > δ ) = 0, (8)

for any δ > 0, we refer to the class G_K as uniform Glivenko-Cantelli (uGC). If, in addition, the class also satisfies a version of the central limit theorem, it is defined as uniform Donsker [44]. In order to determine if G_K is uniform Donsker, we have to verify whether the covering number of G_K with respect to the supremum norm, N_∞(γ, G_K), grows polynomially in the dimensions [35]. Here, γ denotes the maximum distance between an arbitrary distortion function in G_K and the function that covers it. For K-hyperline clustering, the covering number is upper bounded by [38, Lemma 2.1]

 N_∞(γ, G_K) ≤ ( (8R^3 K + γ) / γ )^{MK}, (9)

where we assume that the data lies in an M-dimensional ball of radius R centered at the origin. Therefore, G_K belongs to the uniform Donsker class.

Stability implies that the algorithm should produce cluster centroids that are not significantly different when different i.i.d. sets from the same probability space are used for training [36, 37, 35]. Stability is characterized based on the number of minimizers of the clustering objective with respect to the underlying data distribution. A minimizer corresponds to a function g_Ψ with the minimum expectation E[g_Ψ]. Stability analysis of K-means clustering has been reported in [37, 35]. Though the geometry of K-hyperline clustering is different from that of K-means, the stability characteristics of the two algorithms have been found to be similar [38].

Given two sets of cluster centroids Ψ = {ψ_j}_{j=1}^K and Λ = {λ_j}_{j=1}^K, learned from training sets of T i.i.d. samples each, realized from the same probability space, let us define the distance between the clusterings as

 ∥g_Ψ − g_Λ∥_{L1(P)} = ∫ |g_Ψ(y) − g_Λ(y)| dP(y). (10)

When T → ∞ and G_K is uniform Donsker, stability in terms of the distortion functions is expressed as

 ∥g_Ψ − g_Λ∥_{L1(P)} →^P 0, (11)

where →^P denotes convergence in probability. This holds true even for Ψ and Λ learned from completely disjoint training sets, when there is a unique minimizer to the clustering objective. When there are multiple minimizers, (11) holds true with respect to a change of o(√T) samples between two training sets and fails to hold with respect to a change of Ω(√T) samples [38]. The distance between the cluster centroids themselves is defined as [35]

 Δ(Ψ, Λ) = max_{1≤j≤K} min_{1≤l≤K} [ (d(ψ_j, λ_l))^{1/2} + (d(ψ_l, λ_j))^{1/2} ]. (12)
###### Lemma II.1 ([38])

If the distance between the distortion functions for the clusterings Ψ and Λ is bounded as ∥g_Ψ − g_Λ∥_{L1(P)} ≤ μ, for some μ > 0, and the density of the data is bounded below by a constant C > 0, then Δ(Ψ, Λ) ≤ ρ, where

 ρ ≤ 2 sin^{-1}[ (1/r) ( μ / Ĉ_{C,M} )^{1/(M+1)} ]. (13)

Here the training data is assumed to lie outside an M-dimensional ball of radius r centered at the origin, and the constant Ĉ_{C,M} depends only on C and M.

When the clustering algorithm is stable according to (11), for admissible values of μ, Lemma II.1 shows that the cluster centroids become arbitrarily close to each other.
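For concreteness, the centroid-set distance (12) can be evaluated numerically as below. The helper uses the identity d(y, ψ) = ∥y∥² − (yᵀψ)² for unit-norm vectors, which follows from (4); the function names are ours.

```python
import numpy as np

def hyperline_distance(psi, lam):
    # distortion (4) between unit vectors: ||psi - lam (psi^T lam)||^2 = 1 - (psi^T lam)^2
    return 1.0 - float(psi @ lam) ** 2

def delta(Psi, Lam):
    """Set distance (12) between two sets of unit-norm centroids (columns)."""
    K = Psi.shape[1]
    vals = []
    for j in range(K):
        best = min(
            np.sqrt(hyperline_distance(Psi[:, j], Lam[:, l]))
            + np.sqrt(hyperline_distance(Psi[:, l], Lam[:, j]))
            for l in range(K)
        )
        vals.append(best)
    return max(vals)
```

Note that, by construction, the distance vanishes when the two centroid sets agree up to a permutation (or a sign flip of individual centroids), which is the appropriate invariance for 1-D subspace clusters.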

## III Multilevel Dictionary Learning

In this section, we develop the multilevel dictionary learning algorithm, whose algorithmic stability and generalizability will be proved in Section IV. Furthermore, we propose strategies to estimate the number of atoms in each level and make the learning process robust for improved generalization. We also present a simple pursuit scheme to compute representations for novel test data using the MLD.

### III-A Algorithm

We denote the MLD as Ψ = [Ψ_1 Ψ_2 … Ψ_L], and the coefficient matrix as A = [A_1^T A_2^T … A_L^T]^T. Here, Ψ_l is the sub-dictionary and A_l is the coefficient matrix for level l. The approximation in level l is expressed as

 R_{l−1} = Ψ_l A_l + R_l, for l = 1, …, L, (14)

where R_{l−1} and R_l are the residuals for levels l−1 and l respectively, and R_0 = Y, the matrix of training image patches. This implies that the residual matrix in level l serves as the training data for level l+1. Note that the sparsity of the representation in each level is fixed at 1. Hence, the overall approximation for all levels is

 Y = Σ_{l=1}^L Ψ_l A_l + R_L. (15)

MLD learning can be interpreted as a block-based dictionary learning problem with unit sparsity per block, where the sub-dictionary in each block can allow only a 1-sparse representation and each block corresponds to a level. The sub-dictionary for level l, Ψ_l, is the set of cluster centroids learned from the training matrix for that level, R_{l−1}, using K-hyperline clustering. MLD learning can be formally stated as an optimization problem that proceeds from the first level until the stopping criterion is reached. For level l, we solve

 argmin_{Ψ_l, A_l} ∥R_{l−1} − Ψ_l A_l∥_F^2 subject to ∥a_{l,i}∥_0 ≤ 1, for i = 1, …, T, (16)

along with the constraint that the columns of Ψ_l have unit norm, where a_{l,i} is the i-th column of A_l, T is the number of columns in R_{l−1}, and K_l is the number of atoms in Ψ_l. The stopping criterion is provided either by imposing a limit on the residual representation error or on the maximum number of levels (L). Note that the total number of levels is the same as the maximum number of non-zero coefficients (sparsity) of the representation. The error constraint can be stated as ∥r_{L,i}∥_2 ≤ ϵ, ∀i, where r_{L,i} is the i-th column of R_L and ϵ is the error goal.

Table I lists the MLD learning algorithm with a fixed number of atoms per level. At each level, the algorithm collects the indices of the residual vectors whose norm is greater than the error goal; the residual vectors so indexed are stacked into a matrix, which in turn serves as the training matrix for the next level, l+1. In MLD learning, for a given level l, the residual R_l is orthogonal to the representation Ψ_l A_l. This implies that

 ∥r_{l−1,i}∥_2^2 = ∥Ψ_l a_{l,i}∥_2^2 + ∥r_{l,i}∥_2^2. (17)

Combining this with the fact that R_0 = Y, each a_{l,i} is 1-sparse, and the columns of Ψ_l are of unit norm, we obtain the relation

 ∥y_i∥_2^2 = Σ_{l=1}^L ∥a_{l,i}∥_2^2 + ∥r_{L,i}∥_2^2. (18)

Equation (18) states that the energy of any training vector is equal to the sum of squares of its coefficients and the energy of its residual. From (17), we also have that

 ∥R_{l−1}∥_F^2 = ∥Ψ_l A_l∥_F^2 + ∥R_l∥_F^2. (19)

The training vectors for the first level of the algorithm lie in the ambient space R^M, and the residuals R_1 lie in a finite union of subspaces. This is because, for each dictionary atom in the first level, its residual lies in an (M−1)-dimensional space orthogonal to it. In the second level, the dictionary atoms can possibly lie anywhere in R^M, and hence the residuals can lie in a finite union of (M−1)- and (M−2)-dimensional subspaces. Hence, we can generalize that the dictionary atoms for all levels lie in R^M, whereas the training vectors of level l, for l > 1, lie in finite unions of lower-dimensional subspaces of the M-dimensional space.
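The level-by-level procedure in (14)-(16) can be sketched as follows. The K-hyperline step is the 1-D subspace clustering of Section II-A (shown here with an illustrative deterministic initialization), and the helper names are ours, not the paper's.

```python
import numpy as np

def k_hyperline(Y, K, n_iters=10):
    # 1-D subspace clustering (Section II-A); deterministic init for brevity
    Psi = Y[:, :K] / np.linalg.norm(Y[:, :K], axis=0)
    for _ in range(n_iters):
        labels = np.argmax(np.abs(Psi.T @ Y), axis=0)
        for j in range(K):
            Yj = Y[:, labels == j]
            if Yj.size:
                U, _, _ = np.linalg.svd(Yj, full_matrices=False)
                Psi[:, j] = U[:, 0]  # principal left singular vector
    return Psi

def mld_learn(Y, atoms_per_level, L):
    """Sketch of MLD learning: K-hyperline per level with unit sparsity;
    the residual of level l is the training data for level l+1 (cf. (14))."""
    R = Y.astype(float).copy()
    dictionary = []
    for _ in range(L):
        Psi_l = k_hyperline(R, atoms_per_level)
        # 1-sparse code per sample: a single correlate-and-max operation
        C = Psi_l.T @ R
        idx = np.argmax(np.abs(C), axis=0)
        coeff = C[idx, np.arange(R.shape[1])]
        approx = Psi_l[:, idx] * coeff
        R = R - approx  # residual is orthogonal to the level's representation
        dictionary.append(Psi_l)
    return dictionary, R
```

Since each level removes the projection onto one unit-norm atom per sample, the residual energy decreases monotonically across levels, consistent with (19).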

### III-B Convergence

The convergence of MLD learning and the energy hierarchy in the representation obtained using an MLD can be shown by providing two guarantees. The first guarantee is that for a fixed number of atoms per level, the algorithm will converge to the required error within a sufficient number of levels. This is because K-hyperline clustering makes the residual energy of the representation smaller than the energy of the training matrix at each level, i.e., ∥R_l∥_F^2 < ∥R_{l−1}∥_F^2. This follows from (19) and the fact that ∥Ψ_l A_l∥_F^2 > 0.

The second guarantee is that for a sufficient number of atoms per level, the representation energy in level l+1 will be less than the representation energy in level l. To show this, we first state that for a sufficient number of dictionary atoms per level, ∥Ψ_l A_l∥_F^2 > ∥R_l∥_F^2. This means that for every l,

 ∥R_l∥_F^2 < ∥Ψ_l A_l∥_F^2 < ∥R_{l−1}∥_F^2, (20)

because of (19). This implies that ∥Ψ_{l+1} A_{l+1}∥_F^2 < ∥Ψ_l A_l∥_F^2, i.e., the energy of the representation in each level reduces progressively from level 1 to level L, thereby exhibiting energy hierarchy.

### III-C Estimating Number of Atoms in Each Level

The number of atoms in each level of an MLD can be estimated using an information-theoretic criterion such as the minimum description length (MDL) [31]. The broad idea is that the model order, which here is the number of dictionary atoms, is chosen to minimize the total description length needed for representing the model and the data given the model. The codelength for encoding the data given the model is the negative log likelihood of the data, and the description length for the model is the number of bits needed to code the model parameters.

In order to estimate the number of atoms in each level using the MDL principle, we need to make some assumptions on the residual obtained in each level. Our first assumption is that a fixed fraction α of the energy at each level is represented at that level, and the remaining energy goes into the residual. The residual and the representation energy sum up to the total energy in each level because the residual in any level of MLD is orthogonal to the representation in that level. Therefore, at any level l, the represented energy is α(1−α)^{l−1}E and the residual energy is (1−α)^l E, where E is the total energy of the training data at the first level. For simplicity, we also assume that the residual at each level follows the zero-mean multinormal distribution N(0, σ_l^2 I_M). Combining these two assumptions, the variance is estimated as σ_l^2 = (1−α)^l E / (MT).

The total MDL score, which is an indicator of the information-theoretic complexity, is the sum of the negative log likelihood and the number of bits needed to encode the model. Encoding the model includes encoding the non-zero coefficients, their locations, and the dictionary elements themselves. The MDL score for level l with the data R_{l−1}, dictionary Ψ_l, and the coefficient matrix A_l is

 MDL(R_{l−1} | Ψ_l, A_l, K_l) = (1 / 2σ_l^2) Σ_{i=1}^T ∥r_{l−1,i} − Ψ_l a_{l,i}∥_2^2 + (1/2) T log(MT) + T log(T K_l) + (1/2) K_l M log(MT). (21)

Here, the first term in the sum represents the data description length, which is also the negative log-likelihood of the data after ignoring the constant term. The second term is the number of bits needed to code the non-zero coefficients as reals, where each coefficient is coded using (1/2) log(MT) bits [45]. The third term denotes the bits needed to code their locations, which are integers between 1 and TK_l, and the fourth term represents the total bits needed to code all the dictionary elements as reals. The optimal model order is the number of dictionary atoms that results in the least MDL score. In practice, we test a finite number of model orders and pick the one which results in the least score. As an example, we train a dictionary using grayscale patches from the BSDS dataset [46]. We preprocess the patches by vectorizing them and subtracting the mean of each vectorized patch from its elements. We perform MLD learning and estimate the optimal number of dictionary atoms in each level using the MDL procedure, for a fixed maximum number of levels. For the sub-dictionary in each level, the number of atoms was varied over a predefined range, and the one that provided the least MDL score was chosen as optimal. The first few levels and the last level of the MLD obtained using this procedure are shown in Figure 1. The minimum MDL score obtained in each level is shown in Figure 2. From these two figures, clearly, the information-theoretic complexity of the sub-dictionaries increases with the number of levels, and the atoms themselves progress from being simple geometric structures to stochastic textures.
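The per-level score (21) can be sketched as below; `mdl_score` and its argument names are ours, and the residual variance `sigma2` is assumed to come from the energy-decay assumption described above.

```python
import numpy as np

def mdl_score(R_prev, Psi, A, sigma2):
    """MDL score for one level, following (21): data cost plus description
    lengths for the coefficients, their locations, and the dictionary atoms."""
    M, T = R_prev.shape
    K = Psi.shape[1]
    data_cost = np.sum((R_prev - Psi @ A) ** 2) / (2.0 * sigma2)
    coeff_bits = 0.5 * T * np.log(M * T)     # one nonzero coefficient per sample
    loc_bits = T * np.log(T * K)             # location of each nonzero in A
    dict_bits = 0.5 * K * M * np.log(M * T)  # K atoms with M reals each
    return data_cost + coeff_bits + loc_bits + dict_bits
```

Model-order selection then amounts to evaluating this score over a range of K and keeping the minimizer; for a fixed quality of fit, larger dictionaries incur a strictly larger penalty.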

### III-D Sparse Approximation using an MLD

In order to compute sparse codes for novel test data using a multilevel dictionary, we propose to perform reconstruction using a Multilevel Pursuit (MulP) procedure, which evaluates a 1-sparse representation for each level using the dictionary atoms from that level. The coefficient vector for a data sample in level l is obtained using a simple correlate-and-max operation, whereby we compute the correlations of the current residual with the atoms of Ψ_l and pick the coefficient value and index corresponding to the maximum absolute correlation. The computational complexity of a correlate-and-max operation in level l is of order O(MK_l), and hence the complexity of obtaining the full representation using L levels is of order O(MK), where K = Σ_{l=1}^L K_l is the total number of atoms in the dictionary. In comparison, the complexity of obtaining an L-sparse representation on the full dictionary using Orthogonal Matching Pursuit is of order O(MKL).
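The correlate-and-max pursuit can be sketched as follows (function name ours): one max-correlation atom per level, with the residual passed down to the next level.

```python
import numpy as np

def multilevel_pursuit(y, dictionary):
    """MulP sketch: one correlate-and-max operation per level.

    dictionary: list of per-level sub-dictionaries Psi_l (unit-norm columns).
    Returns the per-level 1-sparse codes and the final residual."""
    residual = y.astype(float).copy()
    codes = []
    for Psi_l in dictionary:
        c = Psi_l.T @ residual            # correlations with level-l atoms
        j = int(np.argmax(np.abs(c)))     # maximum absolute correlation
        a_l = np.zeros(Psi_l.shape[1])
        a_l[j] = c[j]
        codes.append(a_l)
        residual = residual - Psi_l @ a_l  # pass the residual downward
    return codes, residual
```

Unlike OMP, no least-squares refit over the accumulated support is performed, which is what keeps the per-level cost to a single matrix-vector product.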

### III-E Robust Multilevel Dictionaries

Although MLD learning is a simple procedure capable of handling large-scale data, with useful asymptotic generalization properties as described in Section IV-D, the procedure can be made robust and its generalization performance can be improved using randomization schemes. The Robust MLD (RMLD) learning scheme, which is closely related to Rvotes [47], a supervised ensemble learning method, improves the generalization performance of MLD, as evidenced by Figure 8. The Rvotes scheme randomly samples the training set to create multiple smaller subsets, and the final prediction is obtained by averaging the predictions from the multiple hypotheses learned from these subsets. For learning level l in RMLD, we draw several subsets of randomly chosen training samples from the training set of that level, allowing for overlap across the subsets, with each subset much smaller than the full training set. From each subset, we learn a sub-dictionary with K_l atoms using K-hyperline clustering. For each training sample in the level, we compute 1-sparse representations using all the sub-dictionaries, and the approximation for that sample is computed as the average of its approximations over all sub-dictionaries. The ensemble approximations for all training samples in the level are used to compute the set of residuals, and this process is repeated for a desired number of levels to obtain an RMLD.

Reconstruction of test data with an RMLD is performed by extending the multilevel pursuit: at each level we obtain an approximation of the residual using every sub-dictionary, average the approximations, compute the new residual, and repeat this for the subsequent levels. Note that this can also be implemented as multiple correlate-and-max operations per data sample per level. Hence, the computational complexity of obtaining a sparse representation using the RMLD is that of MulP scaled by the number of sub-dictionaries per level.
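One RMLD level can be sketched as an average of 1-sparse approximations over the ensemble of sub-dictionaries; the function and argument names are ours, and the sub-dictionary learning itself (the K-hyperline step on random subsets) is taken as given here.

```python
import numpy as np

def rmld_level_approx(R_prev, sub_dicts):
    """One RMLD level (sketch): average the 1-sparse approximations produced
    by each sub-dictionary in the ensemble, then return the residual.

    R_prev: (M, T) training/residual matrix for this level.
    sub_dicts: list of sub-dictionaries with unit-norm columns."""
    T = R_prev.shape[1]
    approx = np.zeros_like(R_prev, dtype=float)
    for Psi in sub_dicts:
        C = Psi.T @ R_prev                       # correlate
        idx = np.argmax(np.abs(C), axis=0)       # and max, per sample
        coeff = C[idx, np.arange(T)]
        approx += Psi[:, idx] * coeff
    approx /= len(sub_dicts)                     # ensemble average
    return approx, R_prev - approx
```

The same routine applies at test time, which is why RMLD reconstruction reduces to multiple correlate-and-max operations per sample per level.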

## IV Stability and Generalization

In this section, the behavior of the proposed dictionary learning algorithm is considered from the viewpoint of algorithmic stability: the behavior of the algorithm with respect to perturbations in the training set. It will be shown that the dictionary atoms learned by the algorithm from two different training sets, whose samples are realized from the same probability space, become arbitrarily close to each other as the number of training samples T → ∞. Since the proposed MLD learning is equivalent to learning K-hyperline cluster centroids in multiple levels, the stability analysis of K-hyperline clustering [38], briefly discussed in Section II-B, will be utilized in order to prove its stability. For each level of learning, the cases of single and multiple minimizers of the clustering objective will be considered. Proving that the learning algorithm is stable will show that the global dictionaries learned from the data depend only on the probability space to which the training samples belong, and not on the actual samples themselves, as T → ∞. We also show that MLD learning generalizes asymptotically, i.e., the difference between the expected error and the average empirical error in training approaches zero as T → ∞. Therefore, the expected error for novel test data, drawn from the same distribution as the training data, will approach the average empirical training error.

The stability analysis of the MLD algorithm will be performed by considering two different dictionaries with the same number of levels, whose sub-dictionaries in each level are the cluster centers learned using K-hyperline clustering on the training data for that level. The steps involved in proving the overall stability of the algorithm are: (a) showing that each level of the algorithm is stable in terms of the distance between the distortion functions, defined in (10), as the number of training samples grows (Section IV-A); (b) proving that stability in terms of these distances indicates closeness of the centers of the two clusterings, in terms of the metric defined in (12) (Section IV-B); and (c) showing that level-wise stability leads to overall stability of the dictionary learning algorithm (Section IV-C).

### IV-A Level-wise Stability

Let us define a probability space for the training data, with the training samples for the two sub-dictionaries in a level being two different sets of i.i.d. realizations from it. We also assume that the norms of the training samples are bounded from above and below. Note that, in the general case, the data will lie in the full ambient space for the first level of dictionary learning and in a finite union of lower-dimensional subspaces for the subsequent levels. In both cases, the following argument on stability will hold, because when the training data lies in a union of lower-dimensional subspaces, we can assume that it still lies in the ambient space, with zero probability assigned outside the union of subspaces.

The distortion function class for the clusterings, defined similar to (6), is uniform Donsker because the covering number with respect to the supremum norm grows polynomially, according to (9). When a unique minimizer exists for the clustering objective, the distortion functions corresponding to the two clusterings become arbitrarily close, even for completely disjoint training sets, as the number of training samples grows. However, in the case of multiple minimizers, this closeness holds only with respect to a limited change of training samples between the two clusterings, and fails to hold for a larger change [35, 38].

### IV-B Distance between Cluster Centers for a Stable Clustering

For each cluster center in one clustering, we pick the closest cluster center from the other, in terms of the distortion measure (4), and form pairs of centers (ψ_{l,j}, λ_{l,j}). Let us also define disjoint sets A_i, in which the training data for the clusterings exist. By defining such disjoint sets, we can formalize the notion of training data lying in a union of subspaces. The intuitive fact that the cluster centers of two clusterings are close to each other, given that their distortion functions are close, is proved in the lemma below.

###### Lemma IV.1

Consider two sub-dictionaries (clusterings) Ψ_l and Λ_l with K_l atoms each, obtained using training samples that exist in the disjoint sets A_i, with the norms of the samples bounded from above and below in each of the sets. When the distortion functions become arbitrarily close to each other as the number of training samples grows, the smallest angle between the subspaces spanned by each pair of cluster centers becomes arbitrarily close to zero, i.e.,

 \angle(\psi_{l,j}, \lambda_{l,j}) \xrightarrow{P} 0, \quad \forall j \in \{1, \ldots, K_l\}. \qquad (22)
• Denote the smallest angle between the subspaces represented by ψ_{l,j} and λ_{l,j} as ∠(ψ_{l,j}, λ_{l,j}), and define a region B_{l,i,j} around the pair of cluster centers. An illustration of this setup for a 2-D case is given in Figure 3; the arc in that figure represents the minimum norm of the data in the region. By definition, the distance between the distortion functions of the clusterings, for data that exists in the disjoint sets, is

 \|g_{\Psi_l} - g_{\Lambda_l}\|_{L_1(P_l)} = \sum_{i=1}^{\tau} \int_{A_i} |g_{\Psi_l}(y) - g_{\Lambda_l}(y)| \, dP_l(y). \qquad (23)

For any i and j with a non-empty B_{l,i,j}, we have

 \|g_{\Psi_l} - g_{\Lambda_l}\|_{L_1(P_l)} \geq \int_{B_{l,i,j}} |g_{\Psi_l}(y) - g_{\Lambda_l}(y)| \, dP_l(y), \qquad (24)

 = \int_{B_{l,i,j}} \Big[ y^T(I - \lambda_{l,j}\lambda_{l,j}^T)y - \sum_{k=1}^{K_l} y^T(I - \psi_{l,k}\psi_{l,k}^T)y \, \mathbb{I}(y \text{ closest to } \psi_{l,k}) \Big] \, dP_l(y), \qquad (25)

 \geq \int_{B_{l,i,j}} \Big[ y^T(I - \lambda_{l,j}\lambda_{l,j}^T)y - y^T(I - \psi_{l,j}\psi_{l,j}^T)y \Big] \, dP_l(y), \qquad (26)

 \geq C \int_{B_{l,i,j}} \Big[ (y^T\psi_{l,j})^2 - (y^T\lambda_{l,j})^2 \Big] \, dy. \qquad (27)

The inequality in (26) holds since ψ_{l,j} is the closest cluster center to the data in B_{l,i,j} in terms of the distortion measure (4). Note that 𝕀(·) is the indicator function, and (27) follows from (26) with the assumption that the density of the probability measure is bounded from below by a constant C on the region of integration. Since the left-hand side converges to zero in probability by assumption, from (27) we have

 (y^T\psi_{l,j})^2 - (y^T\lambda_{l,j})^2 \xrightarrow{P} 0, \qquad (28)

because the integrand in (27) is a continuous non-negative function in the region of integration.

Denoting the smallest angles between y and the subspaces spanned by ψ_{l,j} and λ_{l,j}, respectively, we see from (28) that these two angles become arbitrarily close for all y in B_{l,i,j}. Since the norm of y is bounded away from zero and infinity, this implies ∠(ψ_{l,j}, λ_{l,j}) →P 0. This is true for all cluster center pairs, as we have shown it for arbitrary i and j.

### Iv-C Stability of the MLD Algorithm

The stability of the MLD algorithm as a whole is proved in Theorem IV.3 from its level-wise stability, using an induction argument. The proof depends on the following lemma, which shows that the residuals from two stable clusterings belong to the same probability space.

###### Lemma IV.2

When the training vectors for the two sub-dictionaries (clusterings) in a level are obtained from the same probability space, and the cluster center pairs become arbitrarily close to each other as the number of training samples grows, the residual vectors from both clusterings belong to an identical probability space P_{l+1}.

• For each cluster center pair (ψ_{l,j}, λ_{l,j}), define the projection matrices onto their respective orthogonal complement subspaces. Define the sets {Ψ̄_{l,j}(β+dβ)} and {Λ̄_{l,j}(β+dβ)}, where β is an arbitrary fixed vector, not orthogonal to either ψ_{l,j} or λ_{l,j}, and dβ is a differential element. The residual vector set for each cluster is obtained by projecting the data associated with its center onto the orthogonal complement of that center. For a 2-D case, Figure 4 shows the 1-D subspace spanned by ψ_{l,j}, its orthogonal complement, the set {Ψ̄_{l,j}(β+dβ)}, and the corresponding residual set.
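The residual construction used in this lemma, projecting a sample onto the orthogonal complement of the 1-D subspace spanned by its unit-norm atom, can be written compactly. This is a small illustrative sketch; the function name is a hypothetical helper, not the paper's notation.

```python
import numpy as np

def residual(y, psi):
    """Residual passed to the next level: project y onto the orthogonal
    complement of the 1-D subspace spanned by the unit-norm atom psi."""
    P_perp = np.eye(len(psi)) - np.outer(psi, psi)  # projector onto span(psi)^perp
    return P_perp @ y
```

By construction, the residual is orthogonal to the atom, and adding back the 1-D projection recovers the original sample.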

In terms of probabilities, the measure of the residual set equals the measure of the corresponding data set, because the residual set is obtained by a linear transformation of it; here P_l and P_{l+1} are probability measures defined on the training data for levels l and l+1, respectively, and the same holds for the other clustering. As the number of training samples grows, the cluster center pairs become arbitrarily close to each other by assumption. Therefore, the symmetric difference between the sets {Ψ̄_{l,j}(β+dβ)} and {Λ̄_{l,j}(β+dβ)} approaches the null set, which implies

 P_{l+1}\big(r_{\psi_{l,j}} \in \{\bar{\Psi}_{l,j}(\beta + d\beta)\}\big) - P_{l+1}\big(r_{\lambda_{l,j}} \in \{\bar{\Lambda}_{l,j}(\beta + d\beta)\}\big) \xrightarrow{P} 0, \qquad (29)

for arbitrary β and dβ, as the number of training samples grows. This means that the residuals of the clusters ψ_{l,j} and λ_{l,j} belong to an identical probability space. Since we proved this for an arbitrary cluster center pair, the residuals of the clusterings Ψ_l and Λ_l belong to an identical probability space, given by P_{l+1}.

###### Theorem IV.3

Given that the training vectors for the first level are generated from a common probability space, and the norms of the training vectors for each level are bounded from above and below, the MLD learning algorithm is stable as a whole.

• The level-wise stability of MLD was shown in Section IV-A for two cases: (a) when a unique minimizer exists for the distortion function, and (b) when a unique minimizer does not exist. Lemma IV.1 proved that stability in terms of closeness of distortion functions implies stability in terms of the learned cluster centers. In showing level-wise stability, we assumed that the training vectors in a level for the clusterings Ψ_l and Λ_l belong to the same probability space. However, when learning the dictionary, this is guaranteed only for the first level, as we supply the algorithm with training vectors drawn from a common probability space.

We note that the training vectors for a given level are the residuals of the clusterings in the previous level. Lemma IV.2 showed that the residuals of a level for both clusterings belong to an identical probability space P_{l+1}, given that the training vectors of that level are realizations from the probability space P_l. By induction, this, along with the fact that the training vectors for the first level belong to the same probability space, shows that the training vectors of both dictionaries at every level indeed belong to a probability space corresponding to that level. Hence all levels of the dictionary learning are stable, and MLD learning is stable as a whole. Similar to K-hyperline clustering, if there are multiple minimizers in at least one level, the algorithm is stable only with respect to a limited change of training samples between the two clusterings, and stability fails to hold for a larger change.

### IV-D Generalization Analysis

Since our learning algorithm consists of multiple levels and cannot be expressed as an ERM problem as a whole, the algorithm can be said to generalize asymptotically if the sum of empirical errors over all levels converges to the sum of expected errors as the number of training samples grows. This can be expressed as

 \left| \frac{1}{T} \sum_{l=1}^{L} \sum_{i=1}^{T} g_{\Psi_l}(y_{l,i}) - \sum_{l=1}^{L} \mathbb{E}_{P_l}[g_{\Psi_l}] \right| \xrightarrow{P} 0, \qquad (30)

where the training samples y_{l,i} for level l are obtained from the probability space P_l. When (30) holds and the learning algorithm generalizes, the expected error for test data drawn from the same probability space as the training data is close to the average empirical error. Therefore, when the cluster centers for each level are obtained by minimizing the empirical error, the expected test error will also be small.

In order to show that (30) holds, we use the fact that each level of MLD learning is obtained using K-hyperline clustering. Hence, from (8), the average empirical distortion in each level converges to the expected distortion as the number of training samples grows,

 \left| \frac{1}{T} \sum_{i=1}^{T} g_{\Psi_l}(y_{l,i}) - \mathbb{E}_{P_l}[g_{\Psi_l}] \right| \xrightarrow{P} 0. \qquad (31)

The validity of the condition in (30) follows directly from the triangle inequality,

 \left| \frac{1}{T} \sum_{l=1}^{L} \sum_{i=1}^{T} g_{\Psi_l}(y_{l,i}) - \sum_{l=1}^{L} \mathbb{E}_{P_l}[g_{\Psi_l}] \right| \leq \sum_{l=1}^{L} \left| \frac{1}{T} \sum_{i=1}^{T} g_{\Psi_l}(y_{l,i}) - \mathbb{E}_{P_l}[g_{\Psi_l}] \right|. \qquad (32)

If the MulP coding scheme is used for the test data, and the training and test data for a level are obtained from the same probability space, then the training and test data for the next level will also share a common probability space. This is because both the MulP coding scheme and MLD learning associate the data with a dictionary atom using the maximum absolute correlation measure, and create a residual that is orthogonal to the atom chosen in a level. Hence, the assumption that training and test data are drawn from the same probability space holds in all levels, and the expected test error will be similar to the average empirical training error.

## V Simulation Results

In this section, we present experiments to demonstrate the stability and generalization characteristics of a multilevel dictionary, and evaluate its use in compressed recovery of images and subspace learning. Both stability and generalization are crucial for building effective global dictionaries that can model patterns in any novel test image. Although it is not possible to demonstrate asymptotic behavior experimentally, we study how the behavior of the learning algorithm changes as the number of training samples increases. Compressed recovery is a highly relevant application for global dictionaries, since it is not possible to infer dictionaries with good reconstructive power directly from low-dimensional random measurements of image patches. It is typical to employ both convex minimization and greedy pursuit methods for recovering images from their compressed measurements. Though convex minimization incurs higher computational complexity, it often provides improved recovery performance when compared to greedy approaches. Hence, it is important to compare its recovery performance to that of the MLD, which uses a simple greedy pursuit.

Subspace learning is another application that can benefit from the use of multilevel dictionaries. In subspace learning, it is common to obtain a linear embedding from the training data and apply it to novel test data for dimensionality reduction, classification, and visualization. These approaches can be unsupervised (e.g., Principal Component Analysis, Locality Preserving Projections) or can use class label information while learning the embedding (e.g., Linear Discriminant Analysis, Local Discriminant Embedding). Several subspace learning algorithms can be unified under the framework of graph embedding [48], wherein an undirected graph describing the relation between the data samples is provided as the input. We propose to use graphs constructed from the sparse codes of a multilevel dictionary for subspace learning, in both supervised and unsupervised settings.
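As a rough illustration of the last point, one plausible unsupervised instantiation is sketched below: an affinity graph built from inner products of sparse codes, followed by a Laplacian-eigenmap style embedding. Both the graph construction and the embedding here are hypothetical stand-ins; the paper's exact construction may differ.

```python
import numpy as np

def sparse_code_graph(A):
    """Hypothetical code-based graph: the affinity between two samples is the
    magnitude of the inner product of their sparse coefficient vectors
    (columns of A)."""
    W = np.abs(A.T @ A)
    np.fill_diagonal(W, 0.0)   # no self-loops
    return W

def graph_embedding(W, dim):
    """Unsupervised graph embedding (Laplacian-eigenmap style): the smallest
    nontrivial eigenvectors of the normalized graph Laplacian."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(W.shape[0]) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)       # ascending eigenvalues
    return vecs[:, 1:dim + 1]            # skip the trivial eigenvector
```

A supervised variant would restrict (or reweight) the affinities using class labels before computing the embedding.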

All simulations for stability, generalization, and compressed recovery use dictionaries trained on image patches from the Berkeley Segmentation Dataset (BSDS) [46]. The images were converted to grayscale and no other preprocessing was performed on them. Patches of a fixed size were used and no noise was added to the patches. For evaluating the performance of the dictionaries, we considered a set of standard images (Barbara, Boat, House, Lena, Couple, Fingerprint, Man, Peppers). For the subspace learning simulations, we used the Forest Covertype dataset [49]. As per the standard procedure, we used the first portion of the samples, with an equal number per class, for training and the rest for testing.

### V-A Stability

In order to illustrate the stability characteristics of MLD learning, we set up an experiment with a multilevel dictionary with a fixed number of levels and atoms per level, trained using different numbers of training patches. As we showed in Section IV, asymptotic stability is guaranteed when only a limited number of samples in the training set is changed; the inferred dictionary atoms will not vary significantly if this condition is satisfied. We fixed the size of the training set at the values 1000, 5000, 10,000, 50,000 and 100,000, and learned an initial set of dictionaries using the proposed algorithm. A second set of dictionaries was obtained by replacing different numbers of samples in the original training set. The amount of change between the initial and the second set of dictionaries was quantified using the minimum Frobenius norm of their difference with respect to permutations of their columns and sign changes. In Figure 5, we plot this quantity, for each training set size, as a function of the number of samples replaced in the training set. In each case, the difference between the dictionaries increases as more training samples are replaced. Furthermore, for a fixed number of replaced samples, the difference reduces as the number of training samples increases, since the learning approaches its asymptotic behavior.
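The dictionary difference measure used in this experiment, the minimum Frobenius norm of the difference over column permutations and sign changes, can be computed as sketched below for unit-norm atoms. The brute-force search over permutations is only meant for small dictionaries, and the function name is illustrative.

```python
import itertools
import numpy as np

def dictionary_distance(D1, D2):
    """Minimum ||D1 - D2 P S||_F over column permutations P and sign flips S,
    assuming unit-norm columns: ||d1 - s*d2||^2 = 2 - 2*s*(d1^T d2), so the
    best sign for a matched pair gives a cost of 2 - 2*|d1^T d2|."""
    G = np.abs(D1.T @ D2)          # |d1_i^T d2_j| for all pairs
    K = D1.shape[1]
    best = np.inf
    for perm in itertools.permutations(range(K)):
        c = sum(2.0 - 2.0 * G[i, perm[i]] for i in range(K))
        best = min(best, c)
    return np.sqrt(max(best, 0.0))  # clip tiny negative round-off
```

For larger dictionaries, the same matching problem can be solved efficiently as a linear assignment on the cost matrix instead of by enumeration.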

### V-B Generalization

Generalization of a dictionary learning algorithm guarantees a small approximation error for a test data sample if the training samples are well approximated by the dictionary. In order to demonstrate the generalization characteristics of MLD learning, we designed dictionaries using different numbers of training image patches and evaluated the sparse approximation error for patches in the test dataset. The test dataset consisted of patches chosen randomly from the standard images. For multilevel learning, we fixed the number of levels and used the approach proposed in Section III-C to estimate the number of atoms needed in each level. Similarly, we fixed the number of levels for RMLD learning; since RMLD learning does not require a careful choice of the number of atoms in each level, we fixed it at a constant value for all levels. Though learning multiple sets of atom