Decentralized Dynamic Discriminative Dictionary Learning

05/03/2016 ∙ by Alec Koppel, et al.

We consider discriminative dictionary learning in a distributed online setting, where a network of agents aims to learn a common set of dictionary elements of a feature space and model parameters while sequentially receiving observations. We formulate this problem as a distributed stochastic program with a non-convex objective and present a block variant of the Arrow-Hurwicz saddle point algorithm to solve it. By penalizing the discrepancy between neighboring agents' estimates with Lagrange multipliers, the algorithm requires that model information be exchanged between neighboring nodes only. We show that decisions made with this saddle point algorithm asymptotically achieve a first-order stationarity condition on average.

I Introduction

We develop a framework to solve machine learning problems in cases where latent geometric structure in the feature space may be exploited. We consider cases where the number of training examples is either very large, or signals are sequentially observed by a platform operating in real-time, such as an autonomous robot. In the former case, since the sample size is large, processing a few training examples at a time is necessary due to computational cost. However, doing so at a centralized location may be impractical, which motivates the use of learning techniques that may be executed collaboratively by a network of interconnected computing servers. In the latter case, an autonomous robot with no priors on its operating environment only has access to information based on the path it has traversed, which may omit regions of the feature space crucial for tasks such as learning-based control. By communicating with other robots in a network, individuals may learn over the broader domain explored by the whole network, and thus more effectively solve autonomous learning tasks. The problem formulation breaks down into three aspects: developing data-driven feature representations, learning task-driven model parameters over these representations, and extending this problem to dynamic, networked settings.

Formally, consider the problem of computing an alternative representation of a set of vectors where this alternative representation may reveal latent relationships between them. Broadly, this problem is referred to as unsupervised learning, and techniques designed to address it have yielded important advances in a variety of signal processing applications [3]. To learn such a representation, a variety of objectives may be considered. If the vectors' dimension is very large, dimensionality reduction is of interest, whereby one aims to find a representation that explains the most data variability across a feature space. Classically, this task has been approached with principal component analysis [4], which requires orthogonality of the basis elements. Alternatively, if specialized domain knowledge is available, finding representations based on particularized functions, i.e. wavelets for natural imagery [5], is more appropriate. A more general approach to seeking signal representations of a feature space is to learn the basis elements from data, as in dictionary learning. Dictionary learning has been successfully applied to signal reconstruction tasks such as inpainting or denoising [6, 7, 8], and to higher-level signal processing tasks such as classification [9, 10].

An important question recently posed in [11] is why one would learn a signal representation from data if not to feed it into a higher-level signal processing task. Thus the authors in [11] propose tailoring the dictionary to a discriminative modeling task, referred to as discriminative dictionary learning. Such methods have recently shown promise as compared to their unsupervised counterparts [12, 13, 14]. The problem of developing a dictionary representation of a signal specifically suited to the learning problem of interest is a difficult optimization problem. In the centralized offline setting, this class of problems has been solved with block coordinate descent [15, 16] or alternating gradient methods [17]; however, these techniques are only effective when the training set is static and not too large. In the centralized online setting, prior approaches have made use of stochastic approximation methods [6, 18].

In this paper, we extend the online discriminative dictionary learning formulation of [11] to networked settings, where a team of agents seeks to learn a common dictionary and model parameters based upon local dynamic information. To do so, we consider tools from stochastic approximation [19] and its decentralized extensions which have incorporated ideas from distributed optimization such as weighted averaging [20, 21, 22, 23, 24], dual reformulations where each agent ascends in the dual domain [25, 26], and primal-dual methods which combine primal descent with dual ascent [27, 20, 28, 29].

Our main technical contribution is the formulation of the dynamic multi-agent discriminative dictionary learning problem as a distributed stochastic program, and the development of a block variant of the primal-dual algorithm proposed in [29]. Moreover, we establish that the proposed method converges in expectation to a first-order stationary solution of the problem. We describe the discriminative dictionary learning and sparse representation problem in Section II. We extend this problem to multi-agent settings, and derive an algorithmic solution which is a block variant of the saddle point algorithm of Arrow and Hurwicz [27, 29] in Section III. In Section IV, we establish the convergence properties of the method. In Section V, we analyze the proposed framework’s empirical performance on a texture classification problem based upon image data for a variety of network settings and demonstrate its capacity to solve a new class of collaborative multi-class classification problems in decentralized settings. In Section VI we consider the algorithm’s use in a mobile robotic team for navigability assessment. We conclude in Section VII.

II Discriminative Dictionary Learning

Consider a set of T signals {x_t}_{t=1}^T, each of which lives in an m-dimensional feature space, so that we have x_t ∈ R^m. We aim to represent the signals x_t as combinations of a common set of k linear basis elements {d_l}_{l=1}^k, which are unknown and must also be learned from the data. We group these basis elements into a dictionary matrix D = [d_1, …, d_k] ∈ R^{m×k} and denote the coding of x_t as α_t ∈ R^k. For a given dictionary D, the coding problem calls for finding a representation α_t such that the signal x_t is close to its dictionary representation Dα_t. This goal can be mathematically formulated by introducing a loss function f(α; x_t, D) that depends on the proximity between Dα and the data point x_t and formulating the coding problem as [30]

α*(x_t; D) := argmin_{α ∈ R^k} f(α; x_t, D).   (1)

Hereafter, we assume that basis elements are normalized to have unit norms ‖d_l‖ ≤ 1 so that the dictionary D is restricted to the convex compact set 𝒟 := {D ∈ R^{m×k} : ‖d_l‖ ≤ 1, l = 1, …, k}.

The dictionary learning problem associated with the loss function f entails finding a dictionary D such that the signals x_t are close to their representations Dα*(x_t; D) for all possible t. Here, however, we focus on discriminative problems where the goal is to find a dictionary that is well adapted to a specific classification or regression task [11]. Formally, we associate with each x_t a variable y_t that represents a discrete label – in the case of classification problems – or a set of associated vectors – in the case of regression. We then use the coding α*(x_t; D) in (1) as a feature representation of the signal x_t and introduce the classifier w ∈ 𝒲 that is used to predict the label y_t when given the signal x_t. The merit of the classifier w is measured by the smooth loss function h(α*(x_t; D), w; y_t) that captures how well w may predict y_t when given the sparse coding α*(x_t; D) that we compute using the dictionary D. The discriminative dictionary learning problem is formulated as the joint determination of the dictionary D ∈ 𝒟 and classifier w ∈ 𝒲 that minimize the cost averaged over the training set,

(D*, w*) := argmin_{D ∈ 𝒟, w ∈ 𝒲} (1/T) Σ_{t=1}^T h(α*(x_t; D), w; y_t).   (2)

For a given dictionary D and signal sample x_t we compute the code α*(x_t; D) as per (1), predict y_t using w, and measure the prediction error with the loss function h. The optimal pair (D*, w*) in (2) is the one that minimizes the cost averaged over the given sample pairs (x_t, y_t). Observe that α*(x_t; D) is not a variable in the optimization in (2) but a mapping that defines an implicit dependence of the loss on the dictionary D. To simplify notation we henceforth write (2) as

(D*, w*) := argmin_{D ∈ 𝒟, w ∈ 𝒲} (1/T) Σ_{t=1}^T h(x_t, y_t; D, w).   (3)

The optimization problem in (3) is not assumed to be convex – this would be too restrictive because the dependence of h on D is, in part, through the mapping α*(x_t; D) defined by (1). In general, only local minima of (3) can be found.

Our goal in this paper is to study online algorithms that solve (3) as training pairs (x_t, y_t) become available. To do so we introduce the assumption that training pairs are independently sampled from a common probability distribution and replace (3) by

(D*, w*) := argmin_{D ∈ 𝒟, w ∈ 𝒲} E_{x,y}[ h(x, y; D, w) ].   (4)

The problems in (4) and (3) are equivalent in the limit T → ∞ if the pairs (x_t, y_t) are independently drawn from the joint distribution of the random pair (x, y). The problem in (4), like the one in (3), is not convex. We clarify the problem formulation in (4) with two representative examples.

Example 1 (Sparse non-discriminative learning)

When we have k < m, the formulation in (1) aims at finding a dictionary that reduces data dimensionality from m to k. In this paper we are more interested in the overdetermined case in which k ≥ m but we want the codes α to be sparse. These sparsity constraints can be written as upper limits on the zero norm ‖α‖_0, but that would yield computationally intractable formulations. To circumvent this issue, sparsity can be incentivized by adding, e.g., elastic net regularization terms [31, 32], in which case we can write the loss function in (1) as

f(α; x_t, D) = g(x_t, Dα) + ζ1 ‖α‖_1 + (ζ2/2) ‖α‖_2².   (5)

In (5), g(x_t, Dα) measures proximity between Dα and x_t, the term ζ1 ‖α‖_1 encourages sparsity, and the term (ζ2/2) ‖α‖_2² is a smooth regularizer. Common choices for the proximity function are the Euclidean distance g(x_t, Dα) = (1/2)‖x_t − Dα‖_2² and the ℓ1 norm g(x_t, Dα) = ‖x_t − Dα‖_1. In a non-discriminative problem we simply want to make x_t and Dα*(x_t; D) close to each other across elements of the training set. We achieve that by simply making h(x_t, y_t; D, w) = f(α*(x_t; D); x_t, D).
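
To make the coding step concrete, the following is a minimal sketch of solving (5) with the Euclidean proximity choice via proximal gradient (ISTA) iterations; the function name, step-size rule, and default parameters are our own illustration rather than the authors' implementation.

import numpy as np

def elastic_net_code(x, D, zeta1=0.1, zeta2=0.01, n_iter=200):
    """Solve the coding problem (1) with the elastic net loss (5),
    using g(x, D a) = 0.5*||x - D a||^2, via ISTA-style iterations."""
    alpha = np.zeros(D.shape[1])
    # Lipschitz constant of the smooth part of (5)
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + zeta2)
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x) + zeta2 * alpha  # smooth-part gradient
        z = alpha - step * grad                       # gradient step
        # soft-thresholding = proximal operator of the l1 term
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * zeta1, 0.0)
    return alpha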

Example 2 (Sparse logistic regression)

Given a training set of pairs (x_t, y_t), where x_t ∈ R^m is a feature vector with associated binary label y_t ∈ {−1, 1}, we seek a decision hyperplane w which best separates data points with distinct labels. However, instead of looking for linear separation in the original space, we seek linear separation in a sparse coded space. Thus, let α*(x_t; D) be the sparse coding of x_t computed through (1) when using the loss function in (5). We want to find a classifier w such that w^T α*(x_t; D) > 0 when y_t = 1 and w^T α*(x_t; D) < 0 when y_t = −1. This hyperplane need not exist, but we can always model the probability of observing y_t = 1 through its odds ratio relative to y_t = −1. This yields the optimal classifier w* as the one that minimizes the logistic loss

h(x_t, y_t; D, w) = log( 1 + exp( −y_t w^T α*(x_t; D) ) ).   (6)

For a feature vector x, (6) models the probability of the label y being 1 or −1 as determined by the inner product w^T α*(x; D) through the given logistic transformation. Substituting (6) into (4) yields the discriminative dictionary learning problem for logistic regression with sparse features.
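
As an illustration of (6), the sketch below evaluates the logistic loss together with its gradients with respect to the classifier and the code; the latter is the quantity that enters the perturbation formula (23) in Example 3. The function name is ours.

import numpy as np

def logistic_loss_grads(alpha, w, y):
    """Logistic loss (6) and its gradients at a sparse code alpha*(x; D).

    alpha : code alpha*(x; D), shape (k,)
    w     : classifier, shape (k,)
    y     : binary label in {-1, +1}"""
    margin = y * w.dot(alpha)
    loss = np.log1p(np.exp(-margin))
    sigma = 1.0 / (1.0 + np.exp(margin))  # equals -d(loss)/d(margin)
    grad_w = -y * sigma * alpha           # gradient w.r.t. the classifier
    grad_alpha = -y * sigma * w           # gradient w.r.t. the code (cf. (23))
    return loss, grad_w, grad_alpha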

II-A Decentralized discriminative learning

We want to solve (4) in distributed settings where signal and observation pairs are independently observed by the agents of a network. Agents aim to learn a dictionary and model parameters that are common to all others while having access to local information only. In particular, associated with each agent i is a local random variable x_i and associated output variable y_i, and each agent's goal is to learn over the aggregate training domain ∪_{i=1}^V (x_i, y_i). Let G = (𝒱, ℰ) be a symmetric and connected network with node set 𝒱 = {1, …, V} and E := |ℰ| directed edges of the form e = (i, j), and further define the neighborhood of i as the set n_i := {j : (i, j) ∈ ℰ} of nodes that share an edge with i. When each of the V agents observes a pair (x_i, y_i), the function h in (4) can be written as a sum of local losses,

h(x, y; D, w) := Σ_{i=1}^V h_i(x_i, y_i; D_i, w_i),   (7)

where we have defined the vertically concatenated dictionary D := [D_1; …; D_V] and model parameter w := [w_1; …; w_V].

Substituting (7) into the objective in (4) yields a problem in which the agents learn dictionaries D_i and classifiers w_i that depend on their local observations only. The problem formulated here is instead one in which the agents learn common dictionaries and models. Since the network is assumed to be connected, this relationship can be attained by imposing the constraints D_i = D_j and w_i = w_j for all pairs of neighboring nodes j ∈ n_i. Substituting (7) into the objective in (4) subject to these constraints yields the distributed stochastic program

(D*, w*) := argmin_{D ∈ 𝒟^V, w ∈ 𝒲^V} E_{x,y}[ Σ_{i=1}^V h_i(x_i, y_i; D_i, w_i) ]
 s.t. D_i = D_j, w_i = w_j, for all i and j ∈ n_i.   (8)

When the agreement constraints in (8) are satisfied, the objective is equivalent to one in which all the observations are made at a central location and a single dictionary and model are learned. Thus, (8) corresponds to a problem in which each agent i, having only observed the local pairs (x_i, y_i), aims to learn a dictionary representation and model parameters that are optimal when information is aggregated globally over the network. The decentralized discriminative learning problem is to develop an iterative algorithm that relies on communication with neighbors only, such that agent i learns the optimal (common) dictionary D* and discriminative model w*. We present in the following section an algorithm that is shown in Section IV to converge to a local optimum of (8).

Remark 1.

Decentralized learning techniques may be applied to solving pattern recognition tasks in networks of autonomous robots operating in real-time, provided that realizations of the output variables are generated by a process which is internal to the individual platforms. In particular, consider the formulation in (8), and let y_i represent the difference between state information associated with a commanded trajectory and that which is observed by the on-board sensors of robot i. Most robots are equipped with sensors such as gyroscopes, accelerometers, and inertial measurement units, which make state information available.

In this case, the interconnected network of robots does not need external supervision or a human in the loop in order to perform discriminative learning. In Section VI we propose solving problems of the form (8) in a network of interconnected robots operating in a field setting, generating binary labels by thresholding the difference between measurements made via on-board inertial measurement units (IMUs) and movements which are executed in an open-loop manner with a joystick.

III Block Saddle Point Method

To write the constraints in (8) more compactly, define the augmented graph edge incidence matrix C_D associated with the dictionary constraint. The matrix C_D is formed by E × V square blocks of dimension m. If the edge e = (i, j) links node i to node j, the block (e, i) is [C_D]_{e,i} = I_m and the block (e, j) is [C_D]_{e,j} = −I_m, where I_m denotes the identity matrix of dimension m. All other blocks are identically null, i.e., [C_D]_{e,l} = 0 when l ∉ {i, j}. Likewise, the matrix C_w is defined by E × V blocks of dimension k with [C_w]_{e,i} = I_k and [C_w]_{e,j} = −I_k when e = (i, j), and [C_w]_{e,l} = 0 otherwise. Then the constraints D_i = D_j and w_i = w_j for all pairs of neighboring nodes can be written as

C_D D = 0,  C_w w = 0.   (9)

The edge incidence matrices C_D and C_w have exactly m and k null singular values, respectively. We denote as γ the smallest nonzero singular value of C_D and as Γ the largest singular value of C_D; both measure network connectedness.
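
For concreteness, a small sketch of how the block incidence matrices in (9) can be assembled; the function name and edge-list format are our own:

import numpy as np

def block_incidence(edges, n_nodes, block_dim):
    """Augmented edge incidence matrix of (9): for each directed edge
    e = (i, j), block (e, i) is +I and block (e, j) is -I."""
    C = np.zeros((len(edges) * block_dim, n_nodes * block_dim))
    I = np.eye(block_dim)
    for e, (i, j) in enumerate(edges):
        C[e * block_dim:(e + 1) * block_dim, i * block_dim:(i + 1) * block_dim] = I
        C[e * block_dim:(e + 1) * block_dim, j * block_dim:(j + 1) * block_dim] = -I
    return C

# On a 3-node cycle, C @ z = 0 exactly when all agents' blocks of z agree:
# C = block_incidence([(0, 1), (1, 2), (2, 0)], n_nodes=3, block_dim=2)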

Imposing the constraints in (9) for all realizations of the local random variables requires global coordination – indeed, the formulation would be equivalent to the centralized problem in (4). Instead, we consider a modification of (7) in which we add linear penalty terms to incentivize the selection of coordinated actions. Introduce then dual variables Λ_{ij} associated with the constraint D_i − D_j = 0 and consider the addition of penalty terms of the form tr(Λ_{ij}^T (D_i − D_j)). For an edge that starts at node i, the multiplier Λ_{ij} is assumed to be kept at node i. Similarly, introduce dual variables ν_{ij} associated with the constraint w_i − w_j = 0 for all neighboring node pairs and penalty terms ν_{ij}^T (w_i − w_j). By introducing the stacked matrices Λ := [Λ_e]_{e ∈ ℰ} and ν := [ν_e]_{e ∈ ℰ}, which are restricted to compact convex sets 𝒮_Λ and 𝒮_ν, we can write the Lagrangian of this problem as

L(D, w, Λ, ν) = E_{x,y}[ Σ_{i=1}^V h_i(x_i, y_i; D_i, w_i) ] + tr(Λ^T C_D D) + ν^T C_w w.   (10)

Recall that in the applications considered here the optimization problem in (8) is nonconvex. Thus, we use the dual formulation in (10) to develop an iterative distributed algorithm that converges to a KKT point of (8).

To do so, suppose agent i receives the local observation pair (x_{i,t}, y_{i,t}) at time t and define the instantaneous Lagrangian as the stochastic approximation of (10) evaluated with the observations aggregated across the network,

L̂_t(D, w, Λ, ν) = Σ_{i=1}^V h_i(x_{i,t}, y_{i,t}; D_i, w_i) + tr(Λ^T C_D D) + ν^T C_w w.   (11)

We consider the use of the Arrow-Hurwicz saddle point method to solve (8) by alternating block variable updates, exploiting the fact that primal-dual stationary pairs are saddle points of the Lagrangian, through successive alternating primal gradient descent steps and dual gradient ascent steps. Particularized to the stochastic approximate Lagrangian in (11), the primal iteration of the saddle point algorithm takes the form

D_{t+1} = P_{𝒟^V}[ D_t − ε_t ∇_D L̂_t(D_t, w_t, Λ_t, ν_t) ],   (12)
w_{t+1} = P_{𝒲^V}[ w_t − ε_t ∇_w L̂_t(D_{t+1}, w_t, Λ_t, ν_t) ],   (13)

where ∇_D L̂_t and ∇_w L̂_t are stochastic subgradients of the Lagrangian with respect to D and w, respectively. Moreover, P_{𝒟^V} and P_{𝒲^V} denote orthogonal projections onto the feasible sets 𝒟^V and 𝒲^V. Likewise, the dual iteration is defined as

Λ_{t+1} = P_{𝒮_Λ}[ Λ_t + ε_t ∇_Λ L̂_t(D_{t+1}, w_{t+1}, Λ_t, ν_t) ],   (14)
ν_{t+1} = P_{𝒮_ν}[ ν_t + ε_t ∇_ν L̂_t(D_{t+1}, w_{t+1}, Λ_t, ν_t) ],   (15)

where ∇_Λ L̂_t and ∇_ν L̂_t are the stochastic subgradients of the Lagrangian with respect to Λ and ν, respectively. Moreover, P_{𝒮_Λ} and P_{𝒮_ν} denote orthogonal projections onto the dual feasible sets 𝒮_Λ and 𝒮_ν, which are compact subsets of Euclidean space of the appropriate dimension. Additionally, ε_t is a step size chosen as O(1/t) – see Section IV.

We now show that the algorithm specified by (12)-(15) yields an effective tool for discriminative learning in multi-agent settings.

Proposition 1

The gradient computations in (12)-(13) may be separated along the local primal variables D_i and w_i associated with node i, yielding the parallel updates

D_{i,t+1} = P_𝒟[ D_{i,t} − ε_t ( ∇_{D_i} h_i(x_{i,t}, y_{i,t}; D_{i,t}, w_{i,t}) + Σ_{j ∈ n_i} (Λ_{ij,t} − Λ_{ji,t}) ) ],   (16)
w_{i,t+1} = P_𝒲[ w_{i,t} − ε_t ( ∇_{w_i} h_i(x_{i,t}, y_{i,t}; D_{i,t+1}, w_{i,t}) + Σ_{j ∈ n_i} (ν_{ij,t} − ν_{ji,t}) ) ],   (17)

where P_𝒟 denotes the orthogonal projection operator onto the set 𝒟, and likewise for P_𝒲. Moreover, the dual gradients in the updates of Λ and ν respectively in (14)-(15) may be separated into parallel updates associated with edge (i, j),

Λ_{ij,t+1} = P_{𝒮_Λ}[ Λ_{ij,t} + ε_t ( D_{i,t+1} − D_{j,t+1} ) ],   (18)
ν_{ij,t+1} = P_{𝒮_ν}[ ν_{ij,t} + ε_t ( w_{i,t+1} − w_{j,t+1} ) ],   (19)

which allows for distributed computation across the network. Again, P_{𝒮_Λ} and P_{𝒮_ν} denote projections onto the sets 𝒮_Λ and 𝒮_ν.

Proof : See Appendix A.

The D4L algorithm follows by letting node i implement (16)-(19), as we summarize in Algorithm 1. To do so, node i utilizes its local primal iterates D_{i,t} and w_{i,t}, its local dual iterates Λ_{ij,t} and ν_{ij,t}, and its local instantaneous observed pair (x_{i,t}, y_{i,t}). Node i also needs access to the neighboring multipliers Λ_{ji,t} and ν_{ji,t} to implement (16) and (17), as well as to the neighboring primal iterates D_{j,t+1} and w_{j,t+1} to implement (18) and (19). The core steps of D4L in Algorithm 1 are the primal iteration in Step 6 and the dual iteration in Step 8. Steps 5 and 7 refer to the exchange of dual and primal variables that are necessary to implement Steps 6 and 8, respectively. Step 3 refers to the acquisition of the signal and observation pair and Step 4 to the computation of the code α*(x_{i,t}; D_{i,t}) in (1) using the current local dictionary iterate D_{i,t}. We discuss the specific use of Algorithm 1 for learning discriminative sparse signal representations in a distributed setting to clarify ideas.

1:D_{i,0}, w_{i,0} initial dictionary and model, (x_{i,t}, y_{i,t}) local random variables, ζ1, ζ2 regularization parameters.
2:for t = 0, 1, 2, … do
3:     Acquire local signal and observation pair (x_{i,t}, y_{i,t}).
4:     Coding [cf. (1)]: α*(x_{i,t}; D_{i,t}) = argmin_α f(α; x_{i,t}, D_{i,t}).
5:     Send Λ_{ij,t}, ν_{ij,t} and receive Λ_{ji,t}, ν_{ji,t} for all j ∈ n_i.
6:     Update dictionary and model parameters [cf. (16) and (17)].
7:     Send D_{i,t+1}, w_{i,t+1} and receive D_{j,t+1}, w_{j,t+1} for all j ∈ n_i.
8:     Update Lagrange multipliers [cf. (18) and (19)].
9:end for
Algorithm 1 D4L: Decentralized Dynamic Discriminative Dictionary Learning
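
A compact simulation sketch of Algorithm 1 over synchronized rounds is given below; the data structures, function names, and the omission of the dual-set projections are our own simplifications, not the authors' implementation. It assumes a symmetric network, so that node j holds multipliers keyed by i whenever i holds multipliers keyed by j.

import numpy as np

def d4l_round(nodes, eps, code_fn, grad_fn, sample_fn):
    """One synchronized round of D4L (Algorithm 1) over all nodes.

    nodes: dict i -> {'D': ..., 'w': ..., 'Lam': {j: ...}, 'nu': {j: ...}}
    code_fn solves (1), grad_fn returns the gradients of h_i, and
    sample_fn draws the local pair (x_i, y_i). Projections onto the
    dual sets in (18)-(19) and onto W in (17) are omitted for brevity."""
    for i, nd in nodes.items():
        x, y = sample_fn(i)                               # step 3
        alpha = code_fn(x, nd['D'])                       # step 4
        gD, gw = grad_fn(x, y, alpha, nd['D'], nd['w'])   # local loss gradients
        # steps 5-6: primal updates (16)-(17) with neighbors' multipliers
        dD = sum(nd['Lam'][j] - nodes[j]['Lam'][i] for j in nd['Lam'])
        dw = sum(nd['nu'][j] - nodes[j]['nu'][i] for j in nd['nu'])
        nd['D_new'] = project_unit_columns(nd['D'] - eps * (gD + dD))
        nd['w_new'] = nd['w'] - eps * (gw + dw)
    for i, nd in nodes.items():
        # steps 7-8: dual updates (18)-(19) with neighbors' new primals
        for j in nd['Lam']:
            nd['Lam'][j] += eps * (nd['D_new'] - nodes[j]['D_new'])
            nd['nu'][j] += eps * (nd['w_new'] - nodes[j]['w_new'])
    for nd in nodes.values():
        nd['D'], nd['w'] = nd.pop('D_new'), nd.pop('w_new')

def project_unit_columns(D):
    """Projection onto the dictionary constraint set: rescale any atom
    whose norm exceeds one back onto the unit ball."""
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)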
Example 3 (Distributed sparse dictionary learning)

Consider a multi-agent system in which signals are independently observed at each agent, and the data domain has latent structure which may be revealed via learning discriminative representations that are sparse. In this case, we select the particular form of f in (1) as the elastic net [cf. (5)] with the Euclidean distance g(x, Dα) = (1/2)‖x − Dα‖_2². Then the dictionary update in (16) may be derived from the subgradient optimality conditions of the elastic net (see [32]):

D^T ( x − D α* ) − ζ2 α* = ζ1 s,   (20)

where s is a vector of signs of α*. Proceeding as in the Appendix of [11], define S as the set of nonzero entries of α*. Then α*_S is the solution to the system given by (20) restricted to S, i.e.

α*_S = ( D_S^T D_S + ζ2 I )^{−1} ( D_S^T x − ζ1 s_S ).   (21)

At time t, to compute the stochastic gradient of (11) with respect to the local dictionary D_i for agent i, apply Proposition 1 of [11], which yields the explicit form

∇_{D_i} h_i(x_{i,t}, y_{i,t}; D_i, w_i) = −D_i β α*^T + ( x_{i,t} − D_i α* ) β^T,   (22)

where α* = α*(x_{i,t}; D_i) is shorthand for the solution of (1) with the loss (5) and S is defined as the set of indices associated with nonzero entries of α*. Moreover, we define β ∈ R^k as

β_S = ( D_S^T D_S + ζ2 I )^{−1} ∇_{α_S} h(α*, w_i; y_{i,t}),  β_{S^c} = 0,   (23)

as in [11], Proposition 1. This result is established via a perturbation analysis of the elastic-net optimality conditions. Note that it follows from substituting the solution of (5) into h and applying the chain rule in the gradient computation.
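
A minimal sketch of the gradient computation in (22)-(23) follows, assuming grad_alpha_h is the gradient of the loss with respect to the code (e.g., from the logistic sketch in Example 2); the names are ours:

import numpy as np

def dictionary_gradient(x, D, alpha, grad_alpha_h, zeta2=0.01):
    """Stochastic dictionary gradient via the perturbation formulas (22)-(23)."""
    S = np.flatnonzero(alpha)            # active set of the elastic-net code
    DS = D[:, S]
    beta = np.zeros_like(alpha)
    # (23): active-set linear system for the perturbation direction
    beta[S] = np.linalg.solve(DS.T @ DS + zeta2 * np.eye(len(S)),
                              grad_alpha_h[S])
    # (22): chain rule through the elastic-net solution map
    return -D @ np.outer(beta, alpha) + np.outer(x - D @ alpha, beta)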

IV Convergence Analysis

We turn to establishing that the saddle point algorithm in (12)-(15) asymptotically converges to a stationary point of the problem (8). Before proceeding with our analysis, we define the primal descent direction with respect to D associated with the projected block stochastic saddle point method as

∇̃_D L̂_t(D_t, w_t, Λ_t, ν_t) := ( D_t − P_{𝒟^V}[ D_t − ε_t ∇_D L̂_t(D_t, w_t, Λ_t, ν_t) ] ) / ε_t,   (24)

and the dual ascent direction with respect to Λ as

∇̃_Λ L̂_t(D_{t+1}, w_{t+1}, Λ_t, ν_t) := ( P_{𝒮_Λ}[ Λ_t + ε_t ∇_Λ L̂_t(D_{t+1}, w_{t+1}, Λ_t, ν_t) ] − Λ_t ) / ε_t.   (25)

The projected stochastic gradients ∇̃_w L̂_t and ∇̃_ν L̂_t associated with the variables w and ν are defined analogously to (24) and (25), respectively. Note that descent (respectively, ascent) using the projected stochastic gradients in (24)-(25) is equivalent to the projected stochastic saddle point method in (12)-(19).
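
In code, the gradient map in (24) is just a scaled difference between the iterate and its projected stochastic gradient step; a small sketch under our naming:

def projected_descent_direction(D, grad_D, eps, project):
    """Gradient map of (24): (D - P[D - eps*grad_D]) / eps.
    Coincides with grad_D whenever the step stays inside the feasible set."""
    return (D - project(D - eps * grad_D)) / eps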

To establish convergence of D4L, some conditions are required of the network, loss functions, and stochastic approximation errors which we state below.

Assumption 1.

(Network connectivity) The network G is connected with finite diameter. The nonzero singular values of the incidence matrix C_D are respectively upper and lower bounded by Γ and γ > 0.

Assumption 2.

(Smoothness) The Lagrangian has Lipschitz continuous gradients in the primal and dual variables with constants M_D, M_w, M_Λ, and M_ν. This implies that, e.g.,

‖ ∇_D L(D, w, Λ, ν) − ∇_D L(D̃, w, Λ, ν) ‖ ≤ M_D ‖ D − D̃ ‖.   (26)

Moreover, the projected gradients of the Lagrangian in the primal and dual variables are bounded with block constants G_D, G_w, G_Λ, and G_ν, which implies that, e.g.,

‖ ∇̃_D L̂_t(D, w, Λ, ν) ‖ ≤ G_D.   (27)
Assumption 3.

(Diminishing step-size rules) The step size ε_t is chosen as ε_t = O(1/t), i.e., it satisfies (i) Σ_{t=1}^∞ ε_t = ∞ (non-summability) and (ii) Σ_{t=1}^∞ ε_t² < ∞ (square-summability).
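
For instance, the choice ε_t = ε_0/(1 + t) with any ε_0 > 0 satisfies both conditions, since the harmonic series Σ_t 1/(1 + t) diverges while Σ_t 1/(1 + t)² converges.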

Assumption 4.

(Stochastic Approximation Error) The bias of the stochastic gradients of the Lagrangian with respect to each block variable asymptotically converges to null at a rate on the order of the algorithm step size, which allows us to write, e.g.,

‖ E[ δ_{D,t} | F_t ] ‖ = O(ε_t),  ‖ E[ δ_{Λ,t} | F_t ] ‖ = O(ε_t),   (28)

where δ_{D,t} := ∇_D L̂_t − ∇_D L_t denotes the stochastic error of the Lagrangian gradient with respect to the dictionary D, and δ_{Λ,t} := ∇_Λ L̂_t − ∇_Λ L_t denotes the dual stochastic approximation error with respect to Λ.

Moreover, let F_t be a sigma algebra that measures the history of the system up until time t. Then, the conditional second moments of the stochastic gradients are bounded by σ² for all times t, which for example allows us to write

E[ ‖ ∇_D L̂_t ‖² | F_t ] ≤ σ².   (29)

Assumption 1 is standard in distributed algorithms, and Assumption 2 is common in the analysis of descent methods; the latter is guaranteed to hold here because the gradients are projected onto the compact sets 𝒟^V, 𝒲^V, 𝒮_Λ, and 𝒮_ν. Assumption 3 specifies a diminishing step-size rule for the algorithm, and Assumption 4 imposes conditions on the stochastic approximation errors; both are typical in stochastic optimization and are satisfied in most practical settings.

Observe that the projected stochastic gradients in the updates (12)-(13) imply that the primal variables themselves are contained in the compact sets 𝒟^V and 𝒲^V, which allows us to write

‖ D_t ‖ ≤ K_D,  ‖ w_t ‖ ≤ K_w,   (30)

for all dictionaries D_t ∈ 𝒟^V and model parameters w_t ∈ 𝒲^V. The compactness of the dual sets 𝒮_Λ and 𝒮_ν ensures the primal gradients are bounded [cf. (27)], and the respective dual gradients in Λ and ν are bounded by the constants G_Λ and G_ν.

Before stating the main theorem, we present a lemma which will be used in its proof, and appears as Proposition 1.2.4 in [33].

Lemma 1

Let {a_t} and {b_t} be two nonnegative scalar sequences such that Σ_{t=1}^∞ a_t = ∞ and Σ_{t=1}^∞ a_t b_t < ∞. Then

lim inf_{t→∞} b_t = 0.   (31)

Furthermore, if |b_{t+1} − b_t| ≤ B a_t for some constant B > 0, then

lim_{t→∞} b_t = 0.   (32)

With these preliminary results in place, we may state our main result, which says that the proposed algorithm on average asymptotically achieves a first-order stationarity condition of the Lagrangian associated with the optimization problem stated in (8).

Theorem 1

Denote by (D_t, w_t, Λ_t, ν_t) the sequence generated by the block saddle point algorithm in (12)-(15). If Assumptions 1-4 hold true, then the first-order stationarity condition with respect to the primal variables,

lim_{t→∞} E[ ‖ ∇̃_D L_t(D_t, w_t, Λ_t, ν_t) ‖ ] = 0,   (33)
lim_{t→∞} E[ ‖ ∇̃_w L_t(D_t, w_t, Λ_t, ν_t) ‖ ] = 0,   (34)

is asymptotically achieved in expectation. Moreover, the asymptotic feasibility condition

lim_{t→∞} E[ ‖ C_D D_t ‖ ] = 0,   (35)
lim_{t→∞} E[ ‖ C_w w_t ‖ ] = 0,   (36)

is attained in an expected sense.

Proof : The analysis is broken up into distinct components for the primal and dual variables. In the primal variables, we consider the difference of the Lagrangian evaluated at the next and current iterates. We expand terms, use properties of the stochastic gradients and function smoothness, and take conditional expectations on past information to establish a decrement property. We then mirror this analysis in the dual domain. At this point we leverage the step-size rules and apply (31). We then consider the magnitude of block gradient differences, which we bound by a term that diminishes with the step size; this implies that (32) holds, yielding the expected asymptotic convergence to a stationary solution. We subsequently use the shorthand L_t := L(D_t, w_t, Λ_t, ν_t) and L̂_t := L̂_t(D_t, w_t, Λ_t, ν_t), and analogous notation for the other block variables.

Begin by considering the difference of Lagrangians evaluated at the primal variables at the next and current time, and apply Taylor’s Theorem to quadratically approximate the former term