Learning heat diffusion graphs

11/04/2016 ∙ by Dorina Thanou, et al. ∙ EPFL ∙ MIT

Effective information analysis generally boils down to properly identifying the structure or geometry of the data, which is often represented by a graph. In some applications, this structure may be partly determined by design constraints or pre-determined sensing arrangements, as in road transportation networks, for example. In general, though, the data structure is not readily available and is difficult to define. In particular, the global smoothness assumptions that most existing works adopt are often too general and unable to properly capture localized properties of the data. In this paper, we go beyond this classical data model and rather propose to represent information as a sparse combination of localized functions that live on a data structure represented by a graph. Based on this model, we focus on the problem of inferring the connectivity that best explains the data samples at different vertices of a graph that is a priori unknown. We concentrate on the case where the observed data is actually the sum of heat diffusion processes, which is a common model for data on networks or other irregular structures. We cast a new graph learning problem and solve it with an efficient nonconvex optimization algorithm. Experiments on both synthetic and real world data finally illustrate the benefits of the proposed graph learning framework and confirm that the data structure can be efficiently learned from data observations only. We believe that our algorithm will help solve key questions in diverse application domains such as social and biological network analysis, where it is crucial to unveil the proper geometry for data understanding and inference.


I Introduction

Data analysis and processing tasks typically involve large sets of structured data, where the structure carries critical information about the nature of the recorded signals. One can find numerous examples of such datasets in a wide diversity of application domains, such as transportation networks, social or computer networks, brain analysis or even digital imaging and vision. Graphs are commonly used to describe the structure or geometry of such data as they provide a flexible tool for representing and eventually manipulating information that resides on topologically complicated domains. Once an appropriate graph is constructed, inference and analysis tasks can be carried out with a careful consideration of the data geometry using, e.g., spectral theory [1] or graph signal processing [2] concepts. The most common model that relates data and graph topology consists in assuming that the data is globally smooth on the graph, that is, samples connected by strong edge weights tend to be similar. This global model is, however, not always capable of properly capturing specific properties of the data. While recent research has put a focus on the development of effective methods for processing data on graphs and networks, relatively little attention has been given to the definition of good graphs. This problem remains critical and may actually represent the major obstacle towards effective processing of structured data.

In this work, we first depart from the common globally smooth signal model and propose a more generic model where the data consists of (sparse) combinations of overlapping local patterns that reside on the graph. These patterns may describe localized events or specific processes appearing at different vertices of the graph, such as traffic bottlenecks in transportation networks or rumor sources in social networks. More specifically, we view the data measurements as observations, at different time instants, of a few processes that start at different nodes of an unknown graph and diffuse with time. Such data can be represented as the combination of graph heat kernels or, more generally, of localized graph kernels. The heat diffusion model, in particular, applies to a wide range of real-world scenarios where one wishes to understand the distribution of heat (sources) [3]. One example is the propagation of a heat wave in geographical spaces. Another example is the movement of people in buildings or vehicles in cities, which are represented on a geographical graph. Finally, a shift of people’s interest towards certain subjects on social media platforms such as Twitter could also be understood via a heat diffusion model [4].

Fig. 1: Decomposition of a graph signal (a) into four localized simple components (b), (c), (d), (e). Each component is a heat diffusion process, observed at a fixed time instant, that has started from a different network node. The size and the color of each ball indicate the value of the signal at each vertex of the graph.

We then cast a new graph learning problem that aims at estimating a graph that best explains the data measurements, denoted as graph signals, under the heat diffusion model. Specifically, we represent our signals as a linear combination of a few (sparse) components from a graph dictionary consisting of heat diffusion kernels. The graph learning problem is then formulated as a regularized inverse problem where both the graph and the sparse coefficients are unknown. We propose a new algorithm to solve the resulting nonconvex optimization problem, which, under mild assumptions [5], guarantees that the iterates converge to a critical point. We finally provide a few illustrative experiments on synthetic data, as well as on two real world datasets that capture (i) the diffusion of tracers in atmospheric systems and (ii) the mobility patterns of Uber trips in New York City. The graph recovered from the first dataset correctly captures the trajectory of the chemical tracer, while the graphs learned from the Uber data reveal meaningful mobility patterns at different time intervals of the day across the city. The results confirm that the proposed algorithm is effective at inferring meaningful graph topologies in both synthetic and real world settings. Our framework is one of the first attempts to learn graphs carrying the structure of data that is not necessarily smooth but instead obeys a more generic sparse model. We believe that this framework will prove particularly useful in the analysis of social and biomedical networks, for example, where the data structure is not immediate from the application settings or design constraints.

The structure of the paper is as follows. We first highlight some related work on the learning of graph topologies in Section II. In Section III, we introduce our signal model and the structure of the diffusion dictionary. The graph learning algorithm is presented in Section IV. Finally, in Section V, we evaluate the performance of our algorithm for both synthetic and real world graph signals.

II Related work

A number of approaches have recently been proposed to learn the geometry of data. Intense research efforts have been dedicated to methods for estimating covariance matrices (see, e.g., [6]), which carry information about the data geometry. Richer structures can be estimated by learning data graphs instead of the mere covariance matrix. For example, the work in [7] learns a valid graph topology (the adjacency matrix) with an optimization problem that is very similar to sparse inverse covariance estimation, but it instead involves a regularized full-rank Laplacian matrix. Then, the authors in [8] relax the assumption of a full-rank matrix and propose to learn a valid graph Laplacian by imposing smoothness of observations on the graph. Thus, instead of focusing on pairwise correlations between random variables, they explore the link between the signal model and the graph topology to learn a graph that provides a globally smooth representation of the corresponding graph signals. This framework has been extended further to yield a more scalable algorithm for learning a valid graph topology [9]. The authors in [10] propose an algorithm to estimate a generalized Laplacian matrix instead of the classical combinatorial or normalized Laplacian. Finally, manifold learning certainly represents another important class of works that aims at estimating the data structure and bears some similarity with the graph learning algorithms in some specific settings [11]. However, all the above works assume that the data evolve smoothly on the underlying structure, which is not necessarily the ideal model for all datasets.

The idea of recovering graph topologies for different graph signal models is relatively new and has not yet received a lot of attention. An autoregressive model that is based on graph filter dynamics is used in [12] to discover unknown relations among the vertices of a set of time series. The authors in [13] model the observations as being measured after a few steps of diffusing signals that are initially mutually independent and have independent entries. The diffusion process is modeled by powers of the normalized Laplacian matrix. They propose an algorithm for characterizing and then computing a set of admissible diffusion matrices, which relies on a good estimation of the covariance matrix from the independent signal observations. The problem of estimating a topology from signal observations that lead to particular graph shift operators is studied in [14]. The authors propose to learn a sparse graph matrix that can explain signals from graph diffusion processes, under the assumption that the eigenvectors of the shift operators, i.e., the graph templates, are already known. The graph learning problem then becomes equivalent to learning the eigenvalues of the shift matrix. We will discuss the differences with this scheme in the experimental section. Contrary to the existing works, we learn a graph diffusion process without making any assumption on the eigenvectors of the graph process but instead make an explicit assumption on the diffusion process and the sparse signal model.

III Sparse representation of graph signals

III-A Signal representation

We consider a weighted and undirected graph G = (V, E, W) with N vertices, where V and E represent the vertex (node) and edge sets of the graph, respectively. The matrix W contains the edge weights, with W_ij denoting the positive weight of an edge connecting vertices i and j, and W_ij = 0 if there is no edge. Without loss of generality, we assume that the graph is connected. The graph Laplacian operator is defined as L = D - W, where D is the diagonal degree matrix whose i-th diagonal element is equal to the sum of the weights of all edges incident to vertex i [1]. Being a real symmetric and positive semidefinite matrix, the graph Laplacian has an orthonormal basis of eigenvectors. We let χ denote the eigenvector matrix of L. The diagonal matrix Λ contains the corresponding eigenvalues on its diagonal.
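For concreteness, here is a minimal numerical sketch (our own illustration, not part of the paper) of the Laplacian construction and its eigendecomposition on an assumed toy weight matrix:

```python
# Minimal sketch (illustration only): build L = D - W for a small undirected
# graph and compute its orthonormal eigenvector basis.
import numpy as np

def graph_laplacian(W):
    """Combinatorial Laplacian L = D - W of a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

# Toy 4-node path graph with unit edge weights (assumed example).
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = graph_laplacian(W)

# L is real, symmetric and positive semidefinite: eigh returns non-negative
# eigenvalues (the diagonal of Lambda) and orthonormal eigenvectors (chi).
eigenvalues, chi = np.linalg.eigh(L)
assert np.allclose(chi @ np.diag(eigenvalues) @ chi.T, L)
```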

A graph signal is a function x: V → R such that x(v) is the value of the function at vertex v. We consider the factor analysis model from [15] as our graph signal model, which is a generic linear statistical model that aims at explaining observations of a given dimension with a potentially smaller number of unobserved latent variables. Specifically, we consider

(1)   x = D h + u_x + ε,

where x is the observed graph signal, h is the latent variable that controls x, and D is a representation matrix that linearly relates the two variables. The parameter u_x is the mean of x, which we set to zero for simplicity, and ε is a multivariate Gaussian noise with zero mean and covariance σ_ε² I_N.

To represent signals residing on graphs, and especially to identify and exploit structure in the data, we need to take the intrinsic geometric structure of the underlying graph into account. This structure is usually incorporated in the columns of the representation matrix, i.e., atoms of a dictionary [16, 17]. These atoms carry spectral and spatial characteristics of the graph. Specifically, one can consider spectral graph dictionaries defined by filtering the eigenvalues of the graph Laplacian in the following way:

(2)   D = [ĝ_1(L)  ĝ_2(L)  ⋯  ĝ_S(L)],   with   ĝ_s(L) = χ ĝ_s(Λ) χ^T,

where the ĝ_s(·) are graph filter functions defined on a domain containing the spectrum of the graph Laplacian. Each of these filters captures different spectral characteristics of the graph signals.

For efficient signal representation, the latent variables h should be sparse such that they reveal the core components of the graph signals [18]. In particular, one can impose a Laplace (sparse) prior on the latent variable h like

(3)   p(h) ∝ exp(-α ||h||_1),

where α is a constant, and a Gaussian prior ε ~ N(0, σ_ε² I_N) on the noise. Then the conditional probability of x given h can be written as

p(x | h) ~ N(Dh + u_x, σ_ε² I_N).

Given the observation x and the Laplace prior distribution of h in Eq. (3), we can compute a maximum a posteriori (MAP) estimate of the sparse set of components. Specifically, by applying Bayes’ rule and assuming without loss of generality that u_x = 0, the MAP estimate of the latent variable h is [19]:

(4)   h_MAP(x) := argmin_h  ||x - Dh||_2^2 + α ||h||_1,

where ||·||_1 denotes the ℓ1-norm.

Sparsity-based inverse problems have been widely used in the literature to perform classical signal processing tasks on the observations x, such as denoising and inpainting. Sparsity however largely depends on the design of the dictionary, which itself depends on the graph. In the following, we discuss the choice of the representation matrix and the latent variables in our heat diffusion signal model.

III-B Diffusion signals on graphs

In this paper, we focus on graph signals generated from heat diffusion processes, which are useful in identifying processes evolving near a starting seed node. In particular, the graph Laplacian matrix is used to model the diffusion of heat throughout a graph or, more generally, a geometric manifold. The flow of the diffusion is governed by the following differential equation with initial conditions:

(5)   ∂u(v, t)/∂t = -L u(v, t),   u(v, 0) = u_0(v),

where u(v, t) describes the heat at node v at time t, beginning from an initial distribution of heat given by u_0(v) at time zero. The solution of the differential equation is given by

(6)   u(v, t) = e^{-tL} u_0(v).

Going back to our graph signal model, the graph heat diffusion operator is defined as [20]

R = e^{-L}.

Different powers τ of the heat diffusion operator, i.e., R^τ = e^{-τL}, correspond to different rates of heat flow over the graph. If such operators are used to define a dictionary in (2), i.e., D = [e^{-τ_1 L}  e^{-τ_2 L}  ⋯  e^{-τ_S L}], our graph signal model of (1) becomes

x = Dh = [e^{-τ_1 L}  e^{-τ_2 L}  ⋯  e^{-τ_S L}] h = Σ_{s=1}^{S} e^{-τ_s L} h_s,

which is a linear combination of different heat diffusion processes evolving on the graph. For each diffusion operator e^{-τ_s L}, the signal component e^{-τ_s L} h_s can also be interpreted as the result of filtering an initial graph signal h_s with an exponential, low-pass filter e^{-τ_s λ} in the graph spectral domain. The obtained signal x is the sum of each of these simple components. Notice that the parameter τ_s in our model carries a notion of scale. In particular, when τ_s is small, the i-th column of e^{-τ_s L}, i.e., the atom centered at node i of the graph, is mainly localized in a small neighborhood of i. As τ_s becomes larger, it reflects information about the graph at a larger scale around i. Thus, our signal model can be seen as an additive model of diffusion processes that started at different time instances. Finally, the sparsity assumption of Eq. (3) on the latent variables h implies that we expect the diffusion processes to start from only a few nodes of the graph and spread over the entire graph over time.
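As a hedged illustration of this model (toy graph and scale values of our own choosing, not the paper's), the following sketch builds a two-scale heat diffusion dictionary and synthesizes a graph signal from a few sparse initial sources:

```python
# Sketch of the signal model x = sum_s exp(-tau_s * L) h_s on a toy graph.
import numpy as np
from scipy.linalg import expm

# Small path-graph Laplacian (assumed example).
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W

taus = [0.5, 2.5]                                  # small and large scales
D = np.hstack([expm(-t * L) for t in taus])        # dictionary of size N x SN

N = L.shape[0]
h = np.zeros(len(taus) * N)
h[1] = 1.0                                         # localized source at scale tau_1
h[N + 3] = 0.7                                     # more diffused source at scale tau_2
x = D @ h                                          # observed graph signal
```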

IV Learning graph topologies under sparse signal prior

In many applications, the graph is not necessarily known, and thus the MAP estimate of the latent variables in (4) cannot be solved directly. In the following, we show how the sparse representation model of the previous section can be exploited to infer the underlying graph topology, under the assumption that the signals are generated by a set of heat diffusion processes. First, we formulate the graph learning problem, and then we propose an efficient algorithm to solve it.

IV-A Problem formulation

Given a set of M signal observations X = [x_1, x_2, ..., x_M], resulting from heat diffusion processes evolving on an unknown weighted graph G, our objective is twofold: (i) infer the graph of N nodes by learning the graph Laplacian L, and (ii) learn, for each signal, the latent variable that reveals the sources of the observed processes, i.e., H = [h_1, h_2, ..., h_M], and the diffusion parameters τ = [τ_1, τ_2, ..., τ_S]. As the graph Laplacian L captures the sparsity pattern of the graph, learning L is equivalent to learning the graph G (since our graph does not contain self-loops, the weight matrix W can simply be computed by negating L and setting its diagonal entries to zero). This results in the following joint optimization problem for H, L, and τ:

(7)   min_{L, H, τ}   ||X - DH||_F^2 + α Σ_{j=1}^{M} ||h_j||_1 + β ||L||_F^2
      subject to   tr(L) = N,
                   L_ij = L_ji ≤ 0,  i ≠ j,
                   L · 1 = 0,
                   τ ≥ 0,

where D = [e^{-τ_1 L}  e^{-τ_2 L}  ⋯  e^{-τ_S L}] and h_j corresponds to the j-th column of the matrix H. According to Eq. (4), the objective can be interpreted as the negative log-likelihood of the latent variables (columns of H) conditioned on the graph Laplacian L. The positive scalars α and β are regularization parameters, while 1 and 0 denote the vectors of all ones and zeros, respectively. In addition, tr(·) and ||·||_F denote the trace and Frobenius norm of a matrix, respectively. The trace constraint acts as a normalization factor that fixes the volume of the graph, and the remaining constraints guarantee that the learned L is a valid Laplacian matrix that is positive semidefinite. Note that the trace constraint, together with the other constraints, also fixes the ℓ1-norm of L, while the Frobenius norm is added as a penalty term in the objective function to control the distribution of the off-diagonal entries in L, that is, the edge weights of the learned graph. When H is fixed, the optimization problem bears similarity to the linear combination of ℓ1 and ℓ2 penalties in an elastic net regularization [21], in the sense that the sparsity term is imposed by the trace constraint. When L, τ are fixed, problem (7) becomes equivalent to a MAP estimator, as discussed in the previous subsection.
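To make the role of each term concrete, here is a small sketch (our own, with the function name and notation assumed from the text above) that evaluates the cost of (7) for given variables; the constraints themselves are enforced by the solver and are not checked here:

```python
# Evaluate the cost in (7): data fit + l1 sparsity of the codes + Frobenius
# penalty on the Laplacian.
import numpy as np
from scipy.linalg import expm

def learnheat_objective(X, L, H, taus, alpha, beta):
    D = np.hstack([expm(-t * L) for t in taus])     # diffusion dictionary
    fit = np.linalg.norm(X - D @ H, 'fro') ** 2     # fidelity to the signals
    sparsity = alpha * np.abs(H).sum()              # sum of l1 norms of columns
    penalty = beta * np.linalg.norm(L, 'fro') ** 2  # spreads the edge weights
    return fit + sparsity + penalty
```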

Note that our problem formulation depends on the number of blocks S, i.e., the number of scales of the diffusion processes. The choice of S depends on the training signals, in particular, on the number of scales that one can detect in the training data. As we expect the diffusion processes to be localized, we typically choose a small value for S, say, 1 to 3. One of the blocks would correspond to a very small scale (i.e., highly localized atoms), and the other blocks would capture larger scale, but still somewhat localized, patterns.

The optimization problem (7) is nonconvex with respect to L, H, τ simultaneously. In particular, the data fidelity term ||X - DH||_F^2 is smooth but nonconvex as it contains the product of the three matrix variables (e.g., e^{-τ_1 L} H). As such, the problem may have many local minima and solving it is hard. One could apply alternating minimization, where at each step of the alternation we update one variable by fixing the rest. This, however, does not provide convergence guarantees to a local minimum and, moreover, solving the problem with respect to L is difficult due to the matrix exponential, which makes the problem nonconvex even when H, τ are fixed. In the next section, we propose an effective algorithm to solve the graph learning problem, which is not affected by this difficulty.

IV-B Graph learning algorithm

In order to solve (7), we apply a proximal alternating linearized minimization algorithm (PALM) [5], which can be interpreted as alternating the steps of a proximal forward-backward scheme [22]. PALM is a general algorithm for solving a broad class of nonconvex and nonsmooth minimization problems, which, under mild assumptions [5], guarantees that the iterates converge to a critical point. Moreover, it does not require convexity of the optimization problem with respect to each variable separately. The basis of the algorithm is alternating minimization between the three variables H, L, τ, but in each step we linearize the nonconvex fitting term with a first order function at the solution obtained from the previous iteration. In turn, each step becomes the proximal regularization of the nonconvex function, which can be solved efficiently. More specifically, the algorithm consists of three main steps: (i) update of H, (ii) update of L, (iii) update of τ, and inside each of these steps we compute the gradient and estimate the Lipschitz constant with respect to each of the variables. Algorithm 1 contains a summary of the basic steps of PALM adapted to our graph learning problem.

Input: Signal set X, number of iterations iter
Output: Sparse signal representations H, graph Laplacian L, diffusion parameters τ
Initialization: initial L and τ
for t = 1, 2, ..., iter do:
   Choose the step-size parameter for the H-update
   Update H by solving opt. problem (8)
   Choose the step-size parameter for the L-update
   (a) Update L by solving opt. problem (11)
   (b) Update the dictionary D
   Choose the step-size parameter for the τ-update
   (a) Update τ by solving opt. problem (12)
   (b) Update the dictionary D
end for
Return: H, L, τ.
Algorithm 1 Learning heat kernel graphs (LearnHeat)

In the following, we explain in detail each of the steps of Algorithm 1. We make use of the data-fidelity term Z(H, L, τ) = ||X - DH||_F^2 from (7), together with the indicator function of the convex constraint set on L, which takes the value 0 when the constraints are satisfied and +∞ otherwise.

IV-B1 Update of H (Algorithm 1: lines 5-6)

For iteration t + 1 of the sparse coding update step, we solve the following optimization problem:

(8)   H^{t+1} = argmin_H  ⟨H - H^t, ∇_H Z(H^t, L^t, τ^t)⟩ + (c_t / 2) ||H - H^t||_F^2 + f(H),

where H^t, L^t, τ^t are the updates obtained at iteration t and c_t is a positive constant. This step can simply be viewed as the proximal regularization of the nonconvex function Z(H, L^t, τ^t), linearized at H^t:

H^{t+1} = prox_{f/c_t}( H^t - (1/c_t) ∇_H Z(H^t, L^t, τ^t) ),

where prox_{f/c_t} is the proximal operator of the convex function f(H) = α Σ_{j=1}^{M} ||h_j||_1 with parameter c_t, given by

(9)   prox_{f/c_t}(G) = argmin_H  f(H) + (c_t / 2) ||H - G||_F^2,

where G = H^t - (1/c_t) ∇_H Z(H^t, L^t, τ^t). The required gradient of Z with respect to H is computed in Appendix A-A. The parameter c_t is chosen slightly larger than the Lipschitz constant of ∇_H Z with respect to H, as derived in Appendix B-A.
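Since f is a (scaled) ℓ1-norm, this proximal step reduces to entrywise soft thresholding. A minimal sketch (our own illustration, with the step size and threshold as assumed placeholders):

```python
# Soft thresholding: the closed form of prox_{t*||.||_1}, applied entrywise.
import numpy as np

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

# One linearized proximal step on the codes (symbols assumed from the text):
# H_next = soft_threshold(H - grad_H / c, alpha / c).
Z = np.array([1.5, -0.2, 0.7, -3.0])
print(soft_threshold(Z, 0.5))        # approximately [ 1.  -0.   0.2 -2.5]
```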

IV-B2 Update of L (Algorithm 1: lines 7-9)

The graph update step is performed by

(10)   L^{t+1} = argmin_L  ⟨L - L^t, ∇_L Z(H^{t+1}, L^t, τ^t)⟩ + (d_t / 2) ||L - L^t||_F^2 + g(L),

with d_t chosen slightly larger than an estimate of the Lipschitz constant of ∇_L Z described in Appendix B-B, and g(L) collecting the Frobenius-norm penalty β ||L||_F^2 and the Laplacian constraints of (7). Given that g(L) comprises a quadratic term constrained in a convex polytope, the proximal minimization step (10) is a quadratic program (QP) that can be written as:

(11)   minimize_L   ⟨L - L^t, ∇_L Z(H^{t+1}, L^t, τ^t)⟩ + (d_t / 2) ||L - L^t||_F^2 + β ||L||_F^2
       subject to   tr(L) = N,   L_ij = L_ji ≤ 0 (i ≠ j),   L · 1 = 0.

This requires the gradient of Z with respect to L, the derivation of which can be found in Appendix A-B. Given this gradient, the optimization problem (11) can be solved using operator splitting methods, such as the alternating direction method of multipliers (ADMM) [23]. In this paper, we solve the problem by using the algorithm proposed in [24], which converts the problem to a convex cone optimization problem, and utilizes ADMM to solve the homogeneous self-dual embedding. Compared to other methods, this approach finds both primal and dual solutions of the problem, is free of parameters, and scales to large problem sizes.
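For intuition only, a constrained quadratic step of this kind can be prototyped with a generic convex modeling tool; the following sketch uses cvxpy (a stand-in for illustration, not the authors' conic solver of [24]) to minimize a quadratic around an assumed reference point over the set of valid Laplacians:

```python
# Hedged prototype of one Laplacian step: minimize a quadratic around a
# reference point Z subject to the valid-Laplacian constraints of (7).
# Note: zero row sums plus non-positive off-diagonals already imply a
# diagonally dominant (hence positive semidefinite) matrix.
import cvxpy as cp
import numpy as np

N = 4
rng = np.random.default_rng(0)
Z = rng.standard_normal((N, N))
Z = (Z + Z.T) / 2                               # symmetric reference point

Lvar = cp.Variable((N, N), symmetric=True)
constraints = [cp.trace(Lvar) == N,             # fixes the graph volume
               Lvar @ np.ones(N) == 0]          # zero row sums
constraints += [Lvar[i, j] <= 0                 # non-positive off-diagonals
                for i in range(N) for j in range(N) if i != j]

problem = cp.Problem(cp.Minimize(cp.sum_squares(Lvar - Z)), constraints)
problem.solve()
print(np.round(Lvar.value, 3))                  # a valid Laplacian close to Z
```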

IV-B3 Update of τ (Algorithm 1: lines 10-12)

Finally, we can update the diffusion parameters τ following the same reasoning as above. The corresponding optimization problem can be written as

(12)   τ^{t+1} = argmin_τ  ⟨τ - τ^t, ∇_τ Z(H^{t+1}, L^{t+1}, τ^t)⟩ + (e_t / 2) ||τ - τ^t||_2^2
       subject to   τ ≥ 0,

where e_t is chosen slightly larger than the Lipschitz constant of ∇_τ Z computed in Appendix B-C. This problem has a closed form solution given by

(13)   τ^{t+1} = max( τ^t - (1/e_t) ∇_τ Z(H^{t+1}, L^{t+1}, τ^t), 0 ),

with the gradient computed in Appendix A-C. Finally, we note that if we have an a priori estimate of the diffusion parameters τ (e.g., from the training phase), then we solve our optimization problem with respect to H and L by following only the first two steps of our algorithm.

IV-C Discussion on the computational complexity

In the following, we discuss the computational complexity of our graph learning algorithm. Dealing with the heat diffusion processes represents one of the main computational bottlenecks. Both the computation of the matrix exponential, via a spectral decomposition or via the scaling and squaring method [25], and the computation of its gradient described in Appendix A-B require O(N^3) operations. Thus, this part of the algorithm can be expected to become time consuming for very large graphs. One way to reduce this cost is to approximate the heat diffusion kernel with a polynomial of degree K, reducing the complexity of applying a heat diffusion process to O(K|E|), where |E| is the number of edges of the graph Laplacian. Since we generally consider heat diffusion processes that remain well localized, the degree K will typically be small. This approximation of the heat diffusion process is particularly efficient when the graph Laplacian is sparse. Also, it can be expected that the complexity of the gradient computation greatly reduces when using a polynomial approximation of the kernel; see [26] for some recent work in this direction. A detailed investigation of this aspect is part of our future work.
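As a rough illustration of the polynomial speed-up (our own sketch, with an assumed truncation order K), e^{-τL} x can be approximated with K sparse matrix-vector products via a truncated Taylor series:

```python
# Approximate exp(-tau*L) @ x with a degree-K Taylor polynomial of L, using
# only matrix-vector products (no eigendecomposition of L).
import numpy as np
from scipy.linalg import expm

def heat_apply_taylor(L, x, tau, K=15):
    result = x.copy()
    term = x.copy()
    for k in range(1, K + 1):
        term = (-tau / k) * (L @ term)     # k-th Taylor term of exp(-tau*L) x
        result = result + term
    return result

# Quick check on a toy Laplacian (assumed example).
W = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L = np.diag(W.sum(1)) - W
x = np.array([1., 0., 0.])
print(np.allclose(heat_apply_taylor(L, x, 0.5), expm(-0.5 * L) @ x))  # True
```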

We further note that the computational complexity of the sparse coding step (lines 5-6 of the Algorithm) is dominated by the cost of computing the Lipschitz constant (see Appendix A-A), which requires the computation of the product D^T D and, for a small number of blocks S, is of order O(N^3). Again, this cost greatly reduces when using a polynomial approximation of the kernel. The update of the sparse codes in Eq. (8) then reduces to matrix products with the dictionary followed by entrywise soft thresholding. Finally, the update of the graph Laplacian (Algorithm 1: lines 7-9) consists of three steps: the recursive approximation of the Lipschitz constant (Appendix B-B), the computation of the gradient discussed above, and the solution of the optimization problem (11). The solution of (11) involves three main steps [24], among which the most expensive one is solving a linear system. For large scale systems, this can be done efficiently by applying a conjugate gradient method. Finally, the computation of the Lipschitz constant in the update of τ (see Appendix B-C) requires the computation of a spectral norm, which can be estimated with a few steps of the power or Lanczos method [27].
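For completeness, here is a generic power-iteration sketch (our own, not from the paper) for estimating a spectral norm as mentioned above:

```python
# Estimate ||A||_2 with a few power iterations on A^T A.
import numpy as np

def spectral_norm_power(A, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    for _ in range(n_iter):
        v = A.T @ (A @ v)                 # one step of the power method
        v /= np.linalg.norm(v)
    return np.linalg.norm(A @ v)          # converges to the largest singular value

A = np.random.default_rng(1).standard_normal((50, 40))
print(spectral_norm_power(A), np.linalg.norm(A, 2))   # the two values agree closely
```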

V Experiments

In this section, we evaluate the performance of the proposed algorithm in both synthetic and real world experiments. We solve the optimization problem of Eq. (11) using ADMM, which is implemented with the splitting conic solver [24], a numerical optimization package for solving large-scale convex cone problems (the conic solver can be found at https://github.com/cvxgrp/scs). As a termination criterion, we stop the algorithm when a maximum number of iterations (set to 1000 in our experiments) is reached or the absolute difference in the value of the objective function at two consecutive iterations becomes smaller than a prescribed tolerance.

V-A Results on synthetic data

V-A1 Simulation settings

We first test the performance of the learning algorithm by comparing the learned graph to the groundtruth one in synthetic datasets. We evaluate the performance of the algorithm on random graphs of N vertices, generated from three different models: the RBF random graph, the Barabási-Albert model (BA) [28], and the Erdős-Rényi model (ER) [29]. In the case of the RBF graph, we generate the coordinates of the vertices uniformly at random in the unit square, and we set the edge weights based on a thresholded Gaussian kernel function, so that W_ij = exp(-d(i, j)^2 / (2σ^2)) if the distance d(i, j) between vertices i and j is less than or equal to a threshold κ, and zero otherwise. We further fix the kernel width σ and the threshold κ in our experiments. In the ER graph, an edge is included with probability 0.2 independently of the other edges. Finally, in the BA graph, we add vertices one after the other and connect them to existing vertices following a preferential attachment mechanism. Given the adjacency matrix of each type of graph, we finally compute the graph Laplacian and we normalize it in such a way that its trace is equal to N.

With the above model-based graphs, we then construct synthetic graph signals as follows. We use the graph Laplacian to generate an oracle dictionary of the form D = [e^{-τ_1 L}  e^{-τ_2 L}], with the two diffusion scales set to one pair of values for the RBF and the ER graphs and to another pair for the BA model. These values are chosen in such a way that our dictionaries contain two patterns that are sufficiently distinct from each other. In particular, the one corresponding to a small τ captures a very localized pattern, while the one corresponding to a large τ captures a diffusion that has already spread in the local neighborhood of the vertex. We then generate 100 graph signals by linearly combining three random atoms from the dictionary with random coefficients drawn from a Gaussian distribution with zero mean and unit variance. The sparse vector of random coefficients represents the initial heat on the graph. We finally hide the graph structure, and apply Algorithm 1 with different sets of parameters α and β in order to estimate the graph only from the signal observations. The initialization of the graph Laplacian is done with a random valid Laplacian matrix. We compare our algorithm (LearnHeat) with the following two methods: (i) the algorithm proposed in [14], which is based on learning the graph Laplacian from diffusion filters and does not make any assumption on the global smoothness of the signals, and (ii) the algorithm in [8], which recovers a graph Laplacian by assuming global smoothness of the signals on the graph. We solve the algorithm in [14] for different values of the parameter ε, where ε controls the imperfection of the spectral templates estimated from the covariance matrix and provides a constraint on how close the optimized matrix should be to these templates. We threshold the learned Laplacian matrices by ignoring entries whose absolute values are smaller than a small threshold.
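A condensed sketch of this synthetic setup is given below (our own code; the kernel width, threshold, diffusion scales, and seed are assumed values rather than the paper's):

```python
# Generate an RBF random graph and 100 sparse heat-diffusion training signals.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, sigma, threshold = 20, 0.5, 0.75
coords = rng.random((N, 2))                              # vertices in the unit square
dists = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
W = np.exp(-dists**2 / (2 * sigma**2))
W[dists > threshold] = 0                                 # keep only short edges
np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
L *= N / np.trace(L)                                     # normalize trace to N

taus = [2.5, 4.0]                                        # two assumed diffusion scales
D = np.hstack([expm(-t * L) for t in taus])              # oracle dictionary
n_signals, n_active = 100, 3
H = np.zeros((D.shape[1], n_signals))
for j in range(n_signals):
    idx = rng.choice(D.shape[1], size=n_active, replace=False)
    H[idx, j] = rng.standard_normal(n_active)            # sparse initial heat
X = D @ H                                                # training graph signals
```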

Fig. 2: The learned graph Laplacian matrices for a Gaussian RBF graph. The color indicates the values of the entries of the graph Laplacians. The first row, (a)-(d), illustrates the groundtruth Laplacian and the Laplacians recovered with LearnHeat, the algorithm of [14], and the smooth signal model of [8], when the training signals are clean. The second row, (e)-(h), illustrates the same results obtained from noisy training signals.

V-A2 Graph learning performance

We first compare visually the learned graph Laplacian for an RBF graph model with the corresponding groundtruth one. The results illustrated in Fig. 2 are the ones obtained for the pair of α and β that leads to the best quantitative results (see below), and the best ε for [14]. First, we consider the noiseless case of clean training signals (first row). We observe that both the graph Laplacians learned with the proposed algorithm and with the diffusion filters of [14] are visually consistent with the groundtruth Laplacian, reaching F-measure scores of 0.9784 and 0.9927 respectively. On the other hand, the performance of [8], which is based on a smooth signal model, is worse in terms of F-measure score (0.9173). This is quite expected as, when the diffusion parameter τ is relatively small, the signals generated by heat diffusion processes consist mainly of localized, piecewise smooth components that can be better captured with the other two algorithms. A globally smooth signal model can help recover parts of the graph, but is not accurate enough to reveal the true topology. However, as τ increases, the diffusion tends to a steady state that is a smooth signal on the graph. In that case, the behavior of our algorithm is expected to be close to the one of [8].

In the second set of experiments, we test the sensitivity of the three algorithms to noise on the training signals. In particular, we add some white noise with zero mean and variance 0.02 to our training signals, leading to a signal to noise ratio of approximately 13 dB. In the second row of Fig. 2, we observe that LearnHeat is quite resilient to the noise, reaching an F-measure score of 0.9552 and an error of 0.2642 in terms of the Frobenius difference of the edge weights compared to the groundtruth ones. The performance of [14] seems to deteriorate significantly due to the noise, achieving an F-measure score of 0.8451 and a weight error of 0.3546. This is quite expected as the algorithm is based on the estimation of the eigenvectors of the Laplacian, which depends on the covariance of the noisy training set. The performance of [8] deteriorates too, but less significantly, as this algorithm contains a term in the optimization problem that performs denoising of the training signals.

In order to evaluate quantitatively the performance of our learning algorithm in recovering the edges of the groundtruth graph, we report the Precision, Recall, F-measure and Normalized Mutual Information (NMI) [30] scores, as well as the difference in terms of the Frobenius norm of the edge weights, averaged over ten random instances of three graph models with their corresponding 100 signal observations. For computing the NMI, we first compute a 2-cluster partition of all the vertex pairs using the learned graph, based on whether or not there exists an edge between the two vertices. We then compare this partition with the 2-class partition obtained in the same way using the groundtruth graph. The LearnHeat results shown in Tables I, II are the ones corresponding to the best combinations of α and β in terms of F-measure for noiseless and noisy training signals respectively, while the results of [14] are the ones obtained for the constant ε that gives the best F-measure. These results confirm that our algorithm is able to learn graph topologies that are very similar to the groundtruth ones and its performance is quite robust to noise. The algorithm of [14] seems to perform very well in the noiseless case. However, its performance deteriorates in most of the noisy cases. As expected, the worst performance is observed in [8]. Since our training signals consist of localized heat diffusion patterns, the performance of [8] is significantly penalized by the global smoothness constraint that is imposed in the optimization problem.

Graph model F-measure Precision Recall NMI weight error
Gaussian RBF (LearnHeat) 0.9779 0.9646 0.9920 0.8977 0.2887
Gaussian RBF [14] 0.9911 0.9905 0.9919 0.9550 0.2081
Gaussian RBF [8] 0.8760 0.8662 0.8966 0.5944 0.4287
ER (LearnHeat) 0.9303 0.8786 0.9908 0.7886 0.3795
ER [14] 0.8799 0.8525 0.9157 0.65831 0.3968
ER [8] 0.7397 0.6987 0.8114 0.4032 0.5284
BA (LearnHeat) 0.9147 0.8644 0.9757 0.7538 0.4009
BA [14] 0.8477 0.7806 0.9351 0.6009 0.3469
BA [8] 0.6969 0.6043 0.8459 0.3587 0.5880
TABLE I: Graph learning performance for clean data
Graph model F-measure Precision Recall NMI weight error
Gaussian RBF (LearnHeat) 0.9429 0.9518 0.9355 0.7784 0.3095
Gaussian RBF [14] 0.8339 0.8184 0.8567 0.5056 0.3641
Gaussian RBF [8] 0.8959 0.7738 0.9284 0.5461 0.4572
ER (LearnHeat) 0.8217 0.7502 0.9183 0.5413 0.3698
ER [14] 0.8195 0.7662 0.8905 0.5331 0.3809
ER [8] 0.6984 0.5963 0.8690 0.3426 0.5172
BA (LearnHeat) 0.8155 0.7503 0.8986 0.5258 0.4036
BA [14] 0.8254 0.7613 0.9068 0.5451 0.3980
BA [8] 0.7405 0.6800 0.8230 0.3980 0.5899
TABLE II: Graph learning performance for noisy data

V-A3 Algorithm analysis

To understand the effect of the number of training signals on the learning performance, we run a set of experiments on clean training signals. In Fig. 3, we illustrate the best F-measure score achieved for training sets of different sizes. We observe that the performance of all three algorithms under study depends on the size of the training set. However, for a very small training set, our algorithm seems to outperform the others. In that regime, the recovery performance from the diffusion filters [14] depends on the estimation of the spectral templates, which is highly dependent on the number of training samples. Although this approach is quite interesting and works very well when the training set is large and the estimation of the covariance matrix is accurate, it might face some limitations when the training set is limited and noisy. On the contrary, our algorithm learns a graph diffusion process without making any assumption on the eigenvectors of the graph process: it rather sets an explicit assumption on the (heat) diffusion process and the signal model. Moreover, our sparsity assumption imposes additional structure on the problem, leading to high recovery performance even when the training set is limited.

We now study the effect of the parameters α and β in the objective function of Eq. (7). We illustrate in Fig. 4 the number of edges of the learned graph and the F-measure score under different combinations of these parameters, for a random instance of the Gaussian RBF graph. The obtained results indicate that the performance of the algorithm in terms of both the number of edges and the F-measure is mainly determined by the sparsity control parameter. The parameter β influences the sparsity pattern of the learned graph. In particular, for a fixed α, the number of learned edges decreases as β decreases. A big β implies a small Frobenius norm, which leads to a Laplacian matrix with many non-zero entries that are similar to each other. Thus, the correct value of β is determined by the true sparsity of the underlying graph. Then, in order to understand the effect of the parameter α, we need to distinguish the following three cases. When α is relatively small, the number of edges is relatively big, leading to an F-measure score that is low. In this case, the solution of the optimization problem is mainly determined by the fitting term and the Frobenius norm penalty, leading to a graph that is dense, with similar entries when β is larger. As we increase α, we observe in Fig. 4 that the number of edges decreases, and the learned graph becomes similar to the groundtruth one, as indicated by the F-measure score. In particular, there exists a range of values for the parameter α where the learned graph reaches a number of edges that is similar to the one of the true graph, and the F-measure reaches its peak. Alternatively, when the value of α is relatively big, the solution of the sparse coding step tends to give a matrix H that is sparser. In that case, the algorithm tries to express the signals in the dense matrix X as heat diffusion processes starting from some sparse initial heat sources H. We recall that the heat kernel can be written as the Taylor expansion of the exponential function, e^{-τL} = Σ_{k=0}^{∞} (-τ)^k L^k / k!. Moreover, the k-th power of the Laplacian is localized in the k-hop neighborhood of a node n, i.e., (L^k)_{n,m} = 0 if nodes n and m are not connected by a path of at most k hops on the graph [31]. Thus, the initial heat h_j, corresponding to an observation x_j, diffuses all over the graph only if there exists a finite path connecting the sources indicated in h_j with the other nodes of the graph. As a result, in order to approximate a dense observation x_j, the graph that we learn should be more connected. In the extreme case when H is a zero matrix, the objective function penalizes only the Frobenius norm of L. The latter explains the tendency of the algorithm to favor complete graphs with similar entries when α is large.

Fig. 3: Dependence of the F-measure on the size of the training set for the three different algorithms, i.e., LearnHeat, diffusion filters, and smooth priors.

V-A4 Source localization

In a final set of experiments, we illustrate the performance of the learned diffusion dictionary in terms of source localization. For the sake of simplicity, we focus on the case of only one dictionary block. In particular, we use the different instances of the topologies learned with our scheme for an RBF Gaussian graph model. We then use the learned graphs to solve a sparse inverse problem, similar to (4), to recover the sources from a set of other signal observations. For each value of the diffusion parameter τ, we generate one diffusion dictionary per topology instance. Each of these dictionaries is used to generate a set of 1000 testing signals that are each a linear combination of 3 atoms of the corresponding dictionary, generated in the same way as the training signals. The location of these atoms defines the initial sources of the process. We aim at recovering the initial sources by solving an iterative soft thresholding algorithm [32] with the diffusion dictionary on a learned graph.
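A compact sketch of this recovery step is given below (a generic ISTA loop of our own, with an assumed sparsity weight; not the authors' implementation):

```python
# Iterative soft thresholding for min_h ||x - D h||_2^2 + alpha * ||h||_1.
import numpy as np

def ista(D, x, alpha, n_iter=500):
    step = 1.0 / (2 * np.linalg.norm(D, 2) ** 2)     # 1 / Lipschitz constant
    h = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ h - x)                 # gradient of the fit term
        z = h - step * grad
        h = np.sign(z) * np.maximum(np.abs(z) - alpha * step, 0.0)
    return h                                          # support ~ initial sources
```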

Fig. 4: (a) The number of edges in the learned graph, and (b) the F-measure score, under different combinations of the parameters α and β for an instance of the Gaussian RBF graph.
Fig. 5: Source recovery performance measured with respect to (a) the F-measure score between the recovered and the groundtruth sparse codes and (b) the location of the sparse-code coefficients with the highest magnitude, for different values of the diffusion parameter τ and three initial sources.

In Fig. 5, we show the source recovery performance for different values of the parameter τ. In particular, in Fig. 5(a), we illustrate the average F-measure score between the groundtruth sparse codes of the testing signals and the recovered ones as a function of τ. We observe that the F-measure is high when τ is low. The latter is intuitive as a small τ implies that the diffusion process is quite localized around the initial sources, leading to an easier recovery. As τ increases, the performance reduces significantly, as the diffusion process tends towards a smooth signal on the graph. Thus, recovering the sources becomes more difficult. We notice that the recovery performance is measured in terms of the activation of the sources, i.e., the non-zero positions of the learned sparse codes, and not their actual values. In order to better understand the source localization performance with a sparse prior, we keep only the sparse-code entries with the highest magnitudes, as many as the number of initial sources. We then illustrate in Fig. 5(b) the average number of recovered sources for different values of the parameter τ, for the sparsity parameter α of (4) that gives the best source recovery results. These results are consistent with the previous ones and they confirm that, when τ is low, the locations of the sparse codes with the highest magnitude correspond to the initial sources.

V-B Graph learning in real-world datasets

V-B1 ETEX dataset

We now illustrate the performance of our algorithm on real world datasets. We first consider data from the European Tracer Experiment (ETEX), which took place in 1994 [33] (the dataset is publicly available at https://rem.jrc.ec.europa.eu/etex/ and has already been processed in [34]). The experiment consists in injecting a particular gas, namely the tracer, into the atmospheric system and then in observing the evolution of the tracer with a variety of sampling and analysis tools. Tracer techniques are widely applied for the determination of dispersion and dilution patterns of atmospheric pollutants. In particular, an easily identifiable tracer (perfluorocarbons) has been released in the atmosphere from the city of Rennes, in France. The concentration of the tracer has then been measured over a period of 72 consecutive hours, at 168 ground-level stations in Western and Eastern Europe. In our experiments, we consider the 168 sampling stations as nodes of the graph and the concentration measured at each of them as signals on the graph. The measurements obtained at different time instances within the 72-hour period form 30 observations, which are used to infer the diffusion topology that can explain well the diffusion of the tracer. For this experiment, we use a dictionary with a single block, as the observations consist of many zero entries, which indicates that they can be approximated with a single diffusion process at small scale. Moreover, we fix the scale parameter τ and we initialize the Laplacian matrix with a random valid graph Laplacian matrix.

Fig. 6: The learned graph and different measurements over time (a-c) of the concentration of the tracer (signal observations). The color code represents the concentration measured in each station.

In Fig. 6, we illustrate the most important edges of the graph learned with LearnHeat and some representative measurements of the concentration of the tracers, which are used as training signals in the learning. The estimated graph indicates the main directions towards which the tracer moved, which are consistent with the signal observations. These directions are influenced by many parameters such as the meteorological conditions and the direction of the wind. We observe that there exist some strong connections between stations in France and Germany, and those in Sweden and Hungary, which are consistent with the conclusions in the documentation of the dataset [33].

Finally, in Fig. 7, we study how well a diffusion dictionary based on the graph Laplacian can represent the signal observations with only a few atoms of the dictionary. We compute the sparse approximation using an iterative soft thresholding algorithm that promotes sparsity of the coefficients H. We compare our results with a diffusion dictionary designed based on a graph that is constructed using geographical distances between these stations. We observe that the approximation error is significantly smaller in the case of the diffusion dictionary based on the learned graph for different sparsity levels. These results indicate that learning the topology can bring significant benefits for effective structured data representation.

Fig. 7: Approximation performance of the daily signals for different sparsity levels on dictionaries generated from a geographical graph (blue) and the learned graph (red).

V-B2 Uber dataset

In the final set of experiments, we use our graph learning algorithm to detect patterns from Uber rides in New York City. In particular, we use the Uber dataset for the month of September 2014 (publicly available at https://github.com/fivethirtyeight/uber-tlc-foil-response), which provides time and location for pickups. For the sake of simplicity, we divide the city into taxi zones, as shown in Fig. 8, and each zone is a node of a graph. The hourly number of Uber pickups in each zone is a signal on the graph. Moreover, we divide the day into five time slots: 1) 7 am - 10 am, 2) 10 am - 4 pm, 3) 4 pm - 7 pm, 4) 7 pm - 12 am, 5) 12 am - 7 am. For each of these slots, we define as training signals the number of pickups measured for each hour inside the corresponding time interval, over all weekdays of the month.

For each of these five sets of training signals, we learn a heat diffusion dictionary with S blocks, for different parameters α and β. In order to choose the best parameters α and β, we use as a criterion that the learned graph should have approximately a prescribed number of edges. We expect that the graph learned for each time interval conveys some information about the traffic patterns in the city. In Fig. 8, we show the learned graphs for each of the time intervals. We can clearly see patterns that are indicative of the behavior of the people in the city. First, there is a clear tendency of people using Uber to go/come mostly to/from airports (JFK, La Guardia, Newark) in the early morning (Fig. 8(b)). Moreover, the connections of the graph during the rush hours (7 am - 10 am and 4 pm - 7 pm) indicate the commuting of people from/to different neighborhoods of the city to/from Manhattan. During the day (10 am - 4 pm), there is no clear pattern, as the graph learned from the distribution of the Uber cars indicates that people tend to use Uber to go to various places in the city. Finally, from 7 pm to midnight (Fig. 8(f)), most of the connections are concentrated across Manhattan, which probably indicates that most of the people use Uber to visit bars or restaurants that are concentrated around that area. These are of course just some observations that confirm the efficiency of our algorithm in learning meaningful graphs. A more detailed mining of the mobility patterns in New York City would require taking into consideration other factors, such as the population of each region, a finer grid of the zones, and the organization of the city in terms of public transport, which is outside the scope of this paper.

Fig. 8: (a) Boundaries of the taxi zones in New York City. Each zone is represented by a node of the learned graph. The learned graphs over different time intervals: (b) 0.00 - 7.00 am, (c) 7.00 am - 10.00 am, (d) 10.00 am - 4.00 pm, (e) 4.00 pm - 7.00 pm, and (f) 7.00 pm - 12.00 am.

VI Conclusion

In this paper, we have presented a framework for learning graph topologies (graph Laplacians) from signal observations under the assumption that the signals are generated from heat diffusion processes starting from a few nodes of the graph. Specifically, we have proposed an algorithm for learning graphs that enforces sparsity of the graph signals in a dictionary that is a concatenation of graph diffusion kernels at different scales. Experimental results on both synthetic and real world diffusion processes have confirmed the usefulness of the proposed algorithm in recovering a meaningful graph topology and thus leading to better data understanding and inference. We believe that the proposed graph learning framework opens new perspectives in the field of data inference and information propagation in particular from the emerging graph signal processing viewpoint.

VII Acknowledgments

The authors would like to thank S. Segarra and A. Ribeiro for sharing their MATLAB code of the algorithm in [14] used in the experiments, and G. Stathopoulos for discussions on the solution of the optimization problem.

Appendix A Computation of the gradient

As noted in Section IV-B, Algorithm 1 requires the computation of the gradient of the fitting term Z(H, L, τ) with respect to each of the variables H, L, and τ. In the following, we discuss the computation of each of these gradients separately.

A-A Gradient with respect to H

The gradient of Z with respect to a column h_j of H is independent of the other columns of H. Moreover, it depends only on the corresponding observation x_j and it can be written as

(14)   ∇_{h_j} Z = -2 D^T (x_j - D h_j).

A-B Gradient with respect to L

The gradient of Z with respect to L is:

(15)

In order to compute Eq. (15), we make use of the following proposition.

Proposition 1.

Consider a general matrix A and a symmetric matrix L, admitting a spectral decomposition L = χΛχ^T. Then

∇_L tr(A e^L) = χ ( (χ^T A^T χ) ∘ B ) χ^T,

where ∘ denotes the Hadamard product and B is the matrix defined by the entries

(16)   B_ij = (e^{Λ_ii} - e^{Λ_jj}) / (Λ_ii - Λ_jj)  if Λ_ii ≠ Λ_jj,   and   B_ij = e^{Λ_ii}  otherwise.
Proof.

The desired gradient is uniquely defined by satisfying the relation

(17)

for all sufficiently small perturbations Δ. Using the fact that the eigenvectors of L are orthonormal, i.e., χ^T χ = I, where I is the identity matrix, we can write the left-hand side of Eq. (17) as follows:

(18)

The Fréchet derivative of the matrix exponential at a diagonal matrix Λ applied to a direction E is the matrix B ∘ E, with B defined in (16); see [25]. Using the above developments and the linearity of the trace operator we obtain that

(19)

Finally, using again the orthonormality of the eigenvectors χ, we can write

(20)

Combining Eqs. (19), (20), we conclude the expression for the gradient stated in the proposition. ∎
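As a numerical sanity check of the divided-difference construction that underlies this proof (our own sketch, comparing against a finite difference on a random symmetric matrix):

```python
# Verify the Frechet derivative of exp at a symmetric matrix B = chi diag(lam) chi^T:
# Dexp(B)[E] = chi @ (G * (chi.T @ E @ chi)) @ chi.T, with divided differences G.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5)); B = (B + B.T) / 2     # symmetric matrix
E = rng.standard_normal((5, 5)); E = (E + E.T) / 2     # perturbation direction

lam, chi = np.linalg.eigh(B)
diff = lam[:, None] - lam[None, :]
num = np.exp(lam)[:, None] - np.exp(lam)[None, :]
G = np.where(np.abs(diff) > 1e-12,
             num / np.where(diff == 0, 1.0, diff),
             np.exp(lam)[:, None])                     # divided-difference entries as in (16)
frechet = chi @ (G * (chi.T @ E @ chi)) @ chi.T

eps = 1e-6
finite_diff = (expm(B + eps * E) - expm(B - eps * E)) / (2 * eps)
print(np.max(np.abs(frechet - finite_diff)))           # tiny, the two agree
```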

Given the result of Proposition 1, the gradient ∇_L tr(A e^{-τ_s L}) for some scale τ_s can be found by applying the chain rule to the composition L ↦ -τ_s L ↦ e^{-τ_s L}, which introduces an additional factor of -τ_s in the gradient.

A-C Gradient with respect to τ

The gradient of Z with respect to a scalar τ_s satisfies

(21)

By the Taylor expansion of the exponential, it follows for any matrix A that

(22)

Combining (21), (22) yields the gradient with respect to τ_s. Finally, the gradient with respect to the vector τ is given by the vector whose elements consist of the gradients with respect to each element τ_s of τ.

Appendix B Computation of the Lipschitz constants

A condition for ensuring convergence of PALM is that at each iteration of the algorithm the descent lemma is satisfied [5]. This, however, requires determining a global Lipschitz constant, or an approximation thereof, such that the descent condition is satisfied. Next, we discuss the computation of the Lipschitz constants related to the update of each of the three variables in our graph learning algorithm. As we will see, it is feasible to compute these constants for the updates of H and τ. On the other hand, the computation of the Lipschitz constant is more difficult for L because of the involved matrix exponential. In this case, we perform backtracking to approximate the Lipschitz constant.

B-A Variable H

The gradient ∇_H Z(H, L, τ) = 2 D^T (DH - X) is globally Lipschitz in H with Lipschitz constant 2 ||D^T D||_2, as can be seen from

||∇_H Z(H_1, L, τ) - ∇_H Z(H_2, L, τ)||_F = ||2 D^T D (H_1 - H_2)||_F ≤ 2 ||D^T D||_2 ||H_1 - H_2||_F.

B-B Variable L

Due to the difficulty of computing the Lipschitz constant for an exponential matrix function, we estimate the associated constant by performing backtracking line search as follows. One condition for convergence of PALM is that the descent lemma is satisfied at each iteration, i.e.,

(23)

Moreover, the solution of the optimization problem (11) satisfies a corresponding optimality inequality for every admissible L. By evaluating the right-hand side of that inequality at the previous iterate and combining with Eq. (23), we obtain that

(24)

This result guarantees the decrease of the objective function over the iterations. The backtracking procedure is shown in Algorithm 2.

Input: initial guess for the Lipschitz constant, update factor greater than one
Output: Estimate of the Lipschitz constant at iteration t
while the descent condition (23) is False do:
   Update: multiply the current estimate of the Lipschitz constant by the update factor
   Update L by solving Eq. (11)
Algorithm 2 Backtracking algorithm for estimating the Lipschitz constant at iteration t

B-C Variable τ

Since the objective function is convex and twice differentiable with respect to τ, we estimate the Lipschitz constant by computing the Hessian of Z with respect to τ. Using (21), the entries of this matrix are given by

(25)

Given that the Hessian is a positive semidefinite matrix, its 2-norm is its largest eigenvalue, and any upper bound on this eigenvalue gives a global Lipschitz constant. We will use the fact that the largest absolute row sum of the matrix represents such an upper bound. For this purpose, we first bound the magnitude of the entries in (25), where we use the fact that the eigenvalues of L are non-negative, due to the positive semidefiniteness of L. An upper bound on the largest eigenvalue, which in turn gives the Lipschitz constant, is then given by the largest absolute row sum of this entrywise bound.

References

  • [1] F. R. K. Chung, Spectral graph theory.   American Mathematical Society, 1997.
  • [2] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83–98, May 2013.
  • [3] F. Chung, “The heat kernel as the pagerank of a graph,” Proceedings of the National Academy of Sciences, vol. 104, no. 50, pp. 19735–19740, 2007.
  • [4] H. Ma, H. Yang, M. R. Lyu, and I. King, “Mining social networks using heat diffusion processes for marketing candidates selection,” in 17th ACM Conference on Information and Knowledge Management, 2008, pp. 233–242.
  • [5] J. Bolte, S. Sabach, and M. Teboulle, “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” Math. Program., vol. 146, no. 1-2, pp. 459–494, Aug. 2014.
  • [6] J. Friedman, T. Hastie, and R. Tibshirani, “Sparse inverse covariance estimation with the graphical lasso,” Biostatistics, vol. 9, no. 3, pp. 432–441, Jul 2008.
  • [7] B. Lake and J. Tenenbaum, “Discovering structure by learning sparse graph,” in 33rd Annual Cognitive Science Conference, 2010.
  • [8] X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst, “Learning Laplacian matrix in smooth graph signal representations,” IEEE Transactions on Signal Processing, vol. 64, no. 23, pp. 6160