Graph heat mixture model learning

01/24/2019 ∙ by Hermina Petric Maretic, et al.

Graph inference methods have recently attracted a great interest from the scientific community, due to the large value they bring in data interpretation and analysis. However, most of the available state-of-the-art methods focus on scenarios where all available data can be explained through the same graph, or groups corresponding to each graph are known a priori. In this paper, we argue that this is not always realistic and we introduce a generative model for mixed signals following a heat diffusion process on multiple graphs. We propose an expectation-maximisation algorithm that can successfully separate signals into corresponding groups, and infer multiple graphs that govern their behaviour. We demonstrate the benefits of our method on both synthetic and real data.


I Introduction

Understanding pairwise relationships is often crucial in interpreting and analysing high-dimensional data. While these relationships are sometimes given explicitly in the dataset (e.g., data from social, biological or sensor networks), many datasets do not have a readily available graph structure modelling the relationships between data. Network inference deals precisely with such data, providing the means to better represent, understand and eventually analyse data.

First efforts in inferring data relationships came in terms of sparse inverse covariance (precision) matrix inference [1], where pairwise relationships are modelled as conditional dependencies between nodes. More recent works focus on structured graph representations, such as (generalised) graph Laplacian matrices [2]. A standard assumption is that of signal smoothness, which permits the development of learning algorithms with a signal processing perspective [3, 4, 5]. Another line of work assumes that the data are generated by a heat diffusion process on an unknown graph. This is a commonly used model, with applications ranging from brain diffusion modelling to social networks [6]. Several works have studied the graph inference problem from heat diffusion signals, including sparse dictionary models [7], online graph inference [8] and models that can deal with more general diffusion signals [9, 10, 11]. Some works have considered data that are a priori grouped into multiple clusters, where each cluster can be represented with a different graph [12, 13]. However, it is not always reasonable to assume that clusters are predefined or easily obtainable.

In this work, we build on our prior work on the Graph Laplacian mixture model [14] and propose a generative model for mixed signals that follow a heat diffusion process on different graphs. Specifically, each signal belongs to a cluster and follows a heat diffusion process on the graph corresponding to its cluster. However, both the clusters and the graphs are assumed to be unknown. We present a novel algorithm that can jointly separate signals into clusters that relate to the generative graphs, and efficiently infer the corresponding graph structures. The algorithm relies on a well established expectation-maximisation scheme, while the graph learning step is formulated in a convex manner and can be efficiently solved with FISTA [15]. We compare our method to existing works that rely on a simple smoothness assumption [14] or implicitly learn graph structures, as well as to a separate clustering and graph inference scheme. We show the benefits of our model in terms of both signal clustering and multiple graph inference on synthetic data and real data describing Uber pick-ups in New York City.

This is one of the first methods for multiple graph inference from mixed signals. We believe this is an important area that brings a new dimension to graph inference, and we hope our method will provide valuable insights into many complex datasets.

II Preliminaries

Let $\{\mathcal{G}_k\}_{k=1}^{K}$ be a collection of undirected, weighted graphs with a set of shared vertices $V$, $|V| = N$. Each graph $\mathcal{G}_k$ has a separate set of edges $\mathcal{E}_k$, while $W_k$ are the weighted adjacency matrices, with $W_k = W_k^T$, $W_k(i,j) \geq 0$, and $W_k(i,i) = 0$ for all $i, j$.

The Laplacian matrix of $\mathcal{G}_k$ is defined as

$L_k = D_k - W_k$, (1)

where $D_k$ is a diagonal matrix of node degrees, with $D_k(i,i) = \sum_j W_k(i,j)$. A signal $x \in \mathbb{R}^N$ that lives on a graph $\mathcal{G}$ with Laplacian $L$ and corresponds to a heat diffusion process is defined as [16]:

$x = e^{-sL} x_0$, (2)

where $s$ is the heat diffusion parameter. Throughout this paper, we will assume $x_0 \sim \mathcal{N}(0, I)$.
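As a concrete illustration (not taken from the paper), the following Python sketch builds the Laplacian of a small hypothetical path graph and generates one heat diffusion signal; the graph and the value of $s$ are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical 4-node path graph; W is its weighted adjacency matrix.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W      # Laplacian, as in Eq. (1)

s = 0.5                             # heat diffusion parameter
x0 = rng.standard_normal(4)         # white initial signal
x = expm(-s * L) @ x0               # diffused signal, as in Eq. (2)
```

Diffusion smooths the initial signal along the edges of the graph; larger values of $s$ yield smoother signals.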

III Graph heat mixture model

We consider a set of $M$ observed signals $X = \{x_1, \dots, x_M\}$, where each signal $x_m$ is associated with one of the graphs $\mathcal{G}_k$, for every $m \in \{1, \dots, M\}$ and $k \in \{1, \dots, K\}$. As shown in Figure 1, the set of signals associated to the same graph $\mathcal{G}_k$ defines a cluster $C_k$.

Fig. 1: A toy example of the graph heat mixture model. Each signal lives on exactly one of the two proposed graphs, Graph 1 and Graph 2. Our objective is to separate the signals into clusters corresponding to each graph, while inferring both graph structures at the same time.

We propose a generative model for such signals. Each signal $x_m$ is associated to cluster $k$ with probability $\alpha_k$. As shown in Figure 2, this selection is modelled through a latent variable $z_m \in \{0, 1\}^K$, such that

$z_{mk} = 1$ if $x_m$ belongs to cluster $k$, and $z_{mk} = 0$ otherwise. (3)

This directly defines a prior probability for the latent variable, $p(z_{mk} = 1) = \alpha_k$, where $\sum_{k=1}^{K} \alpha_k = 1$ and $0 \leq \alpha_k \leq 1$.

Fig. 2: Plate notation for our generative model. Filled in circles are observed variables, small empty squares are unknown parameters, and non-filled circles represent latent variables. Large plates indicate repeated variables.

Further, the signals in cluster $k$ share a mean $\mu_k$ and follow a heat diffusion process on graph $\mathcal{G}_k$, yielding:

$x_m \mid (z_{mk} = 1) = \mu_k + e^{-sL_k} x_0, \quad x_0 \sim \mathcal{N}(0, I)$. (4)

We can now model the probability distribution of $x_m$, given its cluster assignment, as

$p(x_m \mid z_{mk} = 1) = \mathcal{N}(x_m \mid \mu_k, e^{-2sL_k})$. (5)

Marginalising over all possible clusters $k$, we have:

$p(x_m) = \sum_{k=1}^{K} \alpha_k \, \mathcal{N}(x_m \mid \mu_k, e^{-2sL_k})$. (6)

Finally, taking all independent signals into account, the probability distribution for $X$ becomes:

$p(X) = \prod_{m=1}^{M} \sum_{k=1}^{K} \alpha_k \, \mathcal{N}(x_m \mid \mu_k, e^{-2sL_k})$. (7)
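The generative process described above can be sketched as follows; the number of nodes, the two random graphs, the means and the priors are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, M, K, s = 5, 200, 2, 0.5

# Two hypothetical graphs on the same N nodes (random symmetric adjacencies).
Ls = []
for _ in range(K):
    U = np.triu(rng.random((N, N)) < 0.5, 1).astype(float)
    W = U + U.T
    Ls.append(np.diag(W.sum(axis=1)) - W)

alpha = np.array([0.5, 0.5])              # cluster priors
mus = [np.zeros(N), 3.0 * np.ones(N)]     # cluster means

# Draw a cluster for each signal, then diffuse white noise on its graph.
z = rng.choice(K, size=M, p=alpha)
X = np.stack([mus[k] + expm(-s * Ls[k]) @ rng.standard_normal(N) for k in z])
```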

III-A Problem formulation

Given the model in Eq. (7), we want to infer the unknown clusters and graph structures from the observed signals $X$. Specifically, we formulate a maximum likelihood estimation problem:

$\max_{\alpha, \mu, L} \log p(X \mid \alpha, \mu, L) = \max_{\alpha, \mu, L} \sum_{m=1}^{M} \log \sum_{k=1}^{K} \alpha_k \, \mathcal{N}(x_m \mid \mu_k, e^{-2sL_k})$, (8)

where the optimisation variable $\alpha = \{\alpha_k\}$ relates to cluster allocation, the variable $L = \{L_k\}$ represents the graphs, and the variables $\mu = \{\mu_k\}$ and $s$ characterise the heat diffusion processes. The optimisation problem in Eq. (8) is very difficult to solve directly, and we propose to solve it with an expectation-maximisation (EM) algorithm in the next section.

IV Inference Algorithm

We propose here to solve the inference problem of Eq. (8) with an alternating EM algorithm, as is commonly done for problems of this form. We randomly initialise the values of $\alpha_k$, $\mu_k$ and $L_k$. Then, we alternate between an expectation step, where we estimate expected values for the latent variables $z_m$, and a maximisation step, where we use these expected values to update the unknown parameters $\alpha_k$, $\mu_k$ and $L_k$. As will be shown later, $s$ acts as a scaling factor for $L_k$, and only has a unique solution if there is some additional knowledge about the scale of $L_k$. The two steps of the algorithm are described in more detail below.

In the expectation step of the algorithm, we estimate the cluster responsibilities $\gamma(z_{mk})$. They are the expected values of the latent variables $z_{mk}$ and the best estimate for the cluster of $x_m$ given the observed data and the current values of the parameters $\alpha_k$, $\mu_k$ and $L_k$. Formally, these cluster responsibilities can be estimated as:

$\gamma(z_{mk}) = \dfrac{\alpha_k \, \mathcal{N}(x_m \mid \mu_k, e^{-2sL_k})}{\sum_{j=1}^{K} \alpha_j \, \mathcal{N}(x_m \mid \mu_j, e^{-2sL_j})}$. (9)

This closed-form solution permits us to infer the entire matrix of cluster responsibilities $\gamma \in \mathbb{R}^{M \times K}$.
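A minimal sketch of this E-step, assuming Gaussian class densities with heat-kernel covariances $e^{-2sL_k}$ and using SciPy's multivariate normal density (function and variable names are ours):

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import multivariate_normal

def e_step(X, alpha, mus, Ls, s):
    """Responsibilities gamma[m, k] = p(z_mk = 1 | x_m) via Bayes' rule."""
    M, K = X.shape[0], len(alpha)
    gamma = np.empty((M, K))
    for k in range(K):
        cov = expm(-2.0 * s * Ls[k])     # heat-kernel covariance
        gamma[:, k] = alpha[k] * multivariate_normal.pdf(
            X, mean=mus[k], cov=cov, allow_singular=True)
    return gamma / gamma.sum(axis=1, keepdims=True)  # normalise rows
```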

With the estimated responsibilities $\gamma(z_{mk})$, we can move to the maximisation step. Specifically, we maximise the expectation of the complete log-likelihood over the posterior distribution given all observations (for details, see [14]):

$\mathbb{E}_z \left[ \log p(X, z \mid \alpha, \mu, L) \right] = \sum_{m=1}^{M} \sum_{k=1}^{K} \gamma(z_{mk}) \left( \log \alpha_k + \log \mathcal{N}(x_m \mid \mu_k, e^{-2sL_k}) \right)$. (10)

It is not difficult to infer closed-form solutions for $\alpha_k$ and $\mu_k$, with:

$\alpha_k = \dfrac{1}{M} \sum_{m=1}^{M} \gamma(z_{mk})$, (11)

$\mu_k = \dfrac{\sum_{m=1}^{M} \gamma(z_{mk}) \, x_m}{\sum_{m=1}^{M} \gamma(z_{mk})}$. (12)

To infer the graph Laplacian matrices $L_k$, we first notice that the covariance matrix $\hat{\Sigma}_k$ that relates to the heat diffusion process on $\mathcal{G}_k$ can also be estimated in closed form as:

$\hat{\Sigma}_k = \dfrac{\sum_{m=1}^{M} \gamma(z_{mk}) (x_m - \mu_k)(x_m - \mu_k)^T}{\sum_{m=1}^{M} \gamma(z_{mk})}$. (13)
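These closed-form updates are standard weighted moment computations, and can be sketched as below (variable names are illustrative):

```python
import numpy as np

def m_step_moments(X, gamma):
    """Closed-form M-step: priors alpha_k (Eq. 11), means mu_k (Eq. 12)
    and weighted sample covariances Sigma_k (Eq. 13)."""
    M, N = X.shape
    Nk = gamma.sum(axis=0)                    # effective cluster sizes
    alpha = Nk / M
    mus = (gamma.T @ X) / Nk[:, None]
    covs = []
    for k in range(gamma.shape[1]):
        D = X - mus[k]
        covs.append((gamma[:, k, None] * D).T @ D / Nk[k])
    return alpha, mus, covs
```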

The sample covariance alone, however, might not be sufficient to efficiently infer graph structures. Namely, without very large amounts of data, the sample covariance matrices are usually noisy (if not low rank), and it can be difficult to recover the exact structure of the graph. We thus formulate a problem that aims at finding a valid Laplacian matrix whose heat kernel gives a covariance matrix similar to the sample covariance, while at the same time imposing a graph sparsity constraint. Namely, we can estimate the weight matrix as

$W_k = \arg\min_{W \in \mathcal{W}} \; \| e^{-2sL(W)} - \hat{\Sigma}_k \|_F^2 + \beta \|W\|_1$, (14)

where $\mathcal{W} = \{W : W = W^T,\; W(i,j) \geq 0,\; W(i,i) = 0\}$ and $L(W) = \mathrm{diag}(W\mathbf{1}) - W$. This is equivalent to solving

$W_k = \arg\min_{W \in \mathcal{W}} \; \| 2sL(W) + \log \hat{\Sigma}_k \|_F^2 + \beta \|W\|_1$, (15)

with the same constraints. It results in a convex problem that can be solved efficiently with FISTA [15].
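A sketch of this graph learning step is given below, under our reading of the objective; it uses plain projected proximal gradient rather than the accelerated FISTA iterations of the paper, and the step size, sparsity weight and iteration count are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import logm

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def learn_graph(Sigma_hat, s=0.5, beta=0.1, eta=1e-3, iters=2000):
    """Minimise ||2s L(W) + log Sigma_hat||_F^2 + beta ||W||_1 over
    symmetric, nonnegative W with zero diagonal (proximal gradient)."""
    S = logm(Sigma_hat).real
    N = S.shape[0]
    W = np.zeros((N, N))
    for _ in range(iters):
        R = 2.0 * s * laplacian(W) + S
        grad = 4.0 * s * (np.diag(R)[:, None] - R)  # adjoint of W -> L(W)
        W = W - eta * grad
        W = np.maximum(W - eta * beta, 0.0)         # prox of the l1 term, W >= 0
        W = 0.5 * (W + W.T)                         # keep W symmetric
        np.fill_diagonal(W, 0.0)                    # no self-loops
    return W
```

On a noiseless covariance $e^{-2sL}$ of a small graph, this sketch recovers the edge pattern up to the shrinkage introduced by the $\ell_1$ penalty.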

Notice that the heat kernel parameter $s$ becomes just a scale for the values in $L_k$, with e.g. $e^{-2sL_k} = e^{-2L'_k}$ for $L'_k = sL_k$. Unfortunately, that means that without any prior knowledge on the values in $L_k$, it is impossible to uniquely determine the value of $s$. We still keep $s$ in the formulation, as it is easily determined when the norm or size of the values in $L_k$ is known a priori. More realistically, its scale is uniquely determined when the heat diffusion process is observed at different moments (i.e., for different $s$ values), but on the same set of graphs. In those cases $s$ proves to be very important: apart from significantly changing signal values, it highly affects the accuracy of graph inference, as will be demonstrated in the experiments below.

V Experimental results

In this section, we present experimental results that show the effectiveness of our new inference algorithm. We first evaluate the graph heat mixture model (GHMM) on synthetically generated data, comparing it with alternative methods from the literature. We then turn to real data describing Uber pick-ups in Manhattan, where our method manages to automatically separate data corresponding to different mobility patterns at different times of day.

V-A Synthetic results

We first evaluate the performance of our method for different sizes of the observed signal set and different values of the heat kernel parameter $s$. We generate two connected Erdős–Rényi graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ of size 20 with a fixed edge probability. The means for each cluster are randomly drawn, and the membership probabilities for each cluster are fixed.
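Connected Erdős–Rényi graphs can be sampled by rejection, for example as below (the edge probability used here is an arbitrary placeholder, not the value used in the paper):

```python
import numpy as np

def connected_erdos_renyi(n, p, rng):
    """Sample an Erdos-Renyi adjacency matrix, redrawing until connected."""
    while True:
        U = np.triu(rng.random((n, n)) < p, 1).astype(float)
        W = U + U.T
        # A graph is connected iff its Laplacian has exactly one
        # (near-)zero eigenvalue.
        L = np.diag(W.sum(axis=1)) - W
        if np.sum(np.linalg.eigvalsh(L) < 1e-9) == 1:
            return W

rng = np.random.default_rng(0)
W1 = connected_erdos_renyi(20, 0.3, rng)
W2 = connected_erdos_renyi(20, 0.3, rng)
```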

We compare our method to the Graph Laplacian mixture model (GLMM) [14], which jointly infers signal clusters and the corresponding graphs based on mere smoothness priors. We also compare to a Gaussian mixture model (GMM), where the thresholded inverse covariance (precision) matrices act as graph structures, as well as to a method performing K-means clustering followed by an established graph learning technique [4] on each of the clusters separately (K-means + GL). As all methods show high sensitivity to initialisation, we run each experiment 5 times with different random initialisations, and select the best performing run for each algorithm. We repeat this experiment 100 times, and present results in terms of clustering NMSE (in %) and graph inference F-measure.
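The paper does not spell out the F-measure computation; one common convention, sketched here, compares the binarised edge patterns of the true and learned adjacency matrices (the threshold `tol` is an assumption):

```python
import numpy as np

def edge_f_measure(W_true, W_learned, tol=1e-6):
    """F-measure on the binarised edge pattern: precision/recall over
    which node pairs carry an edge, ignoring exact weight values."""
    e_true = np.triu(W_true, 1) > tol       # true edges (upper triangle)
    e_learn = np.triu(W_learned, 1) > tol   # recovered edges
    tp = np.sum(e_true & e_learn)           # correctly recovered edges
    if tp == 0:
        return 0.0
    precision = tp / e_learn.sum()
    recall = tp / e_true.sum()
    return 2 * precision * recall / (precision + recall)
```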

First, we observe the behaviour of our algorithm with respect to the number of observed signals $M$. For each value of $M$, we generate signals from each of the two graphs. The signals are random instances of Gaussian distributions $\mathcal{N}(\mu_k, e^{-2sL_k})$, where the mean $\mu_k$ and the graph Laplacian $L_k$ drive the heat diffusion processes.

We see in Figure 3 that a graph mixture model is very favourable compared to separately clustering data and performing graph inference afterwards. The Graph Laplacian mixture model shows slightly better performance for very small signal sets, due to the fact that the graph inference method in GLMM does not rely on sample covariance matrices, which are known to be very noisy for small amounts of data. However, in scenarios with larger numbers of training signals, our new graph heat mixture model gives the best performance among the compared methods, both in terms of clustering error and graph inference F-measure.

(a) Clustering performance
(b) Graph inference
Fig. 3: Performance with respect to the number of available signal observations.

We next test the performance of the inference algorithms as a function of the heat parameter $s$, which varies between 0.1 and 0.8. The number of signals $M$ is fixed in this case. Figure 4 shows that, for very small values of $s$, all algorithms have difficulties in recovering the structure, as the covariance matrix is close to the identity. For large values of $s$, the signals that we observe are very smooth. For this reason, the simple smoothness assumption used in GLMM is too weak to successfully separate the signals, while our new graph heat mixture model provides the best performance. The method based on separate clustering and graph learning again performs worst in these experiments.

(a) Clustering performance
(b) Graph inference
Fig. 4: Performance with respect to the heat parameter $s$.

V-B Uber data

We use GHMM to search for patterns in Uber data representing hourly pickups in New York City during the working days of September 2014 (data available at https://github.com/fivethirtyeight/uber-tlc-foil-response). We divide the city into 29 taxi zones and treat each zone as a node in our graphs. The signal on these nodes corresponds to the number of Uber pickups in the corresponding zone. We fix the number of clusters to $K = 4$.

Fig. 5: Cluster indexes for Uber hourly signals. Each dot represents one hour in the day, and thin vertical lines represent the beginning of each working day.

Figure 5 shows the clustering of hourly Uber signals into 4 different clusters. We can see a slightly noisy periodic pattern, recurring daily. If we inspect the results more carefully, the data in each cluster correspond to a different period of the day, specifically 23h-6h, 6h-15h, 15h-20h or 20h-23h. In fact, compared to these fixed periods, the clusters inferred with GHMM differ only in a small percentage of observations, with a normalised mean square difference of 7.58%. Finally, Figure 6 presents the different graphs inferred with our method. Each graph shows the patterns of a different period of the day. For example, the traffic during nights and early mornings is restricted to the city centre and connections with the airports, while direct connections among non-central locations become more active later in the day. These different mobility patterns look reasonable with respect to people's daily routines in NYC.

(a) 23h - 6h
(b) 6h - 15h
(c) 15h - 20h
(d) 20h - 23h
Fig. 6: Graphs corresponding to Uber patterns in different times of day.

VI Conclusion

We have proposed a novel generative model for mixed signals, where each signal is assumed to belong to an unknown cluster and to follow a heat diffusion process on an unknown graph associated to this cluster. In this realistic setting, which does not require prior knowledge of signal clusters, we design a new inference method based on an expectation-maximisation algorithm, which can jointly group the signals into clusters and learn their respective graph structures. Experiments on both synthetic and real data show that our new algorithm performs better than alternative inference methods that are based on mere smooth signal priors, or that perform clustering and graph learning separately.

References

  • [1] Arthur P Dempster, “Covariance selection,” Biometrics, pp. 157–175, 1972.
  • [2] Xiaowen Dong, Dorina Thanou, Michael Rabbat, and Pascal Frossard, “Learning graphs from data: A signal representation perspective,” arXiv preprint arXiv:1806.00848, 2018.
  • [3] X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst, “Learning Laplacian matrix in smooth graph signal representations,” IEEE Transactions on Signal Processing, vol. 64, no. 23, pp. 6160–6173, 2016.
  • [4] Vassilis Kalofolias, “How to learn a graph from smooth signals,” in Artificial Intelligence and Statistics, 2016, pp. 920–929.
  • [5] Hilmi E Egilmez, Eduardo Pavez, and Antonio Ortega, “Graph learning from data under Laplacian and structural constraints,” IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 6, pp. 825–841, 2017.
  • [6] Hao Ma, Haixuan Yang, Michael R Lyu, and Irwin King, “Mining social networks using heat diffusion processes for marketing candidates selection,” in Proceedings of the 17th ACM conference on Information and knowledge management. ACM, 2008, pp. 233–242.
  • [7] Dorina Thanou, Xiaowen Dong, Daniel Kressner, and Pascal Frossard, “Learning heat diffusion graphs,” IEEE Transactions on Signal and Information Processing over Networks, vol. 3, no. 3, pp. 484–499, 2017.
  • [8] Stefan Vlaski, Hermina P Maretić, Roula Nassif, Pascal Frossard, and Ali H Sayed, “Online graph learning from sequential data,” in 2018 IEEE Data Science Workshop (DSW). IEEE, 2018, pp. 190–194.
  • [9] Hilmi E Egilmez, Eduardo Pavez, and Antonio Ortega, “Graph learning from filtered signals: Graph system and diffusion kernel identification,” arXiv preprint arXiv:1803.02553, 2018.
  • [10] Santiago Segarra, Antonio G Marques, Gonzalo Mateos, and Alejandro Ribeiro, “Network topology inference from spectral templates,” IEEE Transactions on Signal and Information Processing over Networks, vol. 3, no. 3, pp. 467–483, 2017.
  • [11] Hermina Petric Maretic, Dorina Thanou, and Pascal Frossard, “Graph learning under sparsity priors,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 6523–6527.
  • [12] Vassilis Kalofolias, Andreas Loukas, Dorina Thanou, and Pascal Frossard, “Learning time varying graphs,” in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017, pp. 2826–2830.
  • [13] Santiago Segarra, Yuhao Wang, Caroline Uhler, and Antonio G Marques, “Joint inference of networks from stationary graph signals,” in Signals, Systems, and Computers, 2017 51st Asilomar Conference on. IEEE, 2017, pp. 975–979.
  • [14] Hermina Petric Maretic and Pascal Frossard, “Graph Laplacian mixture model,” arXiv preprint arXiv:1810.10053, 2018.
  • [15] Amir Beck and Marc Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM journal on imaging sciences, vol. 2, no. 1, pp. 183–202, 2009.
  • [16] Fan Chung, “The heat kernel as the pagerank of a graph,” Proceedings of the National Academy of Sciences, vol. 104, no. 50, pp. 19735–19740, 2007.