Time-varying graph representation learning via higher-order skip-gram with negative sampling
Representation learning models for graphs are a successful family of techniques that project nodes into feature spaces that can be exploited by other machine learning algorithms. Since many real-world networks are inherently dynamic, with interactions among nodes changing over time, these techniques can be defined both for static and for time-varying graphs. Here, we build upon the fact that the skip-gram embedding approach implicitly performs a matrix factorization, and we extend it to perform implicit tensor factorization on different tensor representations of time-varying graphs. We show that higher-order skip-gram with negative sampling (HOSGNS) is able to disentangle the role of nodes and time, with a small fraction of the number of parameters needed by other approaches. We empirically evaluate our approach using time-resolved face-to-face proximity data, showing that the learned time-varying graph representations outperform state-of-the-art methods when used to solve downstream tasks such as network reconstruction, and to predict the outcome of dynamical processes such as disease spreading. The source code and data are publicly available at https://github.com/simonepiaggesi/hosgns.
A great variety of natural and artificial systems can be represented as networks of elementary structural entities coupled by relations between them. The abstraction of such systems as networks helps us understand, predict and optimize their behaviour [newman2003structure, albert2002statistical]. In this sense, node and graph embeddings have been established as standard feature representations in many learning tasks for graphs and complex networks [cai2018comprehensive, goyal2018graph]. Node embedding methods map each node of a graph into a low-dimensional vector that can then be used to solve downstream tasks such as edge prediction, network reconstruction and node classification.
Node embeddings have proven successful in achieving low-dimensional encoding of static network structures, but many real-world networks are inherently dynamic, with interactions among nodes changing over time [holme2012temporal]. Time-resolved networks are also the support of important dynamical processes, such as epidemic or rumor spreading, cascading failures, consensus formation, etc. [barrat2008dynamical]. Time-resolved node embeddings have been shown to yield improved performance for predicting the outcome of dynamical processes over networks, such as information diffusion and disease spreading [sato2019dyane].
In this paper we propose a representation learning model that performs an implicit tensor factorization on different higher-order representations of time-varying graphs. The main contributions of this paper are as follows:
Given that the skip-gram embedding approach implicitly performs a factorization of the shifted pointwise mutual information matrix (PMI) [levy2014neural], we generalize it to perform implicit factorization of a shifted PMI tensor. We then define the steps to achieve this factorization using higher-order skip-gram with negative sampling (HOSGNS) optimization.
We show how to apply 3rd-order and 4th-order SGNS on different higher-order representations of time-varying graphs.
We show that time-varying graph representations learned through HOSGNS outperform state-of-the-art methods when used to solve downstream tasks.
We report the results of learning embeddings on empirical time-resolved face-to-face proximity data and using them as predictors for solving two different tasks: network reconstruction and predicting the outcomes of a SIR spreading process over the network. We compare these results with state-of-the-art methods for time-varying graph representation learning.
Skip-gram representation learning. The skip-gram model was designed to compute word embeddings in WORD2VEC [mikolov2013distributed], and afterwards extended to graph node embeddings [perozzi2014deepwalk, tang2015line, grover2016node2vec]. Levy and Goldberg [levy2014neural] established the relation between skip-gram trained with negative sampling (SGNS) and traditional low-rank approximation methods [kolda2009tensor, anandkumar2014tensor], showing the equivalence of SGNS optimization to factorizing a shifted pointwise mutual information matrix (PMI) [church1990word]. This equivalence was later retrieved from diverse assumptions [assylbekov2019context, allen2019vec, melamud2017information, arora2016latent, li2015word], and exploited to compute closed form expressions approximated in different graph embedding models [qiu2018network]. In this work, we refer to the shifted PMI matrix also as $\mathrm{SPMI}_\kappa = \mathrm{PMI} - \log\kappa$, where $\kappa$ is the number of negative samples.
Random walk based graph embeddings.
Given an undirected, weighted and connected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ with edges $\mathcal{E}$, nodes $\mathcal{V}$ ($|\mathcal{V}|=N$) and adjacency matrix $\mathbf{A}$, graph embedding methods are unsupervised models designed to map nodes into dense $d$-dimensional representations ($d \ll N$) encoding structural properties in a vector space [hamilton2017representation].
A well-known family of approaches based on the skip-gram model consists in sampling random walks from the graph and processing node sequences as textual sentences. In DEEPWALK [perozzi2014deepwalk] and NODE2VEC [grover2016node2vec], the skip-gram model is used to obtain node embeddings from co-occurrences in random walk realizations. Although the original implementation of DEEPWALK uses hierarchical softmax to compute embeddings, we will refer to the SGNS formulation given by [qiu2018network]. Since SGNS can be interpreted as a factorization of the word-context PMI matrix [levy2014neural], the asymptotic form of the PMI matrix implicitly decomposed in DEEPWALK can be derived [qiu2018network].
Given the 1-step transition matrix $P = D^{-1}A$, where $D = \mathrm{diag}(d_1,\dots,d_N)$ and $d_i = \sum_j A_{ij}$ is the (weighted) node degree, the expected PMI for a node-context pair $(i,j)$ occurring in a $T$-sized window is:
$$\mathrm{PMI}_T(i,j) = \log\left(\frac{1}{2T}\sum_{r=1}^{T}\frac{p_i\,(P^r)_{ij} + p_j\,(P^r)_{ji}}{p_i\,p_j}\right) \qquad (2.1)$$
where $p_i = d_i / \sum_j d_j$ is the unique stationary distribution for random walks [masuda2017random]. We will use this expression in Section 3.2 to build PMI tensors from higher-order graph representations.
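As a concrete illustration, the window-averaged PMI of Eq. (2.1) can be computed directly from a small adjacency matrix. The sketch below (NumPy; the function name and toy graph are ours, chosen for illustration) accumulates powers of the transition matrix and normalizes by the stationary distribution.

```python
import numpy as np

def deepwalk_pmi(A, T=2):
    """Closed-form PMI matrix of Eq. (2.1) for an undirected weighted graph.

    A: symmetric adjacency matrix; T: context window size.
    Computes log[ (1/2T) sum_r (p_i P^r_ij + p_j P^r_ji) / (p_i p_j) ].
    """
    d = A.sum(axis=1)                 # weighted degrees
    P = A / d[:, None]                # 1-step transition matrix D^-1 A
    p = d / d.sum()                   # stationary distribution p_i = d_i / vol(G)
    acc = np.zeros_like(A, dtype=float)
    Pr = np.eye(len(A))
    for _ in range(T):
        Pr = Pr @ P                   # P^r
        acc += p[:, None] * Pr + (p[:, None] * Pr).T
    joint = acc / (2 * T)             # co-occurrence probability in a T-window
    return np.log(joint / np.outer(p, p))
```

Note that the implied joint co-occurrence distribution sums to one, and is symmetric for undirected graphs, so the resulting PMI matrix is symmetric as well.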
Time-varying graphs and their algebraic representations. Time-varying graphs [holme2012temporal] are defined as triples $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T})$, i.e. collections of events $(i,j,k)\in\mathcal{E}$, representing undirected pairwise relations among nodes at discrete times ($i,j\in\mathcal{V}$, $k\in\mathcal{T}$). $\mathcal{G}$ can be seen as a temporal sequence of static adjacency matrices $\{\mathbf{A}^{(k)}\}_{k\in\mathcal{T}}$ such that $A^{(k)}_{ij}$ is the weight of the event $(i,j,k)$. We can concatenate the list of time-stamped snapshots to obtain a single 3rd-order tensor $\boldsymbol{\mathcal{A}}\in\mathbb{R}^{N\times N\times T}$ which characterizes the evolution of the graph over time. This representation has been used to discover latent community structures of temporal graphs [gauvin2014detecting] and to perform temporal link prediction [dunlavy2011temporal]. Indeed, beyond the above stacked graph representation, more exhaustive representations are possible. In particular, the multi-layer approach [de2013mathematical] makes it possible to map the topology of a time-varying graph into a static network $\mathcal{G}_s$ (the supra-adjacency graph) such that vertices of $\mathcal{G}_s$ correspond to (node, time) pairs of the original time-dependent network. This representation can be stored in a 4th-order tensor equivalent, up to an opportune reshaping, to the adjacency matrix associated with $\mathcal{G}_s$. Multi-layer representations for time-varying networks have been used to study time-dependent centrality measures [taylor2019supracentrality] and properties of spreading processes [valdano2015analytical].
Time-varying graph representation learning. Given a time-varying graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T})$, we denote as temporal network embedding every model capable of learning from data, implicitly or explicitly, a mapping function:

$$f : \mathcal{V}\times\mathcal{T} \to \mathbb{R}^{d} \qquad (2.2)$$

which projects time-stamped nodes into a latent low-rank vector space that encodes structural and temporal properties of the original evolving graph.
Many existing methods learn node representations from sequences of static snapshots through incremental updates in a streaming scenario: deep autoencoders [goyal2018dyngem], SVD [zhang2018timers], skip-gram [du2018dynamic] and random walk sampling [beres2019node, mahdavi2018dynnode2vec, yu2018netwalk]. Another class of models learns dynamic node representations via recurrent/attention mechanisms [goyal2020dyngraph2vec, li2018deep, sankar2020dysat] or by imposing temporal stability among adjacent time intervals [zhou2018dynamic, zhu2016scalable]. DYANE [sato2019dyane] and WEG2VEC [torricelli2020weg2vec] project the dynamic graph structure into a static graph, in order to compute embeddings with WORD2VEC. Closely related to these are [zhan2020si, nguyen2018continuous], which learn node vectors according to time-respecting random walks or spreading trajectory paths. The method proposed in DYANE computes, for each node $i$, one vector representation for each time-stamped node $(i,k)$ of a supra-adjacency representation $\mathcal{G}_s$, which involves the active nodes of $\mathcal{G}$. This representation is inspired by [valdano2015analytical], and the supra-adjacency matrix is defined by two rules:
For each event $(i,j,k)$, if $i$ is also active at a time $k' > k$ and in no other time-stamp between the two, we add a cross-coupling edge between the supra-adjacency nodes $(j,k)$ and $(i,k')$. In addition, if the next interaction of $j$ with other nodes happens at time $k'' > k$, we add an edge between $(i,k)$ and $(j,k'')$. The weights of such edges are set to $A^{(k)}_{ij}$.
For every case as described above, we also add self-coupling edges $(i,k)\to(i,k')$ and $(j,k)\to(j,k'')$, with weights set to 1.
We will refer to this supra-adjacency representation in Section 3.2. In this representation, random itineraries correspond to temporal paths of the original time-varying graph; random walk based methods (in particular DEEPWALK) are therefore well suited, since they learn node representations from node occurrences observed in such paths.
Some methods learn a single vector representation for each node, squeezing its behaviour over all times into $O(N\cdot d)$ embedding parameters. On the other hand, models that learn time-resolved node representations require $O(N\cdot T\cdot d)$ embedding parameters to represent the system in the latent space. Compared with these methods, our approach requires $O\big((N+T)\cdot d\big)$ embedding parameters for disentangled node and time representations.
Given a time-varying graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{T})$, we propose a representation learning method that learns disentangled representations for nodes and time slices. More formally, we learn two functions:

$$f_{\mathcal{V}} : \mathcal{V}\to\mathbb{R}^{d}, \qquad f_{\mathcal{T}} : \mathcal{T}\to\mathbb{R}^{d}$$

through a number of parameters proportional to $(N+T)\cdot d$. This embedding representation can then be reconciled with the definition in Eq. (2.2) by combining $f_{\mathcal{V}}(i)$ and $f_{\mathcal{T}}(k)$ in a single representation using any combination function $\phi\big(f_{\mathcal{V}}(i), f_{\mathcal{T}}(k)\big)$.
Starting from the existing skip-gram framework for node embeddings, we propose a higher-order generalization of skip-gram with negative sampling (HOSGNS) applied to time-varying graphs. We show that this extension makes it possible to implicitly factorize into latent variables the higher-order relations that characterize tensor representations of time-varying graphs, in the same way that classical SGNS decomposes the dyadic relations associated with a static graph. Similar approaches have been applied in NLP for dynamic word embeddings [rudolph2018dynamic], and higher-order extensions of the skip-gram model have been proposed to learn context-dependent [liu2015learning] and syntactic-aware [cotterell2017explaining] word representations. Moreover, tensor factorization techniques have been applied to include the temporal dimension in recommender systems [xiong2010temporal, wu2019neural] and face-to-face contact networks [sapienza2015detecting, gauvin2014detecting]. But this work is the first to merge SGNS with tensor factorization, and then apply it to learn time-varying graph embeddings.
Here we address the problem of generalizing SGNS to learn embedding representations from higher-order co-occurrences. We analyze the 3rd-order case here, giving the description of the general $n$-order case in the Supplementary Information. Later in this work we will focus on 3rd- and 4th-order representations, since these are the most interesting for time-varying graphs.
We consider a set $\mathcal{D}$ of training samples obtained by collecting co-occurrences among elements from three sets $\mathcal{I}$, $\mathcal{J}$ and $\mathcal{K}$. Since in SGNS we have pairs of node-context $(w,c)$, this is a direct extension of SGNS to three variables, where $\mathcal{D}$ is constructed e.g. through random walks over a higher-order data structure. We denote as $\#(i,j,k)$ the number of times the triple $(i,j,k)$ appears in $\mathcal{D}$. Similarly we use $\#i$, $\#j$ and $\#k$ as the number of times each distinct element occurs in $\mathcal{D}$, with relative frequencies $p(i,j,k)=\#(i,j,k)/|\mathcal{D}|$, $p_i=\#i/|\mathcal{D}|$, $p_j=\#j/|\mathcal{D}|$ and $p_k=\#k/|\mathcal{D}|$.
Optimization is performed as a binary classification task, where the objective is to discern occurrences actually coming from $\mathcal{D}$ from random occurrences. We define the likelihood for a single observation by applying a sigmoid ($\sigma(x)=(1+e^{-x})^{-1}$) to the higher-order inner product $\Theta_{ijk}$ of the corresponding $d$-dimensional representations:
$$P\big[(i,j,k)\in\mathcal{D}\big] = \sigma(\Theta_{ijk}) = \sigma\Big(\sum_{r=1}^{d} W_{ir}\,C_{jr}\,T_{kr}\Big) \qquad (3.1)$$
where the embedding vectors $\mathbf{w}_i$, $\mathbf{c}_j$ and $\mathbf{t}_k$ are respectively rows of $\mathbf{W}\in\mathbb{R}^{|\mathcal{I}|\times d}$, $\mathbf{C}\in\mathbb{R}^{|\mathcal{J}|\times d}$ and $\mathbf{T}\in\mathbb{R}^{|\mathcal{K}|\times d}$. In the 4th-order case we will also have a fourth embedding matrix $\mathbf{S}$ related to a fourth set $\mathcal{S}$. For negative sampling we fix an observed $i$ and independently sample $j'\sim p_j$ and $k'\sim p_k$ to generate negative examples $(i,j',k')$. In this way, for a single occurrence $(i,j,k)$, the expected contribution to the loss is:
$$\ell(i,j,k) = -\log\sigma(\Theta_{ijk}) - \kappa\cdot\mathbb{E}_{j'\sim p_j,\;k'\sim p_k}\big[\log\sigma(-\Theta_{ij'k'})\big] \qquad (3.2)$$
where the noise distribution is the product of independent marginal probabilities $p_{j'}\cdot p_{k'}$. Thus the global objective is the sum of the quantities of Eq. (3.2) over all triples, each weighted with its relative frequency $p(i,j,k)$. The full loss function can be expressed as:
$$\mathcal{L} = -\sum_{i,j,k}\Big[\,p(i,j,k)\,\log\sigma(\Theta_{ijk}) + \kappa\;p_i\,p_j\,p_k\,\log\sigma(-\Theta_{ijk})\,\Big] \qquad (3.3)$$
In the Supplementary Information we show the steps to obtain Eq. (3.3) and that it can be optimized with respect to the embedding parameters, satisfying the low-rank tensor factorization [kolda2009tensor] of the multivariate shifted PMI tensor into the factor matrices $\mathbf{W}$, $\mathbf{C}$ and $\mathbf{T}$:
$$\sum_{r=1}^{d} W_{ir}\,C_{jr}\,T_{kr} \;=\; \log\frac{p(i,j,k)}{p_i\,p_j\,p_k} - \log\kappa \;=\; \mathrm{SPMI}_{\kappa}(i,j,k) \qquad (3.4)$$
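A minimal NumPy sketch of the two central quantities above, assuming a dense joint-probability tensor small enough to materialize (function names are ours): the sigmoid score of Eq. (3.1) and the shifted PMI tensor of Eq. (3.4).

```python
import numpy as np

def hosgns_score(W, C, T, i, j, k):
    """Eq. (3.1): probability that (i, j, k) is a true co-occurrence,
    the sigmoid of the higher-order inner product sum_r W[i,r] C[j,r] T[k,r]."""
    theta = np.sum(W[i] * C[j] * T[k])
    return 1.0 / (1.0 + np.exp(-theta))

def shifted_pmi_tensor(joint, kappa=1.0):
    """Eq. (3.4): the 3rd-order shifted PMI tensor that HOSGNS implicitly
    factorizes, SPMI(i,j,k) = log p(i,j,k) - log(p_i p_j p_k) - log kappa."""
    pi = joint.sum(axis=(1, 2))                 # marginal over i
    pj = joint.sum(axis=(0, 2))                 # marginal over j
    pk = joint.sum(axis=(0, 1))                 # marginal over k
    indep = pi[:, None, None] * pj[None, :, None] * pk[None, None, :]
    with np.errstate(divide="ignore"):
        return np.log(joint) - np.log(indep) - np.log(kappa)
```

For a uniform joint distribution the PMI vanishes everywhere, so the shifted tensor reduces to the constant $-\log\kappa$.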
While a static graph is uniquely represented by an adjacency matrix $\mathbf{A}$, a time-varying graph admits diverse possible higher-order adjacency relations (Section 2). Starting from these higher-order relations, we can either use them directly or use random walk realizations to build a dataset of higher-order co-occurrences. In the same spirit that random walk realizations give rise to co-occurrences that are used to learn embeddings in SGNS, we use higher-order co-occurrences to learn embeddings via HOSGNS. Figure 1 summarizes the differences between graph embedding via classical SGNS and time-varying graph embedding via HOSGNS.
As discussed in Section 3.1, the statistics of higher-order relations can be summarized in the so-called multivariate PMI tensors, which derive from proper co-occurrence probabilities among elements. Once such PMI tensors are constructed, we can again factorize them via HOSGNS. To show the versatility of this approach, we choose PMI tensors derived from two different types of higher-order relations:
A 3rd-order tensor $\boldsymbol{\mathcal{P}}^{(stat)}\in\mathbb{R}^{N\times N\times T}$ which gathers the relative frequencies of node occurrences in temporal edges:
$$\mathcal{P}^{(stat)}_{ijk} = \frac{\mathcal{A}_{ijk}}{\mathrm{vol}(\mathcal{G})} \qquad (3.5)$$
where $\mathrm{vol}(\mathcal{G})=\sum_{i,j,k}\mathcal{A}_{ijk}$ is the total weight of interactions occurring in $\mathcal{G}$. These probabilities are associated with the snapshot sequence representation and contain information about the topological structure of $\mathcal{G}$.
A 4th-order tensor $\boldsymbol{\mathcal{P}}^{(dyn)}\in\mathbb{R}^{N\times N\times T\times T}$, which gathers occurrence probabilities of time-stamped nodes over random walks on the supra-adjacency graph proposed in [valdano2015analytical] (as in DYANE). Using the numerator of Eq. (2.1), tensor entries are given by:
$$\mathcal{P}^{(dyn)}_{ijkl} = \frac{1}{2T}\sum_{r=1}^{T}\Big(p_{ik}\,(P^{r})_{ik,jl} + p_{jl}\,(P^{r})_{jl,ik}\Big) \qquad (3.6)$$
where $ik$ and $jl$ are lexicographic indices of the supra-adjacency matrix corresponding to the time-stamped nodes $(i,k)$ and $(j,l)$, and $p$ is the stationary distribution of random walks on the supra-adjacency graph. These probabilities encode causal dependencies among temporal nodes and are correlated with dynamical properties of spreading processes.
We also combined the two representations in a single tensor $\boldsymbol{\mathcal{P}}^{(stat|dyn)}$ that is the average of $\boldsymbol{\mathcal{P}}^{(stat)}$ and $\boldsymbol{\mathcal{P}}^{(dyn)}$:
$$\mathcal{P}^{(stat|dyn)}_{ijkl} = \tfrac{1}{2}\Big(\mathcal{P}^{(stat)}_{ijk}\,\delta_{kl} + \mathcal{P}^{(dyn)}_{ijkl}\Big) \qquad (3.7)$$
where $\delta_{kl}$ is the Kronecker delta. In this framework, the indices of $\boldsymbol{\mathcal{P}}^{(stat)}$ correspond to triples (node, context, time) and the indices of the 4th-order tensors correspond to quadruples (node, context, time, context-time).
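For illustration, the sketch below builds the tensors of Eqs. (3.5) and (3.7) for a toy event list. The dynamic tensor is replaced by a uniform placeholder, since computing the true $\boldsymbol{\mathcal{P}}^{(dyn)}$ would require the supra-adjacency random-walk statistics; the event list and sizes are hypothetical.

```python
import numpy as np

# Hypothetical toy event list (i, j, k, weight) with N nodes and T snapshots
events = [(0, 1, 0, 1.0), (1, 2, 0, 2.0), (0, 2, 1, 1.0)]
N, T = 3, 2

A = np.zeros((N, N, T))
for i, j, k, w in events:
    A[i, j, k] = A[j, i, k] = w          # undirected snapshot sequence

P_stat = A / A.sum()                      # Eq. (3.5): A_ijk / vol(G)

# Eq. (3.7): embed P_stat on the diagonal k = l and average with a
# (uniform, placeholder) 4th-order dynamic tensor
P_dyn = np.full((N, N, T, T), 1.0 / (N * N * T * T))
delta = np.eye(T)                         # Kronecker delta over time indices
P_mix = 0.5 * (P_stat[:, :, :, None] * delta[None, None, :, :] + P_dyn)
```

Both `P_stat` and `P_mix` are proper probability tensors (they sum to one), as required for the PMI construction of Section 3.1.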
The above tensors gather empirical probabilities corresponding to positive examples of observable higher-order relations. The probabilities of negative examples can be obtained as the product of marginal distributions. Computing the objective function in Eq. (3.3) (or its 4th-order analogue) exactly is computationally expensive, but it can be approximated by a sampling strategy: picking positive tuples according to the data distribution $p(i,j,k)$ and negative ones according to independent sampling from the marginals $p_i\,p_j\,p_k$, the HOSGNS objective can be asymptotically approximated through the optimization of the following weighted loss:
$$\mathcal{L} \approx -\frac{1}{b}\Bigg[\sum_{(i,j,k)\sim p(i,j,k)}^{b}\log\sigma(\Theta_{ijk}) \;+\; \kappa\sum_{(i,j,k)\sim p_i p_j p_k}^{b}\log\sigma(-\Theta_{ijk})\Bigg] \qquad (3.8)$$
where $b$ is the number of samples drawn in a training step and $\kappa$ is the negative sampling constant.
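A Monte-Carlo estimator of Eq. (3.8) can be sketched as follows (NumPy; function and argument names are illustrative): positives are drawn from the data distribution and negatives from the product of marginals.

```python
import numpy as np

def sampled_hosgns_loss(W, C, Tm, triples, probs, marginals, kappa=1, b=64, rng=None):
    """Monte-Carlo estimate of the sampled loss of Eq. (3.8).

    triples/probs: positive (i, j, k) tuples and their data probabilities.
    marginals: (p_i, p_j, p_k) arrays for independent negative sampling.
    Draws b positives and b*kappa negatives per step.
    """
    rng = rng or np.random.default_rng(0)
    p_i, p_j, p_k = marginals
    theta = lambda i, j, k: float(np.sum(W[i] * C[j] * Tm[k]))
    loss = 0.0
    for t in rng.choice(len(triples), size=b, p=probs):
        i, j, k = triples[t]
        loss += np.logaddexp(0.0, -theta(i, j, k))   # -log sigmoid(theta)
    for _ in range(b * kappa):
        i = rng.choice(len(p_i), p=p_i)
        j = rng.choice(len(p_j), p=p_j)
        k = rng.choice(len(p_k), p=p_k)
        loss += np.logaddexp(0.0, theta(i, j, k))    # -log sigmoid(-theta)
    return loss / b
```

Using `np.logaddexp(0, -x)` for $-\log\sigma(x)$ avoids overflow for large negative scores, a standard numerical precaution.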
For our experiments we use time-varying graphs collected by the SocioPatterns collaboration (http://www.sociopatterns.org) using wearable proximity sensors that sense the face-to-face proximity relations of individuals wearing them. After training the proposed models (HOSGNS applied to $\boldsymbol{\mathcal{P}}^{(stat)}$, $\boldsymbol{\mathcal{P}}^{(dyn)}$ or $\boldsymbol{\mathcal{P}}^{(stat|dyn)}$) on each dataset, we extract from the embedding matrices (the fourth matrix $\mathbf{S}$ not being present in the case of $\boldsymbol{\mathcal{P}}^{(stat)}$) the embedding vectors $\mathbf{w}_i$, $\mathbf{c}_i$, $\mathbf{t}_k$ and $\mathbf{s}_k$, where $i\in\mathcal{V}$ and $k\in\mathcal{T}$, and we use them to solve different downstream tasks: node classification and temporal event reconstruction.
Datasets. We used publicly available datasets describing face-to-face proximity of individuals with a temporal resolution of 20 seconds [cattuto2010dynamics]. These datasets were collected by the SocioPatterns collaboration in a variety of contexts, namely in a school (“LYONSCHOOL”), a conference (“SFHH”), a hospital (“LH10”), a high school (“THIERS13”), and in offices (“INVS15”) [genois2018can]. To our knowledge, this is the largest collection of open datasets sensing proximity in the same range and with the same temporal resolution used by modern contact tracing systems. We built a time-varying graph from each dataset by aggregating the data in 600-second time windows, discarding snapshots without registered interactions at that time scale. If multiple events are recorded between nodes $i$ and $j$ within an aggregated window $k$, we set the weight of the link $(i,j,k)$ to the number of such interactions. Table 1 shows some basic statistics for each dataset.
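The aggregation step can be sketched as follows, assuming raw records in the (t, i, j) row format used by the SocioPatterns releases; the function name and the toy records are ours.

```python
from collections import defaultdict

def aggregate_contacts(contacts, window=600):
    """Aggregate (t_seconds, i, j) contact records into `window`-sized
    snapshots; the weight of link (i, j) in snapshot k is the number of
    20-second contacts recorded in that window. Windows without any
    interaction simply never appear in the output dictionary."""
    snapshots = defaultdict(lambda: defaultdict(int))
    for t, i, j in contacts:
        k = t // window
        snapshots[k][(min(i, j), max(i, j))] += 1   # undirected link key
    return {k: dict(links) for k, links in sorted(snapshots.items())}
```

The resulting dictionary maps each non-empty window index to its weighted edge list, from which the snapshot tensor of Section 3.2 can be filled.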
Baselines. We compare our approach with several baseline methods from the literature of time-varying graph embeddings, which learn time-stamped node representations:
DYANE [sato2019dyane]. Learns temporal node embeddings with DEEPWALK, mapping a time-varying graph into a supra-adjacency representation. As in the original paper, we used the implementation of NODE2VEC (https://github.com/snap-stanford/snap/tree/master/examples/node2vec) with $p=q=1$, which is equivalent to DEEPWALK.
DYNGEM [goyal2018dyngem]. Deep autoencoder architecture which dynamically reconstructs each graph snapshot, initializing model weights with parameters learned in previous time frames. We used the code made available online by the authors (http://www-scf.usc.edu/~nkamra/).
DYNAMICTRIAD [zhou2018dynamic]. Captures structural information and temporal patterns of nodes, modeling the triadic closure process. We used the reference implementation available in the official repository (https://github.com/luckiezhou/DynamicTriad).
Details about hyper-parameters used in each method can be found in the Supplementary Information.
Node Classification. In this task, we aim to classify nodes in epidemic states according to a SIR epidemic process [barrat2008dynamical] with infection rate $\beta$ and recovery rate $\mu$. We simulated 5 realizations of the SIR process on top of each empirical graph with different combinations of parameters $(\beta,\mu)$. We used the same combinations of epidemic parameters and the same dynamical process to produce SIR states as described in [sato2019dyane]. Then we set up a logistic regression task to classify the epidemic states S-I-R assigned to each active node $(i,k)$
during the unfolding of the spreading process. We combine the embedding vectors of HOSGNS as follows: for HOSGNS$^{(stat)}$, we use the Hadamard (element-wise) product $\mathbf{w}_i\odot\mathbf{t}_k$; for HOSGNS$^{(dyn)}$ and HOSGNS$^{(stat|dyn)}$, we use $\mathbf{w}_i\odot\mathbf{t}_k\odot\mathbf{s}_k$. We compared with dynamic node embeddings learned from baselines. For a fair comparison, all models are required to produce time-stamped node representations with dimension $d$ as input to the logistic regression. Temporal Event Reconstruction. In this task, we aim to determine whether an event $(i,j,k)$ is in $\mathcal{E}$, i.e., whether there is an edge between nodes $i$ and $j$ at time $k$. We create a random time-varying graph with the same active nodes and an equal number of events $\tilde{\mathcal{E}}$ that are not part of $\mathcal{E}$. Embedding representations learned from $\mathcal{G}$ are used as features to train a logistic regression to predict whether a given event is in $\mathcal{E}$ or in $\tilde{\mathcal{E}}$. We combine the embedding vectors of HOSGNS as follows: for HOSGNS$^{(stat)}$, we use the Hadamard product $\mathbf{w}_i\odot\mathbf{c}_j\odot\mathbf{t}_k$; for HOSGNS$^{(dyn)}$ and HOSGNS$^{(stat|dyn)}$, we use $\mathbf{w}_i\odot\mathbf{c}_j\odot\mathbf{t}_k\odot\mathbf{s}_k$. For baseline methods, we aggregate vector embeddings to obtain link-level representations with binary operators (Average, Hadamard, Weighted-L1, Weighted-L2 and Concat) as already used in previous works [grover2016node2vec, tsitsulin2018verse]. For a fair comparison, all models are required to produce event representations with dimension $d$ as input to the logistic regression.
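The Hadamard combinations used as classifier inputs can be sketched as small helper functions (names are ours; which embedding matrices enter the product depends on the HOSGNS variant, as described above):

```python
import numpy as np

def event_features(w, c, t, s=None):
    """Feature vector for an event (i, j, k): Hadamard product of node,
    context and time vectors, optionally also the context-time vector s
    in the 4th-order variants."""
    f = w * c * t
    return f if s is None else f * s

def node_time_features(w, t):
    """Time-stamped node representation for node classification:
    Hadamard product of node and time vectors."""
    return w * t
```

These $d$-dimensional vectors are then fed to the logistic regression models for both downstream tasks.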
Tasks were evaluated using a train-test split. To avoid information leakage from training to test, we randomly split $\mathcal{V}$ and $\mathcal{T}$ into train and test sets $\mathcal{V}_{train}$, $\mathcal{V}_{test}$ and $\mathcal{T}_{train}$, $\mathcal{T}_{test}$, with fixed proportions. For node classification, only nodes in $\mathcal{V}_{train}$ at times in $\mathcal{T}_{train}$ were included in the train set, and only nodes in $\mathcal{V}_{test}$ at times in $\mathcal{T}_{test}$ were included in the test set. For temporal event reconstruction, only events $(i,j,k)$ with $i,j\in\mathcal{V}_{train}$ and $k\in\mathcal{T}_{train}$ were included in the train set, and only events with $i,j\in\mathcal{V}_{test}$ and $k\in\mathcal{T}_{test}$ were included in the test set.
All approaches were evaluated for both downstream tasks in terms of Macro-F1 scores in all datasets. 5 different runs of the embedding model are evaluated on 10 different train-test splits for both downstream tasks. We collect the average with standard deviation over each run of the embedding model, and report the average with standard deviation over all runs. In node classification, every SIR realization is assigned to a single embedding run to compute prediction scores.
Results for the classification of nodes in epidemic states are shown in Table 2, and are in line with the results reported in [sato2019dyane]. We report here a subset of the $(\beta,\mu)$ combinations; the remaining combinations are available in the Supplementary Information, and they confirm the conclusions discussed here. DYNGEM and DYNAMICTRIAD have low scores, since they are not devised to learn from graph dynamics. HOSGNS$^{(stat)}$ is not able to capture the graph dynamics due to the static nature of $\boldsymbol{\mathcal{P}}^{(stat)}$. DYANE, HOSGNS$^{(dyn)}$ and HOSGNS$^{(stat|dyn)}$ show good performance in this task, with the two HOSGNS variants outperforming DYANE in most combinations of datasets and SIR parameters.
Results for the temporal event reconstruction task are reported in Table 3. DYNGEM performs poorly on this task. DYNAMICTRIAD has better performance with the Weighted-L1 and Weighted-L2 operators, while DYANE performs best using Hadamard or Weighted-L2. Since the Hadamard product is explicitly used in Eq. (3.1) to optimize HOSGNS, all HOSGNS variants show their best scores with this operator. HOSGNS$^{(stat)}$ outperforms all approaches, setting new state-of-the-art results in this task. The representation used as input to HOSGNS$^{(dyn)}$ does not focus on events but on dynamics, so its performance for event reconstruction is slightly below DYANE, while HOSGNS$^{(stat|dyn)}$ is comparable to DYANE. Results for HOSGNS models using other operators are available in the Supplementary Information.
We observe an overall good performance of HOSGNS$^{(stat|dyn)}$ in both downstream tasks: it achieves in almost all cases the second highest score, whereas the other two variants excel in one task but fail in the other. One of the main advantages of HOSGNS is that it disentangles the role of nodes and time by learning representations of nodes and time intervals separately. While models that learn node-time representations (such as DYANE) need a number of parameters that is at least $O(N\cdot T\cdot d)$, HOSGNS learns node and time representations separately, with a number of parameters in the order of $O\big((N+T)\cdot d\big)$. In the Supplementary Information we include plots with two-dimensional projections of these embeddings, showing that the embedding matrices of the HOSGNS approaches successfully capture both the structure and the dynamics of the time-varying graph.
In this paper, we introduce higher-order skip-gram with negative sampling (HOSGNS) for time-varying graph representation learning. We show that this method is able to disentangle the role of nodes and time, with a small fraction of the number of parameters needed by other methods. The embedding representations learned by HOSGNS outperform other methods in the literature and set new state-of-the-art results for predicting the outcome of dynamical processes and for temporal event reconstruction. We show that HOSGNS can be intuitively applied to time-varying graphs, but this methodology can be easily adapted to solve other representation learning problems that involve multi-modal data and multi-layered graph representations.
The authors would like to thank Prof. Ciro Cattuto for the fruitful discussions that helped shaping this manuscript. AP acknowledges partial support from Research Project Casa Nel Parco (POR FESR 14/20 - CANP - Cod. 320 - 16 - Piattaforma Tecnologica Salute e Benessere) funded by Regione Piemonte in the context of the Regional Platform on Health and Wellbeing and from Intesa Sanpaolo Innovation Center. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Low-rank tensor decomposition [kolda2009tensor] aims to factorize a generic tensor into a sum of rank-one tensors. For example, given a 3rd-order tensor $\boldsymbol{\mathcal{X}}\in\mathbb{R}^{I\times J\times K}$, the rank-$R$ decomposition of $\boldsymbol{\mathcal{X}}$ takes the form of a ternary product of three factor matrices:
$$\boldsymbol{\mathcal{X}} \approx \sum_{r=1}^{R} \mathbf{a}_r\circ\mathbf{b}_r\circ\mathbf{c}_r \qquad (A.1)$$
where $\mathbf{a}_r$, $\mathbf{b}_r$ and $\mathbf{c}_r$ are the columns of the latent factor matrices $\mathbf{A}\in\mathbb{R}^{I\times R}$, $\mathbf{B}\in\mathbb{R}^{J\times R}$ and $\mathbf{C}\in\mathbb{R}^{K\times R}$, and $\circ$ denotes the outer product. When $R$ is the rank of $\boldsymbol{\mathcal{X}}$, Eq. (A.1) holds with an equality, and the above operation is called Canonical Polyadic (CP) decomposition. Elementwise, the previous relation is written as:
$$\mathcal{X}_{ijk} \approx \sum_{r=1}^{R} A_{ir}\,B_{jr}\,C_{kr} \qquad (A.2)$$
where $\mathbf{a}_i$, $\mathbf{b}_j$, $\mathbf{c}_k$ are rows of the factor matrices. For 2nd-order tensors (matrices), the operation is equivalent to the low-rank matrix decomposition $\mathbf{X}\approx\mathbf{A}\mathbf{B}^{\top}$.
For a generic $n$-order tensor $\boldsymbol{\mathcal{X}}$, the low-rank decomposition is expressed as:
$$\mathcal{X}_{i_1 i_2\dots i_n} \approx \sum_{r=1}^{R} A^{(1)}_{i_1 r}\,A^{(2)}_{i_2 r}\cdots A^{(n)}_{i_n r} \qquad (A.3)$$
where $\mathbf{a}^{(m)}_{i_m}$ ($m=1,\dots,n$) are rows of the factor matrices $\mathbf{A}^{(1)},\dots,\mathbf{A}^{(n)}$.
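The CP reconstruction of Eqs. (A.1)-(A.3) is a one-liner with `einsum`; this sketch (function name ours) supports any tensor order up to the number of available subscript letters.

```python
import numpy as np

def cp_reconstruct(factors):
    """Reconstruct a tensor from CP factor matrices (Eqs. A.1-A.3):
    X[i1,...,in] = sum_r prod_m factors[m][i_m, r], via einsum."""
    n = len(factors)
    letters = "abcdefgh"[:n]
    spec = ",".join(f"{l}r" for l in letters) + "->" + letters
    return np.einsum(spec, *factors)       # e.g. 'ar,br,cr->abc' for n = 3
```

A rank-1 example: with single-column factors, the result is just the outer product of the three vectors.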
The skip-gram approach was initially proposed in WORD2VEC [mikolov2013distributed] to obtain low-dimensional representations of words. Starting from a textual corpus of words from a vocabulary $\mathcal{V}$, it assigns to each word $w$ a context $c$ corresponding to words surrounding $w$ in a window of size $T$. Then a set of training samples $\mathcal{D}=\{(w,c)\}$ is built by collecting all the observed word-context pairs, where $\mathcal{W}$ and $\mathcal{C}$ are the vocabularies of words and contexts respectively (normally $\mathcal{W}=\mathcal{C}$). Here we denote as $\#(w,c)$ the number of times the pair $(w,c)$ appears in $\mathcal{D}$. Similarly we use $\#w$ and $\#c$ as the number of times each word occurs in $\mathcal{D}$, with relative frequencies $p(w,c)=\#(w,c)/|\mathcal{D}|$, $p(w)=\#w/|\mathcal{D}|$ and $p(c)=\#c/|\mathcal{D}|$.
SGNS computes $d$-dimensional representations for words and contexts in two matrices $\mathbf{W}$ and $\mathbf{C}$, performing a binary classification task in which pairs $(w,c)\in\mathcal{D}$ are positive examples and pairs $(w,c')$ with randomly sampled contexts $c'$ are negative examples. The probability of the positive class is parametrized as the sigmoid ($\sigma(x)=(1+e^{-x})^{-1}$) of the inner product of embedding vectors:
$$P\big[(w,c)\in\mathcal{D}\big] = \sigma(\mathbf{w}\cdot\mathbf{c}) = \frac{1}{1+e^{-\mathbf{w}\cdot\mathbf{c}}} \qquad (A.4)$$
and each word-context pair contributes to the loss as follows:
$$\ell(w,c) = -\log\sigma(\mathbf{w}\cdot\mathbf{c}) - \kappa\,\mathbb{E}_{c'\sim p(c)}\big[\log\big(1-\sigma(\mathbf{w}\cdot\mathbf{c}')\big)\big] \qquad (A.5)$$
$$= -\log\sigma(\mathbf{w}\cdot\mathbf{c}) - \kappa\,\mathbb{E}_{c'\sim p(c)}\big[\log\sigma(-\mathbf{w}\cdot\mathbf{c}')\big] \qquad (A.6)$$
where the second expression uses the symmetry property $1-\sigma(x)=\sigma(-x)$ inside the expected value and $\kappa$ is the number of negative examples, sampled according to the empirical distribution of contexts $p(c)$. In the original formulation of WORD2VEC, negative samples are picked from a smoothed distribution proportional to $p(c)^{3/4}$ instead of the unigram probability $p(c)$, but this smoothing has not been shown to have positive effects in graph representations.
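For completeness, the smoothed negative-sampling distribution mentioned above can be sketched as (function name ours):

```python
import numpy as np

def smoothed_unigram(counts, alpha=0.75):
    """Negative-sampling distribution p(c) proportional to count(c)^alpha;
    alpha = 3/4 is the word2vec smoothing, alpha = 1 recovers the plain
    unigram distribution discussed in the text."""
    p = np.asarray(counts, dtype=float) ** alpha
    return p / p.sum()
```

The exponent flattens the distribution, so rare contexts are sampled more often than under the raw unigram frequencies.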
Following results found in [levy2014neural], summing all losses $\ell(w,c)$ weighted with the probability $p(w,c)$ that each pair appears in $\mathcal{D}$ gives the objective function asymptotically optimized:
$$\mathcal{L} = \sum_{w,c} p(w,c)\,\ell(w,c) \qquad (A.7)$$
$$= -\sum_{w,c}\Big[\,p(w,c)\,\log\sigma(\mathbf{w}\cdot\mathbf{c}) + \kappa\,p(w)\,p(c)\,\log\sigma(-\mathbf{w}\cdot\mathbf{c})\,\Big] \qquad (A.8)$$
where $p(w)\,p(c)$ is the probability of $(w,c)$ under the assumption of statistical independence.
In [levy2014neural] it has been shown that the SGNS loss exhibits a global optimum with respect to the embedding parameters that satisfies, for every pair $(w,c)$:

$$\mathbf{w}\cdot\mathbf{c} = \log\frac{p(w,c)}{p(w)\,p(c)} - \log\kappa = \mathrm{PMI}(w,c) - \log\kappa \qquad (A.9)$$
which tells us that SGNS optimization is equivalent to a rank-$d$ matrix decomposition of the word-context pointwise mutual information (PMI) matrix shifted by a constant. Such a factorization is an approximation of the empirical PMI matrix, since in the typical case $d \ll \min(|\mathcal{W}|,|\mathcal{C}|)$.
SGNS can be generalized to learn $d$-dimensional embeddings from collections of higher-order co-occurrences. Starting with vocabularies $\mathcal{V}_1,\dots,\mathcal{V}_n$ and a set of $n$-order tuples $\mathcal{D}=\{(i_1,\dots,i_n)\}$, the objective is to learn $n$ factor matrices $\mathbf{A}^{(1)},\dots,\mathbf{A}^{(n)}$ which summarize the co-occurrence statistics of $\mathcal{D}$.
Keeping an example $(i_1,\dots,i_n)\in\mathcal{D}$, we define the loss with the negative sampling scheme, fixing $i_1$ and picking negative tuples according to the noise distribution $p_{i_2}\cdots p_{i_n}$:

$$\ell(i_1,\dots,i_n) = -\log\sigma(\Theta_{i_1\dots i_n}) - \kappa\,\mathbb{E}_{i_2'\sim p_{i_2},\dots,i_n'\sim p_{i_n}}\big[\log\sigma(-\Theta_{i_1 i_2'\dots i_n'})\big]$$

where each embedding $\mathbf{a}^{(m)}_{i_m}$ is the $i_m$-th row of the matrix $\mathbf{A}^{(m)}$ and $\Theta_{i_1\dots i_n}=\sum_{r=1}^{d} A^{(1)}_{i_1 r}\cdots A^{(n)}_{i_n r}$. The expectation term can be made explicit:

$$\mathbb{E}_{i_2'\sim p_{i_2},\dots,i_n'\sim p_{i_n}}\big[\log\sigma(-\Theta_{i_1 i_2'\dots i_n'})\big] = \sum_{i_2',\dots,i_n'} p_{i_2'}\cdots p_{i_n'}\,\log\sigma(-\Theta_{i_1 i_2'\dots i_n'})$$

Weighting the loss for each tuple with its empirical probability $p(i_1,\dots,i_n)$, and defining