With the advancement of internet technology, online social networks have become part of people's everyday life. Their analysis can be used for targeted advertising, crime detection, detection of epidemics, behavioural analysis, etc. Consequently, a lot of research has been devoted to the computational analysis of these networks: they represent interactions within a group of people or a community, and it is of great interest to understand these underlying interactions. Generally, these networks are modeled as graphs where a node represents a person or entity and an edge represents an interaction, relationship, or communication between two of them. For example, in social networks such as Facebook and Twitter, people are represented by nodes and the existence of an edge between two nodes represents their friendship. Other examples include a network of products purchased together on an e-commerce website like Amazon, a network of scientists publishing in a conference where an edge represents a collaboration, or a network of employees in a company working on a common project.
Social networks are inherently dynamic, i.e., new edges are added over time as a network grows. Therefore, understanding the likelihood of a future association between two nodes is a fundamental problem, commonly known as link prediction [liben2007link]. Concretely, link prediction is the task of predicting whether a connection will form between two nodes in the future, based on the existing structure of the graph and the existing attribute information of the nodes. For example, in social networks, link prediction can suggest new friends; in e-commerce, it can recommend products to be purchased together [chen2005link]; in bioinformatics, it can find interactions between proteins [airoldi2006mixed]; in co-authorship networks, it can suggest new collaborations; and in the security domain, it can assist in identifying hidden groups of terrorists or criminals [al2006link].
Over the years, a large number of link prediction methods have been proposed [lu2011link]. These methods are classified based on different aspects such as the network evolution rules they model, the type and amount of information they use, or their computational complexity. Similarity-based methods such as Common Neighbors [liben2007link], Jaccard's Coefficient, Adamic-Adar Index [adamic2003friends], Preferential Attachment [barabasi1999emergence], and Katz Index [katz1953new] use different graph similarity metrics to predict links in a graph. Embedding learning methods [koren2009matrix, airoldi2006mixed, grover2016node2vec, perozzi2014deepwalk] take a matrix representation of the network and factorize it to learn a low-dimensional latent representation/embedding for each node. Recently proposed network embeddings such as DeepWalk [perozzi2014deepwalk] and node2vec [grover2016node2vec] fall in this category since they implicitly factorize certain matrices [qiu2018network].
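As a concrete illustration of the similarity-based family, the following sketch (our own, not taken from the cited works) computes a few of these scores directly from an adjacency list:

```python
import math

def neighbors(adj, u):
    # Neighbor set of node u in an adjacency-list dict.
    return set(adj.get(u, ()))

def common_neighbors(adj, u, v):
    return len(neighbors(adj, u) & neighbors(adj, v))

def jaccard(adj, u, v):
    nu, nv = neighbors(adj, u), neighbors(adj, v)
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def adamic_adar(adj, u, v):
    # Shared neighbors weighted inversely by the log of their degree.
    return sum(1.0 / math.log(len(neighbors(adj, w)))
               for w in neighbors(adj, u) & neighbors(adj, v)
               if len(neighbors(adj, w)) > 1)

def preferential_attachment(adj, u, v):
    return len(neighbors(adj, u)) * len(neighbors(adj, v))
```

Node pairs with higher scores are predicted as more likely future links.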
Similar to these node embedding methods, recent years have also witnessed rapid growth in knowledge graph embedding methods. A knowledge graph (KG) is a graph whose nodes are entities of different types and whose edges are the various relations among them. Link prediction in such a graph is known as knowledge graph completion. It is similar to link prediction in social network analysis, but more challenging because of the presence of multiple types of nodes and edges. In knowledge graph completion, we not only determine whether there is a link between two entities, but also predict the specific type of the link. For this reason, traditional link prediction approaches are not directly applicable to knowledge graph completion. To tackle this issue, a new research direction known as knowledge graph embedding has emerged [nickel2011three, bordes2013translating, wang2014knowledge, lin2015learning, jenatton2012latent, bordes2014semantic, socher2013reasoning]. The main idea is to embed the components of a KG, including entities and relations, into continuous vector spaces, so as to simplify manipulation while preserving the inherent structure of the KG.
Neither of these two approaches, however, can generate "optimal" embeddings "quickly" for real-time link prediction on new graphs: random walk based node embedding methods are computationally efficient but give poor results, whereas KG-based methods produce optimal results but are computationally expensive. Thus, in this work, we focus on embedding learning methods (i.e., walk based node embedding methods and knowledge graph completion methods) that are capable of finding optimal embeddings quickly enough to meet the real-time constraints of practical applications. To bridge the gap between computational time and the performance of embeddings on link prediction, we make the following contributions:
We compare the embedding performance and computational cost of both random walk based node embedding and KG-based embedding methods and empirically determine that random walk based node embedding methods are faster but give sub-optimal results on link prediction, whereas KG-based embedding methods are computationally expensive but perform better on link prediction.
We propose a transformation model that takes node embeddings from random walk based node embedding methods and outputs near-optimal embeddings without an increase in computational cost.
We demonstrate the results of transformation through extensive experimentation on various social network datasets of different graph sizes and different combinations of node embeddings and KG embedding methods.
II-A Problem Definition
Let $G = (V, E)$ be an unweighted, undirected homogeneous graph where $V$ is the set of vertices, $E$ is the set of observed links, i.e., $E \subseteq V \times V$, and $A$ is the adjacency matrix. The graph $G$ represents the topological structure of the social network, in which an edge $(u, v)$ represents an interaction that took place between vertices $u$ and $v$. Let $U$ denote the universal set containing all possible edges. Then, the set of non-existent links is $U \setminus E$. Our assumption is that there are some missing links (edges that will appear in the future) in the set $U \setminus E$. The link prediction task is then: given the current network $G$, find these missing edges.
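This setup can be sketched in a few lines; the variable names below are illustrative, not the paper's notation:

```python
from itertools import combinations

V = [0, 1, 2, 3]
E = {(0, 1), (1, 2), (2, 3)}        # observed links
U = set(combinations(V, 2))         # universal set of all possible edges
non_existent = U - E                # candidate set containing the missing links

# Adjacency matrix A of the undirected graph G = (V, E).
A = [[0] * len(V) for _ in V]
for u, v in E:
    A[u][v] = A[v][u] = 1
```

A link predictor then scores each pair in `non_existent` and ranks the pairs by how likely they are to become edges.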
Similarly, let $G' = (E', R)$ be a Knowledge Graph (KG) with entity set $E'$ and relation set $R$. A KG is a directed graph whose nodes are entities and whose edges are subject-property-object triple facts. Each edge of the form (head entity, relation, tail entity), denoted as $(h, r, t)$, indicates a relationship $r$ from entity $h$ to entity $t$, for example, (Alice, friendOf, Bob) or (Bob, livesIn, Paris). Note that the entities and relations in a KG are usually of different types. Link prediction in KGs aims to predict the missing $h$ or $t$ for a relation fact triple $(h, r, t)$, as in [bordes2011learning, bordes2012joint, bordes2013translating]. In this task, for each position of a missing entity, the system is asked to rank a set of candidate entities from the knowledge graph, instead of giving only one best result [bordes2011learning, bordes2013translating].
We then formulate the problem of link prediction on graph $G$ as completion of a KG with only one type of entity and a single relation. Link prediction is then to predict the missing $h$ or $t$ for a relation fact triple $(h, r, t)$ where both $h$ and $t$ are of the same kind, for example, $(?, friendOf, Bob)$ or $(Alice, friendOf, ?)$.
II-B Graph Embedding Methods
Graph embedding aims to represent a graph in a low-dimensional space while preserving as much graph property information as possible. The differences between graph embedding algorithms lie in how they define the graph property to be preserved: different algorithms have different notions of node (/edge/substructure/whole-graph) similarity and of how to preserve it in the embedded space. Formally, given a graph $G = (V, E)$, a node embedding is a mapping $f: V \rightarrow \mathbb{R}^d$ where $d \ll |V|$ is the dimension of the embeddings, $|V|$ the number of vertices, and the function $f$ preserves some proximity measure defined on graph $G$. If there are multiple types of links/relations in the graph then, similar to node embeddings, relation embeddings can be obtained as a mapping $g: R \rightarrow \mathbb{R}^d$ where $|R|$ is the number of types of relations.
II-B1 Node Embeddings using Random Walks
Random walks have been used to approximate many properties of a graph, including node centrality [newman2005measure] and similarity [pirotte2007random]. The key innovation of random walk based embedding methods is optimizing the node embeddings so that two nodes have similar embeddings if they tend to co-occur on short random walks over the graph. Thus, instead of using a deterministic measure of graph proximity [belkin2002laplacian], these methods employ a flexible, stochastic measure of graph proximity, which has led to superior performance in a number of settings [goyal2018graph]. Two well-known examples of random walk based methods are node2vec [grover2016node2vec] and DeepWalk [perozzi2014deepwalk].
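A minimal sketch of this idea follows; it samples short walks and extracts co-occurrence pairs, omitting the skip-gram training that DeepWalk and node2vec then run on those pairs:

```python
import random

def random_walk(adj, start, length, rng):
    # Sample a walk of up to `length` nodes by repeatedly moving to a
    # uniformly chosen neighbor (DeepWalk-style; node2vec biases this choice).
    walk = [start]
    for _ in range(length - 1):
        nbrs = adj[walk[-1]]
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

def context_pairs(walk, window=2):
    # (node, context) pairs within a sliding window over the walk; these are
    # what a skip-gram model would be trained on.
    pairs = []
    for i, u in enumerate(walk):
        for j in range(max(0, i - window), min(len(walk), i + window + 1)):
            if i != j:
                pairs.append((u, walk[j]))
    return pairs
```

Nodes that co-occur frequently in such pairs end up with similar embeddings after training.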
II-B2 KG Embeddings
KG embedding methods usually consist of three steps. The first step specifies the form in which entities and relations are represented in a continuous vector space. Entities are usually represented as vectors, i.e., deterministic points in the vector space [nickel2011three, bordes2013translating, wang2014knowledge]. In the second step, a scoring function is defined on each fact to measure its plausibility; facts observed in the KG tend to have higher scores than those that have not been observed. Finally, to learn the entity and relation representations (i.e., embeddings), the third step solves an optimization problem that maximizes the total plausibility of observed facts, as detailed in [wang2017knowledge]. The KG embedding methods we use for the experiments in this paper are TransE [bordes2013translating], TransH [wang2014knowledge], TransD [lin2015learning], RESCAL [nickel2011three], DistMult [yang2014embedding] and SimplE [kazemi2018simple].
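As an illustration of the scoring-function step, the sketch below shows the translational intuition behind TransE, where a fact $(h, r, t)$ is plausible when $h + r$ lies close to $t$; the vectors here are random toy data, not learned embeddings:

```python
import numpy as np

def transe_score(h, r, t):
    # TransE-style dissimilarity: L2 distance between the translated head
    # (h + r) and the tail t. Lower score = more plausible fact.
    return np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
h = rng.normal(size=8)
r = rng.normal(size=8)
t_good = h + r + 0.01 * rng.normal(size=8)   # tail near the translation h + r
t_bad = rng.normal(size=8)                   # unrelated random tail
```

Training then pushes observed triples toward low scores and corrupted (negative) triples toward high scores.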
We propose a transformation model to expedite the fine-tuning process of KG embedding methods. Let $G = (V, E)$ be a graph with $|V|$ vertices and $|E|$ edges. Given the node embeddings of the graph $G$, we want to transform them into optimal node embeddings.
III-A Node Embedding Generation
The input graph is fed into one of the random walk based graph embedding methods (node2vec [grover2016node2vec] or DeepWalk [perozzi2014deepwalk]), which gives us the node embeddings. Let $f$ be a random walk based graph embedding method and $Z_i$ denote the output node embeddings:

$$Z_i = f(G_i) \qquad (1)$$

where $G_i$ is the $i$-th graph in the dataset of graphs and $Z_i \in \mathbb{R}^{|V_i| \times d}$ with embedding dimension $d$.
III-B Knowledge Embedding Generation
In a KG-based embedding algorithm (such as TransE), the input is a graph and the initial embeddings are randomly initialized. The algorithm uses a scoring function and optimizes the initial embeddings to output the trained embeddings for the given graph. Since we are working with homogeneous graphs with only one type of relation, we do not need to learn the embedding for the relation; hence it is kept constant and only node embeddings are learnt. Let $Z_i^0$ be the initial node embeddings, $Y_i$ the trained embeddings and $g_\theta$ the KG method with parameters $\theta$:

$$Y_i = g_\theta(G_i, Z_i^0) \qquad (2)$$

where $Z_i^0 \in \mathbb{R}^{|V_i| \times d}$ and $Y_i \in \mathbb{R}^{|V_i| \times d}$.
Instead of using randomly initialized embeddings $Z_i^0$ to obtain target embeddings $Y_i$, we can initialize with $Z_i$ from Eq. (1) as

$$\tilde{Y}_i = g_\theta(G_i, Z_i) \qquad (3)$$

where $\tilde{Y}_i$ are the fine-tuned output embeddings. This idea of better initialization has also been explored previously in [luo2015context, chen2018harp], where it has been shown to result in embeddings of higher quality.
III-C Transformation Model with Self-Attention
Using the node embeddings $Z_i$ from Eq. (1) and the fine-tuned KG embeddings $\tilde{Y}_i$ from Eq. (3), we train a transformation model that learns to transform the node embeddings from a node-based method into KG embeddings. We adopt self-attention [vaswani2017attention] over the graph adjacency matrix, as explained in Algorithm 1:

$$\hat{Y}_i = T_\phi(Z_i, A_i)$$

where $\hat{Y}_i$ are the transformed embeddings and $\phi$ are the parameters of the self-attention model $T$.
The error between the fine-tuned and transformed embeddings is calculated using the squared Euclidean distance:

$$e_i = \lVert \hat{Y}_i - \tilde{Y}_i \rVert_2^2$$

The loss on a batch $X$ of graphs is measured as:

$$L = \frac{1}{B} \sum_{i \in X} e_i$$

where $B$ is the batch size. Since KG embeddings are trained from facts/triplets which are obtained from the adjacency matrix of the graph, a self-attention model reinforced with information from the adjacency matrix, when applied to node embeddings, is able to learn the transformation function, as observed in our experiments (Figure 3). The proposed algorithm is summarized in Algorithm 2.
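The training objective above amounts to a per-graph squared error averaged over the batch; a direct sketch (with our own variable names):

```python
import numpy as np

def embedding_error(Z_transformed, Z_finetuned):
    # Squared Euclidean distance between transformed and fine-tuned embeddings.
    return np.sum((Z_transformed - Z_finetuned) ** 2)

def batch_loss(transformed_batch, finetuned_batch):
    # Mean per-graph error over a batch of graphs.
    errors = [embedding_error(zt, zf)
              for zt, zf in zip(transformed_batch, finetuned_batch)]
    return sum(errors) / len(errors)
```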
Yang et al. [yang2015defining] introduced social network datasets with ground-truth communities. Each dataset is a network with a given number of nodes and edges and a set of ground-truth communities (Table I).
The communities in each dataset are of different sizes, ranging from small (1-20 nodes) to large (380-400 nodes). Small communities are more frequent, and the frequency decreases as community size increases. This trend is depicted in Figure 2.
YouTube, Orkut, and LiveJournal (available at http://snap.stanford.edu/data/index.html#communities) are friendship networks where each community is a user-defined group. Nodes in a community represent users, and edges represent their friendships.
DBLP is a co-authorship network where two authors are connected if they have published at least one paper together. A community corresponds to a publication venue, e.g., a journal or conference: authors who published in a certain journal or conference form a community.
The Amazon co-purchasing network is based on the "Customers Who Bought This Item Also Bought" feature of the Amazon website. If two products are frequently co-purchased, the graph contains an undirected edge between them. Each connected component in a product category defined by Amazon acts as a community, where nodes represent products in the same category and edges indicate that they were purchased together.
We consider each community in a dataset as an individual graph, with vertices representing the entities in the community and edges representing their relationships. For training the transformation model, we select communities of a particular size range, which act as the dataset of graphs (Table II). We randomly disable 20% of the links (edges) in each graph to act as missing links for link prediction. In all the experiments, the embedding dimension is set to 32, which worked best in our pilot test. We used OpenNE (https://github.com/thunlp/OpenNE) for generating node2vec and DeepWalk embeddings and OpenKE [han2018openke] for generating KG embeddings. The dataset of graphs is split into train, validation, and test splits of 64%, 16%, and 20%, respectively.
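The 20% edge-holdout protocol can be sketched as follows (illustrative, not the exact script used in our experiments):

```python
import random

def split_edges(edges, holdout_frac=0.2, seed=0):
    # Randomly hide a fraction of the edges; the hidden edges serve as the
    # "missing" links the model must recover at evaluation time.
    rng = random.Random(seed)
    edges = sorted(edges)
    rng.shuffle(edges)
    k = int(len(edges) * holdout_frac)
    return edges[k:], edges[:k]   # (training edges, hidden/test edges)
```

Embeddings are learned on the training edges only, and the hidden edges are ranked against non-edges to score the predictor.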
IV-C Evaluation Metrics
For evaluation, we use MRR and Precision@K. The algorithm predicts a ranked list of candidates for each incoming query. A filtering operation removes pre-existing triples of the knowledge graph from this list. MRR computes the mean of the reciprocal rank of the correct candidate in the list, and Precision@K evaluates the rate at which correct candidates appear among the top K predicted candidates. Due to space constraints, we only present the results for MRR. Results for Precision@K can be found in our GitHub repository (https://github.com/ArchitParnami/GraphProject).
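For reference, MRR and Precision@K over filtered ranked candidate lists can be computed as in this sketch:

```python
def mrr(ranked_lists, answers):
    # Mean of 1/rank of the correct candidate, over all queries.
    return sum(1.0 / (rl.index(a) + 1)
               for rl, a in zip(ranked_lists, answers)) / len(answers)

def precision_at_k(ranked_lists, answers, k):
    # Fraction of queries whose correct candidate is in the top K.
    hits = sum(1 for rl, a in zip(ranked_lists, answers) if a in rl[:k])
    return hits / len(answers)
```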
V Results and Discussion
From the results depicted in Figure 3, we observe that the target KG embeddings (TransE, TransH, etc.) almost always outperform the random-walk based source embeddings (node2vec and DeepWalk), except in the case of SimplE and DistMult, where both methods perform poorly. This can also be observed in Figure 4.
Finetuned KG embeddings achieve better or equivalent performance compared to target KG embeddings. This is confirmed by the ANOVA test in Figure 4, where there is no significant difference between the MRRs obtained from finetuned and target KG embeddings in most cases. Specifically, translation-based methods such as TransE, TransH, and TransD show equivalent performance for finetuned and target embeddings, whereas for SimplE, RESCAL, and DistMult, the finetuned embeddings become better than the target embeddings as the graph size grows.
Transformed embeddings consistently outperform source embeddings and perform similarly to finetuned embeddings, at least for graphs with up to 65 nodes. The performance drop starts at graph sizes 71-75 for the transformation from DeepWalk to TransD, and at 81-85 for the transformation from node2vec to TransE. For RESCAL, the transformation works for larger graphs with node2vec and up to sizes 121-125 with DeepWalk.
As the graph size increases (top to bottom), the overall MRR scores decrease for all the embeddings, as expected. In Figure 5, we compare the computation time and MRR performance of transformed and finetuned embeddings, where the source method is node2vec and the target method is TransE. The transformed embeddings give similar performance to the finetuned embeddings (without any significant increase in computational cost) for graphs of up to size 71-75. Thereafter, the transformed embeddings perform poorly; we attribute this to the poor finetuned embeddings on which the transformation model was trained.
In this work, we have demonstrated that random-walk based node embedding (source) methods are computationally efficient but give sub-optimal results on link prediction in social networks, whereas KG-based embedding (target and finetuned) methods perform better but are computationally expensive. To meet our requirement of generating optimal embeddings quickly for real-time link prediction, we proposed a self-attention based transformation model that converts walk-based embeddings into near-optimal KG embeddings. The proposed model works well for smaller graphs, but as the complexity of the graph increases, the transformation performance decreases. For future work, our goal is to explore better transformation models for bigger graphs.