Flexible Attributed Network Embedding

11/27/2018 ∙ by Enya Shen, et al. ∙ Tsinghua University ∙ University of California, San Diego

Network embedding aims to encode a network by learning an embedding vector for each node. Networks often carry property information that is highly informative about a node's position and role, yet most network embedding methods fail to utilize this information during representation learning. In this paper, we propose a novel framework, FANE, to integrate structure and property information in the network embedding process. In FANE, we design a network that unifies the heterogeneity of the two information sources, and we define a new random-walk strategy to leverage property information and let the two sources complement each other. FANE is conceptually simple and empirically powerful. It improves over the state-of-the-art methods on the Cora dataset classification task by over 5%, and the gap widens to more than 10% as the training size increases. Moreover, qualitative visualizations show that our framework is helpful in exploring network property information. In all, we present a new way to efficiently learn state-of-the-art task-independent representations in complex attributed networks. The source code and datasets of this paper can be obtained from https://github.com/GraphWorld/FANE.


1 Introduction

Network embedding is an important and ubiquitous research problem with applications ranging from drug design to commodity and friendship recommendation [11, 9, 3, 31]. In most practical networks, nodes have more than one attribute, and these attributes greatly determine the nodes' roles in the system. For example, individuals in a social network have various properties, such as sexuality, educational background, and partisanship. Moreover, social science research [17, 19] has shown that attributes of nodes can reflect and affect their community structures [8, 13, 29].

Current network embedding studies include structure-preserving and property-preserving methods [7]. After decades of research, many structure-preserving approaches, such as DeepWalk [22], node2vec [10] and struc2vec [24], have been proposed to learn network features from structure information. However, these methods consider only network structure, failing to take advantage of node attributes during encoding [11].

Recently, a few property-preserving methods have been proposed. These methods can be further categorized into matrix-factorization-based and deep-learning-based methods. Matrix-factorization-based network embedding represents network properties in the form of a matrix and factorizes this matrix to obtain node embeddings [9]; examples include TADW [28] and HSCA [30]. These methods are time- and space-consuming [3]. Deep-learning-based network embeddings, such as SNE [14], DANE [8] and DVNE [34], draw inspiration from existing neural network models and/or design new models to learn network features. These methods achieve high accuracy at the cost of long training times.

In response, we propose FANE, a scalable and flexible attributed network embedding framework that integrates both structure and property information to learn features. Briefly, we design a network to unify the heterogeneity of the structure and property information sources, and define a new random-walk strategy to leverage property information and let the two sources complement each other flexibly. Overall, our paper makes the following contributions:

  • We propose FANE, an efficient and flexible framework that integrates network attributes and structure for feature learning in networks.

  • We analyze and verify that FANE can learn features as well as state-of-the-art structure-preserving and property-preserving methods.

  • We extend our method to the attribute space, so relationships between attributes can also be explored under our framework.

  • We evaluate our framework on multi-label classification tasks and conduct visual analysis on several real-world datasets.

2 Related works

One key problem in network embedding is what to preserve in learning. Here we discuss related works based on what they aim to preserve.

Structure-preserving network embedding. DeepWalk [22] generalizes the SkipGram language model [20] to network embedding, using random walks to learn latent representations by treating walks as the equivalent of sentences. Instead of exploiting random walks to capture network structure, LINE [26] learns vertex representations by explicitly modeling first-order and second-order proximity. In addition, struc2vec [24] encodes vertex structural-role similarity into a multilayer network, where the weights of edges at each layer are determined by the structural-role difference at the corresponding scale. Moreover, node2vec [10] presents a random-walk method that interpolates between breadth-first and depth-first sampling. Inspired by these methods, we design a new random-walk strategy that leverages property information and lets the two information sources complement each other in the embedding process.

Property-preserving network embedding. TADW [28] extends DeepWalk [22] to obtain a vertex-context matrix. HSCA [30] integrates homophily, structural context, and vertex content to learn effective network representations. However, factorizing matrices of large real-world networks with millions of rows and columns is expensive and unscalable. SNE [14] uses two similar deep neural network models to handle structure and attribute information separately in the embedding layer, whose outputs are then processed by shared hidden layers to learn features. Differently, DANE [8] uses two separate deep neural network models for structure and attribute information and optimizes the result via a joint distribution. DVNE [34] learns a Gaussian distribution in Wasserstein space as the latent representation of each node. These methods achieve high accuracy at the cost of long training times.

Attributed network clustering. Network clustering aims to divide a given set of objects into groups of similar objects. Many related algorithms have been proposed, such as the minimum cut algorithm [2], multi-way network partitioning [27], the k-medoid and k-means algorithms [23], and spectral clustering [1]. SA-cluster [33] and its extended versions [4, 32] presented a way to cluster attributed networks by extracting attributes as separate nodes. Inspired by these works, we construct a new network that integrates structure and attributes for network embedding.

3 FANE

In this section, we first give the problem definition of attributed network embedding, and then discuss our solutions for the key challenges.

3.1 Problem definition

An attributed network is formally denoted as $G = (V, E, W, A)$, where $V$ is the set of vertices, $E$ is the set of edges, $W$ contains the edge weights, and $A$ is the set of attributes associated with the vertices in $V$ for describing vertex properties. Each vertex $u \in V$ is associated with an attribute vector $(a_{u,1}, \dots, a_{u,|A|})$, where $a_{u,i}$ is the attribute value of vertex $u$ on attribute $i$.
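To make the notation concrete, here is a minimal illustration of one way such an attributed network could be stored; the class and field names are our own, not part of FANE.

from dataclasses import dataclass

@dataclass
class AttributedNetwork:
    """G = (V, E, W, A): nodes are 0..n_nodes-1."""
    n_nodes: int                              # |V|
    edges: list[tuple[int, int]]              # E (undirected pairs)
    weights: dict[tuple[int, int], float]     # W
    attrs: dict[int, list[float]]             # A: one attribute vector per node

g = AttributedNetwork(
    n_nodes=3,
    edges=[(0, 1), (1, 2)],
    weights={(0, 1): 1.0, (1, 2): 1.0},
    # a_{u,i}: value of node u on attribute i
    attrs={0: [1.0, 0.0], 1: [1.0, 1.0], 2: [0.0, 1.0]},
)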

Our goal is to learn a mapping function $f: V \to \mathbb{R}^d$ from the network to feature representations, where $d$ is the dimension of the feature representations. $f$ should support:

  1. Scalability. As a network compression approach, network embedding should inherently scale to large networks; in practice, node counts range from tens to millions, or even billions [3].

  2. Integration. Network structures and properties are the fundamental factors that need to be considered in network embedding. However, preserving these properties in a network embedding space is challenging due to the disparity and inhomogeneity between the network space and the embedding vector space [5].

  3. Adaption. Various domains, data, and applications require different network embedding methods: structure-preserving, property-preserving, or both. Is there a framework flexible enough to learn features as needed?

We proceed by extending the scalable skip-gram architecture (T1) to attributed networks [20, 22, 10]. Given a node $u$ and its context nodes $N(u)$, the representation $f(u)$ is learned by maximizing the conditional probability (objective function):

$$\max_{f} \sum_{u \in V} \log \Pr\big(N(u) \mid f(u)\big) \qquad (1)$$

The key challenge is how to define the context nodes for attributed nodes so that we can integrate property information effectively (T2). We deal with this by constructing a new network, as discussed below.

3.2 Network construction

As stated, the goal of network construction here is to effectively integrate structure and property information. But how do we get there? Let us go back to the original network and take another look at the problem. In a network, nodes are connected by edges, which reflect the connections between these nodes in the structure space (structure information). Meanwhile, if two nodes share an attribute, we can say that these nodes are connected in the attribute space (property information). Shared attributes thus act as edges in attribute space, just like actual edges in the network. To integrate attribute information into network embedding, we make the connections in attribute space concrete by appending special virtual edges to represent them. The heterogeneous edges in the resulting network then reflect connections between nodes in both the structure and the property space.

Consequently, the simplest way is to add an edge between each pair of nodes that share an attribute. However, doing so increases the worst-case edge count from $|E|$ to $O(|E| + |A|\,|V|^2)$. We resolve this problem by introducing virtual attribute nodes. As shown in Figure 1, the attribute network is constructed from the attribute information of the raw network by treating attributes as special virtual nodes: if node $u$ has attribute $a$, a virtual edge is added between $u$ and $a$. Thus, the worst-case edge count of the resulting network is reduced to $O(|E| + |A|\,|V|)$, which is generally far less than $O(|A|\,|V|^2)$.
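For concreteness, the construction can be sketched in a few lines, assuming networkx and 0/1-valued attributes; build_fane_graph, the attr_ prefix, and the uniform attr_weight are our own illustrative choices.

import networkx as nx

def build_fane_graph(G, attrs, attr_weight=1.0):
    """Add one virtual node per attribute and one virtual edge per
    (node, attribute) incidence; attrs[u] is node u's 0/1 vector."""
    H = G.copy()
    for u, vec in attrs.items():
        for i, value in enumerate(vec):
            if value:                                  # node u has attribute i
                H.add_edge(u, f"attr_{i}", weight=attr_weight)
    return H

G = nx.karate_club_graph()
attrs = {u: ([1, 0] if d["club"] == "Mr. Hi" else [0, 1])
         for u, d in G.nodes(data=True)}
H = build_fane_graph(G, attrs)
print(H.number_of_nodes(), H.number_of_edges())   # |V|+|A| nodes, |E|+incidences edges

Prefixing attribute ids keeps them disjoint from the raw node ids, and the virtual-edge weight is left free, matching the remark below that the weights of attribute edges in $W'$ can be defined as needed.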

Figure 1: Illustration of FANE. The FANE network (right) is constructed by mapping node attributes from the original network (left) to virtual attribute nodes. Nodes and their corresponding attribute nodes are connected via virtual edges. This makes it convenient to leverage attribute information with state-of-the-art network embedding methods, such as node2vec [10] and struc2vec [24], in diverse network embedding applications.
(a) FANE-sf
(b) FANE-tf
(c) FANE-stf
Figure 2: Illustrations of the Source-Focused (FANE-sf), Target-Focused (FANE-tf), and Source-and-Target-Focused (FANE-stf) strategies in FANE, from top to bottom. The upper-left (green) node and the middle (gray) node represent the previous node and the source node respectively; the rest are target nodes. The panels from left to right show various random-walk scenarios for each of the three strategies.
(a) FANE-sf
(b) FANE-tf
(c) FANE-stf
Figure 3: Visual analysis of FANE's different random-walk strategies with the same dimensionality reduction technique, t-SNE [6]. Node colors represent different attributes (word classes) of English words [21]. Parts (b) and (c), corresponding to strategies 2 and 3, yield a distinct separation between the two attributes.
(a)
(b)
(c)
(d)
Figure 4: Illustration of different values of $r$ in FANE on the Adjnoun dataset [21]. Node colors represent different property attributes of English words. With smaller $r$, the embedding shows more attribute homophily.

Let the constructed network be denoted as $G' = (V', E', W', A')$, where $V'$ is the set of raw nodes and virtual attribute nodes, $E'$ is the set of raw edges plus the virtual edges between nodes and their corresponding attribute nodes, and $A'$ is the same as $A$ except for the node properties of the attribute nodes: each attribute node is associated with an attribute vector in which only its own entry is nonzero. $W'$ includes $W$ together with the weights of the attribute edges, which can be defined as needed. The objective function evolves into:

$$\max_{f'} \sum_{u \in V'} \log \Pr\big(N(u) \mid f'(u)\big) \qquad (2)$$

So the aim is to compute the new mapping function $f': V' \to \mathbb{R}^d$. Interestingly, we get a side effect: network properties are embedded too, so we can also learn features specifically for network properties. This is non-trivial, as the experiments in Section 4 show.

3.3 Context generation

After network construction, we can integrate property information into various state-of-the-art structure-preserving network embedding methods. However, we confront the second key challenge: how to provide flexibility (T3) in various situations, i.e., how FANE lets users obtain structure-preserving or property-preserving feature learning, or both, as needed. Inspired by node2vec [10], we design a new random-walk strategy that allows a continuous transition between attribute-preserving and structure-preserving network embedding.

Given a source node $u$ and a fixed random-walk length $l$, the $i$-th node $c_i$ in the walk, which starts at $c_0 = u$, is generated by the following distribution:

$$P(c_i = x \mid c_{i-1} = v) = \begin{cases} \pi_{vx}/Z & \text{if } (v, x) \in E' \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

where $\pi_{vx}$ is the unnormalized transition probability between nodes $v$ and $x$, and $Z$ is the normalizing constant. Notice that $E'$ includes the constructed attribute edges.

As illustrated in Figure 1, consider a random walk that has just traversed edge $(t, v)$ and now resides at node $v$. If $v$ and a candidate next node $x$ share attribute $a$, there are two additional virtual edges in $G'$, connecting the virtual attribute node $a$ with $v$ and with $x$ respectively. One key question is how to define the transition probabilities for walks involving attribute nodes. In the next section, we exhaustively discuss three possible random-walk strategies and analyze their effectiveness with respect to our goal (T3).

3.4 Random walking strategies

By enumerating the cases, we find three different random-walk strategies in the constructed attributed network $G'$:

  1. Source Focused (FANE-sf). Let $V_A \subset V'$ denote the set of attribute nodes in $G'$, and let $\pi_{vx}$ denote the unnormalized probability of the next step from the source node $v$ to the target node $x$. The probability $\pi_{vx}$ is set to $1/r$ only if the source node $v \in V_A$. In practice, there are four conditions, as shown in Figure 2(a). Formally, the probability of the next step is designed as:

    $$\pi_{vx} = \begin{cases} 1/r & \text{if } v \in V_A \\ \alpha_{pq}(t, x) & \text{otherwise} \end{cases} \qquad (4)$$

    where $d_{tx}$ denotes the shortest-path distance between nodes $t$ and $x$, and

    $$\alpha_{pq}(t, x) = \begin{cases} 1/p & \text{if } d_{tx} = 0 \\ 1 & \text{if } d_{tx} = 1 \\ 1/q & \text{if } d_{tx} = 2 \end{cases} \qquad (5)$$

    where $t$ is the previous node of $v$ in the walk.

  2. Target Focused (FANE-tf). As illustrated in Figure 2(b), the probability is set to $1/r$ only if the target node $x \in V_A$. The probability of the next step is designed as:

    $$\pi_{vx} = \begin{cases} 1/r & \text{if } x \in V_A \\ \alpha_{pq}(t, x) & \text{otherwise} \end{cases} \qquad (6)$$

    where $\alpha_{pq}(t, x)$ is the same as in Equation (5).

  3. Source & Target Focused (FANE-stf). As illustrated in Figure 2(c), the probability is set to $1/r$ if the source node $v \in V_A$ or the target node $x \in V_A$. The probability of the next step is designed as:

    $$\pi_{vx} = \begin{cases} 1/r & \text{if } v \in V_A \text{ or } x \in V_A \\ \alpha_{pq}(t, x) & \text{otherwise} \end{cases} \qquad (7)$$

    Again, $\alpha_{pq}(t, x)$ is the same as in Equation (5).

Under our proposed framework, the random walk must be able to bias toward the attribute nodes in order to be attribute-preserving. In FANE-sf, however, the probability of stepping from a raw source node to an attribute node depends, by Equations (4) and (5), on the values of $p$ and $q$ rather than on $r$. Consequently, it is hard to define the transition probabilities such that the walk is consistently biased toward attribute nodes. So while strategy FANE-sf can be made structure-preserving by adjusting the values of $p$ and $q$, it is unlikely to be attribute-preserving, which contradicts our objectives. Experiments confirm this analysis: as shown in Figure 3, strategy FANE-sf yields little separation between the two subsets of nodes with different attributes.

In strategies FANE-tf and FANE-stf, the probability of stepping to an attribute node is governed by $r$. By adjusting its value, the random walk can be made structure-preserving or attribute-preserving; we discuss the effect of $r$ thoroughly in Section 3.5. This satisfies our objectives. In Figure 3, we can see that by setting $r$ smaller than $p$ and $q$, the biased random walk becomes attribute-preserving: both strategies yield a distinct separation between the two attributes. We adopt the FANE-tf biased random-walk strategy in the following experiments.
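To make the adopted strategy concrete, here is a minimal Python sketch of the FANE-tf bias of Equation (6), under our reading that non-attribute targets keep the node2vec weighting of Equation (5); the helper names and the attr_ naming convention are illustrative, not from the released code.

# Sketch of the FANE-tf transition bias (Equation 6). Attribute-node targets
# are weighted 1/r; all other targets fall back to node2vec's alpha_{pq}.
def is_attr(node):
    # Illustrative convention: attribute nodes are strings prefixed "attr_".
    return isinstance(node, str) and node.startswith("attr_")

def alpha_tf(prev, target, G, p, q, r):
    """Unnormalized weight of stepping to `target`, having arrived from `prev`
    (node2vec's t). For the first step of a walk, pass prev=None."""
    if is_attr(target):
        return 1.0 / r                     # step onto an attribute node
    if prev is None:
        return 1.0                         # first step: no bias yet
    if target == prev:
        return 1.0 / p                     # d_tx = 0: return to previous node
    if G.has_edge(prev, target):
        return 1.0                         # d_tx = 1: stay in the locality
    return 1.0 / q                         # d_tx = 2: move outward

# pi_{vx} = alpha_tf(t, x, G, p, q, r) * w_{vx}; normalizing over v's
# neighbors gives the sampling distribution of Equation (3).

Setting $r$ below $p$ and $q$ makes attribute-node targets the heaviest choices, pulling walks through attribute nodes; this is the attribute-preserving regime seen in Figure 3(b).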

3.5 Attribute Parameter, $r$

While $p$ and $q$ control the likelihood of the walk revisiting local nodes and exploring structurally distant nodes respectively [10], the property parameter $r$ decides the bias between structure- and property-preserving embedding. Let us consider two cases:

  • Case 1: By setting $r$ to be large, the walk is unlikely to move from a source node to the attribute nodes. Thus, the random walk is biased more toward preserving the local and structural information of the network, according to the values of $p$ and $q$.

  • Case 2: By setting $r$ to be small, we increase the probability of walking from the source nodes to the attribute nodes. Thus, nodes sharing similar attribute information are more likely to be linked together through the attribute nodes, and the resulting embedding will be more property-preserving. Figure 4 validates the effectiveness of FANE with different values of $r$ on real datasets: the embedding shows more property homophily as $r$ decreases.

We can see that the hyper-parameter $r$ functions as a slider between structure- and attribute-preserving embedding. In Section 4, we conduct an experiment to elaborate how our method integrates structure-preserving and property-preserving feature learning (T2) and makes a continuous transition between the two (T3).

3.6 Algorithm

The pseudocode of FANE is shown in Algorithm 1. We first construct a property-enhanced network $G'$. By introducing the attribute-node in-out hyperparameter $r$, we can control the balance between structure-preserving and property-preserving random walks. The source code and datasets of this paper can be obtained from https://github.com/GraphWorld/FANE.

Input: Network $G = (V, E, W, A)$; dimensions $d$; walks per node $w$; walk length $l$; context size $k$; return $p$; raw-node in-out $q$; attribute-node in-out $r$
Output: Embedding matrix $f$
$G' = (V', E', W')$ = $G$;
for each attribute $a \in A$ do
      Append node $a$ to $V'$;
      for each node $u \in V$ do
            if $u$ has attribute $a$ then
                  Append edge $(u, a)$ to $E'$;
            end if
      end for
end for
$\pi$ = PreprocessModifiedWeights($G'$, $p$, $q$, $r$);
$G''$ = $(G', \pi)$;
Initialize walks to Empty;
for iter = 1 to $w$ do
      for each node $u \in V'$ do
            walk = faseWalk($G''$, $u$, $l$);
            Append walk to walks;
      end for
end for
$f$ = StochasticGradientDescent($k$, $d$, walks);
return $f$;
Algorithm 1: FANE
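For readers who prefer runnable code, below is a compact sketch of Algorithm 1 in Python, assuming networkx and gensim; the function names, the FANE-tf bias inlined from Equation (6), the plain weighted sampler (instead of alias tables), and the karate-club toy data are all illustrative rather than the released implementation.

import random
import networkx as nx
from gensim.models import Word2Vec

def fane_walks(H, is_attr, num_walks=10, walk_len=80, p=1.0, q=1.0, r=0.5):
    """r-biased random walks on the attribute-augmented graph H (FANE-tf)."""
    walks, nodes = [], list(H.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_len:
                cur = walk[-1]
                nbrs = list(H.neighbors(cur))
                if not nbrs:
                    break
                prev = walk[-2] if len(walk) > 1 else None
                def bias(x):                       # Equation (6)
                    if is_attr(x):
                        return 1.0 / r
                    if prev is None:
                        return 1.0
                    if x == prev:
                        return 1.0 / p
                    return 1.0 if H.has_edge(prev, x) else 1.0 / q
                w = [bias(x) * H[cur][x].get("weight", 1.0) for x in nbrs]
                walk.append(random.choices(nbrs, weights=w)[0])
            walks.append([str(n) for n in walk])
    return walks

# Toy attributed network: the karate club with club membership as one attribute.
G = nx.karate_club_graph()
H = G.copy()
for u, d in G.nodes(data=True):
    H.add_edge(u, f"attr_{d['club']}", weight=1.0)   # virtual attribute edges

walks = fane_walks(H, is_attr=lambda n: str(n).startswith("attr_"))
model = Word2Vec(walks, vector_size=8, window=5, min_count=1, sg=1, epochs=5)
print(model.wv["0"][:4])              # embedding of raw node 0
print(model.wv["attr_Mr. Hi"][:4])    # embedding of an attribute node

Because attribute nodes take part in the walks, model.wv also contains their vectors; this is the side effect noted in Section 3.2 and exploited for attribute exploration in Section 4.5. Sweeping $r$ toward 0 shifts the embedding toward attribute homophily, mirroring Figure 4.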

4 Experiments

The proposed method is flexible enough to embed network datasets from different domains, as discussed below.

4.1 Datasets

We evaluate our method on several datasets from different domains, as listed in Table 1.

  • Adjnoun [21]: Nodes represent the most commonly occurring adjectives and nouns in the novel David Copperfield. Edges connect any pair of words that occur in adjacent positions. The property attributes, adjective and noun, are used as node classification information.

  • WebKB [15]: Nodes and edges represent web pages and citations respectively. Attributes are 0/1-valued word vectors indicating the absence/presence of the corresponding dictionary word.

  • Cora [15]: Nodes represent scientific publications and edges reflect citation relationships. As in WebKB, attributes are 0/1-valued word vectors indicating the absence/presence of the corresponding dictionary word.

  • CiteSeer [25]: Nodes represent scientific publications and edges reveal the citation network. Again, attributes are 0/1-valued word vectors indicating the absence/presence of the corresponding dictionary word.

  • ego-Facebook [18]: Nodes represent Facebook survey participants, and edges represent friend lists. Attributes are anonymized personal information of the participants.

  • ego-Gplus [18]: Nodes represent Google+ users who chose to share their circles; edges likewise represent friend lists. Attributes are the users' personal information.

  • ego-Twitter [18]: Nodes are Twitter users, and edges represent following relationships. Attributes are hashtags or the users themselves.

Name           #Nodes   #Edges    #Attributes   #Labels
adjnoun           112       425             2         -
WebKB             877     1,608         1,703         5
Cora            2,708     5,278         1,433         7
CiteSeer        3,312     4,732         3,703         6
ego-Facebook    4,039    88,234         1,283       191
ego-Gplus       7,856   321,268         2,024        91
ego-Twitter     2,511    37,154         9,073       132
Table 1: Statistics of the datasets
(a) node2vec [10]
(b) struc2vec [24]
(c) FANE with a large $r$
(d) TADW [28]
(e) HSCA [30]
(f) FANE with a small $r$
(g)–(i) FANE with intermediate values of $r$
Figure 5: Illustration of FANE, which is flexible enough to reproduce the behavior of state-of-the-art structure-preserving and attribute-preserving network embedding methods on the Adjnoun dataset [21]. Node shapes and colors represent the property attributes and the clustering results of English words respectively. The structure-preserving methods node2vec (a) and struc2vec (b) show more structure homophily, while the attribute-preserving methods TADW (d) and HSCA (e) show extreme attribute homophily. Our method provides flexibility in integrating the two: it can learn structure-preserving features (c) like node2vec and struc2vec, and attribute-preserving features (f) like TADW and HSCA. Moreover, FANE transitions smoothly between structure homophily and attribute homophily (g–i).

4.2 Baselines

We compare FANE with several state-of-the-art network embedding methods, using the original authors' implementations.

  • node2vec [10]: This approach interpolates between breadth-first and depth-first sampling in random walks and uses the Skip-Gram algorithm to learn node representation vectors.

  • struc2vec [24]: This approach first encodes the vertex structural role similarity into a multilayer network. The weights of edges at each layer are determined by the structural role difference at the corresponding scale.

  • TADW [28]: This approach extends DeepWalk [22] by encoding node text features into the matrix factorization.

  • HSCA [30]: This approach simultaneously integrates homophily, structural context, and vertex content to learn effective network representations.

Figure 6: Evaluation results (higher is better) on WebKB, Cora and Citeseer datasets.

4.3 Case Study: Adjnoun

Unlike existing methods, FANE realizes flexible structure homophily and attribute homophily. First, we empirically compare our embedding results against the structure homophily methods node2vec ($p = q = 1$) and struc2vec, and the attribute homophily methods TADW ($d$ = 8, textRank = 20, lambda = 0.2, train_ratio = 0.5, and a further parameter set to 5) and HSCA ($d$ = 8, textRank = 20, lambda = 0.2, train_ratio = 0.5, mu = 0.1, and the same parameter set to 5), as shown in Figure 5. Notice that we set the embedding dimension $d$ and textRank low for TADW and HSCA given the small node count of the Adjnoun dataset.

To begin with, our method can yield either mainly structural homophily or mainly attribute homophily by tuning the value of $r$. In Figures 5(a) to 5(c), we see that by setting $r$ to be large, FANE yields results similar to those of node2vec and struc2vec in terms of preserving structural relationships. In addition, in Figures 5(d) to 5(f), when $r$ is set to be small, the embedding of FANE is property-preserving like those of TADW and HSCA. Moreover, FANE can extract features reflecting both structure and attribute homophily by adjusting the hyper-parameter $r$, as shown in Figures 5(g) to 5(i) (T2): with smaller $r$, the embedding integrates relatively more attribute information, while with larger $r$, relatively more structural information is preserved. To further explain the effect of FANE in integrating structure and attribute homophily, take Figure 5(h) as an example: the resulting classes (shown with different colors) are mostly consistent with the property attributes (shown with different shapes). Notice that one node (a green circle), representing the adjective "first", is still classified with the nouns, because this word is closely connected with nouns in the network structure. This shows that FANE preserves structure homophily even in this extreme setting (T3).

The Word network is an effective example demonstrating the functionality of our proposed method. In the following section, we will conduct additional experiments to evaluate the effectiveness of our embedding method on network classification.

(a) node2vec
(b) struc2vec
(c) TADW
(d) HSCA
(e) FANE
Figure 7: Visualization of Cora dataset. Different colors correspond to different classes.

4.4 Classification (Quantitative Analysis)

Network classification is one of the main applications of network embedding. We test FANE on several popular network datasets with ground truth: WebKB, Cora, and Citeseer. We use a Support Vector Machine (SVM) [12] for classification and let the training ratio vary from 10% to 90%. The other parameters are set as follows.

For node2vec, we did a grid search over $p \in \{0.25, 0.50, 1.0, 2.0, 4.0\}$ and $q \in \{0.25, 0.50, 1.0, 2.0, 4.0\}$ to find the combination that yields the best results. The chosen $(p, q)$ values are (0.25, 2.0) on WebKB, (4.0, 2.0) on Cora, and (4.0, 0.25) on Citeseer. For struc2vec, we use the default parameters from its code. For TADW, as instructed in its code, we set textRank = 200 and lambda = 0 for Cora and Citeseer, with a further parameter set to 5; for WebKB, we tried all values of that parameter appearing in its paper and finally set it to 15, which yields TADW's best results. Similarly, for HSCA we set textRank = 200, lambda = 0.2, mu = 0.1, and the same parameter to 5 for Cora and Citeseer, following the instructions given in the code; for WebKB, we tried all common values of mu and of that parameter from its paper and finally decided on mu = 0.1 and 15. We set the embedding dimension $d$ = 8 for all benchmark methods that expose it, including FANE.

Datasets    p     q     r     C
WebKB       1.0   0.50  2.0   1.0
Cora        3.0   0.15  2.0   0.1
Citeseer    4.0   0.30  6.0   0.2
Table 2: FANE parameters for the WebKB, Cora, and Citeseer datasets

For FANE, we set the parameter values per dataset as shown in Table 2. Note that $C$ is the penalty parameter in SVM training. Figure 6 shows the Micro-F1 and Macro-F1 [16] results on the datasets. There are three main observations:

(a) ego-Facebook
(b) ego-Facebook attributes
(c) ego-Facebook attributes part
(d) ego-GPlus
(e) ego-GPlus attributes
(f) ego-GPlus attributes part
(g) ego-Twitter
(h) ego-Twitter attributes
(i) ego-Twitter attributes part
Figure 8: Visualization and exploration of the embedding results of the raw networks and their attributes with FANE. Left column: embedding results of ego-Facebook, ego-Gplus and ego-Twitter, from top to bottom; middle column: embedding results of the attributes of the corresponding datasets; right column: enlarged parts of the attribute embeddings, marked with dashed black rectangles. It is worth noticing that the attribute nodes also separate into different groups, indicating that those attributes are closely related. (Limited by space, please magnify the corresponding part to see the labels.)
  1. FANE consistently outperforms all benchmark methods on all datasets and at all training ratios. On Cora, FANE achieves on average 10% to 20% better performance than HSCA and node2vec, the second-best methods, under the same training ratio. It is worth mentioning that FANE yields high accuracy even at very low training ratios: on WebKB and Cora, our method with 10% training data matches or even beats the other benchmark methods with 90% training data.

  2. FANE's classification performance increases significantly as the number of training samples increases, whereas the benchmark methods generally plateau after a certain number of training samples. For example, on Cora the classification accuracy of the benchmark methods starts to fluctuate beyond a 60% training ratio; on Citeseer, the fluctuation starts already at 10% training data. In contrast, FANE still yields a significant increase in classification accuracy as the training samples increase.

  3. FANE shows an even larger improvement in Macro-F1 on WebKB and Cora. As shown in Figure 6, FANE outperforms HSCA, the second-best method, by nearly 20% on average. Macro-F1 treats each class equally and averages the per-class results, so a high Macro-F1 implies that FANE classifies nodes accurately in every class.

Figure 7 shows a visual illustration of the classification results, where the embeddings are clustered with K-Means. As we can see, the different classes are reasonably well separated with FANE. Furthermore, recall that our main purpose is to provide a flexible framework realizing the transition and integration between structure-preserving and attribute-preserving embedding: the parameter $r$ is free to be decreased or increased depending on which information we want to preserve.
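For reference, the quantitative protocol above can be sketched with scikit-learn as follows; the linear SVM, the random placeholder embeddings, and the parameter values are illustrative stand-ins for the real pipeline.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def evaluate(X, y, train_ratio, C=1.0, seed=0):
    """Train an SVM on a fraction of the embeddings, report Micro-/Macro-F1."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_ratio, random_state=seed, stratify=y)
    pred = LinearSVC(C=C).fit(X_tr, y_tr).predict(X_te)
    return (f1_score(y_te, pred, average="micro"),
            f1_score(y_te, pred, average="macro"))

# Placeholder data shaped like Cora: 2,708 nodes, d = 8, 7 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(2708, 8))
y = rng.integers(0, 7, size=2708)
for ratio in (0.1, 0.5, 0.9):
    micro, macro = evaluate(X, y, ratio)
    print(f"train ratio {ratio:.0%}: micro-F1 {micro:.3f}, macro-F1 {macro:.3f}")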

4.5 Visualization (Qualitative Analysis)

Network visualization has been widely used in network interaction and analysis. It is a powerful tool for revealing the content of a network in an easily interpretable way by finding patterns, marking connections, and showing clustering results. Beyond the visual analysis of networks offered by existing methods, our method is inherently suited to visually analyzing network attributes.

Visual analysis helped us design, evaluate, and explore our proposed framework. Specifically, Figures 3 and 4 visualize our different random-walk strategies and their effects on the embedding, so we can directly see how changing strategies or parameters alters the results; this was of great help during the design process. Moreover, Figures 5 and 7 provide another viable way to evaluate our results: we can straightforwardly view how the embedded nodes are mixed, separated, and classified into different groups.
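The 2-D views in Figures 3-5 and 7-8 follow the usual recipe: project the $d$-dimensional embeddings with t-SNE [6] and color nodes by attribute or cluster label. A minimal sketch, assuming scikit-learn and matplotlib, with illustrative names:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embedding(X, labels, title=""):
    """Project embeddings X (n x d) to 2-D and color by categorical labels."""
    xy = TSNE(n_components=2, random_state=0).fit_transform(X)
    for c in np.unique(labels):
        m = labels == c
        plt.scatter(xy[m, 0], xy[m, 1], s=12, label=str(c))
    plt.title(title)
    plt.legend()
    plt.show()

# e.g. plot_embedding(embeddings, word_classes, title="FANE, small r")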

Figure 8 also offers insights worth exploring. Figures 8(a), 8(d), and 8(g) show the embedding results for ego-Facebook, ego-Gplus, and ego-Twitter; as we can see, nodes are grouped into different areas. Recall that FANE treats attributes as nodes during the embedding process. Therefore, we can adopt the same visual analysis techniques to explore the attributes themselves in a 2-D figure, as shown in Figures 8(b), 8(e), and 8(h). The visualized patterns of those attributes are interesting to discuss.

For the attributes in Facebook and Twitter, we find that the attributes also separate into groups, just like the embedded nodes in Figures 8(a) and 8(g). We therefore enlarge the small selected region in Figure 8(b) to see whether we can uncover relationships between the attributes. In Figure 8(c), attributes 163 (work; end_date), 65 (education; year), 24 (education; school), 164 (work; start_date), 148 (work; employer), 214 (education; concentration), 213 (education; concentration), and 143 (work; employer) are closely grouped together (213 and 214 represent two different concentrations, i.e. majors, while 143 and 148 represent two different employers). This is not surprising: one's major is related to one's school and influences which type of enterprise one later joins, and graduation years likewise affect the years one starts or ends a job. Even though all attributes in the Facebook dataset are anonymized, it is interesting to see that attributes are grouped according to their meaning.

Nevertheless, the embedded attribute pattern for Google+ differs from those of Facebook and Twitter: most attributes are grouped together, with only a few placed far away. We enlarge some of the isolated attributes to see whether attributes in those isolated areas are still grouped by meaning. There are four attributes in the selected area of Figure 8(f): 893 (job_title: music), 298 (job_title: dj), 872 (job_title: dj), and 238 (job_title: producer) (893 and 298 are two different attributes with the same type of job). There is an intense correlation among working as a DJ, working in the field of music, and working as a producer. This example confirms that attributes are grouped by meaning even in this isolated area.

Figure 9: Parameter sensitivities of the embedding dimension $d$ and the penalty parameter $C$ in FANE.

We keep exploring the attribute information for Twitter. Notice that, unlike those of Facebook or Google+, attributes in Twitter are hashtags and users. Consequently, we noticed patterns in which hashtags and users cluster according to their contents and personal interests. In Figure 8(i), all attributes are related to the BBC, in terms of BBC News and BBC Sport. For example, 7004 (@bbcsport) is the official BBC Sport channel on Twitter, 7030 (@georgeyboy) is a famous British sports broadcaster who used to work for BBC 5 Live, and 6990 (@bbc5live) is the official channel of BBC 5 Live. Thus, different types of attributes, personal information or the users themselves, can be embedded and analyzed as long as there exists some shared information between them.

The above examples show that it is possible to uncover nontrivial information about attributes if we also embed and visualize them. Thus, our proposed framework, FANE, is not limited to flexibly integrating the attribute and structural information of nodes; it can be extended to reflect more information about the attributes themselves.

4.6 Parameter Sensitivity

To test the sensitivity to parameters $d$ and $C$, we fixed the values of $p$, $q$, and $r$ for the WebKB, Cora, and Citeseer datasets. We let the embedding dimension $d$ vary from 20 to 100 and the penalty parameter $C$ vary from 0.1 to 1.0. The results are shown in Figure 9, where the x-axis represents the values of $C$ and the different lines represent the values of $d$.

As we can see, the testing accuracy is relatively stable with respect to the penalty parameter $C$ in most cases when the embedding dimension is kept unchanged. However, in some cases $C$ heavily impacts the result: on WebKB, the testing result fluctuates by over 15% as the value of $C$ changes. Moreover, the embedding dimension $d$ has a relatively high impact on the testing accuracy in Figure 9. Overall, our testing results are relatively stable with respect to the value of $C$ but are heavily impacted by the value of $d$.

4.7 Scalability

To evaluate the scalability of FANE, we learn representations from Erdos-Renyi graphs of increasing node counts at an average degree of 10. As we can see in Figure 10, the computational time scales up linearly with the number of nodes. Recall that $|V|$ and $|A|$ stand for the numbers of nodes and attributes respectively. This result shows that our proposed framework is scalable with respect to the number of nodes.

Figure 10: Scalability of FANE on Erdos-Renyi network.

Since we treat attributes as nodes during the embedding process, we also test scalability with respect to the number of attributes per node. Figure 10 likewise plots the computational time versus the number of attribute nodes. Based on an Erdos-Renyi network, we fix the number of nodes to 1,000 and increase the number of attributes per node. As we can see, the computational time also grows linearly with the number of attribute nodes. These two experiments confirm that our proposed framework is scalable in both the number of nodes and the number of attributes; thus our method can handle large-scale network embedding in a controllable amount of time.
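The timing setup can be reproduced along the following lines; this is a sketch that times only the walk-generation stage (the linear-time component), assuming networkx, with placeholder node counts and scaled-down walk settings rather than the paper's exact range.

import random
import time
import networkx as nx

def simple_walks(G, num_walks=2, walk_len=40):
    """Uniform random walks; enough to exhibit the O(|V|) growth in time."""
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append(walk)
    return walks

for n in (1_000, 10_000, 100_000):                 # placeholder sizes
    G = nx.gnm_random_graph(n, 5 * n, seed=0)      # m = 5n edges -> avg degree 10
    t0 = time.perf_counter()
    simple_walks(G)
    print(f"n={n:>7}: {time.perf_counter() - t0:.1f}s")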

5 Conclusion

In this paper, we proposed an attributed network embedding framework that flexibly integrates structure information and attribute information. It can learn features based on structure, attributes, or both, and provides a smooth transition between attribute-preserving and structure-preserving embedding.

Experiments confirm that our proposed method outperforms the listed state-of-the-art methods on network classification. To the best of our knowledge, FANE is the first method to provide a flexible adjustment between attribute-preserving and structure-preserving embedding. Under our framework, we can actively intervene in the embedding process to determine which type of information, or which kind of integration, we want. Moreover, we provide a visual analysis approach to design, optimize, and evaluate our method; this intuitive approach is non-trivial in network analysis and interaction.

In this paper, we restricted our discussion to undirected attributed graphs, but our method can easily be extended to directed attributed graphs. In addition, we assumed that every attribute has the same importance; however, the attribute parameter $r$ could be extended to reflect the relative importance of different attributes. Moreover, since we treat attributes as normal nodes, it would be interesting to process attributes further, e.g., by classification or clustering, which may be useful for problems such as attribute compression, an important issue when processing networks with thousands of attributes.

References

  • [1] C. C. Aggarwal and H. Wang. A survey of clustering algorithms for graph data. In Managing and Mining Graph Data, pages 275–301. Springer, 2010.
  • [2] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network flows - theory, algorithms and applications. Prentice Hall, 1993.
  • [3] H. Cai, V. W. Zheng, and K. C. Chang. A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering, 30(9):1616–1637, 2018.
  • [4] H. Cheng, Y. Zhou, and J. X. Yu. Clustering large attributed graphs: A balance between structural and attribute similarities. TKDD, 5(2):12:1–12:33, 2011.
  • [5] P. Cui, X. Wang, J. Pei, and W. Zhu. A survey on network embedding. CoRR, abs/1711.08752, 2017.
  • [6] L. van der Maaten and G. E. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
  • [7] L. Du, Z. Lu, Y. Wang, G. Song, Y. Wang, and W. Chen. Galaxy network embedding: A hierarchical community structure preserving approach. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 2079–2085, 2018.
  • [8] H. Gao and H. Huang. Deep attributed network embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 3364–3370, 2018.
  • [9] P. Goyal and E. Ferrara. Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems, 151:78–94, 2018.
  • [10] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd International Conference on Knowledge Discovery and Data Mining, pages 855–864, 2016.
  • [11] W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Engineering Bulletin, 40(3):52–74, 2017.
  • [12] M. A. Hearst, S. T. Dumais, E. Osuna, J. Platt, and B. Scholkopf. Support vector machines. IEEE Intelligent Systems and their Applications, 13(4):18–28, July 1998.
  • [13] X. Huang, J. Li, and X. Hu. Accelerated attributed network embedding. In Proceedings of the 2017 SIAM International Conference on Data Mining, pages 633–641, 2017.
  • [14] L. Liao, X. He, H. Zhang, and T. Chua. Attributed social network embedding. CoRR, abs/1705.04969, 2017.
  • [15] Q. Lu and L. Getoor. Link-based classification. In Proceedings of the 20th International Conference on Machine Learning, pages 496–503, 2003.
  • [16] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to information retrieval. Cambridge University Press, 2008.
  • [17] P. V. Marsden and N. E. Friedkin. Network studies of social influence. Sociological Methods & Research, 22(1):127–151, 1993.
  • [18] J. J. McAuley and J. Leskovec. Learning to discover social circles in ego networks. In The Twenty-Fifth Annual Conference on Neural Information Processing Systems, pages 548–556, 2012.
  • [19] M. McPherson, L. Smith-Lovin, and J. M. Cook. Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1):415–444, 2001.
  • [20] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013.
  • [21] M. E. J. Newman. Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3):036104, 2006.
  • [22] B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: online learning of social representations. In Proceedings of the 20th ACM International Conference on Knowledge Discovery and Data Mining, pages 701–710, 2014.
  • [23] M. J. Rattigan, M. E. Maier, and D. D. Jensen. Graph clustering with network structure indices. In Proceedings of the 24th International Conference on Machine Learning, pages 783–790, 2007.
  • [24] L. F. R. Ribeiro, P. H. P. Saverese, and D. R. Figueiredo. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 385–394, 2017.
  • [25] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–106, 2008.
  • [26] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. LINE: large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077, 2015.
  • [27] L. Tao and Y. Zhao. Multi-way graph partition by stochastic probe. Computers & Operations Research, 20(3):321–347, 1993.
  • [28] C. Yang, Z. Liu, D. Zhao, M. Sun, and E. Y. Chang. Network representation learning with rich text information. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pages 2111–2117, 2015.
  • [29] Z. Yang, J. Guo, K. Cai, J. Tang, J. Li, L. Zhang, and Z. Su. Understanding retweeting behaviors in social networks. In Proceedings of the 19th ACM Conference on Information and Knowledge Management, pages 1633–1636, 2010.
  • [30] D. Zhang, J. Yin, X. Zhu, and C. Zhang. Homophily, structure, and content augmented network representation learning. In IEEE 16th International Conference on Data Mining, pages 609–618, 2016.
  • [31] D. Zhang, J. Yin, X. Zhu, and C. Zhang. Network representation learning: A survey. CoRR, abs/1801.05852, 2018.
  • [32] D. Zhou, S. Niu, and S. Chen. Efficient graph computation for node2vec. CoRR, abs/1805.00280, 2018.
  • [33] Y. Zhou, H. Cheng, and J. X. Yu. Graph clustering based on structural/attribute similarities. PVLDB, 2(1):718–729, 2009.
  • [34] D. Zhu, P. Cui, D. Wang, and W. Zhu. Deep variational network embedding in wasserstein space. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2827–2836, 2018.