Feature Propagation on Graph: A New Perspective to Graph Representation Learning

by   Biao Xiang, et al.
Ant Financial

We study feature propagation on graphs, an inference process involved in graph representation learning tasks. It spreads features over the whole graph up to the $t$-th order, thereby expanding each node's features with those of its neighborhood. The process has been successfully adopted in graph embedding and graph neural networks; however, few works have studied the convergence of feature propagation. Without convergence guarantees, it may lead to unexpected numerical overflows and task failures. In this paper, we first define the concept of feature propagation on graphs formally, and then study the conditions under which it converges to an equilibrium state. We further link feature propagation to several established approaches such as node2vec and structure2vec. At the end of the paper, we extend existing approaches from representing nodes to representing edges (edge2vec) and demonstrate its application to fraud transaction detection in a real-world scenario. Experiments show that it is quite competitive.




1 Introduction

In this paper, we study feature propagation on graphs, which forms a building block in many graph representation learning tasks. Typically, graph representation learning aims to learn a function that somehow utilizes the additional graph structure, in contrast to traditional learning tasks that consider each sample independently. Graph representation approaches [Grover and Leskovec2016, Dai et al.2016, Kipf and Welling2016, Hamilton et al.2017] have proven successful on citation networks [Sen et al.2008], biological networks [Zitnik and Leskovec2017], and transaction networks [Liu et al.2017], all of which can be formulated as graph structures.

One major process in graph representation learning involves feature propagation over the graph up to the $t$-th order. Existing approaches define various propagation manners based on, for example, adjacency matrices [Belkin and Niyogi2002], $k$-order adjacency matrices [Cao et al.2015], or expected co-occurrence matrices obtained by conducting random walks [Perozzi et al.2014, Grover and Leskovec2016]. Recently, graph convolutional networks have shown promising results on various datasets. They rely either on graph Laplacians [Kipf and Welling2016] or on carefully designed operators, such as mean or max operators over the adjacency matrix [Hamilton et al.2017].

However, few graph representation learning works study the propagation process used in their inference procedures. For instance, GCN [Kipf and Welling2016] and structure2vec [Dai et al.2016] implicitly involve this procedure in the form

$$\mu^{(t)} = \sigma\big(\mathcal{A}(G)\,\mu^{(t-1)}\,W\big),$$

where $\mu^{(t)}$ denotes the learned embeddings of nodes in a vector space at the $t$-th iteration, and $\mathcal{A}(G)$ defines an operator on the adjacency matrix of graph $G$. This propagation process is parameterized by $W$. The iterative process essentially propagates and spreads each node $v_i$'s signals to $v_i$'s $t$-th step neighborhood over the graph. Without a careful design of the process under certain conditions, the propagation runs the risk of numerical issues.

In this paper, we are interested in the conditions under which the propagation process converges to an equilibrium state [Langville and Meyer2006], which we hope can aid the understanding of the existing literature in this domain: (1) we first formulate a generic framework of feature propagation on graphs; (2) we connect classic approaches such as node2vec [Grover and Leskovec2016], a random walk based graph embedding approach, and structure2vec [Dai et al.2016], a graph convolution based approach, to our feature propagation framework; (3) we study the convergence conditions of feature propagation to an equilibrium state using the theory of M-matrices [Plemmons1977]; the resulting conditions are quite simple and easy to enforce by gradient projection; (4) we further extend existing node representation approaches to edge representation, i.e., we propose "edge2vec" and show its application to fraud transaction detection in a real-world transaction network, which is essentially important in any financial system. More importantly, "edge2vec" can deal with multiple links between two nodes (e.g., transactions between two accounts over a time period), which is essentially different from traditional settings such as recommender systems (a user can have only one rating on an item, i.e., only one link between two nodes).

This paper is organized as follows. In section 2, we set up the preliminaries of this paper and propose a pair of general definitions for feature expansion and feature propagation in a unified learning framework. In section 3, we discuss a typical way of feature propagation and propose sufficient conditions for its convergence. In section 4, we explore the connection between feature propagation and two types of graph representation approaches. We finally extend node embedding to edge embedding and demonstrate its effectiveness by conducting experiments on fraud transaction detection in sections 5 and 6, respectively.

2 Preliminary

Suppose the graph is $G = (V, E)$, where $V$ is the node set, $E$ is the edge set, and the adjacency matrix is $A$, where $A_{ij} = 1$ if $(v_i, v_j) \in E$ and 0 otherwise. $D$ is the degree matrix of graph $G$, where $D_{ii}$ is the degree of node $v_i$. The feature set for nodes is $X = \{x_i\}$, where $x_i$ corresponds to node $v_i$, and we denote the corresponding feature matrix as $X$. If edge features exist, we denote the feature set for edges as $Z = \{z_k\}$, where $z_k$ corresponds to edge $e_k$, and denote the corresponding feature matrix as $Z$. If the labels live on nodes, we denote the label vector as $y$; if the labels live on edges, we denote the label vector as $y_E$.

For traditional learning tasks (with or without graph topology), the typical way to build the fitting model is

$$\hat{y} = f(x; \theta).$$

However, this only utilizes the features of the node or edge itself. From a context-aware perspective, the features of a neighbor, or of the neighbor's neighbor, may also be useful. For example, in a social network, suppose one person did not fill in her age; it may be hard to obtain this feature if we use only her own features, but if we utilize her neighbors' features, we may estimate it by averaging her neighbors' ages or taking their median. We denote the expanded features as $\tilde{X}$ and $\tilde{Z}$, according to the raw features $X$ and $Z$ respectively, and call the process of expanding from $X$ and $Z$ to $\tilde{X}$ and $\tilde{Z}$ feature expansion. We define this concept as follows.

Definition 1

(Feature Expansion). Suppose the raw features of graph $G$ are $X$ and $Z$, corresponding to nodes and edges respectively. If

$$(\tilde{X}, \tilde{Z}) = g(X, Z; G),$$

then we call $\tilde{X}$ and $\tilde{Z}$ expanded features, and call the function $g$ the feature expansion function.

With expanded features, the fitting model becomes

$$\hat{y} = f\big(g(x, z; G, \theta_1); \theta_2\big),$$

which contains two sets of parameters, $\theta_1$ and $\theta_2$, where $\theta_1$ parameterizes the feature expansion and $\theta_2$ fits the final label. The learning framework with feature expansion is as follows:

  • Initialize parameters $\theta_1$ and $\theta_2$;

  • Expand the raw features $X, Z$ to $\tilde{X}, \tilde{Z}$ by the expansion function $g$;

  • Compute the prediction $\hat{y} = f(\tilde{x}; \theta_2)$;

  • Back-propagate the loss to update $\theta_1$ and $\theta_2$;

  • Repeat steps 2-4 until the loss is minimized.
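As an illustration of the expansion step above, the missing-age example from the previous paragraph can be sketched as follows (the function name and the toy graph are our own, not the paper's):

```python
# A minimal sketch of feature expansion on a toy graph: a missing node
# attribute is estimated by averaging the attribute over the node's neighbors.

def expand_by_neighbor_mean(features, adjacency):
    """Return expanded features: missing entries (None) are filled with
    the mean of the neighbors' known values; known entries are kept."""
    expanded = {}
    for node, value in features.items():
        if value is not None:
            expanded[node] = value
            continue
        known = [features[nb] for nb in adjacency[node] if features[nb] is not None]
        expanded[node] = sum(known) / len(known) if known else None
    return expanded

# Toy social network: node "c" did not fill in her age.
adjacency = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
ages = {"a": 30.0, "b": 26.0, "c": None, "d": 28.0}
print(expand_by_neighbor_mean(ages, adjacency)["c"])  # mean of 30, 26, 28 -> 28.0
```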

On a graph, feature expansion usually proceeds via the graph topology: the features of a node or edge are expanded by its neighbors up to the $t$-th order. Since this expansion relies on propagating features through the graph topology, we call this process feature propagation, defined as follows:

Definition 2

(Feature Propagation). Suppose the raw features of graph $G$ are $X$ and $Z$, corresponding to nodes and edges respectively. If, for each node $v_i$ (and similarly for each edge), the expanded feature is computed only from the raw feature and the expanded features of the direct neighbors, i.e.,

$$\tilde{x}_i = g\big(x_i, \{\tilde{x}_j : v_j \in N(v_i)\}\big),$$

then we call $\tilde{x}_i$ (or $\tilde{z}_k$) a propagation-expanded feature, and call the function $g$ the feature propagation function.

Although in feature propagation each node or edge only takes advantage of its direct neighbors' information, it can still gather information from farther away through the iterative application of Definition 2.

In this section, we proposed general definitions for feature expansion and feature propagation on graphs, together with a learning framework based on feature expansion. In the next section, we discuss a typical way of feature propagation, which has a strong connection with recently popular graph representation learning methods.

3 A Typical Way for Feature Propagation

The typical way to expand a node's features by propagation is the following generalization of the PageRank equation [Page et al.1999, Xiang et al.2013]:

$$\tilde{x}_i = x_i + W \sum_{v_j \in N(v_i)} \tilde{x}_j, \qquad (3)$$

where $N(v_i)$ is the neighbor set of node $v_i$ and $W$ holds the parameters of Eq. (3) (thus, the dimension of $W$ is $d \times d$). For convenience, we call $W$ the propagation matrix in this paper. This group of equations can be rewritten in matrix form as

$$\tilde{X} = X + A\,\tilde{X}\,W^{\top}. \qquad (4)$$

Breaking up the group of equations by vectorization, we have

$$\mathrm{vec}(\tilde{X}) = \mathrm{vec}(X) + (W \otimes A)\,\mathrm{vec}(\tilde{X}). \qquad (5)$$

Let $\tilde{u} = \mathrm{vec}(\tilde{X})$, $u = \mathrm{vec}(X)$, and

$$B = I - W \otimes A, \qquad (6)$$

so that Eq. (5) can be rewritten as $B\tilde{u} = u$. If the matrix $B$ is invertible, we get

$$\mathrm{vec}(\tilde{X}) = B^{-1}\,\mathrm{vec}(X). \qquad (7)$$

However, $B$ is not invertible in general, so we must impose some conditions to make it so. From Eq. (6), we can see that only the propagation matrix $W$ affects the invertibility of $B$.

From the theory of M-matrices, $B$ will be invertible if it satisfies the following two conditions:

  • $W \otimes A \geq 0$, which, by Eq. (6), is equivalent to requiring $W$ to be a nonnegative matrix (since $A$ is nonnegative);

  • $\rho(W \otimes A) < 1$, which, with the derivation in the footnote¹ (¹denoting by $\rho$ the spectral radius, from Eq. (6) we have $\rho(W \otimes A) \leq \|W\|_\infty \|A\|_\infty = \|W\|_\infty\, d_{\max}$), is implied by $\|W\|_\infty < 1 / d_{\max}$.

However, condition 2 is very demanding: if there exists a node with a very large degree, the row sums of $W$ will have to be very small. To solve this issue, we can replace the matrix $A$ in Eq. (4) with the row-normalized $D^{-1}A$. Then Eq. (4) changes to

$$\tilde{X} = X + D^{-1}A\,\tilde{X}\,W^{\top}. \qquad (8)$$


Diving into each node $v_i$, we have

$$\tilde{x}_i = x_i + \frac{1}{d_i}\, W \sum_{v_j \in N(v_i)} \tilde{x}_j. \qquad (9)$$

Under this feature propagation process, condition 2 above changes to $\|W\|_\infty < 1$. Comparatively, this condition is much easier to guarantee.

Summing up the above derivations, we obtain the following theorem.

Theorem 1

For the feature propagation method of Eq. (8) or Eq. (9), if the propagation matrix $W$ satisfies the following conditions, the propagation process will converge.

  • Condition 1: $W$ is nonnegative.

  • Condition 2: $\|W\|_\infty < 1$, i.e., every row sum of $W$ is less than 1.

Theorem 1 gives a pair of sufficient conditions that guarantee the convergence of feature propagation, but they are not necessary conditions. When the propagation matrix satisfies the conditions in Theorem 1, the feature propagation process of Eq. (8) will converge; otherwise, the expanded features may explode, which has been confirmed by practical experience.
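Theorem 1 can be checked numerically. The sketch below is our own (one-dimensional features, so the propagation matrix $W$ reduces to a single weight `w`): it iterates the degree-normalized update of Eq. (9); with `w = 0.5` the iteration settles to a fixed point, while `w = 1.5` violates condition 2 and the values explode.

```python
# Scalar-feature sketch of the propagation of Eq. (9):
#   x~_i = x_i + (w / deg(i)) * sum over neighbors j of x~_j
# Theorem 1 (scalar case): the iteration converges whenever 0 <= w < 1.

def propagate(x, adjacency, w, iters=200):
    cur = dict(x)
    for _ in range(iters):
        cur = {i: x[i] + w * sum(cur[j] for j in adjacency[i]) / len(adjacency[i])
               for i in x}
    return cur

adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
x = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}

ok = propagate(x, adjacency, w=0.5)             # satisfies Theorem 1
bad = propagate(x, adjacency, w=1.5, iters=60)  # violates condition 2
print(ok[0] < 10.0, bad[0] > 1e6)               # converged vs. exploded
```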

4 Relationship to Graph Representation Learning

Recent years have seen a surge of research on graph representation and node embedding. These works could be roughly categorized into two types: 1) embeddings with graph structure only [Perozzi et al.2014, Grover and Leskovec2016, Abu-El-Haija et al.2017], and 2) embeddings with both structure and features (or attributes) [Kipf and Welling2016, Dai et al.2016, Hamilton et al.2017]. In this section, we discuss the relationship between feature propagation and graph representation.

4.1 With Graph Structure Only

For the typical feature propagation of Eq. (8), let each node's feature be a one-hot vector² (i.e., $X = I$; ²a one-hot feature means the node carries no feature but its identity), take the propagation weight as a scalar $\beta$ together with a randomly initialized matrix $W_0$ (they must satisfy the two conditions in Theorem 1), and denote $P = D^{-1}A$³ (³traditionally, $P$ is called the transition matrix, and its rows sum to 1). Then Eq. (8) becomes

$$\tilde{X} = W_0 + \beta P \tilde{X}. \qquad (10)$$

Substituting this equation into its own right-hand side recursively, we get

$$\tilde{X} = \sum_{t=0}^{\infty} \beta^{t} P^{t}\, W_0. \qquad (11)$$
Let us denote

$$\Pi = \sum_{t=0}^{\infty} \beta^{t} P^{t}.$$

Because $\beta < 1$, this infinite series converges. Approximately, $P^t$ is the $t$-step transition probability matrix between any pair of nodes. Thus, $\Pi$ is the weighted sum of the $t$-step transition matrices with weights $\beta^t$, and we call $\Pi$ the proximity matrix. Its entry $\Pi_{ij}$ depicts the (weighted) transition probability from node $v_i$ to node $v_j$ by 0 steps, 1 step, up to arbitrarily many steps, and its row $\Pi_{i\cdot}$ depicts the transition probabilities from node $v_i$ to every node in the graph. Thus, if nodes $v_i$ and $v_j$ are close to each other in the graph, $\Pi_{i\cdot}$ and $\Pi_{j\cdot}$ will be close too. From Eq. (11), we have

$$\tilde{x}_i - \tilde{x}_j = (\Pi_{i\cdot} - \Pi_{j\cdot})\, W_0. \qquad (12)$$

If node $v_i$ is close to node $v_j$ in the graph (which means $\Pi_{i\cdot}$ is close to $\Pi_{j\cdot}$⁴), then $\tilde{x}_i - \tilde{x}_j$ will be close to 0 no matter how $W_0$ is initialized. (⁴$\Pi_{ii}$ is necessarily larger than $\Pi_{ji}$, and $\Pi_{jj}$ larger than $\Pi_{ij}$, which impacts the comparison between $\Pi_{i\cdot}$ and $\Pi_{j\cdot}$; a better way is to adjust the diagonal of $\Pi$ accordingly.) In [Abu-El-Haija et al.2017], the authors revisited DeepWalk [Perozzi et al.2014] and GloVe [Pennington et al.2014] and derived the proximity matrices implied by those two methods. Compared with them, the major difference between our proximity matrix and theirs is the decay weight $\beta^t$ on $P^t$, and our weight is as reasonable as theirs. Thus, $\tilde{X} = \Pi W_0$ is a reasonable embedding of the first type.
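The proximity matrix can be computed by truncating the series at a finite number of steps. The sketch below uses a toy two-community graph of our own (`beta` and the truncation length are arbitrary choices) and checks that rows of $\Pi$ for nodes in the same community are closer than rows across communities:

```python
# Truncated proximity matrix Pi = sum_{t=0}^{T} beta^t P^t, with P = D^-1 A.

def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def proximity(adjacency, beta, steps):
    n = len(adjacency)
    P = [[(1.0 / len(adjacency[i]) if j in adjacency[i] else 0.0)
          for j in range(n)] for i in range(n)]               # transition matrix
    Pi = [[float(i == j) for j in range(n)] for i in range(n)]  # t = 0 term: I
    Pt = [row[:] for row in Pi]
    for t in range(1, steps + 1):
        Pt = matmul(Pt, P)                                    # P^t
        for i in range(n):
            for j in range(n):
                Pi[i][j] += beta ** t * Pt[i][j]
    return Pi

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2, 3).
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
Pi = proximity(adjacency, beta=0.5, steps=20)
print(dist(Pi[0], Pi[1]) < dist(Pi[0], Pi[4]))  # same community is closer
```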

Figure 1: Node embedding for the Zachary Karate Club social network. Different colors indicate different communities. Each row of figures is a group of results with the same initialization of $W_0$ and different values of $\beta$.

Figure 1 displays an example embedding for the famous Zachary Karate Club social network [Perozzi et al.2014], where we use two-dimensional node embeddings to capture the community structure implicit in the social network. We varied the initialization of $W_0$ and the value of $\beta$, and can see that:

  • no matter how $W_0$ is initialized, the embeddings capture the community structure in the network pretty well;

  • as the propagation parameter $\beta$ becomes larger, the nodes within a community tend to aggregate.

Feature propagation as in Eq. (8) can thus serve as a simple method of the first embedding type when $W_0$ and $\beta$ satisfy certain conditions.

4.2 With Structure and Features on Graph Simultaneously

In structure2vec [Dai et al.2016], a graph convolution based approach, the node embedding is formulated as

$$\mu_i = \mathrm{relu}\Big(W_1 x_i + W_2 \sum_{v_j \in N(v_i)} \mu_j\Big), \qquad (13)$$

where relu is the rectified linear unit. Suppose the dimension of $\mu_i$ is $p$, i.e., $\mu_i \in \mathbb{R}^p$. Using derivations similar to those in section 3, we obtain a linear system in the entries of the $\mu_i$. Without loss of generality, suppose $m$ of the variables in Eq. (13) are nonzero while the other variables equal 0. Then Eq. (13) can be rewritten as a reduced linear system over the $m$ nonzero variables, which can be resolved by the same derivations as in section 3. The final solution has the form

$$\mu = \tilde{B}^{-1} b,$$

where $\tilde{B}$ is the matrix $B$ of section 3 after removing the corresponding rows and columns.

Similarly, if we want the node embeddings to converge and not explode, the matrix $W_2$ also needs to satisfy conditions like those in section 3. The relu function decreases the scale of the equation system, but it does not change the essence of the linear system.
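The point that relu does not change the linear-system essence can be seen in a toy simulation of our own (scalar embeddings, weights chosen by us): when the effective propagation weight exceeds the convergence condition, the relu iteration still explodes.

```python
# Scalar sketch of a structure2vec-style update (Eq. (13) with 1-d embeddings):
#   mu_i = relu(w1 * x_i + w2 * sum over neighbors j of mu_j)
# relu clips negatives but the iteration is still governed by the linear part.

def relu(v):
    return v if v > 0.0 else 0.0

def s2v_iterate(x, adjacency, w1, w2, iters):
    mu = {i: 0.0 for i in x}
    for _ in range(iters):
        mu = {i: relu(w1 * x[i] + w2 * sum(mu[j] for j in adjacency[i]))
              for i in x}
    return mu

adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # a triangle, degree 2 everywhere
x = {0: 1.0, 1: 1.0, 2: 1.0}

stable = s2v_iterate(x, adjacency, w1=1.0, w2=0.3, iters=200)   # 0.3 * 2 < 1
unstable = s2v_iterate(x, adjacency, w1=1.0, w2=0.8, iters=60)  # 0.8 * 2 > 1
print(stable[0], unstable[0] > 1e6)
```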

5 Extension to Edge

The sections above discussed feature propagation when the graph contains only node features (i.e., $Z = \emptyset$). However, in many real scenarios the graph may contain edge features too, and neglecting them may weaken the model's performance. What's more, the label may live on edges directly, in which case we have to utilize the edge features, especially when there exist multiple links between two nodes. In this section, we discuss feature propagation when the graph contains edge features, in the multiple-links setting.

For each edge $e_k$, suppose $s(e_k)$ and $t(e_k)$ are the source node and target node of $e_k$, respectively. Let $E_s(v_i)$ be the set of edges that take node $v_i$ as source node and $E_t(v_i)$ the set of edges that take $v_i$ as target node. Define the matrices $S$ and $T$ by

$$S_{ki} = \begin{cases} 1 & \text{if } s(e_k) = v_i \\ 0 & \text{otherwise} \end{cases}, \qquad T_{ki} = \begin{cases} 1 & \text{if } t(e_k) = v_i \\ 0 & \text{otherwise} \end{cases},$$

and call $S$ and $T$ the source incidence matrix and target incidence matrix, respectively. Obviously, $(S^{\top}T)_{ij}$ counts the edges from $v_i$ to $v_j$, so $S^{\top}T$ recovers the (multi-)adjacency matrix.
We can expand the features in the following way: each node gathers the expanded features of its neighbors and of its incident edges,

$$\tilde{X} = X + D^{-1}A\,\tilde{X}\,W^{\top} + S^{\top}\tilde{Z}\,W_3 + T^{\top}\tilde{Z}\,W_4, \qquad (16)$$

while each edge gathers the expanded features of its two endpoints,

$$\tilde{Z} = Z + S\,\tilde{X}\,W_1 + T\,\tilde{X}\,W_2. \qquad (18)$$

Combining Eq. (16) and Eq. (18), we get a coupled fixed-point system in $\tilde{X}$ and $\tilde{Z}$.
Following derivations similar to those in section 3, we know that certain conditions must hold for this feature propagation to converge. Based on Theorem 1, the following two conditions can guarantee the convergence of the above feature propagation process:

  • the combined propagation matrix, built from $W$, $W_1$, $W_2$, $W_3$, $W_4$, $S$, and $T$, should be nonnegative;

  • its spectral radius should be less than 1.

The second condition is not easy to satisfy, since it depends on the interaction between the different matrices; as a result, feature propagation with edge features easily explodes. To eliminate this obstacle, we can simplify the propagation process by setting $W_3$ and $W_4$ to 0, which means the expanded node features no longer rely on edge features. Then we have the following feature propagation equations:

$$\tilde{X} = X + D^{-1}A\,\tilde{X}\,W^{\top}, \qquad \tilde{Z} = Z + S\,\tilde{X}\,W_1 + T\,\tilde{X}\,W_2. \qquad (20)$$

We call this way of feature propagation for edges edge2vec in this paper. For edge2vec, we only need the following conditions:

  • $W$ is nonnegative;

  • $\|W\|_\infty < 1$;

to guarantee its convergence.
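A minimal sketch of edge2vec with scalar features (the weights and the toy multigraph are our own choices, not the paper's): node features are first propagated as in Eq. (8), then each edge's feature is expanded with its endpoints' propagated features. Parallel edges between the same pair of nodes naturally keep distinct expanded features.

```python
# Scalar edge2vec sketch: node propagation (Eq. (8) style), then per-edge
# expansion z~_k = z_k + w1 * x~_source + w2 * x~_target (Eq. (20) style).

def propagate_nodes(x, adjacency, w, iters=200):
    cur = dict(x)
    for _ in range(iters):
        cur = {i: x[i] + w * sum(cur[j] for j in adjacency[i]) / len(adjacency[i])
               for i in x}
    return cur

def edge2vec(edges, z, x_tilde, w1, w2):
    """edges: list of (source, target) pairs; z: raw feature per edge."""
    return [z[k] + w1 * x_tilde[s] + w2 * x_tilde[t]
            for k, (s, t) in enumerate(edges)]

adjacency = {0: [1], 1: [0, 2], 2: [1]}
x = {0: 1.0, 1: 0.0, 2: 0.0}
edges = [(0, 1), (0, 1), (1, 2)]  # two parallel links between nodes 0 and 1
z = [0.5, 0.2, 0.1]

xt = propagate_nodes(x, adjacency, w=0.5)
zt = edge2vec(edges, z, xt, w1=0.3, w2=0.3)
print(len(zt) == len(edges), zt[0] != zt[1])  # parallel edges stay distinct
```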

6 Applications in Fraud Transaction Detection

In the previous sections, we first proposed the concept of "feature propagation" in a unified framework. We linked feature propagation, as a basic building block, to several graph representation tasks, and pointed out the convergence conditions involved in generic graph representation tasks. We further proposed a simple extension of feature propagation, edge2vec, for the case where features and labels live on edges. In this section, we conduct experiments on real-world data to demonstrate the performance of edge2vec and its convergence.

6.1 DataSet

In this section, we study real-world data from a leading cashless payment platform serving more than a hundred million users. As a financial services provider, one of the major problems the platform faces is the risk control of fraudulent transactions. Detecting and identifying the fraud risk of each transaction is of fundamental importance to the platform.

In particular, we study fraud transactions in the online shopping setting, where sellers sell fake items to customers to reap undeserved profits. Considering each transaction between a seller and a buyer independently cannot capture useful information from the whole transaction network. Considering the problem in the feature propagation framework over the graph can help us understand the underlying aggregation patterns of fraudulent transactions.

The experimental fraud transaction data⁵ (⁵the data is randomly sampled over a time period with complete desensitization: no personal profiles, no user ids) contains three types of features: 1) buyer features, 2) seller features, and 3) characterizations of each transaction. We treat each buyer and seller as a node of the graph, and each transaction as an edge between a buyer and a seller. If a transaction is fraudulent, we label its corresponding edge as 1, and otherwise as 0. Our task is to predict whether or not an edge is a fraud. The detailed statistics of the data are described in Table 1.

Note that there can be multiple edges between a seller and a buyer, which makes the setting a bit different from the traditional recommendation setting. Our edge2vec embeds each edge into a vector space, so it can help us infer the risk of each edge in the graph.

#Nodes #Edges #Fraud #Normal
Training Data 626,003 1,720,180 31,737 1,688,441
Testing Data 1,355,824 4,034,962 86,721 3,948,241
Table 1: Fraud Transaction Detection Data Description.

6.2 Treatment and Control Groups

As discussed in section 2, the learning framework is

$$\hat{y} = f\big(g(z, x; G, \theta_1); \theta_2\big),$$

where $g$ denotes the feature propagation process. In order to make a fair comparison, we use the same linear link function, parameterized by $\theta_2$, on top of all of the feature propagation processes, and finally feed the output to the cross-entropy loss function:

$$\mathcal{L} = -\sum_{k} \big(y_k \log \hat{y}_k + (1 - y_k)\log(1 - \hat{y}_k)\big).$$

We change the feature propagation function $g$ to study the performance of different types of feature propagation processes. Specifically, we design the following two feature propagation processes as the control group, and compare edge2vec against them as the treatment.

Control1. The first type uses no feature propagation, i.e., we do not expand the edge features at all: $\tilde{z}_k = z_k$.

Control2. The second type expands each edge feature only by concatenating its source and target node raw features: $\tilde{z}_k = [z_k; x_{s(e_k)}; x_{t(e_k)}]$.

Treatment (edge2vec). The third type expands the edge features by the propagation process defined in section 5, where $\tilde{z}_k$ is computed by Eq. (20).
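The three feature constructions and the shared link function can be sketched as follows (function names and toy feature dimensions are our own illustration):

```python
import math

def control1(z_edge, x_buyer, x_seller):
    return list(z_edge)                                    # raw edge features only

def control2(z_edge, x_buyer, x_seller):
    return list(z_edge) + list(x_buyer) + list(x_seller)   # concat raw endpoints

def treatment(z_edge, xt_buyer, xt_seller):
    # edge2vec-style: endpoints contribute *propagated* features (cf. Eq. (20))
    return list(z_edge) + list(xt_buyer) + list(xt_seller)

def predict(features, theta):
    s = sum(t * f for t, f in zip(theta, features))
    return 1.0 / (1.0 + math.exp(-s))                      # shared linear link + sigmoid

def cross_entropy(y, p):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

z, xb, xs = [0.7], [0.2, 0.1], [0.9, 0.4]
feats = control2(z, xb, xs)
p = predict(feats, theta=[0.5] * len(feats))
print(len(control1(z, xb, xs)), len(feats), 0.0 < p < 1.0)
```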

6.3 Results and Analyses

We plot the PR curves⁶ (⁶https://en.wikipedia.org/wiki/Precision_and_recall) of the compared approaches in Figure 2. We can see that the treatment method, edge2vec with an appropriate propagation weight, performs much better than Control1 and a little better than Control2. Although the gain of the treatment over Control2 is not very significant, it is in line with our expectation that feature propagation improves the performance of the prediction model.

Figure 2: Precision-Recall Curve

We also analyze the potential numerical issues by testing the structure2vec method [Dai et al.2016]. The loss function of structure2vec introduces a penalty parameter $\gamma$ to constrain the values of $W_1$ and $W_2$; if $\gamma$ is set to a small value, numerical issues arise. We take four values of $\gamma$ and test at which order of propagation steps (see section 4.1) numerical overflow occurs.

Table 2 displays the test results. We find that when $\gamma$ becomes small enough, the numerical overflow issue appears within a few propagation steps. However, setting a large $\gamma$ is not a good way to handle this issue, since a large penalty may sharply weaken the model's performance (see the corresponding edge2vec curve in Figure 2).

1-order 2-orders 3-orders 4-orders 5-orders
Table 2: Numerical overflow (overflow or not) under 1- to 5-order propagation.

7 Conclusion and Future Work

In this paper, we proposed the new concept of "feature propagation" together with a typical way to realize it. We showed that convergence is a noteworthy issue for feature propagation and proposed conditions that guarantee it. We then revisited two types of graph representation learning methods and found that both have strong connections with feature propagation. Although we revisited only a limited set of graph representation learning methods, we provided a new perspective for understanding the essence of graph representation learning. The experiment on fraud transaction detection demonstrated that the method with feature propagation can do better than the methods without it. We also tested the numerical overflow issue in structure2vec; regrettably, we only pointed out the issue and have not yet proposed a practical way to fix it. We think this is a direction worth exploring in the future.


  • [Abu-El-Haija et al.2017] Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou, and Alex Alemi. Watch your step: Learning graph embeddings through attention. arXiv preprint arXiv:1710.09599, 2017.
  • [Belkin and Niyogi2002] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in neural information processing systems, pages 585–591, 2002.
  • [Cao et al.2015] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Grarep: Learning graph representations with global structural information. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 891–900. ACM, 2015.
  • [Dai et al.2016] Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. In International Conference on Machine Learning, pages 2702–2711, 2016.
  • [Grover and Leskovec2016] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864. ACM, 2016.
  • [Hamilton et al.2017] William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. arXiv preprint arXiv:1706.02216, 2017.
  • [Kipf and Welling2016] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
  • [Langville and Meyer2006] Amy N Langville and Carl D Meyer. Updating Markov chains with an eye on Google's PageRank. SIAM Journal on Matrix Analysis and Applications, 27(4):968–987, 2006.
  • [Liu et al.2017] Ziqi Liu, Chaochao Chen, Jun Zhou, Xiaolong Li, Feng Xu, Tao Chen, and Le Song. Poster: Neural network-based graph embedding for malicious accounts detection. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS ’17, pages 2543–2545, New York, NY, USA, 2017. ACM.
  • [Page et al.1999] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
  • [Pennington et al.2014] Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
  • [Perozzi et al.2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710. ACM, 2014.
  • [Plemmons1977] Robert J Plemmons. M-matrix characterizations. i—nonsingular m-matrices. Linear Algebra and its Applications, 18(2):175–188, 1977.
  • [Sen et al.2008] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93, 2008.
  • [Xiang et al.2013] Biao Xiang, Qi Liu, Enhong Chen, Hui Xiong, Yi Zheng, and Yu Yang. Pagerank with priors: An influence propagation perspective. In IJCAI, pages 2740–2746, 2013.
  • [Zitnik and Leskovec2017] Marinka Zitnik and Jure Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190–i198, 2017.