I Introduction
The quality of machine learning methods largely depends on the particular representation (or features) chosen for the data. The majority of modern machine learning methods work with objects represented as numerical vectors. In some problems, such as object recognition in images and video, speech recognition and natural language processing, the initial feature space can formally be considered a vector space. The obstacle is that it has a complex structure and a very high dimension, which requires methods to transform the original representation into a more concise and informative one. Problems with objects of a discrete nature (in particular, with graphs) also require that informative continuous numerical representations be found. Recently, network representation learning has attracted a lot of attention, which has led to the development of many new methods (see, e.g., the recent reviews [8, 3]).
The essence of the network representation learning problem (or embedding problem) is to represent a graph, a subgraph or a node as a point in a low-dimensional Euclidean space; in such a form they can be further used in traditional machine learning pipelines. In what follows we focus on node embeddings. A usual assumption is that nodes can be represented in a space of dimension d ≪ n, where n is the number of nodes in the graph. The obtained embeddings can be further used for node classification [7], community detection [6], link prediction [1] or visualization [5].
The majority of existing graph embedding algorithms focus on proximity-preserving embeddings: nodes positioned in close network proximity are considered similar, and their embedding points should be placed close to each other in the embedding space. We base our research on this assumption and, meanwhile, use the area of representation learning for images as a source of inspiration. Recently the histogram loss approach was proposed [18], where embeddings are learned by minimizing a certain distance between inter- and intra-class similarity distributions. We extend this approach to graphs by introducing a more suitable distance between the distributions, inspired by the Wasserstein distance. We demonstrate the efficiency of the proposed approach in a series of experiments with real-world graphs.
II Related work
Currently the methods of learning node representations in graphs are rapidly developing. The earlier approaches are based on classical dimension reduction and on finding node embeddings via matrix factorization [2, 14]. Matrix factorization can be very time-consuming for large graphs, and a usual way to speed up embedding learning is to use random walks over the graph. The idea is to find vector representations of vertices which describe well the probabilities of particular vertex sequences in these walks. This idea is the basis for a whole family of methods, including DeepWalk [15], LINE [16] and node2vec [7]. Note that all these methods internally use the word2vec algorithm [12], which is based on the optimization of a logistic-like loss and was initially proposed for word embeddings. Further developments of random-walk-based methods mainly focus on various schemes of random walks through the graph, which allow them to take into account various structural features of graphs. Recently, many attempts were made to adapt neural networks to graph-structured data (see [19, 4] among many others). In the next section we specify the considered framework and further discuss some related embedding learning approaches.
III Learning graph embeddings in an unsupervised way
We start by formalizing the problem of node representation learning in graphs following [8]. We observe an undirected and unweighted graph G = (V, E) with n vertices and denote by A the adjacency matrix of the graph. We also assume that a certain similarity matrix S is given, where the value s_ij determines how similar the nodes i and j are and, respectively, how close their embeddings should be. The considered framework consists of several important parts:

encoder function E, which maps nodes to their latent representations: E : V → R^d;

decoder function D, which maps pairs of node embeddings to a node proximity measure: D : R^d × R^d → R;

loss function ℓ, which measures how close the reconstructed proximity values D(E_i, E_j) are to the corresponding reference values s_ij.
In what follows we discuss different types of encoders and loss functions, as their choice largely distinguishes modern embedding learning algorithms.
III-A Loss functions
We describe two important types of losses considered in the literature.
III-A1 Pairwise loss
In this case, the goal of optimization is to minimize the sum of reconstruction errors for the pairwise similarities of nodes:

L(Θ) = ∑_{(i, j) ∈ Ω} ℓ(D(E_i, E_j), s_ij),

where the decoder D is usually considered to be non-parametrized and optimization is done over the parameters Θ of the encoder E.
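The pairwise loss can be sketched as follows with a lookup encoder and an inner-product decoder; the names, the toy data and the squared-error choice of ℓ are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch: lookup encoder E_i = Z[i], inner-product decoder,
# and a squared-error instance of the pairwise loss over observed pairs.
rng = np.random.default_rng(0)
n, d = 6, 3
Z = rng.normal(size=(n, d))                      # embedding matrix, E_i = Z[i]
S = (rng.random((n, n)) < 0.3).astype(float)     # toy similarity matrix

def decoder(ei, ej):
    return ei @ ej                               # reconstructed proximity D(E_i, E_j)

def pairwise_loss(Z, S, pairs):
    # sum of squared reconstruction errors over the observed pairs Omega
    return sum((decoder(Z[i], Z[j]) - S[i, j]) ** 2 for i, j in pairs)

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
loss = pairwise_loss(Z, S, pairs)
```

In practice the loss would be minimized over Z (or over encoder parameters) by gradient descent.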
III-A2 Node-wise autoencoder loss
This approach was recently proposed by [17], where the authors consider the similarities of node embeddings as distributions:

L(Θ) = ∑_{i = 1}^{n} ℓ(D(E_i, ⋅), s_i),

where D(E_i, ⋅) and s_i are the vectors of decoded similarities and observed similarities of the i-th node, respectively. One of the possible choices for the loss function ℓ is the Kullback–Leibler divergence. A similar approach is considered in [20, 4], where autoencoders were constructed with a classical reconstruction loss.
III-B Encoders
The standard approach is the so-called embedding lookup:

E_i = Z e_i,

where Z is an embedding matrix whose columns are the node embeddings and e_i is a vector with 1 in the i-th place and zeros elsewhere.
The other possible approach is to treat the local neighborhood of a node as its feature vector and consider

E_i = f(a_i),

where a_i is the i-th column of the adjacency matrix A and f is some function. The particular choice of f might be a neural network [20, 4].
One can also consider a linear function:

E_i = W a_i + b,

where W is a parameter matrix, b is an intercept vector, and the full set of encoder parameters is Θ = (W, b). If the adjacency matrix has rank at least d, then the expressive ability of such an encoder is exactly equal to that of the direct embedding lookup (III-B).
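The linear neighborhood encoder can be sketched in a few lines; all names and the toy graph here are illustrative assumptions.

```python
import numpy as np

# Sketch of the linear encoder E_i = W a_i + b, where a_i is the i-th
# column of the adjacency matrix A (illustrative names and toy data).
rng = np.random.default_rng(1)
n, d = 8, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # symmetric adjacency, zero diagonal

W = rng.normal(size=(d, n))              # parameter matrix
b = rng.normal(size=d)                   # intercept vector

def encode(i):
    return W @ A[:, i] + b               # E_i = W a_i + b

E = np.stack([encode(i) for i in range(n)])   # all node embeddings, shape (n, d)
```

Unlike the lookup encoder, here the number of parameters does not grow with each new node representation, since all nodes share W and b.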
IV Discrimination of Similarity Distributions
In this paper we introduce a different approach which considers the whole set of similarities between nodes and works out a discriminative loss between the distributions of decoded similarities in the pairs of similar and non-similar nodes. Currently, this approach assumes that the graph is sparse, so that the similarity matrix S is also sparse (we consider either the adjacency matrix or second-order proximities). Consider the set Ω^+ of all pairs of nodes with positive similarities and the set Ω^− of all pairs of nodes with zero similarity.
Our main assumption is that the embedding should allow us to distinguish between similar and non-similar nodes. In particular, decoded similarities should be higher for similar nodes. We treat the respective positive similarity s_ij as a weight for the considered decoded similarity value D(E_i, E_j), consider the distributions P^+ and P^− of decoded similarities in Ω^+ and Ω^−, respectively, and define the loss function as

L(Θ) = D(P^+(Θ), P^−(Θ)),

where D is a distance between distributions. It might be the KL-divergence, the Hellinger distance or, for example, the Wasserstein distance. Thus, our aim is to maximize the distance between the distributions of positive and negative pairs.
We are going to proceed with the linear encoder (III-B) and suggest using the Pearson correlation as a decoder function:

D(E_i, E_j) = E_i^T E_j / (‖E_i‖ ‖E_j‖).

In this case D(E_i, E_j) ∈ [−1, 1], which is convenient for our purposes.
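This correlation-type decoder is a one-liner; the function name is ours.

```python
import numpy as np

# Decoder D(E_i, E_j) = E_i^T E_j / (||E_i|| ||E_j||), bounded in [-1, 1].
def decode(ei, ej):
    return float(ei @ ej / (np.linalg.norm(ei) * np.linalg.norm(ej)))
```

By construction, decoding a vector against itself gives 1 and against its negation gives −1, which is what makes the bounded range convenient for histogram-based losses below.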
For the implementation of the discriminative loss (IV) we follow [18] and approximate the distributions of decoded similarities in Ω^+ and Ω^− by histogram estimators P^+ and P^− with a linear slope in each bin. As a distance between the distributions we suggest using the 1D Wasserstein distance (also known as the “earth mover's distance”, EMD), which for the histogram case can be computed as [11]:

EMD(P^+, P^−) = ∑_{i = 1}^{N_b} |φ_i|,

where N_b is the number of bins in the histogram and

φ_i = ∑_{j = 1}^{i} (P^+_j / ‖P^+‖_1 − P^−_j / ‖P^−‖_1).

However, in our case it is required that the distribution P^+ of similar pairs lie to the right of the distribution P^− of dissimilar pairs. The following simple modification allows us to avoid unnecessary local optima:

EMD_asym(P^+, P^−) = ∑_{i=1}^{N_b} φ_i.

Maximization of the modified Wasserstein distance (IV) makes the distribution P^+ concentrate near 1, while the distribution P^− concentrates near −1. However, it is natural to assume that nodes which are far away in the graph should have embeddings that are independent of each other rather than opposite. That is why we propose to keep in the computation of the Wasserstein distance only the part of the negative distribution P^− corresponding to non-negative decoded similarities.
V Experiments
In this section we discuss the experimental evaluation of the proposed algorithm. The whole algorithm was implemented in Tensorflow¹, while optimization of the functional (IV) over the encoder parameters (the embedding matrix W and the intercept vector b) was performed via stochastic gradient descent.
¹The code of the algorithm and all the experiments is available at https://github.com/premolab/GraphEmbeddings.
A further speedup of the algorithm was achieved by subsampling the set Ω^− (the so-called negative sampling [12]), which is necessary as real-world graphs are usually sparse, so that |Ω^−| ≫ |Ω^+|.
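A minimal sketch of such subsampling, assuming only that negatives are drawn uniformly from the non-edge pairs (the function name and sampling scheme are illustrative, not the paper's exact procedure):

```python
import random

# Draw k random node pairs that are not in the positive set, instead of
# enumerating the full (and huge) set of zero-similarity pairs.
def sample_negatives(n, positives, k, seed=0):
    rng = random.Random(seed)
    negatives = set()
    while len(negatives) < k:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and (u, v) not in positives and (v, u) not in positives:
            negatives.add((u, v))
    return list(negatives)
```

The negative histogram P^− is then estimated from this subsample at each optimization step rather than from all of Ω^−.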
V-A Experimental setup
For the experiments we used several real-world networks with the number of nodes varying from 100 to 4000. The information about the datasets is summarized in Table I.
Name                            Number of nodes   Number of edges
Books about US Politics [9]     105               441
American College Football [13]  115               613
Email EU [21]                   986               25552
Facebook [10]                   4039              88234
In our experiments we focus on the link prediction problem, which offers a universal way to estimate the quality of an embedding for any network, as it does not require any additional data except the graph itself. The solution pipeline starts with constructing the embedding based on a part of the graph edges and then checks how well the missing edges can be predicted from the embeddings. More precisely, the pipeline is as follows:

The set of all edges E is randomly divided into two parts, E_train and E_test² (²in our experiments a fixed train/test split proportion is used).

The embedding is constructed based on the subgraph containing only the edges from E_train.

Define the variables

y_uv = 1 if (u, v) ∈ E, and y_uv = 0 if (u, v) ∉ E,

X_uv = [E_u, E_v],

where [·, ·] means concatenation.
Construct a classifier (logistic regression in our experiments) based on the training data set built from E_train and estimate the classification quality on the test set built from E_test.
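The pipeline above can be sketched as follows; the split proportion, names and toy embeddings are illustrative assumptions, and the final logistic-regression fit is left out.

```python
import numpy as np

# Sketch of the link-prediction evaluation pipeline: split the edge set,
# then build pair features by concatenating node embeddings and labels
# by edge presence; a logistic regression would be trained on top.
rng = np.random.default_rng(2)
n, d = 10, 4
E = rng.normal(size=(n, d))                       # node embeddings
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if rng.random() < 0.3]

order = rng.permutation(len(edges))
split = int(0.7 * len(edges))                     # assumed split proportion
e_train = [edges[k] for k in order[:split]]       # used to fit the embedding
e_test = [edges[k] for k in order[split:]]        # held out for evaluation

def features_and_labels(pairs, edge_set):
    # X_uv = concatenation of E_u and E_v; y_uv = 1 iff (u, v) is an edge
    X = np.stack([np.concatenate([E[u], E[v]]) for u, v in pairs])
    y = np.array([1 if (u, v) in edge_set else 0 for u, v in pairs])
    return X, y

all_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
X, y = features_and_labels(all_pairs, set(edges))
```

The classifier's quality on the pairs corresponding to E_test (plus non-edges) then serves as the embedding quality score.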
V-B Results
We compare the results of the proposed algorithm (DDoS) with several state-of-the-art algorithms belonging to different subcategories of embedding algorithms: random-walk-based algorithms (DeepWalk [15]), direct matrix factorization approaches (HOPE [14]) and neural-network-based autoencoders (SDNE [19]).
The results are summarized in Table II. As we can see, in the majority of cases DDoS embeddings allow us to achieve better results than their competitors. Interestingly, the usage of the parametrized encoder (III-B) instead of the embedding lookup (III-B) resulted in much faster convergence of the algorithm; see Figure 1.
Dataset                    d    HOPE   DeepWalk   SDNE   DDoS
Books about US Politics    4    0.89   0.87       0.78   0.90
                           8    0.90   0.89       0.79   0.90
                           16   0.92   0.90       0.81   0.90
                           32   0.93   0.90       0.75   0.90
American College Football  4    0.75   0.85       0.77   0.87
                           8    0.86   0.90       0.82   0.90
                           16   0.90   0.91       0.84   0.91
                           32   0.92   0.92       0.84   0.93
Email EU                   4    0.74   0.81       0.89   0.90
                           8    0.83   0.86       0.90   0.92
                           16   0.90   0.89       0.92   0.93
                           32   0.93   0.90       0.93   0.93
Facebook                   4    0.84   0.94       0.96   0.97
                           8    0.92   0.98       0.96   0.98
                           16   0.96   0.99       0.97   0.99
                           32   0.97   0.99       0.98   1.00
VI Conclusions and Outlook
In this work we propose a simple but powerful approach to constructing graph embeddings based on the discrimination of similarity distributions. We show how to implement the general idea via maximization of a specially tuned Wasserstein distance. A series of experiments on link prediction in real-world graphs convincingly demonstrates the superiority of the proposed approach over its competitors.
Our work offers a number of directions for further development. Among them, the scalability of the algorithm should be improved first of all; this can be achieved by combining the proposed criterion with sampling graph nodes via random walks. Another direction is to test DDoS embeddings in other network analysis tasks such as community detection and semi-supervised node classification. Extensions to directed and weighted graphs also seem to be of great interest. Finally, the usage of a non-linear encoder (III-B) parametrized by a neural network is a promising direction for further investigation and improvement.
References
 [1] Lars Backstrom and Jure Leskovec. Supervised random walks: predicting and recommending links in social networks. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 635–644. ACM, 2011.
 [2] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in neural information processing systems, pages 585–591, 2002.
 [3] Hongyun Cai, Vincent W Zheng, and Kevin Chang. A comprehensive survey of graph embedding: problems, techniques and applications. IEEE Transactions on Knowledge and Data Engineering, 2018.

 [4] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 1145–1152. AAAI Press, 2016.
 [5] MC Ferreira De Oliveira and Haim Levkowitz. From visual data exploration to visual data mining: a survey. IEEE Transactions on Visualization and Computer Graphics, 9(3):378–394, 2003.
 [6] Santo Fortunato. Community detection in graphs. Physics Reports, 486(3–5):75–174, 2010.
 [7] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864. ACM, 2016.
 [8] William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
 [9] Valdis Krebs. Books about US politics, 2004.
 [10] Jure Leskovec and Julian J Mcauley. Learning to discover social circles in ego networks. In Advances in neural information processing systems, pages 539–547, 2012.

 [11] Manuel Martinez, Makarand Tapaswi, and Rainer Stiefelhagen. A closed-form gradient for the 1D earth mover's distance for spectral deep learning on biological data. In ICML 2016 Workshop on Computational Biology (CompBio@ICML16), June 2016.
 [12] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
 [13] Mark EJ Newman and Michelle Girvan. Finding and evaluating community structure in networks. Physical review E, 69(2):026113, 2004.
 [14] Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 1105–1114, New York, NY, USA, 2016. ACM.
 [15] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. CoRR, abs/1403.6652, 2014.
 [16] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077. International World Wide Web Conferences Steering Committee, 2015.
 [17] Anton Tsitsulin, Davide Mottin, Panagiotis Karras, and Emmanuel Müller. VERSE. In Proceedings of the 2018 World Wide Web Conference on World Wide Web  WWW ’18, pages 539–548, New York, New York, USA, 2018. ACM Press.
 [18] Evgeniya Ustinova and Victor Lempitsky. Learning deep embeddings with histogram loss. In Advances in Neural Information Processing Systems, pages 4170–4178, 2016.
 [19] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1225–1234. ACM, 2016.
 [20] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural Deep Network Embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining  KDD ’16, pages 1225–1234, New York, New York, USA, 2016. ACM Press.
 [21] Hao Yin, Austin R Benson, Jure Leskovec, and David F Gleich. Local higher-order graph clustering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 555–564. ACM, 2017.