1 Introduction
Most data in the modern world can be treated as an information network; hence measuring node similarity in networks has a wide range of applications: search [1], recommender systems [2], analysis of research publication networks [3], biology [4], transportation and logistics [5], and others.
Consider a semantic network: a set of types $T = \{t_1, \dots, t_n\}$, where each type is a set of entities, and a set of relations $R = \{r_1, \dots, r_m\}$, where each relation is a second-order predicate defined on two types from $T$: $r \subseteq t_i \times t_j$.
Both types in a relation can be equal ($t_i = t_j$), and several relations can share the same pair of types. This structure may be considered as a graph with colored vertices and colored edges: a vertex color is its entity type, and an edge color corresponds to a relation.
The question that we address is how to define similarity functions $S_t(x, y)$ for objects $x, y$ of the same type $t$
that would reflect the closeness of objects based on the "similarity of relations" they enter, while at the same time not mixing different relations, since "objects of different types and links carry different semantic meanings, and it does not make sense to mix them to measure the similarity without distinguishing their semantics" [6].
1.1 Related work
The basic graph-structure similarity measure is the classical SimRank [7] over a homogeneous graph, which is defined as follows:
$$s(a, b) = \frac{C}{|I(a)|\,|I(b)|} \sum_{i \in I(a)} \sum_{j \in I(b)} s(i, j), \qquad s(a, a) = 1,$$
where $I(a)$ is the set of in-neighbours of node $a$ and $C \in (0, 1)$ is a decay constant.
The main drawback of this approach is that it cannot accommodate multiple relations or object types, so the only option is to merge them into the blobs "a relation exists" and "all objects", which is not applicable when we have multiple relations with different semantics. For example, the OpenCyc ontology node of the concept "Game" (see Figure 1) cannot be easily expressed via a single type of relations and objects.
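For concreteness, the classical SimRank fixed-point iteration can be sketched in a few lines of Python (a minimal sketch on a hypothetical toy graph; the decay constant value and the adjacency matrix below are our illustration, not values from the paper):

```python
import numpy as np

def simrank(A, c=0.8, iters=50):
    """Classical SimRank via the iteration S <- c * W^T S W with the
    diagonal reset to 1, where W is the column-normalized adjacency
    matrix (column j lists the in-neighbours of node j)."""
    W = A / A.sum(axis=0, keepdims=True)  # make columns stochastic
    S = np.eye(A.shape[0])
    for _ in range(iters):
        S = c * W.T @ S @ W
        np.fill_diagonal(S, 1.0)  # s(a, a) = 1 by definition
    return S

# toy graph: nodes 0 and 1 both point to node 2, and node 2 points back
A = np.array([[0., 0., 1.],
              [0., 0., 1.],
              [1., 1., 0.]])
S = simrank(A)  # S[0, 1] is high: 0 and 1 share the in-neighbour 2
```

Nodes 0 and 1 share their only in-neighbour, so their score converges to the decay constant, while pairs without common in-neighbours stay at zero.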
Personalized PageRank [8] is also often used to measure similarity in homogeneous graphs:
$$\pi = \alpha P \pi + (1 - \alpha)\, e_v,$$
which is the same as PageRank, except that random jumps are made into some pre-chosen node $v$ (here $e_v$ is the indicator vector of $v$), rather than into a random node.
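A power-iteration sketch of this personalization (the teleport probability and the toy cycle graph are our assumptions for illustration):

```python
import numpy as np

def personalized_pagerank(P, v, alpha=0.85, iters=200):
    """Iterate pi <- alpha * P pi + (1 - alpha) * e_v: every random jump
    lands in the pre-chosen seed node v instead of a uniformly random node."""
    e = np.zeros(P.shape[0])
    e[v] = 1.0
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = alpha * (P @ pi) + (1 - alpha) * e
    return pi

# directed 3-cycle 0 -> 1 -> 2 -> 0; P[i, j] = probability of moving j -> i
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
pi = personalized_pagerank(P, v=0)  # mass concentrates near the seed node
```

The resulting vector is a probability distribution, and nodes closer to the seed along the walk receive more mass.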
Another option is PathRank [6], which measures path-similarity between objects $x$ and $y$ picked from the same class of a heterogeneous information network given a symmetric metapath $\mathcal{P}$ (a composition of relations $r_1 \circ r_2 \circ \dots \circ r_k$ that is symmetric): the number of paths from the object $x$ to the object $y$ (each step must satisfy the corresponding relation in $\mathcal{P}$), normalized by the number of paths from $x$ to $x$ plus the number of paths from $y$ to $y$ given $\mathcal{P}$:
$$s(x, y) = \frac{2\,\bigl|\{p_{x \to y} \in \mathcal{P}\}\bigr|}{\bigl|\{p_{x \to x} \in \mathcal{P}\}\bigr| + \bigl|\{p_{y \to y} \in \mathcal{P}\}\bigr|}.$$
That approach can handle several relations and object types and is very useful when we know the structure of relations on which we want our similarity measure to be based. If, instead, we want to "put our relations into a black box" and obtain a similarity that captures all network relations as a whole, we need something different. Recently, an approach [9] for building an optimal linear combination of metapaths has been proposed.
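For a symmetric metapath, the path counts above reduce to products of adjacency matrices; a sketch of this normalization on a hypothetical author-paper toy network (the Author-Paper-Author metapath and the data are our illustration):

```python
import numpy as np

# toy bipartite author-paper incidence matrix (rows: authors, cols: papers)
AP = np.array([[1., 1., 0.],
               [1., 0., 1.],
               [0., 0., 1.]])

# path-count matrix for the symmetric metapath Author-Paper-Author
M = AP @ AP.T

def path_similarity(M, x, y):
    """Number of metapaths x -> y, normalized by the number of
    metapaths x -> x plus the number of metapaths y -> y."""
    return 2.0 * M[x, y] / (M[x, x] + M[y, y])
```

Authors 0 and 1 co-wrote paper 0, so their similarity is positive, while authors 0 and 2 share no papers and score zero.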
There are several works on measuring similarity between objects from different classes, see, for example, [10].
2 Tensor SimRank
2.1 Problem statement
Let us consider a function that assigns a similarity score to two objects from the same class as follows: objects are similar (the value is high) if they are related to objects that are similar too. That interdependence can be expressed via the following definition:
$$S(x, y) = c \sum_{r} \frac{w_r}{|N_r(x)|\,|N_r(y)|} \sum_{x' \in N_r(x)} \sum_{y' \in N_r(y)} S(x', y'),$$
where $r$ ranges over the relations between classes, $N_r(x)$ is the neighbourhood function that returns the set of objects related to the object $x$ via the relation $r$, $w_r$ are the weights corresponding to the relation $r$, and $c$ is the normalization constant.
This can be rewritten as a Tensor SimRank equation:
$$S = c\, w_r A_r^{\top} S A_r, \qquad (1)$$
where $S$ is a block-diagonal matrix (one block per entity type), $w_r$ are the relation weights, and $A_r$ are the layers of the stochastic relation tensor $\mathcal{A}$¹ (which has nonzero blocks where relations exist), with summation over the repeated index $r$ implied.

¹We have to use tensors instead of matrices to allow multiple relations on the same pair of classes.
Similarity scores between elements of different classes are zero by definition; the relation between objects of unrelated classes is zero by definition too. Equation (1) is basically the classical SimRank equation with the adjacency tensor instead of the adjacency matrix: each nonzero layer of the tensor encodes some relation on the same pair of types. If one has more than a single relation between two types, then $\mathcal{A}$ has multiple nonzero layers on the intersection of the indices associated with those classes, one adjacency matrix per layer. In (1) the index $r$ stands for (weighted) summation over all layers of the tensor. That can be equivalently rewritten explicitly:
$$S = c \sum_{r} w_r A_r^{\top} S A_r + D, \qquad (2)$$
where the diagonal matrix $D$ has to be chosen in such a way that $S_{ii} = 1$.
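The fixed-point computation described above (iterating the weighted relation-wise updates and resetting the diagonal to one) can be sketched directly, without the block-wise optimizations of the next section. This is a minimal dense sketch on a hypothetical two-class network with a single book-author relation; the data, the decay constant, and the function name are our illustration:

```python
import numpy as np

def tensor_simrank(layers, weights, c=0.8, iters=50):
    """Iterate S <- c * sum_r w_r * A_r^T S A_r, resetting the diagonal
    to 1.  Each layer A_r is a column-stochastic adjacency matrix of the
    whole network with nonzero blocks only where the relation r exists,
    so S stays block-diagonal (one block per entity type)."""
    S = np.eye(layers[0].shape[0])
    for _ in range(iters):
        S = c * sum(w * A.T @ S @ A for w, A in zip(weights, layers))
        np.fill_diagonal(S, 1.0)
    return S

# classes: books {0, 1} and authors {2, 3}; book 0 is by author 2,
# book 1 is by authors 2 and 3; columns are normalized to sum to 1
A = np.array([[0. , 0. , 0.5, 0.],
              [0. , 0. , 0.5, 1.],
              [1. , 0.5, 0. , 0.],
              [0. , 0.5, 0. , 0.]])
S = tensor_simrank([A], [1.0])
```

Books 0 and 1 become similar because they share author 2, authors 2 and 3 become similar because they share book 1, and cross-class scores stay exactly zero, confirming the block-diagonal structure of $S$.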
2.2 Computational algorithm
Simple iterations for (1) are computationally demanding due to large-scale matrix-by-matrix products; thus we propose a method that exploits the fact that $S$ is block-diagonal and $\mathcal{A}$ is a three-dimensional block tensor whose last dimension (the number of layers) is much smaller than the overall number of objects. On each iteration, for each class we recompute its update independently, assuming all the other classes are fixed; see Algorithm 1.
In other words, we update the similarity score for each class, with the similarities of all other classes fixed, so that objects from the target class that are related to mutually close objects (those whose similarity is high) from some other class become closer too.
To present the actual vectorized algorithm of similarity computation, let us introduce some additional notation: a set of entity types $T$, where each entity type is a set of entities; a set of symmetric relation functions $R$; a column-stochastic matrix $W$ of pairwise type impacts (weights); and an operator that maps a relation $r$ into the corresponding column-stochastic adjacency matrix $A_r$ (if $r$ is not defined for a pair of objects, the corresponding entry is zero). To achieve better results on sparse relations (see below), we adopted the low-rank SimRank approximation [11], which uses probabilistic singular value decomposition [12] to perform fast approximate projections onto the low-rank matrix manifold at each step of the iterative process (Algorithm 3). The only difference from Algorithm 2 is that on each step we perform a probabilistic SVD of the matrix $S$, so that $S \approx U \Sigma V^{\top}$, and project it onto the manifold of matrices of rank $k$.
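The projection step can be sketched with a randomized range finder in the spirit of [12] (a minimal sketch; the oversampling value and the function name are our choice):

```python
import numpy as np

def project_low_rank(S, k, oversample=5, seed=0):
    """Approximate projection of S onto the manifold of rank-k matrices
    using a randomized range finder followed by a small exact SVD."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    # capture the dominant range of S with a Gaussian test matrix
    Q, _ = np.linalg.qr(S @ rng.standard_normal((n, k + oversample)))
    # exact SVD of the small (k + oversample) x n projected matrix
    U, sigma, Vt = np.linalg.svd(Q.T @ S, full_matrices=False)
    # truncate to rank k and lift back to the original space
    return (Q @ U[:, :k]) * sigma[:k] @ Vt[:k]
```

The cost is dominated by products of $S$ with thin matrices, which is why the projection is cheap enough to run on every iteration.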
2.3 Convergence conditions
Recall that the classical SimRank can be computed as a solution of the equation:
$$S = c\, P^{\top} S P + D, \qquad S_{ii} = 1,$$
where $P$ is the transition matrix of the graph and $D$ is a diagonal correction. The fixed-point iteration converges if $P$ is a column-stochastic matrix. In the vector form (the $\operatorname{vec}$ operator maps an $n \times n$ matrix into a vector of length $n^2$ by taking it column by column) that can be written as²:
$$\operatorname{vec}(S) = c\,(P \otimes P)^{\top} \operatorname{vec}(S) + \operatorname{vec}(D).$$

²If the matrix $P$ is stochastic, then $P \otimes P$ is stochastic too.
Tensor SimRank (2) computation can be equivalently written in the form:
$$S_{k+1} = c \sum_r w_r A_r^{\top} S_k A_r + D_k, \qquad (3)$$
or in the vectorized form:
$$\operatorname{vec}(S_{k+1}) = c \sum_r w_r (A_r \otimes A_r)^{\top} \operatorname{vec}(S_k) + \operatorname{vec}(D_k).$$
Moreover, SimRank is also commonly approximated by the solution of the discrete Lyapunov equation:
$$S = c\, P^{\top} S P + (1 - c) I,$$
which can be generalized to the tensor case as
$$S = c \sum_r w_r A_r^{\top} S A_r + (1 - c) I,$$
and a fixed-point iteration for it converges [13] if the spectral radius of the matrix $c \sum_r w_r (A_r \otimes A_r)^{\top}$ is less than one.
We conjecture that fixed-point iterations for (3) converge if:

each $A_r$ is column-stochastic;

the relation weights satisfy $\sum_r w_r = 1$.

In the simplest form (when we have no preferences among relations and classes) this reduces to the relation weight $w_r = 1/|R|$, where $|R|$ is the number of relations.
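These conditions can be sanity-checked numerically: with stochastic layers and weights summing to one, the vectorized iteration matrix is $c$ times a column-stochastic matrix, so its spectral radius equals $c < 1$. A sketch with randomly generated layers (the sizes and the value of $c$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def column_stochastic(A):
    """Normalize the columns of a nonnegative matrix to sum to 1."""
    return A / A.sum(axis=0, keepdims=True)

n, n_rel, c = 6, 3, 0.8
layers = [column_stochastic(rng.random((n, n))) for _ in range(n_rel)]
weights = np.full(n_rel, 1.0 / n_rel)  # uniform weights summing to 1

# iteration matrix of the vectorized fixed-point scheme
M = c * sum(w * np.kron(A, A).T for w, A in zip(weights, layers))
rho = max(abs(np.linalg.eigvals(M)))  # spectral radius; should equal c
```

Any convex combination of column-stochastic Kronecker products is again column-stochastic, which is what pins the spectral radius at exactly $c$.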
3 Computational experiment
3.1 Synthetic data: convergence test
To test the convergence conditions, we conducted a series of tests on randomly generated sparse networks with different numbers of classes, with a randomly chosen number of objects in each class, a full network of relation types (all possible type relations exist) with randomly chosen edges in each relation, and the default matrix $W$ (no priorities). All the generated networks successfully converged, which illustrates that the sufficient convergence conditions listed in the previous section are adequate; see Figures 2 and 3.
3.2 Synthetic data: similarity reconstruction
To determine whether the model is capable of similarity reconstruction, we generated a tree graph from randomly distributed points on a plane and tested whether the model can reconstruct the spatial similarity of the points based only on their relations.
In Figure 4 the blue points represent 0-level points, which are connected to 1-level points (red), which in turn are connected to 2-level points (green).
We measured the following similarity reconstruction quality of the model against the true similarity obtained from the generated point coordinates:
$$Q = \frac{\bigl|\{(x, y, z) : S_{\mathrm{true}}(x, y) > S_{\mathrm{true}}(x, z) \text{ and } S_{\mathrm{model}}(x, y) > S_{\mathrm{model}}(x, z)\}\bigr|}{\bigl|\{(x, y, z) : S_{\mathrm{true}}(x, y) > S_{\mathrm{true}}(x, z)\}\bigr|},$$
which shows how many "$x$ is closer to $y$ than to $z$" relations were preserved.
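This quality measure can be computed directly by enumerating ordered triples (a brute-force sketch, cubic in the number of objects, which is fine for small synthetic tests; the function name is ours):

```python
import numpy as np
from itertools import permutations

def order_preservation(S_true, S_model):
    """Fraction of ordered triples (x, y, z) for which the relation
    'x is closer to y than to z' holds under both similarity matrices."""
    n = S_true.shape[0]
    kept = total = 0
    for x, y, z in permutations(range(n), 3):
        if S_true[x, y] > S_true[x, z]:
            total += 1
            kept += S_model[x, y] > S_model[x, z]
    return kept / total
```

A model that reproduces the true ordering exactly scores 1, and one that reverses every ordering scores 0.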
From Figure 5 one can see that at one level the model gets saturated, but at the other level the models that use the low-rank version of Tensor SimRank perform far better than the "pure" algorithm. The numbers in brackets denote the dimensionality of the matrix space onto which the similarity matrices were projected on each step (the rank of the approximation).
3.3 BookCrossing Dataset test
The model was run on a subsample of the Book-Crossing dataset [14]. We extracted only those authors who had the highest (top-100) numbers of books in the collection. The final network had the following structure:
Model convergence is shown in Figure 6, where successful convergence to the best possible low-rank approximation can be seen. The similarity structure is clearly visible on the Year similarity matrix heatmap (Figure 6). We expect diagonal dominance, since temporally close years should be more or less similar in terms of the authors and publishers characteristic of that period. Tables 1 and 2 show examples of "closest book" queries; we note that no NLP preprocessing was conducted, yet the model treated books from the same book series as similar, based on author/publisher/year similarities.
Table 1: example of a "closest book" query.
Psychic Sisters (Sweet Valley Twins and Friends, No 70)
The Love Potion (Sweet Valley Twins and Friends, No 72)
The Curse of the Ruby Necklace (Sweet Valley Twins and Friends Super, No 5)
She's Not What She Seems (Sweet Valley High, No 92)
Are We in Love? (Sweet Valley High, No 94)
Don't Go Home With John (Sweet Valley High, No 90)
In Love With a Prince (Sweet Valley High, No 91)

Table 2: example of a "closest book" query.
The Girl Who Loved Tom Gordon
Hearts In Atlantis (All You Want to Know)
Blood And Smoke
Blood And Smoke Cd
Atlantis.
The Body (Penguin Readers: Level 5)
Storm of the Century
4 Discussion and further work
The proposed model can be used in various problem areas where most of the information is available in the form of relations between entities rather than features of individual entities, and where no trivial vector representation of those entities can be induced. One can use the resulting similarity representation
to embed the notion of relations into classical machine learning algorithms. The proposed model can also be used for relation generalization, which might give interesting results since we work on heterogeneous graphs.
Further model improvements might also include treating relations as objects too (possibly via heterogeneous hypergraphs) and defining a similarity matrix on relations.
5 Conclusion
This paper proposes a generalization of SimRank for heterogeneous networks and a method for its computation that exploits the fact that the resulting similarity matrix is block-diagonal, so its components can be computed in an iterative fashion. Convergence conditions are proposed and successfully tested. Several prospective application areas are suggested.
References
 [1] L. Page, S. Brin, R. Motwani, and T. Winograd, “The PageRank citation ranking: Bringing order to the web.,” 1999.
 [2] J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl, “Grouplens: applying collaborative filtering to usenet news,” Communications of the ACM, vol. 40, no. 3, pp. 77–87, 1997.
 [3] C. L. Giles, “The future of CiteSeer: CiteSeerx,” in Proceedings of the 10th European Conference on Principles and Practice of Knowledge Discovery in Databases, pp. 2–2, Springer-Verlag, 2006.

 [4] S. Roy, T. Lane, and M. Werner-Washburne, “Integrative construction and analysis of condition-specific biological networks,” in Proceedings of the National Conference on Artificial Intelligence, vol. 22, p. 1898, AAAI Press / MIT Press, 2007.
 [5] W. Jiang, J. Vaidya, Z. Balaporia, C. Clifton, and B. Banich, “Knowledge discovery from transportation network data,” in Data Engineering, 2005. ICDE 2005. Proceedings. 21st International Conference on, pp. 1061–1072, IEEE, 2005.
 [6] S. Lee, S. Park, M. Kahng, and S.-g. Lee, “PathRank: Ranking nodes on a heterogeneous graph for flexible hybrid recommender systems,” Expert Systems with Applications, vol. 40, no. 2, pp. 684–697, 2013.
 [7] G. Jeh and J. Widom, “SimRank: a measure of structural-context similarity,” in Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 538–543, ACM, 2002.
 [8] G. Jeh and J. Widom, “Scaling personalized web search,” in Proceedings of the 12th international conference on World Wide Web, pp. 271–279, ACM, 2003.
 [9] Y. Sun and J. Han, “Mining heterogeneous information networks: a structural analysis approach,” ACM SIGKDD Explorations Newsletter, vol. 14, no. 2, pp. 20–28, 2013.
 [10] C. Shi, X. Kong, Y. Huang, S. Y. Philip, and B. Wu, “HeteSim: A general framework for relevance measure in heterogeneous networks,” IEEE Transactions on Knowledge & Data Engineering, no. 10, pp. 2479–2492, 2014.
 [11] I. V. Oseledets and G. V. Ovchinnikov, “Fast, memory efficient low-rank approximation of SimRank,” CoRR, vol. abs/1410.0717, 2014.
 [12] N. Halko, P.-G. Martinsson, and J. A. Tropp, “Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,” SIAM Review, vol. 53, no. 2, pp. 217–288, 2011.

 [13] J. Bierkens, O. v. Gaans, and S. V. Lunel, “Estimate on the pathwise Lyapunov exponent of linear stochastic differential equations with constant coefficients,” Stochastic Analysis and Applications, vol. 28, no. 5, pp. 747–762, 2010.
 [14] C.-N. Ziegler, S. M. McNee, J. A. Konstan, and G. Lausen, “Improving recommendation lists through topic diversification,” in Proceedings of the 14th international conference on World Wide Web, pp. 22–32, ACM, 2005.