Free Energy Node Embedding via Generalized Skip-gram with Negative Sampling

by Yu Zhu, et al.

A widely established set of unsupervised node embedding methods can be interpreted as consisting of two distinct steps: i) the definition of a similarity matrix based on the graph of interest, followed by ii) an explicit or implicit factorization of that matrix. Inspired by this viewpoint, we propose improvements in both steps of the framework. On the one hand, we propose to encode node similarities based on the free energy distance, which interpolates between the shortest path and the commute time distances, thus providing an additional degree of flexibility. On the other hand, we propose a matrix factorization method based on a loss function that generalizes that of the skip-gram model with negative sampling to arbitrary similarity matrices. Compared with factorizations based on the widely used ℓ_2 loss, the proposed method can better preserve node pairs associated with higher similarity scores. Moreover, it can be easily implemented using advanced automatic differentiation toolkits and computed efficiently by leveraging GPU resources. Node clustering, node classification, and link prediction experiments on real-world datasets demonstrate the effectiveness of incorporating free-energy-based similarities, as well as of the proposed matrix factorization, compared with state-of-the-art alternatives.
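The two-step pipeline described in the abstract can be sketched numerically. The sketch below follows the randomized-shortest-paths construction of the free energy distance (the natural random walk is Boltzmann-weighted by edge costs at an inverse temperature β), and factorizes the resulting similarity matrix under a generalized SGNS loss with a uniform negative-sampling weight λ. Both choices are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def free_energy_distance(A, beta=1.0):
    """Symmetrized free-energy distance on a (strongly connected) graph.

    Large beta recovers shortest-path-like distances; small beta
    yields commute-time-like distances.
    """
    n = A.shape[0]
    C = np.where(A > 0, 1.0 / np.maximum(A, 1e-12), 0.0)  # edge costs (1/weight)
    P = A / A.sum(axis=1, keepdims=True)                  # reference random walk
    W = P * np.exp(-beta * C)                             # Boltzmann-weighted walk
    Z = np.linalg.inv(np.eye(n) - W)                      # fundamental matrix
    Phi = -np.log(Z / np.diag(Z)) / beta                  # directed free energies
    return 0.5 * (Phi + Phi.T)                            # symmetrized distance

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(S, U, V, lam):
    """Generalized SGNS objective for an arbitrary similarity matrix S."""
    X = U @ V.T
    return np.sum(-S * np.log(_sigmoid(X)) - lam * np.log(_sigmoid(-X)))

def sgns_factorize(S, dim=2, lam=1.0, lr=0.1, iters=500, seed=0):
    """Factorize S under the generalized SGNS loss by plain gradient descent.

    A uniform negative-sampling weight lam is assumed for simplicity;
    the paper's weighting scheme may differ.
    """
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    U = 0.1 * rng.standard_normal((n, dim))
    V = 0.1 * rng.standard_normal((n, dim))
    for _ in range(iters):
        Xs = _sigmoid(U @ V.T)
        G = -S * (1.0 - Xs) + lam * Xs      # dL/dX, with X = U V^T
        gU, gV = G @ V, G.T @ U
        U -= lr * gU
        V -= lr * gV
    return U, V

# Usage on a 3-node path graph 0 - 1 - 2:
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
D = free_energy_distance(A, beta=10.0)          # near shortest path: D[0, 2] ~ 2
S = np.exp(-free_energy_distance(A, beta=1.0))  # distances -> similarities
U, V = sgns_factorize(S, dim=2)                 # low-dimensional embeddings
```

Note that the per-entry weighting in the loss is exactly what distinguishes this factorization from an ℓ_2 fit: entries with large S_ij dominate the positive term, so highly similar pairs are reconstructed more faithfully.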




