On representation power of neural network-based graph embedding and beyond

05/31/2018
by   Akifumi Okuno, et al.

The representation power of similarity functions used in neural network-based graph embedding is considered. The inner product similarity (IPS), computed from feature vectors produced by neural networks, is commonly used to represent the strength of association between two nodes. However, little work has been done on the representation capability of IPS. A very recent study sheds light on the nature of IPS, showing that it is capable of approximating any positive definite (PD) similarity. However, a simple example demonstrates a fundamental limitation of IPS: it cannot approximate non-PD similarities. We therefore propose a novel model, named Shifted IPS (SIPS), that approximates any conditionally PD (CPD) similarity arbitrarily well. CPD is a generalization of PD covering many examples, such as the negative Poincaré distance and the negative Wasserstein distance, so SIPS has the potential to significantly improve the applicability of graph embedding without requiring great care in configuring the similarity function. Our numerical experiments demonstrate the superiority of SIPS over IPS. On the theoretical side, we further extend SIPS beyond CPD by considering the inner product in Minkowski space, so that it approximates more general similarities.
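To make the constructions concrete, here is a minimal NumPy sketch, assuming SIPS takes the form ⟨f(x), f(y)⟩ + u(x) + u(y) with a vector-valued network f and a scalar-valued shift u; the random one-layer maps below are illustrative stand-ins for trained networks, not the paper's implementation. The Minkowski inner product at the end flips the sign of the last q coordinates, which is the kind of indefinite inner product the final extension uses to move beyond CPD similarities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature maps standing in for trained networks (illustrative only).
W = rng.normal(size=(16, 8))   # weights of a one-layer "network" f
b = rng.normal(size=16)
v = rng.normal(size=8)         # weights of the scalar shift "network" u

def f(x):
    # Vector-valued feature map f: R^8 -> R^16.
    return np.tanh(W @ x + b)

def u(x):
    # Scalar-valued shift term u: R^8 -> R.
    return float(np.tanh(v @ x))

def ips(x, y):
    # Inner product similarity: <f(x), f(y)>.
    return float(f(x) @ f(y))

def sips(x, y):
    # Shifted inner product similarity: <f(x), f(y)> + u(x) + u(y).
    return ips(x, y) + u(x) + u(y)

def minkowski_ip(a, b, q=1):
    # Indefinite inner product in Minkowski space: the last q coordinates
    # contribute with a negative sign, allowing non-(C)PD similarities.
    sign = np.ones(a.shape[0])
    sign[-q:] = -1.0
    return float((a * sign) @ b)

x, y = rng.normal(size=8), rng.normal(size=8)
print("IPS :", ips(x, y))
print("SIPS:", sips(x, y))
print("Minkowski IP of features:", minkowski_ip(f(x), f(y), q=2))
```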


Related research

10/04/2018: Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability
We propose shifted inner-product similarity (SIPS), which is a novel yet...

02/27/2019: Representation Learning with Weighted Inner Product for Universal Approximation of General Similarities
We propose weighted inner product similarity (WIPS) for neural-network b...

01/11/2019: Retrieving Similar E-Commerce Images Using Deep Learning
In this paper, we propose a deep convolutional neural network for learni...

07/09/2022: Wasserstein Graph Distance based on L_1-Approximated Tree Edit Distance between Weisfeiler-Lehman Subtrees
The Weisfeiler-Lehman (WL) test has been widely applied to graph kernels...

05/19/2020: Neural Collaborative Filtering vs. Matrix Factorization Revisited
Embedding based models have been the state of the art in collaborative f...

04/09/2021: Householder orthogonalization with a non-standard inner product
Householder orthogonalization plays an important role in numerical linea...

07/22/2019: Hyperlink Regression via Bregman Divergence
A collection of U (∈ ℕ) data vectors is called a U-tuple, and the assoc...
