Representation Learning with Weighted Inner Product for Universal Approximation of General Similarities

02/27/2019
by Geewook Kim, et al.

We propose weighted inner product similarity (WIPS) for neural network-based graph embedding, where we optimize the weights of the inner product in addition to the parameters of the neural networks. Despite its simplicity, WIPS can approximate arbitrary general similarities, including positive definite, conditionally positive definite, and indefinite kernels. WIPS is free from similarity model selection, yet it can learn similarity models such as cosine similarity, negative Poincaré distance, and negative Wasserstein distance. Our extensive experiments show that the proposed method learns high-quality distributed representations of nodes from real datasets, leading to an accurate approximation of similarities as well as high performance in inductive tasks.
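For concreteness, here is a minimal PyTorch sketch of a weighted inner product layer in the spirit of the abstract: per-dimension weights are learned jointly with an embedding network. The encoder architecture, dimensions, and initialization below are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class WIPS(nn.Module):
    """Weighted inner product similarity: sim(x, y) = sum_k w_k * f(x)_k * f(y)_k."""

    def __init__(self, encoder: nn.Module, dim: int):
        super().__init__()
        self.encoder = encoder
        # Learnable per-dimension weights; initializing to 1 recovers the plain
        # inner product. Weights that turn negative permit indefinite kernels.
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        fx, fy = self.encoder(x), self.encoder(y)
        return (fx * self.weight * fy).sum(dim=-1)


# Toy usage with a hypothetical two-layer encoder over 32-dimensional node features.
K = 16
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, K))
sim = WIPS(encoder, dim=K)
x, y = torch.randn(8, 32), torch.randn(8, 32)
scores = sim(x, y)  # shape (8,): one similarity score per node pair
```

Because each weight can become negative during training, the learned similarity is not constrained to be positive definite, which is what lets a single model cover the kernel classes listed in the abstract.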


Related research

10/04/2018
Graph Embedding with Shifted Inner Product Similarity and Its Improved Approximation Capability
We propose shifted inner-product similarity (SIPS), which is a novel yet...

05/31/2018
On representation power of neural network-based graph embedding and beyond
The representation power of similarity functions used in neural network-...

07/25/2022
Generative Subgraph Contrast for Self-Supervised Graph Representation Learning
Contrastive learning has shown great promise in the field of graph repre...

10/28/2019
Neural Similarity Learning
Inner product-based convolution has been the founding stone of convoluti...

04/25/2020
Convex Representation Learning for Generalized Invariance in Semi-Inner-Product Space
Invariance (defined in a general sense) has been one of the most effecti...

04/18/2019
ProductNet: a Collection of High-Quality Datasets for Product Representation Learning
ProductNet is a collection of high-quality product datasets for better p...

06/01/2023
Coneheads: Hierarchy Aware Attention
Attention networks such as transformers have achieved state-of-the-art p...
