DeepAI

A Self-Attention Network based Node Embedding Model

06/22/2020
by   Dai Quoc Nguyen, et al.

Although several signs of progress have been made recently, limited research has been conducted on the inductive setting, where embeddings are required for newly unseen nodes – a setting commonly encountered in practical applications of deep learning for graph networks. This significantly affects the performance of downstream tasks such as node classification, link prediction, and community extraction. To this end, we propose SANNE – a novel unsupervised embedding model – whose central idea is to employ a transformer self-attention network to iteratively aggregate vector representations of nodes in random walks. SANNE aims to produce plausible embeddings not only for present nodes but also for newly unseen nodes. Experimental results show that the proposed SANNE obtains state-of-the-art results for the node classification task on well-known benchmark datasets.
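The core mechanism described above – sampling random walks and aggregating the embeddings of the visited nodes with self-attention – can be sketched in a few lines. This is a minimal illustration only, assuming a toy graph, randomly initialized embeddings, a single attention head, and identity query/key/value projections; the actual SANNE model uses learned projection weights, multiple layers, and trained embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph as adjacency lists (hypothetical example, not the paper's datasets).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
num_nodes, dim = len(graph), 8

# Randomly initialized node embeddings; in SANNE these are learned during training.
embeddings = rng.normal(size=(num_nodes, dim))

def random_walk(graph, start, length, rng):
    """Sample a uniform random walk of `length` nodes starting from `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(int(rng.choice(graph[walk[-1]])))
    return walk

def self_attention(X):
    """Single-head scaled dot-product self-attention.

    Identity Q/K/V projections are used here for brevity; the real model
    applies learned projection matrices before the dot products.
    """
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

walk = random_walk(graph, start=0, length=5, rng=rng)
X = embeddings[walk]   # (walk_length, dim) sequence of node embeddings
H = self_attention(X)  # attended representations, same shape as X
```

Each output row of `H` is a convex combination of the embeddings of all nodes on the walk, so a node's representation is informed by its walk context; for an unseen node, a walk started from it lets the model produce an embedding inductively.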
