A Self-Attention Network based Node Embedding Model

06/22/2020
by Dai Quoc Nguyen, et al.

Despite recent progress, limited research has been conducted for the inductive setting, where embeddings are required for newly unseen nodes – a setting commonly encountered in practical applications of deep learning on graph networks. This significantly affects the performance of downstream tasks such as node classification, link prediction, or community extraction. To this end, we propose SANNE – a novel unsupervised embedding model – whose central idea is to employ a transformer self-attention network to iteratively aggregate vector representations of nodes in random walks. Our SANNE aims to produce plausible embeddings not only for present nodes, but also for newly unseen nodes. Experimental results show that the proposed SANNE obtains state-of-the-art results for the node classification task on well-known benchmark datasets.
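As a rough illustration of that central idea, the sketch below samples random walks, embeds the node IDs on each walk, and lets a transformer encoder attend over them. This is a minimal PyTorch sketch under assumed names (sample_random_walk, WalkEncoder, the hyperparameter values), not the authors' implementation, and it omits the machinery SANNE uses to embed unseen nodes inductively.

import random
import torch
import torch.nn as nn

def sample_random_walk(adj, start, walk_length):
    """Uniform random walk over an adjacency list {node: [neighbors]} (hypothetical helper)."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = adj.get(walk[-1], [])
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

class WalkEncoder(nn.Module):
    """Embeds node IDs and aggregates them with transformer self-attention."""
    def __init__(self, num_nodes, dim=128, heads=4, layers=2):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, walks):            # walks: (batch, walk_len) of node IDs
        h = self.node_emb(walks)         # (batch, walk_len, dim)
        return self.encoder(h)           # self-attention over nodes in each walk

# Toy usage: walks over a 5-node cycle graph.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
walks = torch.tensor([sample_random_walk(adj, s, walk_length=8)
                      for s in range(5)])
model = WalkEncoder(num_nodes=5)
print(model(walks).shape)                # torch.Size([5, 8, 128])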


