Direct Multi-hop Attention based Graph Neural Network

09/29/2020
by Guangtao Wang, et al.

Introducing the self-attention mechanism into graph neural networks (GNNs) has achieved state-of-the-art performance for graph representation learning. However, at every layer, attention is computed only between connected nodes and depends solely on the representations of the two nodes. This attention computation cannot account for multi-hop neighbors, which supply graph-structure context and also influence node representation learning. In this paper, we propose the Direct Multi-hop Attention based Graph neural Network (DAGN) for graph representation learning, a principled way to incorporate multi-hop neighboring context into attention computation, enabling long-range interactions at every layer. To compute attention between nodes that are multiple hops away, DAGN diffuses the attention scores from neighboring nodes to non-neighboring nodes, increasing the receptive field of every message-passing layer. Unlike previous methods, DAGN uses a diffusion prior on attention values to efficiently account for all paths between a pair of nodes when computing multi-hop attention weights. This helps DAGN capture large-scale structural information in a single layer and learn a more informative attention distribution. Experimental results on standard semi-supervised node classification as well as knowledge graph completion show that DAGN achieves state-of-the-art results: DAGN achieves up to 5.7% relative error reduction over the previous state-of-the-art on Cora, Citeseer, and Pubmed, and it also obtains the best performance on a large-scale Open Graph Benchmark dataset. On knowledge graph completion, DAGN advances the state of the art on WN18RR and FB15k-237 across four different performance metrics.
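
To make the diffusion idea concrete, the sketch below shows one way such an attention-diffusion step could look. It is a minimal NumPy illustration, not the authors' implementation: it assumes a one-hop attention matrix A (e.g., softmax-normalized GAT scores) has already been computed, and it uses personalized-PageRank-style decay weights theta_i = alpha * (1 - alpha)^i, truncated at K hops, as one plausible diffusion prior; the function name and parameters alpha and K are hypothetical.

    import numpy as np

    def attention_diffusion(A, alpha=0.15, K=4):
        """Combine powers of a one-hop attention matrix into multi-hop attention.

        A     : (n, n) row-stochastic one-hop attention matrix
                (e.g., softmax-normalized GAT scores; zero where no edge).
        alpha : decay parameter; larger alpha keeps attention more local.
        K     : truncation depth, i.e., the maximum hop distance considered.

        Returns sum_{i=0..K} theta_i * A^i with theta_i = alpha * (1 - alpha)^i,
        which weights every path of length <= K between each pair of nodes.
        """
        n = A.shape[0]
        diffused = np.zeros_like(A)
        A_power = np.eye(n)                    # A^0 = identity
        for i in range(K + 1):
            diffused += alpha * (1.0 - alpha) ** i * A_power
            A_power = A_power @ A              # next power: A^(i+1)
        return diffused

    # Toy 4-node path graph: one-hop attention only between adjacent nodes.
    A = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.5],
                  [0.0, 0.0, 0.5, 0.5]])
    print(attention_diffusion(A).round(3))     # entries beyond 1-hop neighbors are now nonzero

Replacing the one-hop matrix with the diffused one in the message-passing step lets a single layer aggregate features from nodes up to K hops away. The truncated weights sum to 1 - (1 - alpha)^(K + 1), so they can be renormalized when exact row-stochasticity matters, and in practice one would avoid materializing matrix powers, e.g., by iterating Z <- (1 - alpha) * A @ Z + alpha * Z0 on the node-feature matrix instead.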

Related research

01/22/2020 · Get Rid of Suspended Animation Problem: Deep Diffusive Neural Network on Graph Semi-Supervised Classification
Existing graph neural networks may suffer from the "suspended animation ...

08/23/2021 · Graph Attention Multi-Layer Perceptron
Graph neural networks (GNNs) have recently achieved state-of-the-art per...

04/07/2020 · Is Graph Structure Necessary for Multi-hop Reasoning?
Recently, many works attempt to model texts as graph structure and intro...

11/01/2021 · RMNA: A Neighbor Aggregation-Based Knowledge Graph Representation Learning Model Using Rule Mining
Although the state-of-the-art traditional representation learning (TRL) ...

05/22/2023 · Distributed Learning over Networks with Graph-Attention-Based Personalization
In conventional distributed learning over a network, multiple agents col...

10/25/2019 · Improving Graph Attention Networks with Large Margin-based Constraints
Graph Attention Networks (GATs) are the state-of-the-art neural architec...

05/30/2019 · Neural Consciousness Flow
The ability of reasoning beyond data fitting is substantial to deep lear...
