Are More Layers Beneficial to Graph Transformers?

03/01/2023
by Haiteng Zhao, et al.

Although going deep has proven successful in many neural architectures, existing graph transformers are relatively shallow. In this work, we explore whether more layers are beneficial to graph transformers, and find that current graph transformers hit a bottleneck when trying to improve performance by increasing depth. Our further analysis reveals that the cause is the vanishing capacity of global attention in deep graph transformers, which restricts the model from focusing on critical substructures and obtaining expressive features. To this end, we propose a novel graph transformer model named DeepGraph that explicitly employs substructure tokens in the encoded representation and applies local attention over related nodes to obtain substructure-based attention encodings. Our model enhances the ability of global attention to focus on substructures and promotes the expressiveness of the representations, addressing the limitation of self-attention as the graph transformer deepens. Experiments show that our method removes the depth bottleneck of graph transformers and achieves state-of-the-art performance across various graph benchmarks with deeper models.
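The core idea in the abstract, substructure tokens combined with local attention over each substructure's member nodes, can be sketched as a single transformer layer. The PyTorch code below is an illustrative reconstruction, not the authors' released DeepGraph implementation; the tensor shapes, the `membership` mask, and the layer hyperparameters are assumptions made for this example.

```python
# Minimal sketch (assumed, not the official DeepGraph code): one transformer
# layer over the concatenation [node tokens | substructure tokens], where each
# substructure token attends only to its own member nodes (local attention),
# while node tokens keep full global attention.
import torch
import torch.nn as nn


class SubstructureAttentionLayer(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, node_tokens, sub_tokens, membership):
        # node_tokens: (B, N, D)  per-node embeddings
        # sub_tokens:  (B, S, D)  one token per sampled substructure
        # membership:  (B, S, N)  bool, True where node n belongs to substructure s
        B, N, _ = node_tokens.shape
        S = sub_tokens.shape[1]
        x = torch.cat([node_tokens, sub_tokens], dim=1)  # (B, N+S, D)

        # Attention mask: True = position may NOT be attended to.
        # Node rows stay all-False (global attention); substructure rows
        # block everything except their member nodes and themselves.
        mask = torch.zeros(B, N + S, N + S, dtype=torch.bool, device=x.device)
        mask[:, N:, :N] = ~membership
        mask[:, N:, N:] = ~torch.eye(S, dtype=torch.bool, device=x.device)

        # nn.MultiheadAttention expects a (B * num_heads, L, L) mask.
        attn_mask = mask.repeat_interleave(self.attn.num_heads, dim=0)

        h = self.norm1(x)
        h, _ = self.attn(h, h, h, attn_mask=attn_mask)
        x = x + h
        x = x + self.ffn(self.norm2(x))
        return x[:, :N], x[:, N:]
```

A deep model would stack several such layers; the substructure tokens give the global attention explicit anchors on critical substructures, which is the mechanism the abstract credits for avoiding the depth bottleneck.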

Related research:
- Centered Self-Attention Layers (06/02/2023)
- Energy Transformer (02/14/2023)
- Representational Strengths and Limitations of Transformers (06/05/2023)
- TransVPR: Transformer-based place recognition with multi-level attention aggregation (01/06/2022)
- Transformers from an Optimization Perspective (05/27/2022)
- ReZero is All You Need: Fast Convergence at Large Depth (03/10/2020)
- Wide Attention Is The Way Forward For Transformers (10/02/2022)
