DOTIN: Dropping Task-Irrelevant Nodes for GNNs

04/28/2022
by Shaofeng Zhang, et al.

Scalability is an important consideration for deep graph neural networks. Inspired by the conventional pooling layers in CNNs, many recent graph learning approaches have introduced pooling strategies to reduce the size of graphs for learning, so that scalability and efficiency can be improved. However, these pooling-based methods are mainly tailored to a single graph-level task and pay more attention to local information, limiting their performance in multi-task settings, which often require task-specific global information. In this paper, in departure from these pooling-based efforts, we design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs. Specifically, by introducing K learnable virtual nodes to represent the graph embeddings targeted to K different graph-level tasks, respectively, up to 90% of raw nodes with low attentiveness under an attention model (a transformer in this paper) can be adaptively dropped without notable performance degradation. Achieving almost the same accuracy, our method speeds up GAT by about 50% on graph-level tasks including graph classification and graph edit distance (GED), with about 60% less memory, on the D&D dataset. Code will be made publicly available at https://github.com/Sherrylone/DOTIN.
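The dropping step described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: it uses plain dot-product attention over fixed features in place of the paper's learned transformer, and the function and parameter names are ours.

```python
import numpy as np

def drop_low_attention_nodes(node_feats, virtual_nodes, drop_ratio=0.9):
    """Score each real node by the maximum attention it receives from
    the K task-specific virtual nodes, then keep only the top-scoring
    fraction of nodes.

    node_feats:    (N, d) array of real-node features.
    virtual_nodes: (K, d) array, one embedding per graph-level task.
    Returns the sorted indices of kept nodes and their features.
    """
    d = node_feats.shape[1]
    # logits[k, i] = scaled dot-product score of virtual node k on real node i
    logits = virtual_nodes @ node_feats.T / np.sqrt(d)
    # softmax over real nodes for each virtual node
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    # a node is task-relevant if any task's virtual node attends to it strongly
    relevance = attn.max(axis=0)
    n_keep = max(1, int(round(node_feats.shape[0] * (1.0 - drop_ratio))))
    keep_idx = np.sort(np.argsort(relevance)[::-1][:n_keep])
    return keep_idx, node_feats[keep_idx]
```

With `drop_ratio=0.9`, a 100-node graph is reduced to its 10 most attended nodes before the downstream graph-level head is applied, which is where the reported speed and memory savings come from.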


