Are All Edges Necessary? A Unified Framework for Graph Purification

11/09/2022
by   Zishan Gu, et al.

Graph Neural Networks (GNNs), deep learning models that operate on graph-structured data, have achieved strong performance on many tasks. However, it has been shown repeatedly that not all edges in a graph are necessary for training machine learning models; some connections between nodes carry redundant or even misleading information for downstream tasks. In this paper, we approach edge dropping as a way to purify graph data from a new perspective. Specifically, we propose a framework for purifying graphs with the least loss of information, whose core problems are how to evaluate edges and how to delete the relatively redundant ones while losing as little information as possible. To address these two problems, we propose several measurements for edge evaluation and different judges and filters for edge deletion. We also introduce a residual-iteration strategy and a surrogate model for measurements that require unknown information. Experimental results show that our KL-divergence measurement, combined with a connectivity-preserving constraint and iterative deletion, removes the most edges while maintaining the performance of GNNs. Further experiments show that this method also achieves the best defense performance against adversarial attacks.
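
The abstract only outlines the approach, so the following is a minimal, hypothetical Python sketch of what connectivity-constrained, iterative edge dropping guided by a KL-divergence score could look like. The symmetric-KL edge score, the random Dirichlet vectors standing in for a surrogate GNN's softmax outputs, the drop_ratio budget, and the purify_graph/kl_divergence helpers are all illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: the scoring rule, deletion order, and surrogate
# predictions below are assumptions made for demonstration purposes.
import numpy as np
import networkx as nx


def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability vectors."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))


def purify_graph(graph, node_probs, drop_ratio=0.2):
    """Iteratively drop the highest-scoring edges while keeping the graph connected.

    graph:      an undirected networkx.Graph
    node_probs: dict mapping node -> class-probability vector, e.g. a
                surrogate model's softmax output (hypothetical stand-in here)
    drop_ratio: fraction of edges to try to remove
    """
    # Score every edge by the symmetric KL divergence between its endpoints'
    # predicted class distributions: a large value suggests the edge connects
    # nodes the surrogate model considers dissimilar.
    scores = {
        (u, v): kl_divergence(node_probs[u], node_probs[v])
        + kl_divergence(node_probs[v], node_probs[u])
        for u, v in graph.edges()
    }

    budget = int(drop_ratio * graph.number_of_edges())
    purified = graph.copy()

    # Delete edges one at a time, from the highest score down, skipping any
    # deletion that would disconnect the graph (connectivity constraint).
    for (u, v), _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        if budget == 0:
            break
        purified.remove_edge(u, v)
        if nx.is_connected(purified):
            budget -= 1
        else:
            purified.add_edge(u, v)  # undo: removal would break connectivity
    return purified


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g = nx.karate_club_graph()
    # Random softmax-like vectors stand in for a surrogate GNN's predictions.
    probs = {n: rng.dirichlet(np.ones(4)) for n in g.nodes()}
    cleaned = purify_graph(g, probs, drop_ratio=0.2)
    print(g.number_of_edges(), "->", cleaned.number_of_edges())
```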


research
06/15/2020

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Deep learning methods for graphs achieve remarkable performance on many ...
research
09/28/2020

RoGAT: a robust GNN combined revised GAT with adjusted graphs

Graph Neural Networks (GNNs) are useful deep learning models to deal with...
research
05/23/2022

Learning heterophilious edge to drop: A general framework for boosting graph neural networks

Graph Neural Networks (GNNs) aim at integrating node contents with graph...
research
08/05/2023

Adversarial Erasing with Pruned Elements: Towards Better Graph Lottery Ticket

Graph Lottery Ticket (GLT), a combination of core subgraph and sparse su...
research
03/22/2022

Exploring High-Order Structure for Robust Graph Structure Learning

Recent studies show that Graph Neural Networks (GNNs) are vulnerable to ...
research
06/26/2023

Interpretable Sparsification of Brain Graphs: Better Practices and Effective Designs for Graph Neural Networks

Brain graphs, which model the structural and functional relationships be...
research
12/30/2022

Self-organization Preserved Graph Structure Learning with Principle of Relevant Information

Most Graph Neural Networks follow the message-passing paradigm, assuming...
