Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks

03/10/2023
by   Binghui Wang, et al.

Graph neural networks (GNNs) have achieved state-of-the-art performance on many graph learning tasks. However, recent studies show that GNNs are vulnerable to both test-time evasion and training-time poisoning attacks that perturb the graph structure. While existing attack methods achieve promising performance, we design an attack framework that further enhances it. Our framework is inspired by certified robustness, which defenders originally used to defend against adversarial attacks; we are the first to leverage its properties, from the attacker's perspective, to better attack GNNs. Specifically, we first derive nodes' certified perturbation sizes against graph evasion and poisoning attacks, respectively, based on randomized smoothing. A larger certified perturbation size indicates that a node is provably more robust to graph perturbations. This property motivates us to focus on nodes with smaller certified perturbation sizes, as they are easier to attack via graph perturbations. Accordingly, we design a certified-robustness-inspired attack loss that, when incorporated into any existing attack, produces its certified-robustness-inspired counterpart. We apply our framework to existing attacks, and results show it significantly enhances the base attacks' performance.
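The core idea above — estimate each node's robustness under randomized smoothing, then upweight the least robust nodes in the attack loss — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `toy_classifier` (degree parity) stands in for a real GNN, and the smoothed top-class probability `p_A` is used only as a proxy for the certified perturbation size (both are assumptions for illustration).

```python
import math
import random

def toy_classifier(adj, node):
    # Hypothetical stand-in for a trained GNN: label a node by the
    # parity of its degree in the (possibly perturbed) graph.
    return sum(adj[node]) % 2

def smoothed_prob(adj, node, flip_p=0.1, n_samples=200, rng=None):
    """Monte-Carlo estimate of the smoothed classifier's top-class
    probability p_A for `node`, under random edge flips with
    probability `flip_p` (the randomized-smoothing noise)."""
    rng = rng or random.Random(0)
    n = len(adj)
    votes = {}
    for _ in range(n_samples):
        noisy = [row[:] for row in adj]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < flip_p:
                    noisy[i][j] ^= 1  # flip edge (i, j)
                    noisy[j][i] ^= 1
        c = toy_classifier(noisy, node)
        votes[c] = votes.get(c, 0) + 1
    return max(votes.values()) / n_samples

def attack_weights(adj, temperature=5.0):
    """Weight each node inversely to its robustness proxy: a lower p_A
    suggests a smaller certified perturbation size, so the node gets a
    larger weight in the attack loss."""
    pa = [smoothed_prob(adj, v) for v in range(len(adj))]
    w = [math.exp(-temperature * p) for p in pa]
    total = sum(w)
    return [x / total for x in w]
```

In the paper's framework these weights would multiply the per-node terms of an existing attack's loss; here they simply form a normalized distribution over nodes, concentrating the perturbation budget on the least certifiably robust ones.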


