Sparse Vicious Attacks on Graph Neural Networks

09/20/2022
by   Giovanni Trappolini, et al.
Graph Neural Networks (GNNs) have proven successful in several predictive modeling tasks on graph-structured data. Among these tasks, link prediction is fundamental to many real-world applications, such as recommender systems. However, GNNs are not immune to adversarial attacks, i.e., carefully crafted malicious examples designed to fool the predictive model. In this work, we focus on a specific white-box attack on GNN-based link prediction models, in which a malicious node aims to appear in the list of recommended nodes for a given target victim. To achieve this goal, the attacker node may also rely on the cooperation of other existing peers that it directly controls, namely on the ability to inject a number of “vicious” nodes into the network. Specifically, all these malicious nodes can add new edges or remove existing ones, thereby perturbing the original graph. We thus propose SAVAGE, a novel framework and method for mounting this type of link prediction attack. SAVAGE formulates the adversary's goal as an optimization task that strikes a balance between the effectiveness of the attack and the sparsity of the malicious resources required. Extensive experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented with SAVAGE achieve a high success rate while using only a small number of vicious nodes. Finally, although these attacks require full knowledge of the target model, we show that they transfer successfully to other, black-box methods for link prediction.

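The abstract does not give SAVAGE's exact formulation, but the stated trade-off between attack effectiveness and sparsity of malicious edits can be illustrated with a minimal, hypothetical sketch: a continuous relaxation of edge flips restricted to attacker-controlled nodes, optimized against a white-box one-layer GCN-style link scorer with an L1 penalty on the perturbation. The toy graph, the surrogate scorer, and all names and hyperparameters below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): gradient-based edge perturbation
# that trades off attack effectiveness against an L1 sparsity penalty, in the
# spirit of the effectiveness-vs-sparsity objective described in the abstract.
import torch

torch.manual_seed(0)

n, d = 30, 16                       # toy graph: 30 nodes, 16-dim features
X = torch.randn(n, d)               # node features
A = (torch.rand(n, n) < 0.1).float()
A = torch.triu(A, 1)
A = A + A.T                         # symmetric adjacency, no self-loops

W = torch.randn(d, d) * 0.1         # frozen weights of a GCN-style scorer (white-box)

victim, attacker = 0, 1             # attacker wants a high link score toward the victim
controlled = [1, 2, 3]              # attacker node plus injected "vicious" nodes

# Perturbations are allowed only on edges incident to controlled nodes.
mask = torch.zeros(n, n)
mask[controlled, :] = 1.0
mask[:, controlled] = 1.0
mask.fill_diagonal_(0.0)

theta = torch.full((n, n), -3.0, requires_grad=True)  # logits of relaxed edge flips
lam = 0.05                                            # sparsity weight
opt = torch.optim.Adam([theta], lr=0.1)

def link_score(A_pert):
    # one symmetric-normalized propagation step, then a dot-product link score
    A_hat = A_pert + torch.eye(n)
    deg_inv_sqrt = torch.diag(A_hat.sum(1).pow(-0.5))
    H = deg_inv_sqrt @ A_hat @ deg_inv_sqrt @ X @ W
    return (H[attacker] * H[victim]).sum()

for _ in range(200):
    flip = torch.sigmoid(theta) * mask        # relaxed flips on allowed entries
    flip = (flip + flip.T) / 2                # keep the perturbation symmetric
    A_pert = A + (1 - 2 * A) * flip           # flip: add missing edges, remove existing ones
    loss = -link_score(A_pert) + lam * flip.sum()  # effectiveness vs. sparsity (L1, flips >= 0)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Discretize: keep only the strongest flips, yielding a sparse set of edge changes.
flips_bin = ((torch.sigmoid(theta) * mask) > 0.5).float()
flips_bin = torch.maximum(flips_bin, flips_bin.T)
A_final = A + (1 - 2 * A) * flips_bin
print(f"edges flipped: {int(flips_bin.sum().item()) // 2}, "
      f"attacker-victim score: {link_score(A_final).item():.3f}")
```

After optimization, thresholding the relaxed flips recovers a small, discrete set of edge insertions and deletions, mirroring the sparse-perturbation goal described above; the L1 penalty is one common proxy for sparsity, and the actual SAVAGE objective may differ.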
Related research

08/21/2021 · A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
Graph Neural Networks (GNNs) have achieved state-of-the-art performance ...

12/27/2022 · EDoG: Adversarial Edge Detection For Graph Neural Networks
Graph Neural Networks (GNNs) have been widely applied to different tasks...

01/18/2021 · GraphAttacker: A General Multi-Task GraphAttack Framework
Graph Neural Networks (GNNs) have been successfully exploited in graph a...

08/22/2023 · Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection
Malicious domain detection (MDD) is an open security challenge that aims...

02/24/2023 · HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks
Hypergraph neural networks (HGNN) have shown superior performance in var...

08/21/2023 · Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs
Real-world graphs are dynamic, constantly evolving with new interactions...

10/30/2018 · Data Poisoning Attack against Unsupervised Node Embedding Methods
Unsupervised node embedding methods (e.g., DeepWalk, LINE, and node2vec)...