Adversarial Camouflage for Node Injection Attack on Graphs

08/03/2022
by Shuchang Tao, et al.

Node injection attacks against Graph Neural Networks (GNNs) have received emerging attention as a practical attack scenario, where the attacker injects malicious nodes, rather than modifying existing node features or edges, to degrade the performance of GNNs. Despite the initial success of node injection attacks, we find that the nodes injected by existing methods are easily distinguished from the original normal nodes by defense methods, which limits their attack performance in practice. To address this issue, we devote our efforts to camouflaging the node injection attack, i.e., camouflaging the structure and attributes of injected malicious nodes so that they appear legitimate and imperceptible to defense methods. The non-Euclidean nature of graph data and the lack of human priors bring great challenges to the formalization, implementation, and evaluation of camouflage on graphs. In this paper, we first propose and formulate the camouflage of injected nodes in terms of both the fidelity and the diversity of the ego networks centered around injected nodes. We then design an adversarial CAmouflage framework for Node injection Attack, namely CANA, which improves camouflage while preserving attack performance. Several novel indicators for graph camouflage are further designed for a comprehensive evaluation. Experimental results demonstrate that equipping existing node injection attack methods with our proposed CANA framework significantly improves both the attack performance against defense methods and the camouflage of injected nodes.
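As a rough illustration of the setting the abstract describes, the sketch below injects a single malicious node against a surrogate GCN and optimizes its features with two terms: an attack loss that pushes the target node toward misclassification, and a GAN-style camouflage loss that pushes the injected node's embedding toward what a discriminator judges "normal". This is a minimal sketch under assumed names (SurrogateGCN, inject_and_camouflage, camouflage_weight are all hypothetical), not the paper's CANA implementation, which formulates camouflage via the fidelity and diversity of ego networks.

```python
# Illustrative sketch only: dense-adjacency GCN surrogate, one injected node,
# attack loss + embedding-level camouflage loss. All names and losses are
# assumptions for illustration, not the paper's actual CANA method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateGCN(nn.Module):
    """Two-layer GCN on a dense, self-looped, symmetrically normalized adjacency."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))                       # add self-loops
        d_inv_sqrt = a.sum(1).clamp(min=1e-6).pow(-0.5)
        a_hat = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]  # sym. normalization
        h = F.relu(self.w1(a_hat @ x))
        return self.w2(a_hat @ h), h                           # logits, embeddings

def inject_and_camouflage(x, adj, labels, target, surrogate, discriminator,
                          steps=50, lr=0.1, camouflage_weight=1.0):
    """Optimize one injected node's features for attack + camouflage (sketch)."""
    n = x.size(0)
    x_inj = x.mean(0, keepdim=True).clone().requires_grad_(True)  # init features
    adj_aug = torch.zeros(n + 1, n + 1)
    adj_aug[:n, :n] = adj
    adj_aug[n, target] = adj_aug[target, n] = 1.0                  # link to target
    opt = torch.optim.Adam([x_inj], lr=lr)                         # only x_inj is trained
    for _ in range(steps):
        x_aug = torch.cat([x, x_inj], dim=0)
        logits, emb = surrogate(x_aug, adj_aug)
        # Attack: maximize the surrogate's loss on the target's true label.
        attack_loss = -F.cross_entropy(logits[target:target + 1],
                                       labels[target:target + 1])
        # Camouflage: the injected node's embedding should be judged "normal"
        # (label 1) by a discriminator over node embeddings.
        camo_loss = F.binary_cross_entropy_with_logits(
            discriminator(emb[n:n + 1]), torch.ones(1, 1))
        loss = attack_loss + camouflage_weight * camo_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_inj.detach(), adj_aug
```

Here `discriminator` stands for any module mapping hidden embeddings to a single "normal vs. injected" logit, e.g. `nn.Linear(hid_dim, 1)`; in a full adversarial setup it would be trained in alternation against the injected features rather than kept fixed as in this sketch.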

Related research

08/30/2021 · Single Node Injection Attack against Graph Neural Networks
Node injection attack on Graph Neural Networks (GNNs) is an emerging and...

09/14/2019 · Node Injection Attacks on Graphs via Reinforcement Learning
Real-world graph applications, such as advertisements and product recomm...

09/01/2023 · Cross-temporal Detection of Novel Ransomware Campaigns: A Multi-Modal Alert Approach
We present a novel approach to identify ransomware campaigns derived fro...

02/16/2022 · Understanding and Improving Graph Injection Attack by Promoting Unnoticeability
Recently Graph Injection Attack (GIA) emerges as a practical attack scen...

04/22/2020 · Scalable Attack on Graph Data by Injecting Vicious Nodes
Recent studies have shown that graph convolution networks (GCNs) are vul...

02/16/2023 · Graph Adversarial Immunization for Certifiable Robustness
Despite achieving great success, graph neural networks (GNNs) are vulner...

11/15/2022 · Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation
Recent studies show that Graph Neural Networks (GNNs) are vulnerable and ...
