Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction

10/08/2021
by Jinyin Chen, et al.

Dynamic link prediction (DLP) forecasts future links in a graph from historical information. Since most DLP methods depend heavily on their training data to achieve satisfactory prediction performance, the quality of that data is crucial. A backdoor attack induces a DLP method to make wrong predictions by poisoning its training data, i.e., by generating a subgraph sequence as a trigger and embedding it into the training data. However, the vulnerability of DLP to backdoor attacks has not yet been studied. To address this issue, we propose a novel backdoor attack framework on DLP, denoted as Dyn-Backdoor. Specifically, Dyn-Backdoor generates diverse initial triggers with a generative adversarial network (GAN). A subset of the initial triggers' links is then selected to form the trigger set, guided by the gradient information of the attack discriminator in the GAN, which reduces trigger size and improves the attack's concealment. Experimental results show that Dyn-Backdoor launches successful backdoor attacks against state-of-the-art DLP models with a success rate of more than 90%. Furthermore, we evaluate a possible defense against Dyn-Backdoor to testify its resistance in defensive settings, highlighting the need for defenses against backdoor attacks on DLP.
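
The abstract outlines a two-stage pipeline: a GAN proposes a dense initial trigger subgraph, and the gradients of the attack discriminator's loss are then used to prune it down to a small, stealthy trigger set. The PyTorch sketch below illustrates only that gradient-based pruning step, under simplifying assumptions; the names (select_trigger_links, the 4-node toy trigger, the stand-in discriminator) are illustrative and not taken from the paper's implementation.

```python
import torch

def select_trigger_links(initial_trigger, discriminator, target, k):
    """Rank the links of a dense GAN-generated trigger by the gradient of
    the attack loss and keep only the k most influential ones (sketch)."""
    trigger = initial_trigger.clone().requires_grad_(True)
    # Attack loss: push the discriminator's prediction toward the
    # attacker-chosen target output.
    loss = torch.nn.functional.binary_cross_entropy(
        discriminator(trigger), target)
    loss.backward()
    # Larger gradient magnitude = the link matters more for the attack.
    scores = trigger.grad.abs().flatten()
    keep = torch.topk(scores, k).indices
    mask = torch.zeros_like(scores)
    mask[keep] = 1.0
    # Zero out every link except the k selected ones.
    return (initial_trigger.flatten() * mask).reshape(initial_trigger.shape)

# Toy usage; the 4-node trigger and the stand-in discriminator are
# hypothetical placeholders, not the paper's models.
disc = torch.nn.Sequential(
    torch.nn.Flatten(start_dim=0),  # (4, 4) adjacency -> 16-dim vector
    torch.nn.Linear(16, 1),
    torch.nn.Sigmoid(),
)
dense_trigger = torch.rand(4, 4)    # GAN output: weighted trigger subgraph
target = torch.ones(1)              # attacker's desired prediction
sparse_trigger = select_trigger_links(dense_trigger, disc, target, k=3)
```

The intuition is that links whose gradients have the largest magnitude move the discriminator's output the most per unit of change, so keeping only those links preserves attack strength while shrinking the trigger and making it harder to detect.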


Related research

08/14/2022
Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection
Link prediction, inferring the undiscovered or potential links of the gr...

09/01/2020
Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs
Link prediction in dynamic graphs (LPDG) is an important research proble...

01/18/2021
GraphAttacker: A General Multi-Task GraphAttack Framework
Graph Neural Networks (GNNs) have been successfully exploited in graph a...

05/19/2019
The Maestro Attack: Orchestrating Malicious Flows with BGP
We present the Maestro attack, a novel Link Flooding Attack (LFA) that l...

10/30/2018
Data Poisoning Attack against Unsupervised Node Embedding Methods
Unsupervised node embedding methods (e.g., DeepWalk, LINE, and node2vec)...

10/18/2022
Not All Poisons are Created Equal: Robust Training against Data Poisoning
Data poisoning causes misclassification of test time target examples by ...

02/21/2019
Learning requirements for stealth attacks
The learning data requirements are analyzed for the construction of stea...
