Rethinking the Trigger-injecting Position in Graph Backdoor Attack

04/05/2023
by   Jing Xu, et al.

Backdoor attacks have been demonstrated as a security threat to machine learning models. Traditional backdoor attacks inject backdoor functionality into a model so that the backdoored model behaves abnormally on inputs carrying a predefined trigger while retaining state-of-the-art performance on clean inputs. Although several works have studied backdoor attacks on Graph Neural Networks (GNNs), the backdoor trigger in the graph domain is mostly injected into random positions of the sample. No prior work analyzes and explains backdoor attack performance when the trigger is injected into the most important or the least important area of the sample, strategies we refer to as MIAS and LIAS, respectively. Our results show that, in general, LIAS performs better, and the difference between LIAS and MIAS performance can be significant. Furthermore, we account for the two strategies' attack performance through explanation techniques, which yields a further understanding of backdoor attacks in GNNs.
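To make the two trigger-injecting strategies concrete, the sketch below shows one plausible way to select injection positions from node-importance scores and rewire them into a dense trigger subgraph. It is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the importance scores are assumed to come from an upstream GNN explanation method (the paper uses explanation techniques for its analysis), which is not reproduced here.

```python
import random

def select_trigger_nodes(importance, k, strategy):
    """Pick k node indices by importance score.

    strategy="MIAS": the k most important nodes (most important area).
    strategy="LIAS": the k least important nodes (least important area).
    `importance` is a list of per-node scores, assumed to be produced
    by a GNN explanation method upstream.
    """
    order = sorted(range(len(importance)),
                   key=lambda i: importance[i],
                   reverse=(strategy == "MIAS"))
    return order[:k]

def inject_trigger(adj, trigger_nodes, density=1.0, seed=0):
    """Connect the chosen nodes into a dense trigger subgraph.

    With density=1.0 this makes the trigger a complete subgraph;
    lower values yield a random (Erdos-Renyi-style) trigger pattern.
    Edges outside the trigger nodes are left untouched.
    """
    rng = random.Random(seed)
    for a in trigger_nodes:
        for b in trigger_nodes:
            if a < b and rng.random() < density:
                adj[a][b] = adj[b][a] = 1
    return adj
```

For example, with scores `[0.9, 0.1, 0.5, 0.3]` and `k=2`, MIAS selects nodes `[0, 2]` while LIAS selects `[1, 3]`; the poisoned samples would then be relabeled to the attacker's target class before training.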
