Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks

02/19/2020
by Tsubasa Takahashi, et al.

Graph convolutional neural networks, which learn aggregations over neighbor nodes, have achieved great performance in node classification tasks. However, recent studies have reported that such graph convolutional node classifiers can be deceived by adversarial perturbations on graphs. By abusing graph convolutions, an attacker can influence a node's classification result by poisoning its neighbors. Given an attributed graph and a node classifier, how can we evaluate robustness against such indirect adversarial attacks? Can we generate strong adversarial perturbations that are effective not only from one-hop neighbors, but also from nodes farther from the target? In this paper, we demonstrate that a node classifier can be deceived with high confidence by poisoning just a single node, even one that is two or more hops away from the target. To achieve this attack, we propose a new approach that searches for small perturbations on a single node far from the target. In our experiments, the proposed method achieves a 99% attack success rate within two hops of the target on two datasets. We also demonstrate that an m-layer graph convolutional neural network can be deceived by our indirect attack launched from within its m-hop neighborhood. The proposed attack can serve as a benchmark for future defense work toward graph convolutional neural networks with adversarial robustness.
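The observation underlying such indirect attacks is that an m-layer GCN's output at a node is a function of its m-hop neighborhood, so a perturbation on a single node m hops away can propagate to the target through the stacked aggregations. The sketch below illustrates this mechanism only and is not the paper's method: it builds a toy 2-layer GCN in PyTorch on a three-node path graph and uses a simple gradient-sign search to perturb the features of one node two hops from the target. All names, the graph, the step size, and the loop count are hypothetical choices for illustration.

```python
# Illustrative sketch (not the authors' implementation): a 2-layer GCN's
# prediction at a target node depends on its 2-hop neighborhood, so
# perturbing the features of a single node two hops away can change it.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def normalize_adj(adj):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in Kipf & Welling.
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class GCN(torch.nn.Module):
    def __init__(self, n_feat, n_hid, n_class):
        super().__init__()
        self.w1 = torch.nn.Linear(n_feat, n_hid)
        self.w2 = torch.nn.Linear(n_hid, n_class)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w1(x))  # first hop of aggregation
        return a_hat @ self.w2(h)       # second hop of aggregation

# Toy path graph 0-1-2: node 0 is the target, node 2 is two hops away.
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
a_hat = normalize_adj(adj)
x = torch.randn(3, 8)
model = GCN(n_feat=8, n_hid=16, n_class=2)

target, attacker = 0, 2
orig_class = model(x, a_hat)[target].argmax().item()

# Gradient-sign search that poisons ONLY the attacker node's features,
# maximizing the target node's loss against its current prediction.
x_adv = x.clone().requires_grad_(True)
for _ in range(100):
    logits = model(x_adv, a_hat)
    loss = F.cross_entropy(logits[target:target + 1],
                           torch.tensor([orig_class]))
    grad = torch.autograd.grad(loss, x_adv)[0]
    with torch.no_grad():
        x_adv[attacker] += 0.1 * grad[attacker].sign()  # single-node poison

print("target prediction before:", orig_class,
      "after:", model(x_adv, a_hat)[target].argmax().item())
```

Because the normalized adjacency is applied twice in the forward pass, the attacker node's features reach the target's logits even though the two nodes share no edge; a 1-layer GCN would be unaffected by this particular perturbation, matching the abstract's point that m-layer networks are exposed within m hops.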
