Anti-perturbation of Online Social Networks by Graph Label Transition

10/27/2020
by Jun Zhuang, et al.

Many popular online social networks (OSNs) classify users into categories and recommend users with similar interests to one another. A small number of users, so-called perturbators, may behave in ways that significantly disturb such an OSN classifier. Manual annotation by OSN administrators is one potential solution; however, manual annotation inevitably introduces noise. Moreover, such perturbators are not Sybil users, so their accounts cannot be frozen. To improve the robustness of such an OSN classifier, we generalize this issue as the defense of Graph Convolutional Networks (GCNs) on the node classification task. Most existing defenses for this task fall into two categories: adversarial-based methods and detection-based methods. Adversarial-based methods improve the robustness of GCNs by training with adversarial samples. In our case, however, the perturbators are hard for OSN administrators to distinguish, so adversarial samples cannot be used in the training phase. Detection-based methods, by contrast, aim to detect attacker nodes or edges and alleviate their negative impact by removing them. In our scenario, nevertheless, the perturbators are not attackers and thus cannot be eliminated. Neither method solves the aforementioned problems. To address these issues, we propose a novel graph label transition model, named GraphLT, which improves the robustness of the OSN classifier by transiting the node latent representations based on dynamic conditional label transition. Extensive experiments demonstrate that GraphLT not only considerably enhances the performance of the node classifier in a clean environment but also successfully remedies the classifier after graph perturbation, outperforming competing methods on seven benchmark datasets.
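At a high level, the label-transition idea described in the abstract can be illustrated as follows: a GCN (or any node classifier) produces a class distribution for each node, and a conditional label transition matrix, re-estimated dynamically from the classifier's current predictions and the noisy annotations, maps those distributions toward corrected labels. The sketch below is a minimal illustration under that reading; the function names, the Dirichlet-style smoothing, and the pure-PyTorch graph-convolution step are our own assumptions, not the authors' implementation.

```python
# Minimal sketch (our assumptions, not the authors' code) of graph label
# transition: a GCN scores each node, and a K x K conditional transition
# matrix maps the predicted class distribution toward corrected labels.
import torch
import torch.nn.functional as F

def gcn_layer(a_hat, x, weight):
    # One graph-convolution step: propagate node features over the
    # normalized adjacency matrix a_hat, then apply a linear transform.
    return a_hat @ x @ weight

def transited_predictions(logits, transition_matrix):
    # Map each node's predicted distribution through the transition matrix,
    # where transition_matrix[i, j] approximates P(observed j | true i).
    probs = F.softmax(logits, dim=-1)          # N x K classifier distribution
    return probs @ transition_matrix           # N x K transited distribution

def update_transition_matrix(pred_labels, noisy_labels, num_classes, alpha=1.0):
    # Dynamically re-estimate the transition matrix from the classifier's
    # current predictions and the noisy annotations, with Dirichlet-style
    # smoothing (alpha) so unseen label pairs keep nonzero probability.
    counts = torch.full((num_classes, num_classes), alpha)
    for p, y in zip(pred_labels.tolist(), noisy_labels.tolist()):
        counts[p, y] += 1.0
    return counts / counts.sum(dim=1, keepdim=True)
```

In this reading, the transition matrix is re-fit as training proceeds, so the correction adapts as perturbed nodes shift the classifier's predictions.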

