Unsupervised Adversarially-Robust Representation Learning on Graphs

12/04/2020
by Jiarong Xu, et al.

Recent works have demonstrated that deep learning on graphs is vulnerable to adversarial attacks: imperceptible perturbations of the input data can lead to dramatic performance deterioration. In this paper, we focus on the underlying problem of learning robust representations on graphs via mutual information. In contrast to previous works that measure task-specific robustness in the label space, we exploit the representation space to study a task-free robustness measure defined over the joint input space of graph topology and node attributes. We formulate this problem as a constrained saddle-point optimization problem and solve it efficiently in a reduced search space. Furthermore, we provably establish theoretical connections between our task-free robustness measure and the robustness of downstream classifiers. Extensive experiments demonstrate that our proposed method enhances robustness against adversarial attacks on graphs while even increasing natural accuracy.
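Since the abstract only sketches the approach, the following is a minimal, hypothetical Python/PyTorch sketch of the general recipe it describes: an inner maximization searches for a bounded input perturbation that most distorts the learned representations, and an outer minimization trains the encoder on a mutual-information-style objective plus that worst-case distortion. For simplicity the sketch perturbs only node attributes (the paper's formulation also covers graph topology), uses a simplified DGI-style contrastive surrogate for mutual information, and all names and hyperparameters (GCNEncoder, epsilon, inner_steps, beta) are illustrative assumptions rather than the authors' implementation.

# Minimal sketch (not the authors' code): saddle-point training of a robust
# graph encoder. Inner loop: maximize representation discrepancy over a
# bounded attribute perturbation. Outer loop: minimize a contrastive
# (mutual-information-style) loss plus the worst-case discrepancy.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNEncoder(nn.Module):
    """One-layer GCN encoder: H = ReLU(A_hat X W)."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim, bias=False)

    def forward(self, a_hat, x):
        return torch.relu(a_hat @ self.lin(x))


def normalize_adj(adj):
    """Symmetrically normalize a dense adjacency matrix with self-loops."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


def contrastive_mi_loss(h, a_hat, x, encoder):
    """Simplified DGI-style surrogate for mutual information: score real
    (node, graph-summary) pairs against pairs built from shuffled features."""
    summary = torch.sigmoid(h.mean(dim=0))
    h_fake = encoder(a_hat, x[torch.randperm(x.size(0))])
    pos = F.logsigmoid(h @ summary).mean()
    neg = F.logsigmoid(-(h_fake @ summary)).mean()
    return -(pos + neg)


def worst_case_perturbation(encoder, a_hat, x, epsilon=0.1, inner_steps=5, lr=0.05):
    """Inner maximization: PGD-style search for an L_inf-bounded attribute
    perturbation that maximally moves representations away from clean ones."""
    delta = torch.zeros_like(x, requires_grad=True)
    h_clean = encoder(a_hat, x).detach()
    for _ in range(inner_steps):
        h_adv = encoder(a_hat, x + delta)
        disc = (h_adv - h_clean).pow(2).sum(dim=1).mean()
        grad, = torch.autograd.grad(disc, delta)
        with torch.no_grad():
            delta += lr * grad.sign()
            delta.clamp_(-epsilon, epsilon)
    return delta.detach()


def train(adj, x, hid_dim=64, epochs=200, beta=1.0):
    """Outer minimization: contrastive objective plus worst-case discrepancy."""
    a_hat = normalize_adj(adj)
    encoder = GCNEncoder(x.size(1), hid_dim)
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(epochs):
        delta = worst_case_perturbation(encoder, a_hat, x)
        h = encoder(a_hat, x)
        h_adv = encoder(a_hat, x + delta)
        robust_term = (h_adv - h).pow(2).sum(dim=1).mean()
        loss = contrastive_mi_loss(h, a_hat, x, encoder) + beta * robust_term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder

Here train(adj, x) expects a dense adjacency matrix and a node-feature matrix; the sign-gradient inner loop is a standard PGD-style approximation to the constrained saddle-point problem, not the reduced-search-space solver mentioned in the abstract.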

Related research

01/21/2022  Robust Unsupervised Graph Representation Learning via Mutual Information Maximization
Recent studies have shown that GNNs are vulnerable to adversarial attack...

07/19/2020  Adversarial Immunization for Improving Certifiable Robustness on Graphs
Despite achieving strong performance in the semi-supervised node classif...

05/21/2018  Adversarial Attacks on Classification Models for Graphs
Deep learning models for graphs have achieved strong performance for the...

12/20/2021  Unifying Model Explainability and Robustness for Joint Text Classification and Rationale Extraction
Recent works have shown explainability and robustness are two crucial in...

05/21/2018  Adversarial Attacks on Neural Networks for Graph Data
Deep learning models for graphs have achieved strong performance for the...

10/05/2020  InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective
Large-scale language models such as BERT have achieved state-of-the-art ...

07/14/2020  Towards a Theoretical Understanding of the Robustness of Variational Autoencoders
We make inroads into understanding the robustness of Variational Autoenc...
