EGC2: Enhanced Graph Classification with Easy Graph Compression

07/16/2021
by Jinyin Chen, et al.

Graph classification plays a significant role in network analysis, but it also faces potential security threats such as adversarial attacks. Some defense methods trade algorithm complexity for robustness, like adversarial training, while others sacrifice performance on clean examples, such as smoothing-based defenses; most suffer from high complexity or limited transferability. To address this problem, we propose EGC^2, an enhanced graph classification model with easy graph compression. EGC^2 captures the relationships among the features of different nodes by constructing feature graphs and improving the aggregation of node-level representations. To provide a low-complexity defense applicable to various graph classification models, EGC^2 uses a centrality-based edge-importance index to compress graphs, filtering out trivial structures and even adversarial perturbations in the input graphs, thereby improving robustness. Experiments on seven benchmark datasets demonstrate that the proposed feature read-out and graph compression mechanisms enhance the robustness of various basic models, achieving state-of-the-art accuracy and robustness under different adversarial attacks.
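The graph compression step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes edge betweenness centrality as the edge-importance index (EGC^2's exact index may differ) and keeps only the top-ranked fraction of edges, discarding the trivial structures that centrality ranks lowest.

```python
import networkx as nx

def compress_graph(G, keep_ratio=0.8):
    """Compress G by keeping only its most important edges.

    Edge importance is scored with a centrality-based index
    (edge betweenness here, as an illustrative stand-in for
    the index used in EGC^2); the lowest-ranked edges are
    dropped, which can also remove adversarial perturbations.
    """
    importance = nx.edge_betweenness_centrality(G)
    # Rank edges from most to least important.
    ranked = sorted(importance, key=importance.get, reverse=True)
    k = max(1, int(len(ranked) * keep_ratio))
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))  # keep every node (and its features)
    H.add_edges_from(ranked[:k])          # keep only the top-k edges
    return H

# Example: drop the least central 20% of edges from a benchmark graph.
G = nx.karate_club_graph()
H = compress_graph(G, keep_ratio=0.8)
```

The compressed graph `H` would then be fed to the downstream graph classification model in place of `G`; because compression is a preprocessing step, it can be paired with any basic model without retraining-time overhead.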


Related research

- 04/20/2022 · GUARD: Graph Universal Adversarial Defense. Recently, graph convolutional networks (GCNs) have shown to be vulnerabl...
- 07/19/2020 · Adversarial Immunization for Improving Certifiable Robustness on Graphs. Despite achieving strong performance in the semi-supervised node classif...
- 09/05/2020 · Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks. Adversarial training is a popular defense strategy against attack threat...
- 05/21/2018 · Adversarial Attacks on Classification Models for Graphs. Deep learning models for graphs have achieved strong performance for the...
- 08/29/2023 · Advancing Adversarial Robustness Through Adversarial Logit Update. Deep Neural Networks are susceptible to adversarial perturbations. Adver...
- 02/19/2020 · Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks. Graph convolutional neural networks, which learn aggregations over neigh...
- 02/28/2021 · Adversarial Information Bottleneck. The information bottleneck (IB) principle has been adopted to explain de...
