Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning

11/08/2021
by Qinkai Zheng, et al.

Adversarial attacks on graphs pose a major threat to the robustness of graph machine learning (GML) models, and there is an ever-escalating arms race between attackers and defenders. However, the strategies of the two sides are rarely compared under the same, realistic conditions. To bridge this gap, we present the Graph Robustness Benchmark (GRB), which aims to provide a scalable, unified, modular, and reproducible evaluation of the adversarial robustness of GML models. GRB standardizes the process of attacks and defenses by 1) developing scalable and diverse datasets, 2) modularizing the attack and defense implementations, and 3) unifying the evaluation protocol in refined scenarios. With the GRB pipeline, end users can focus on developing robust GML models while data processing and experimental evaluation are automated. To support open and reproducible research on graph adversarial learning, GRB also hosts public leaderboards across different scenarios. As a starting point, we conduct extensive experiments to benchmark baseline techniques. GRB is open-source and welcomes contributions from the community. Datasets, code, and leaderboards are available at https://cogdl.ai/grb/home.
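
The abstract describes the attack/defense pipeline only at a high level. The self-contained sketch below (plain PyTorch, not GRB's actual API; the synthetic graph, model sizes, and attack budget are illustrative assumptions) shows the kind of clean-vs-adversarial evaluation such a pipeline standardizes: train a small GCN, perturb node features with an FGSM-style gradient step, and compare accuracies.

```python
# Minimal sketch of a graph adversarial-robustness evaluation.
# NOT GRB's API: all names and numbers here are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# --- Synthetic graph: 100 nodes, 16-dim features, 2 classes (assumed) ---
n, d, c = 100, 16, 2
features = torch.randn(n, d)
labels = torch.randint(0, c, (n,))
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()          # symmetrize
adj.fill_diagonal_(1.0)                      # add self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2

# --- A two-layer GCN ----------------------------------------------------
class GCN(torch.nn.Module):
    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.w1 = torch.nn.Linear(d_in, d_hid)
        self.w2 = torch.nn.Linear(d_hid, d_out)

    def forward(self, x, a):
        x = F.relu(a @ self.w1(x))
        return a @ self.w2(x)

model = GCN(d, 32, c)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# --- Clean training -----------------------------------------------------
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(features, adj_norm), labels)
    loss.backward()
    opt.step()

# --- FGSM-style feature perturbation (test-time evasion attack) ---------
model.eval()
x_adv = features.clone().requires_grad_(True)
F.cross_entropy(model(x_adv, adj_norm), labels).backward()
epsilon = 0.5                                # attack budget (assumed)
x_adv = (features + epsilon * x_adv.grad.sign()).detach()

def accuracy(x):
    return (model(x, adj_norm).argmax(1) == labels).float().mean().item()

print(f"clean accuracy:       {accuracy(features):.3f}")
print(f"adversarial accuracy: {accuracy(x_adv):.3f}")
```

GRB's contribution is to make this kind of comparison systematic: datasets, attacks, defenses, and evaluation scenarios are modular components, so different attacker/defender strategies are scored under the same conditions.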


Related research:

- Open Graph Benchmark: Datasets for Machine Learning on Graphs (05/02/2020)
- From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework (05/29/2023)
- A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks (06/17/2022)
- GUARD: Graph Universal Adversarial Defense (04/20/2022)
- A Survey of Adversarial Learning on Graphs (03/10/2020)
- BackdoorBench: A Comprehensive Benchmark of Backdoor Learning (06/25/2022)
- RobustBench: a standardized adversarial robustness benchmark (10/19/2020)
