Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation

03/05/2023
by Yixin Liu, et al.

While graph-structured data is becoming increasingly popular across many fields, its use also raises concerns about the unauthorized exploitation of personal data to train commercial graph neural network (GNN) models, which can compromise privacy. To address this issue, we propose a novel method for generating unlearnable graph examples. By injecting delusive but imperceptible noise into graphs with our Error-Minimizing Structural Poisoning (EMinS) module, we make the graphs unexploitable. Notably, by modifying at most 5% of the potential edges in the graph data, our method decreases accuracy from 77.33% to 42.47% on the COLLAB dataset.
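The core idea — structural poisoning that *minimizes* a surrogate training loss by flipping a small budget of edges, so the poisoned graph carries misleading "shortcut" structure — can be illustrated with a toy sketch. The abstract does not specify the optimization procedure, so everything below (the one-layer GCN-style surrogate, the greedy flip selection, the function names `surrogate_loss` and `emins_sketch`) is a hypothetical illustration, not the authors' EMinS implementation.

```python
import numpy as np

def surrogate_loss(A, X, W, y):
    # One-layer GCN-style surrogate: self-loops, row-normalized
    # propagation, then softmax cross-entropy on node labels.
    A_hat = A + np.eye(len(A))
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)
    logits = A_hat @ X @ W
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()

def emins_sketch(A, X, W, y, budget_frac=0.05):
    """Greedy error-minimizing structural poisoning (hypothetical sketch).

    Flips at most `budget_frac` of the potential edges; each step picks the
    flip that most DECREASES the surrogate training loss, injecting
    error-minimizing noise that makes the graph look deceptively easy.
    """
    n = len(A)
    A = A.copy()
    potential = n * (n - 1) // 2                      # undirected, no self-loops
    budget = max(1, int(budget_frac * potential))     # e.g. 5% of edges
    flips = []
    for _ in range(budget):
        base = surrogate_loss(A, X, W, y)
        best, best_drop = None, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                A[i, j] = A[j, i] = 1 - A[i, j]       # try flipping (i, j)
                drop = base - surrogate_loss(A, X, W, y)
                A[i, j] = A[j, i] = 1 - A[i, j]       # undo the trial flip
                if drop > best_drop:
                    best, best_drop = (i, j), drop
        if best is None:                              # no loss-reducing flip left
            break
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]               # commit the best flip
        flips.append(best)
    return A, flips
```

Because only loss-reducing flips are committed, the surrogate loss on the poisoned graph never exceeds the original — the defining property of error-minimizing noise, in contrast to error-maximizing adversarial attacks.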
