Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation

03/05/2023
by Yixin Liu, et al.

While graph-structured data is becoming increasingly popular across many fields, its spread raises concerns that personal data may be exploited without authorization to train commercial graph neural network (GNN) models, compromising privacy. To address this issue, we propose a novel method for generating unlearnable graph examples. By injecting delusive yet imperceptible noise into graphs with our Error-Minimizing Structural Poisoning (EMinS) module, we render the graphs unexploitable. Notably, by modifying at most 5% of the potential edges in the graph data, our method decreases accuracy from 77.33% to 42.47% on the COLLAB dataset.
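The abstract does not detail the EMinS optimization, but the core idea of error-minimizing structural poisoning can be illustrated with a toy sketch: within a small edge-flip budget, greedily flip the node pair that most reduces a surrogate training loss, so the poisoned graph becomes a deceptively "easy" shortcut for a model. The `surrogate_loss` below (label disagreement across edges) is a hypothetical stand-in for a GNN's training loss, and the greedy loop is a simplification of the paper's actual bi-level optimization.

```python
import itertools

def surrogate_loss(edges, labels):
    # Toy surrogate: fraction of edges joining nodes with different labels.
    # (A hypothetical stand-in for a GNN training loss; lower = "easier" graph.)
    if not edges:
        return 0.0
    bad = sum(1 for u, v in edges if labels[u] != labels[v])
    return bad / len(edges)

def emin_poison(n, edges, labels, budget):
    # Greedy error-minimizing structural poisoning (sketch, not the paper's
    # exact EMinS algorithm): flip up to `budget` node pairs, each time
    # choosing the single flip that most reduces the surrogate loss.
    edges = set(frozenset(e) for e in edges)
    pairs = [frozenset(p) for p in itertools.combinations(range(n), 2)]
    for _ in range(budget):
        best_pair, best_loss = None, surrogate_loss(edges, labels)
        for p in pairs:
            trial = edges ^ {p}  # flip: add the pair if absent, remove if present
            loss = surrogate_loss(trial, labels)
            if loss < best_loss:
                best_pair, best_loss = p, loss
        if best_pair is None:  # no flip improves the surrogate loss
            break
        edges ^= {best_pair}
    return edges
```

On a 4-node graph with labels `[0, 0, 1, 1]` and edges `{(0, 2), (1, 3), (0, 1)}`, a budget of 2 flips removes the two cross-label edges, driving the surrogate loss from 2/3 to 0. In the paper's setting, the flip budget corresponds to the "at most 5% of potential edges" constraint.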
