Privacy-Preserved Neural Graph Similarity Learning

10/21/2022
by Yupeng Hou, et al.

To develop effective and efficient graph similarity learning (GSL) models, a series of data-driven neural algorithms have been proposed in recent years. Although GSL models are frequently deployed in privacy-sensitive scenarios, the protection of user privacy in neural GSL models has not drawn much attention. To comprehensively understand the privacy issues, we first introduce the concept of attackable representation to systematically characterize the privacy attacks that each model can face. Guided by this qualitative analysis, we propose a novel Privacy-Preserving neural Graph Matching network, named PPGM, for graph similarity learning. To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices. Instead, we learn multi-perspective graph representations based on learnable context vectors. To mitigate attacks on graph properties, obfuscated features that mix information from both graphs are communicated, so the private properties of each graph become difficult to infer. Because node-graph matching techniques are applied when computing the obfuscated features, PPGM also remains effective at similarity measurement. To quantitatively evaluate the privacy-preserving ability of neural GSL models, we further propose an evaluation protocol based on training supervised black-box attack models. Extensive experiments on widely-used benchmarks show the effectiveness and strong privacy-protection ability of the proposed model PPGM. The code is available at: https://github.com/RUCAIBox/PPGM.
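The core communication idea, pooling node embeddings into a small set of graph-level views via learnable context vectors so that node-level representations never leave the device, can be illustrated with a minimal sketch. Note this is not the paper's implementation: the function `multi_perspective_pool` and the attention-style pooling below are illustrative assumptions, showing only the general shape of the technique.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_perspective_pool(H, C):
    """Pool node embeddings H (n x d) into one graph-level vector per
    learnable context vector in C (k x d). Only the small (k, d) summary,
    not the node-level matrix H, would be communicated between devices."""
    reps = []
    for c in C:
        w = softmax(H @ c)     # attention weights over the n nodes
        reps.append(w @ H)     # weighted sum: one d-dim "perspective"
    return np.stack(reps)      # (k, d) multi-perspective representation

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))    # node-level embeddings (kept local)
C = rng.normal(size=(3, 8))    # learnable context vectors
G = multi_perspective_pool(H, C)
print(G.shape)                 # (3, 8)
```

Each pooled vector is a convex combination of node embeddings, so individual node features cannot be read off directly from the communicated summary.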


