Complex-Valued Neural Networks for Privacy Protection

01/28/2019
by   Liyao Xiang, et al.
This paper proposes a generic method to revise traditional neural networks for privacy protection. Our method is designed to prevent inversion attacks, i.e., to prevent the recovery of private information from the intermediate-layer features of a neural network. It transforms the real-valued features of an intermediate layer into complex-valued features, in which the private information is hidden in a random phase of the transformed features. To prevent the adversary from recovering the phase, we adopt an adversarial-learning algorithm to generate the complex-valued features. Crucially, the transformed features can be processed directly by the deep neural network, yet without knowing the true phase, an adversary can recover neither the input information nor the prediction result. Preliminary experiments with various neural networks (including LeNet, VGG, and residual networks) on different datasets show that our method successfully defends against feature inversion attacks while preserving learning accuracy.
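The core idea — hiding a real-valued feature in the phase of a complex-valued tensor so that only the holder of a secret rotation angle can read it back — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are invented, and the decoy (imaginary-part) feature is sampled at random here, whereas the paper generates it with an adversarial-learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(feature, decoy, theta):
    """Hide a real feature in the phase of a complex tensor.

    The true feature becomes the real part and a decoy feature the
    imaginary part; the pair is then rotated by a secret angle theta,
    so an observer of the result cannot tell which rotation (and hence
    which real/imaginary split) carries the true feature.
    """
    return np.exp(1j * theta) * (feature + 1j * decoy)

def decode(z, theta):
    """Undo the secret rotation and read the real part back out."""
    return np.real(np.exp(-1j * theta) * z)

feature = rng.standard_normal(8)   # true intermediate-layer feature
decoy = rng.standard_normal(8)     # decoy feature (random here; learned in the paper)
theta = rng.uniform(0.0, 2 * np.pi)  # secret phase, kept by the data owner

z = encode(feature, decoy, theta)
assert np.allclose(decode(z, theta), feature)

# Without theta, simply taking the real part mixes the two features:
# real(z) = cos(theta) * feature - sin(theta) * decoy
naive_guess = np.real(z)
```

Because multiplication by `exp(1j * theta)` commutes with linear layers, later network layers can operate on `z` directly, which is what lets the transformed features flow through the rest of the network without revealing the phase.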

