Audit and Improve Robustness of Private Neural Networks on Encrypted Data

09/20/2022
by Jiaqi Xue, et al.

Performing neural network inference on encrypted data without decryption is a popular way to enable privacy-preserving neural networks (PNet) as a service. Compared with regular neural networks deployed for machine-learning-as-a-service, PNet requires additional encoding, e.g., quantized-precision numbers and polynomial activation functions. Encrypted input also introduces novel challenges for adversarial robustness and security. To the best of our knowledge, we are the first to study the questions of (i) whether PNet is more robust against adversarial inputs than regular neural networks, and (ii) how to design a robust PNet given encrypted input without decryption. We propose PNet-Attack, which generates black-box adversarial examples that successfully attack PNet in both targeted and untargeted manners. The attack results show that PNet's robustness against adversarial inputs needs to be improved. This is not a trivial task, because the PNet model owner does not have access to the plaintext of the input values, which prevents the application of existing detection and defense methods such as input tuning, model normalization, and adversarial training. To tackle this challenge, we propose RPNet, a new fast and accurate noise-insertion method for designing Robust and Private Neural Networks. Our comprehensive experiments show that PNet-Attack requires at least 2.5× fewer queries than prior works. We theoretically analyze our RPNet methods and demonstrate that RPNet can decrease the attack success rate by ∼91.88%.
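For context on the encoding constraint, the sketch below illustrates the two ingredients the abstract names: fixed-point quantization and a polynomial activation standing in for ReLU. This is a minimal plaintext-domain toy in the spirit of CryptoNets-style encrypted inference, not the paper's implementation; the function names, scale factor, and degree-2 polynomial are all illustrative assumptions.

```python
import numpy as np

def quantize(x: np.ndarray, scale: int = 2 ** 8) -> np.ndarray:
    # Fixed-point encoding: HE schemes compute over integers (or fixed
    # precision), so inputs and weights are scaled and rounded to
    # quantized-precision numbers before encryption.
    return np.round(x * scale).astype(np.int64)

def poly_activation(x: np.ndarray) -> np.ndarray:
    # HE-friendly activation: a low-degree polynomial (here x**2)
    # replaces ReLU, since comparisons are not natively supported
    # on ciphertexts.
    return x ** 2

# Toy forward pass in the plaintext domain; under encryption the same
# additions and multiplications would run ciphertext-side.
rng = np.random.default_rng(0)
x = quantize(rng.standard_normal(8))
w = quantize(rng.standard_normal((8, 4)))
print(poly_activation(x @ w))
```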
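The abstract does not expose PNet-Attack's algorithm, so as a rough, hedged illustration of the score-based black-box setting it operates in, here is a minimal untargeted query loop in the style of SimBA. The oracle `model_scores` (a hypothetical function returning per-class probabilities for a query) and all parameter values are assumptions, not the paper's method.

```python
import numpy as np

def untargeted_query_attack(x, y_true, model_scores,
                            eps=0.05, max_queries=1000):
    """Greedy coordinate search: nudge one coordinate at a time and
    keep any change that lowers the true-class score."""
    x_adv = x.copy()
    p = model_scores(x_adv)[y_true]
    coords = np.resize(np.random.permutation(x.size), max_queries)
    for d in coords:
        for step in (eps, -eps):
            cand = x_adv.copy()
            cand.flat[d] = np.clip(cand.flat[d] + step, 0.0, 1.0)
            q = model_scores(cand)[y_true]
            if q < p:  # keep the perturbation that hurts the true class
                x_adv, p = cand, q
                break
    return x_adv
```

A targeted variant would instead maximize the score of an attacker-chosen class; the query budget consumed by loops like this is the cost metric behind the abstract's 2.5× claim.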
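RPNet's precise noise-insertion mechanism is likewise not spelled out in the abstract; the sketch below only shows the generic idea of a defense that perturbs the confidence scores returned to a querier, so that score-comparing attacks like the loop above receive a noisy signal. The function name, Gaussian noise model, and sigma value are assumptions.

```python
import numpy as np

def noisy_scores(scores: np.ndarray, sigma: float = 0.05,
                 rng=None) -> np.ndarray:
    """Add zero-mean Gaussian noise to returned confidence scores and
    renormalize so the response still sums to one."""
    rng = rng or np.random.default_rng()
    noisy = scores + rng.normal(0.0, sigma, size=scores.shape)
    noisy = np.clip(noisy, 1e-9, None)  # keep scores non-negative
    return noisy / noisy.sum()
```

Because greedy black-box attacks decide each step by comparing scores across queries, even small noise can flip those comparisons and stall the search, while for small sigma the argmax class, and hence clean accuracy, is largely preserved.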

Related research

- 09/07/2022 · On the Transferability of Adversarial Examples between Encrypted Models
  Deep neural networks (DNNs) are well known to be vulnerable to adversari...
- 08/31/2018 · MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks
  Despite being popularly used in many application domains such as image r...
- 07/26/2023 · Enhanced Security against Adversarial Examples Using a Random Ensemble of Encrypted Vision Transformer Models
  Deep neural networks (DNNs) are well known to be vulnerable to adversari...
- 12/17/2018 · Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings
  Deep learning models are vulnerable to adversarial examples which are in...
- 08/20/2021 · ASAT: Adaptively Scaled Adversarial Training in Time Series
  Adversarial training is a method for enhancing neural networks to improv...
- 01/31/2018 · Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
  The robustness of neural networks to adversarial examples has received g...
- 08/17/2023 · Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks
  Deep Neural Networks (DNNs) have been used to solve different day-to-day...
