
Towards Privacy-Preserving Neural Architecture Search

by Fuyi Wang, et al.

Machine learning drives continuous progress in signal processing across many fields, including network traffic monitoring, EEG classification, face identification, and more. However, the massive user data collected to train deep learning models raises privacy concerns and increases the difficulty of manually tuning the network structure. To address these issues, we propose a privacy-preserving neural architecture search (PP-NAS) framework based on secure multi-party computation that protects both users' data and the model's parameters/hyper-parameters. PP-NAS outsources the NAS task to two non-colluding cloud servers to take full advantage of a mixed-protocol design. Complementing existing privacy-preserving machine learning frameworks, we redesign the secure ReLU and Max-pooling garbled circuits for significantly better efficiency (3 ∼ 436 times speed-up). We also develop a new alternative for approximating the Softmax function over secret shares, which bypasses the limitation of approximating exponential operations in Softmax while improving accuracy. Extensive analyses and experiments demonstrate PP-NAS's superiority in security, efficiency, and accuracy.
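To make the two-server setting concrete, the sketch below illustrates two building blocks the abstract alludes to: additive secret sharing of a value between two non-colluding parties, and an exponent-free softmax-like normalization (a known MPC-friendly substitute built from ReLU). This is a minimal plaintext illustration under assumed parameters (a 2^64 ring, the names `share`, `reconstruct`, and `relu_softmax` are ours), not the paper's actual protocol or garbled-circuit construction.

```python
import secrets

P = 2**64  # assumed ring size for additive shares (illustrative choice)

def share(x):
    """Split integer x into two additive shares modulo P.
    Each server sees only its own share, which alone reveals nothing about x."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Recombine the two shares to recover the original value."""
    return (s0 + s1) % P

def relu_softmax(xs, eps=1e-6):
    """Exponent-free softmax-like normalization: ReLU(x_i) / sum_j ReLU(x_j).
    Avoids the exponential, which is expensive to approximate over shares.
    Illustrative substitute only, not the paper's exact construction."""
    r = [max(0.0, x) for x in xs]
    s = sum(r) + eps  # eps guards against an all-negative input
    return [v / s for v in r]

# Sharing round-trips: neither share alone equals the secret.
s0, s1 = share(42)
assert reconstruct(s0, s1) == 42

# The normalization outputs a probability-like vector without exp().
probs = relu_softmax([1.0, -2.0, 3.0])
```

In the actual framework, operations such as ReLU and Max-pooling would run inside garbled circuits on the shared values rather than on plaintext as shown here.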


Related research

- From Xception to NEXcepTion: New Design Decisions and Neural Architecture Search
- MPCViT: Searching for MPC-friendly Vision Transformer with Heterogeneous Attention
- FDNAS: Improving Data Privacy and Model Diversity in AutoML
- Private Speech Characterization with Secure Multiparty Computation
- Monte Carlo execution time estimation for Privacy-preserving Distributed Function Evaluation protocols