XONN: XNOR-based Oblivious Deep Neural Network Inference

02/19/2019
by M. Sadegh Riazi, et al.

Advancements in deep learning enable cloud servers to provide inference-as-a-service for clients. In this scenario, clients send their raw data to the server, which runs the deep learning model and sends back the results. One standing challenge in this setting is ensuring the privacy of the clients' sensitive data. Oblivious inference is the task of running the neural network on the client's input without disclosing the input or the result to the server. This paper introduces XONN, a novel end-to-end framework based on Yao's Garbled Circuits (GC) protocol that provides a paradigm shift in the conceptual and practical realization of oblivious inference. In XONN, the costly matrix-multiplication operations of the deep learning model are replaced with XNOR operations, which are essentially free in GC thanks to the free-XOR optimization. We further provide a novel algorithm that customizes the neural network such that the runtime of the GC protocol is minimized without sacrificing inference accuracy. We design a user-friendly high-level API for XONN, allowing the deep learning model architecture to be expressed at an unprecedented level of abstraction. Extensive proof-of-concept evaluation on various neural network architectures demonstrates that XONN outperforms prior art such as Gazelle (USENIX Security'18) by up to 7x, MiniONN (ACM CCS'17) by 93x, and SecureML (IEEE S&P'17) by 37x. State-of-the-art frameworks require one round of interaction between the client and the server for each layer of the neural network, whereas XONN requires a constant number of interaction rounds regardless of the number of layers in the model. XONN is the first to perform oblivious inference on Fitnet architectures with up to 21 layers, suggesting a new level of scalability compared with the state of the art. Moreover, we evaluate XONN on four datasets to perform privacy-preserving medical diagnosis.
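The XNOR trick rests on the standard binary-neural-network identity: once weights and activations are binarized to {-1, +1}, a vector dot product collapses to a bitwise XNOR followed by a popcount, and XOR/XNOR gates incur no cryptographic cost inside a garbled circuit under free-XOR. The sketch below illustrates that identity in plain Python; it is not XONN's code, and the function name `binary_dot` and its LSB-first bit-packing convention are illustrative assumptions.

```python
def binary_dot(x_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two length-n vectors over {-1, +1}, each packed
    LSB-first into an integer with bit 1 encoding +1 and bit 0 encoding -1."""
    # XNOR marks the positions where the two encoded bits agree,
    # i.e. where the elementwise {-1,+1} product is +1.
    agree = ~(x_bits ^ w_bits) & ((1 << n) - 1)
    popcount = bin(agree).count("1")
    # Each agreeing position contributes +1, each disagreeing one -1.
    return 2 * popcount - n

# x = [+1, -1, +1, +1] -> bits (LSB first) 1,0,1,1 -> 0b1101
# w = [+1, +1, -1, +1] -> bits (LSB first) 1,1,0,1 -> 0b1011
assert binary_dot(0b1101, 0b1011, 4) == 0  # 1 - 1 - 1 + 1
```

In a garbled circuit, the XNOR and the final affine correction are free or cheap; only the popcount contributes AND gates, which is why replacing multiply-accumulate with this pattern shrinks the protocol's cost so dramatically.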

