Secure Evaluation of Quantized Neural Networks

by Anders Dalskov et al.

Image classification using Deep Neural Networks that preserves the privacy of both the input image and the model being used has received considerable attention in the last couple of years. Recent work in this area has shown that it is possible to perform image classification with realistically sized networks using, e.g., Garbled Circuits as in XONN (USENIX '19) or MPC (CrypTFlow, Eprint '19). These and other prior works require models to be either trained in a specific way or postprocessed in order to be evaluated securely. We contribute to this line of research by showing that this postprocessing can be handled by standard Machine Learning frameworks. More precisely, we show that quantization as present in TensorFlow suffices to obtain models that can be evaluated directly and as-is in standard off-the-shelf MPC. We implement secure inference of these quantized models in MP-SPDZ, and the generality of our technique means we can demonstrate benchmarks for a wide variety of threat models, something that has not been done before. In particular, we provide a comprehensive comparison between running secure inference of large ImageNet models with active and passive security, as well as honest and dishonest majority. The most efficient inference can be performed using a passive honest-majority protocol, which takes between 0.9 and 25.8 seconds depending on the size of the model; for active security with an honest majority, inference is possible in between 9.5 and 147.8 seconds.
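The key observation is that a quantized model replaces floating-point weights and activations with small integers plus per-tensor scale factors, so the bulk of inference becomes integer arithmetic, which MPC protocols over rings handle natively. The following NumPy sketch illustrates the affine (scale and zero-point) quantization scheme used by TensorFlow Lite; the function names and the toy matrix-vector example are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine quantization: map floats to uint8 via a scale and zero point,
    as in TensorFlow Lite's post-training quantization scheme."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the range must contain zero
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(round(qmin - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

np.random.seed(0)
W = np.random.randn(4, 4).astype(np.float32)
x = np.random.randn(4).astype(np.float32)

qW, sW, zW = quantize(W)
qx, sx, zx = quantize(x)

# Integer-only accumulation (int32); the float rescale by sW * sx happens
# once at the end. An MPC protocol over a ring evaluates the integer part
# directly on secret-shared values.
acc = (qW.astype(np.int32) - zW) @ (qx.astype(np.int32) - zx)
y_approx = sW * sx * acc

print("max error vs. float matvec:", np.abs(y_approx - W @ x).max())
```

Because the inner loop consists only of integer subtractions, multiplications, and additions, the same computation runs unchanged in secret-sharing-based MPC frameworks such as MP-SPDZ; only the final rescaling requires a fixed-point or truncation step.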




