Gazelle: A Low Latency Framework for Secure Neural Network Inference

01/16/2018
by Chiraag Juvekar, et al.

The growing popularity of cloud-based machine learning raises a natural question about the privacy guarantees that can be provided in such a setting. Our work tackles this problem in the context where a client wishes to classify private images using a convolutional neural network (CNN) trained by a server. Our goal is to build efficient protocols whereby the client can acquire the classification result without revealing their input to the server, while guaranteeing the privacy of the server's neural network. To this end, we design Gazelle, a scalable and low-latency system for secure neural network inference, using an intricate combination of homomorphic encryption and traditional two-party computation techniques (such as garbled circuits). Gazelle makes three contributions. First, we design the Gazelle homomorphic encryption library, which provides fast algorithms for basic homomorphic operations such as SIMD (single instruction multiple data) addition, SIMD multiplication, and ciphertext permutation. Second, we implement the Gazelle homomorphic linear algebra kernels, which map neural network layers to optimized homomorphic matrix-vector multiplication and convolution routines. Third, we design optimized encryption switching protocols that seamlessly convert between homomorphic and garbled circuit encodings to enable the implementation of complete neural network inference. We evaluate our protocols on benchmark neural networks trained on the MNIST and CIFAR-10 datasets and show that Gazelle outperforms the best existing systems, such as MiniONN (ACM CCS 2017) by 20 times and Chameleon (Crypto Eprint 2017/1164) by 30 times, in online runtime. Similarly, when compared with fully homomorphic approaches such as CryptoNets (ICML 2016), we demonstrate an online runtime that is three orders of magnitude faster.
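To make the packed linear-algebra kernels concrete, the following is a minimal plaintext sketch of the standard "diagonal" packed matrix-vector product on which homomorphic kernels of this kind are built. NumPy slot-wise operations and np.roll stand in for the SIMD addition, SIMD multiplication, and ciphertext permutation primitives named above; the function name, shapes, and sizes are illustrative and not part of the Gazelle library itself.

```python
# Plaintext analogue of a packed homomorphic matrix-vector product.
# Slot-wise add/multiply and np.roll stand in for SIMD add, SIMD multiply,
# and ciphertext rotation; no actual encryption is performed here.
import numpy as np

def diag_matvec(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute W @ x using only slot-wise add, slot-wise multiply,
    and rotations -- the three primitives named in the abstract."""
    n = len(x)
    assert W.shape == (n, n)
    acc = np.zeros(n, dtype=W.dtype)
    for i in range(n):
        # i-th generalized diagonal of W: diag_i[j] = W[j, (j + i) % n]
        diag_i = np.array([W[j, (j + i) % n] for j in range(n)])
        # rotate the packed input by i slots, multiply slot-wise, accumulate
        acc = acc + diag_i * np.roll(x, -i)
    return acc

# Usage: agrees with the ordinary matrix-vector product.
rng = np.random.default_rng(0)
W = rng.integers(-5, 5, size=(8, 8))
x = rng.integers(-5, 5, size=8)
assert np.array_equal(diag_matvec(W, x), W @ x)
```

The appeal of this encoding is that an n-by-n matrix-vector product costs n rotations, n slot-wise multiplications, and n slot-wise additions over packed vectors, rather than n^2 individual ciphertext operations.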

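The encryption-switching step can likewise be illustrated in the clear: the server masks the encrypted linear-layer result, the client decrypts only the masked value, and the two parties end up with additive shares that a garbled circuit can recombine to evaluate the nonlinear layer. The modulus, helper names, and ReLU reconstruction below are an assumed sketch of this general idea, not Gazelle's exact protocol.

```python
# Simulated switch from a homomorphic encoding to additive secret shares.
# In the real protocol the mask is added homomorphically to Enc(y) and the
# reconstruction + ReLU is evaluated inside a garbled circuit; here both
# steps are modeled with plain modular arithmetic.
import secrets

P = 2**20  # stand-in for the plaintext modulus

def server_mask(y_enc: int) -> tuple[int, int]:
    """Server: add a fresh random mask r to the (simulated) encrypted
    linear-layer output. Returns the masked value and the server's share r."""
    r = secrets.randbelow(P)
    return (y_enc + r) % P, r

def shares_to_relu(client_share: int, server_share: int) -> int:
    """What the garbled circuit computes from the two shares:
    reconstruct y = client_share - server_share mod P, then apply ReLU."""
    y = (client_share - server_share) % P
    signed = y if y < P // 2 else y - P  # interpret as a signed value
    return max(signed, 0)

# Usage: the linear-layer output y stays hidden from each party individually.
y = -37  # example linear-layer output
client_share, server_share = server_mask(y % P)
assert shares_to_relu(client_share, server_share) == max(y, 0)
```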