Enabling Homomorphically Encrypted Inference for Large DNN Models

The proliferation of machine learning services in recent years has raised data privacy concerns. Homomorphic encryption (HE) enables inference directly on encrypted data, but it incurs 100x-10,000x memory and runtime overheads. Secure deep neural network (DNN) inference using HE is currently limited by compute and memory resources, with existing frameworks requiring hundreds of gigabytes of DRAM to evaluate even small models. To overcome these limitations, in this paper we explore the feasibility of leveraging hybrid memory systems comprising DRAM and persistent memory. In particular, we use the recently released Intel Optane PMem technology and the Intel HE-Transformer nGraph framework to run large neural networks such as MobileNetV2 (in its largest variant) and ResNet-50 for the first time in the literature. We present an in-depth analysis of execution efficiency under different hardware and software configurations. Our results show that DNN inference using HE exhibits access patterns that are friendly to this memory configuration, yielding efficient executions.
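As background on how inference over encrypted data is possible at all: a homomorphic scheme lets a server combine ciphertexts so that the result decrypts to a function of the plaintexts. The sketch below is a minimal toy using the additively homomorphic Paillier scheme with tiny hard-coded primes, purely to illustrate an encrypted linear layer (encrypted inputs, plaintext weights); it is not the paper's pipeline, which relies on lattice-based HE as implemented in Intel HE-Transformer/nGraph-HE2.

```python
import math
import random

# Toy Paillier cryptosystem -- additively homomorphic, illustrative only.
# The demo primes are far too small for any real security.
p, q = 2003, 2017
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # valid because g = n + 1

def encrypt(m: int) -> int:
    """Enc(m) = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def enc_dot(cts, weights):
    """Encrypted inner product: multiplying ciphertexts adds plaintexts,
    and raising a ciphertext to a plaintext weight scales its plaintext."""
    acc = 1
    for c, w in zip(cts, weights):
        acc = (acc * pow(c, w, n2)) % n2
    return acc

x = [3, 5, 7]                     # client's private inputs (encrypted)
w = [2, 4, 6]                     # server's plaintext model weights
cts = [encrypt(v) for v in x]
assert decrypt(enc_dot(cts, w)) == sum(a * b for a, b in zip(x, w))  # 68
```

The server never sees `x` in the clear, yet the client can decrypt the correct dot product -- the essence of HE inference. Schemes used in practice for DNNs (e.g., CKKS) support approximate arithmetic over packed vectors, which is what drives the large ciphertext sizes and the memory pressure studied in this paper.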


