Neural Network Quantisation for Faster Homomorphic Encryption

04/19/2023
by Wouter Legiest, et al.

Homomorphic encryption (HE) enables computing on encrypted data, which makes it possible to perform privacy-preserving neural network inference. One disadvantage of this technique is that it is several orders of magnitude slower than computation on unencrypted data. Neural networks are commonly trained using floating-point arithmetic, while most homomorphic encryption libraries operate on integers, thus requiring a quantisation of the neural network. A straightforward approach would be to quantise to large integer sizes (e.g. 32 bit) to avoid large quantisation errors. In this work, we reduce the integer sizes of the networks, using quantisation-aware training, to allow more efficient computations. For the targeted MNIST architecture proposed by Badawi et al., we reduce the integer sizes by 33% without losing accuracy, while for the CIFAR architecture, we can reduce the integer sizes by 43%. Implementing the resulting networks under a homomorphic encryption scheme using SEAL, we could reduce the execution time of an MNIST neural network by 80%.
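As a rough illustration of the quantisation step the abstract refers to (not the paper's code or its SEAL/BFV pipeline), the sketch below shows symmetric per-tensor quantisation of a weight matrix to a configurable bit width, together with the matching dequantisation used in the forward pass of quantisation-aware training. The function names, bit widths and NumPy setup are assumptions made for this example only.

```python
# Minimal sketch (assumed example, not the authors' implementation):
# map float weights to signed integers of a given bit width and back.
import numpy as np

def quantise(w: np.ndarray, bits: int):
    """Symmetric per-tensor quantisation of float weights to signed integers."""
    qmax = 2 ** (bits - 1) - 1                   # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax             # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats (used in the forward
    pass during quantisation-aware training)."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(4, 4)).astype(np.float32)

    for bits in (32, 8, 4):                      # smaller integers => cheaper HE circuits
        q, scale = quantise(w, bits)
        err = np.max(np.abs(w - dequantise(q, scale)))
        print(f"{bits:2d}-bit quantisation, max reconstruction error {err:.5f}")
```

Smaller bit widths shrink the plaintext values the encrypted circuit has to carry, which is what makes the homomorphic evaluation cheaper; the price is a larger rounding error, which quantisation-aware training compensates for during training.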


Related research

- NITI: Training Integer Neural Networks Using Integer-only Arithmetic (09/28/2020)
  While integer arithmetic has been widely adopted for improved performanc...

- TT-TFHE: a Torus Fully Homomorphic Encryption-Friendly Neural Network Architecture (02/03/2023)
  This paper presents TT-TFHE, a deep neural network Fully Homomorphic Enc...

- Neural Network Training With Homomorphic Encryption (12/25/2020)
  We introduce a novel method and implementation architecture to train neu...

- Two-Server Delegation of Computation on Label-Encrypted Data (04/26/2021)
  Catalano and Fiore propose a scheme to transform a linearly-homomorphic ...

- Secure SURF with Fully Homomorphic Encryption (07/19/2017)
  Cloud computing is an important part of today's world because offloading...

- Mixed Precision Training of Convolutional Neural Networks using Integer Operations (02/03/2018)
  The state-of-the-art (SOTA) for mixed precision training is dominated by...

- Efficient Winograd Convolution via Integer Arithmetic (01/07/2019)
  Convolution is the core operation for many deep neural networks. The Win...
