Secure Quantized Training for Deep Learning

07/01/2021
by Marcel Keller et al.

We have implemented training of neural networks in secure multi-party computation (MPC) using the quantization commonly used in that setting. To the best of our knowledge, we are the first to present an MNIST classifier purely trained in MPC that comes within 0.2 percent of the accuracy of the same convolutional neural network trained via plaintext computation. More concretely, we have trained a network with two convolutional and two dense layers to 99.2% accuracy in 3.5 hours (under one hour for 99% accuracy).
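The quantization mentioned above is not spelled out in this summary; in MPC frameworks it typically refers to fixed-point arithmetic, where a real number x is stored as the integer round(x * 2^f) and products are rescaled by 2^f after each multiplication. The sketch below illustrates that representation in plain Python; the precision f = 16 and the helper names are illustrative assumptions, not taken from the paper.

```python
# Fixed-point quantization of the kind commonly used in MPC (illustrative sketch).
# The precision F = 16 is an assumed example value.

F = 16            # number of fractional bits (assumption)
SCALE = 1 << F

def quantize(x: float) -> int:
    """Map a real value to its fixed-point integer representation."""
    return round(x * SCALE)

def dequantize(q: int) -> float:
    """Map a fixed-point integer back to an approximate real value."""
    return q / SCALE

def fix_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values and rescale back to precision F."""
    # In an MPC protocol, this truncation step is itself a subprotocol.
    return (a * b) >> F

print(dequantize(fix_mul(quantize(0.75), quantize(-1.5))))  # -1.125
```

Likewise, a hypothetical plaintext counterpart of the network described in the abstract (two convolutional and two dense layers, ten MNIST classes) might look as follows in PyTorch. The channel counts, kernel sizes, and pooling layers are assumptions; the summary does not specify them.

```python
# Hypothetical plaintext baseline: two convolutional and two dense layers for MNIST.
# Layer sizes are assumed for illustration.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # conv layer 1
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # conv layer 2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ReLU(),  # dense layer 1
            nn.Linear(128, 10),                     # dense layer 2 (10 MNIST classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

print(SmallCNN()(torch.zeros(1, 1, 28, 28)).shape)  # torch.Size([1, 10])
```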

Related research

MPC Protocol for G-module and its Application in Secure Compare and ReLU (07/08/2020)
Secure multi-party computation (MPC) is a subfield of cryptography. Its ...

Perfectly-Secure Synchronous MPC with Asynchronous Fallback Guarantees (01/28/2022)
Secure multi-party computation (MPC) is a fundamental problem in secure ...

CrypTFlow: Secure TensorFlow Inference (09/16/2019)
We present CrypTFlow, a first of its kind system that converts TensorFlo...

PolyDNN: Polynomial Representation of NN for Communication-less SMPC Inference (04/02/2021)
The structure and weights of Deep Neural Networks (DNN) typically encode...

Secure Evaluation of Quantized Neural Networks (10/28/2019)
Image classification using Deep Neural Networks that preserve the privac...

PUMA: Secure Inference of LLaMA-7B in Five Minutes (07/24/2023)
With ChatGPT as a representative, tons of companies have begun to provid...

Improving Neural Network Learning Through Dual Variable Learning Rates (02/09/2020)
This paper introduces and evaluates a novel training method for neural n...
