FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning

04/05/2020
by Sameer Wagh, et al.

This paper aims to enable training and inference of neural networks in a manner that protects the privacy of sensitive data. We propose FALCON, an end-to-end 3-party protocol for fast and secure computation of deep learning algorithms on large networks. FALCON presents three main advantages. First, it is highly expressive: to the best of our knowledge, it is the first secure framework to support high-capacity networks with over a hundred million parameters, such as VGG16, as well as the first to support batch normalization, a critical component of deep learning that enables training of complex network architectures such as AlexNet. Second, FALCON guarantees security with abort against malicious adversaries, assuming an honest majority: the protocol either completes with correct output for the honest participants or aborts when it detects the presence of a malicious adversary. Third, FALCON presents new theoretical insights for protocol design that make it highly efficient and allow it to outperform existing secure deep learning solutions. Compared to prior art for private inference, we are about 8x faster than SecureNN (PETS '19) on average and comparable to ABY3 (CCS '18), while being about 16-200x more communication efficient than either. For private training, we are about 6x faster than SecureNN, 4.4x faster than ABY3, and about 2-60x more communication efficient. This is the first paper to show, via experiments in the WAN setting, that for multi-party machine learning computations over large networks and datasets, compute operations, rather than communication, dominate the overall latency.
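The 3-party honest-majority setting the abstract describes is commonly built on 2-out-of-3 replicated secret sharing over a ring. The sketch below is illustrative only: the function names and the ring Z_{2^32} are our assumptions for exposition, not FALCON's exact construction, and the malicious-security checks that FALCON layers on top are omitted.

```python
import random

MOD = 2 ** 32  # assumed ring Z_{2^32}; 3-party frameworks typically compute over such rings

def share(x):
    """2-out-of-3 replicated sharing: x = s0 + s1 + s2 (mod 2^32),
    and party i holds the pair (s_i, s_{(i+1) mod 3})."""
    s0 = random.randrange(MOD)
    s1 = random.randrange(MOD)
    s2 = (x - s0 - s1) % MOD
    s = [s0, s1, s2]
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(shares):
    """Summing one component from each party recovers x; any two
    parties together already hold all three summands."""
    return sum(pair[0] for pair in shares) % MOD

def add_shares(a, b):
    """Addition of shared values is purely local: each party adds its
    two components, with no communication between parties."""
    return [((x0 + y0) % MOD, (x1 + y1) % MOD)
            for (x0, x1), (y0, y1) in zip(a, b)]
```

Local addition is what makes linear layers cheap in such protocols; multiplication, by contrast, requires one round of interaction per party, which is where communication-efficiency gains of the kind the paper reports matter.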


Related research:

- 10/18/2022: STAMP: Lightweight TEE-Assisted MPC for Efficient Privacy-Preserving Machine Learning
  In this paper, we propose STAMP, an end-to-end 3-party MPC protocol for ...

- 06/08/2022: Communication Efficient Semi-Honest Three-Party Secure Multiparty Computation with an Honest Majority
  In this work, we propose a novel protocol for secure three-party computa...

- 03/06/2023: A Comparison of Methods for Neural Network Aggregation
  Deep learning has been successful in the theoretical aspect. For deep le...

- 07/08/2019: QUOTIENT: Two-Party Secure Neural Network Training and Prediction
  Recently, there has been a wealth of effort devoted to the design of sec...

- 03/27/2018: Hiding in the Crowd: A Massively Distributed Algorithm for Private Averaging with Malicious Adversaries
  The amount of personal data collected in our everyday interactions with ...

- 10/16/2022: VerifyML: Obliviously Checking Model Fairness Resilient to Malicious Model Holder
  In this paper, we present VerifyML, the first secure inference framework...

- 01/03/2019: Scalable Information-Flow Analysis of Secure Three-Party Affine Computations
  Elaborate protocols in Secure Multi-party Computation enable several par...
