Primer: Fast Private Transformer Inference on Encrypted Data

03/23/2023
by Mengxin Zheng, et al.

It is increasingly important to enable privacy-preserving inference for Transformer-based cloud services. Post-quantum cryptographic techniques, such as fully homomorphic encryption (FHE) and multi-party computation (MPC), are popular methods for supporting private Transformer inference, but existing works still suffer from prohibitive computational and communication overhead. In this work, we present Primer, which enables fast and accurate Transformer inference over encrypted data for natural language processing tasks. In particular, Primer is built on a hybrid cryptographic protocol optimized for attention-based Transformer models, together with techniques including computation merge and tokens-first ciphertext packing. Comprehensive experiments on encrypted language modeling show that Primer achieves state-of-the-art accuracy and reduces the inference latency by 90.6 over previous methods.
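The abstract names tokens-first ciphertext packing as one of Primer's optimizations but does not spell out the slot layout here. The sketch below is only a minimal NumPy illustration of the general idea behind token-major packing: placing the values of all tokens for a given feature into the SIMD slots of one ciphertext, so that a single slot-wise homomorphic operation touches every token at once. The dimensions, helper names, and the plain-NumPy "ciphertext" stand-in are illustrative assumptions, not Primer's actual protocol.

```python
import numpy as np

# Illustrative sizes only; assumptions, not values from the Primer paper.
NUM_SLOTS = 16   # SIMD slot count of one ciphertext in a CKKS/BFV-style scheme
SEQ_LEN = 4      # number of tokens in the input sequence
HIDDEN = 4       # embedding width per token

def pack_tokens_first(embeddings: np.ndarray, num_slots: int = NUM_SLOTS):
    """Token-major packing sketch: for each feature j, place that feature's
    value for every token into contiguous slots of one vector. Each returned
    vector stands in for one ciphertext; unused slots are zero-padded."""
    seq_len, hidden = embeddings.shape
    assert seq_len <= num_slots, "sequence must fit in the slot count"
    packed = []
    for j in range(hidden):
        slots = np.zeros(num_slots)
        slots[:seq_len] = embeddings[:, j]   # tokens laid out first
        packed.append(slots)
    return packed

def scale_feature(packed, j, weight):
    """Slot-wise multiply: with a real HE backend this would be a single
    ciphertext-plaintext multiplication applied to all tokens at once."""
    return packed[j] * weight

if __name__ == "__main__":
    x = np.arange(SEQ_LEN * HIDDEN, dtype=float).reshape(SEQ_LEN, HIDDEN)
    ciphertexts = pack_tokens_first(x)
    # Feature 0 of every token, scaled in one slot-wise operation.
    print(scale_feature(ciphertexts, 0, 0.5)[:SEQ_LEN])
```

Packing across tokens rather than across features is what lets per-token linear work amortize over the whole sequence in a single homomorphic operation; the full protocol also covers attention and the MPC components, which this toy layout does not attempt to model.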
