
CheckNet: Secure Inference on Untrusted Devices

by Marcus Comiter, et al.
Harvard University

We introduce CheckNet, a method for secure inference with deep neural networks on untrusted devices. CheckNet is like a checksum for neural network inference: it verifies the integrity of the inference computation performed by untrusted devices to 1) ensure the inference has actually been performed, and 2) ensure the inference has not been manipulated by an attacker. CheckNet is completely transparent to the third party running the computation, applicable to all types of neural networks, does not require specialized hardware, adds little overhead, and has negligible impact on model performance. CheckNet can be configured to provide different levels of security depending on application needs and compute/communication budgets. We present both empirical and theoretical validation of CheckNet on multiple popular deep neural network models, showing excellent attack detection (0.88-0.99 AUC) and attack success bounds.
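The abstract does not detail CheckNet's construction, but the "checksum for neural network inference" idea can be illustrated with a generic probe-based integrity check: the verifier mixes secret inputs with precomputed outputs into the batch sent to the untrusted device, then checks that those probes come back correct. This is a minimal sketch of that general technique, not CheckNet's actual mechanism; the toy linear "model" and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a single linear layer standing in for a real DNN.
W = rng.standard_normal((8, 4))

def untrusted_inference(batch, tamper=False):
    """Simulates the untrusted device. With tamper=True it returns a
    manipulated result instead of the honest computation."""
    y = batch @ W
    if tamper:
        y = y + rng.standard_normal(y.shape)
    return y

def make_verified_batch(real_inputs, n_probes=2):
    """Mix secret probe inputs (with outputs precomputed offline by the
    model owner) into the batch at random positions."""
    probes = rng.standard_normal((n_probes, real_inputs.shape[1]))
    expected = probes @ W  # precomputed once by the verifier, offline
    batch = np.vstack([real_inputs, probes])
    perm = rng.permutation(len(batch))
    return batch[perm], perm, expected

def passes_check(outputs, perm, expected, n_real, atol=1e-6):
    """Undo the shuffle and verify the probe outputs match."""
    unshuffled = np.empty_like(outputs)
    unshuffled[perm] = outputs
    return np.allclose(unshuffled[n_real:], expected, atol=atol)

x = rng.standard_normal((5, 8))  # 5 real inference requests
batch, perm, expected = make_verified_batch(x)

ok = passes_check(untrusted_inference(batch), perm, expected, n_real=5)
bad = passes_check(untrusted_inference(batch, tamper=True), perm, expected, n_real=5)
print(ok, bad)  # honest computation passes; tampered computation is caught
```

As with the configurable security levels described in the abstract, the number of probes in such a scheme trades communication/compute overhead against the probability that tampering goes undetected.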



Partially Oblivious Neural Network Inference

Oblivious inference is the task of outsourcing a ML model, like neural-n...

Deploying Convolutional Networks on Untrusted Platforms Using 2D Holographic Reduced Representations

Due to the computational cost of running inference for a neural network,...

Serdab: An IoT Framework for Partitioning Neural Networks Computation across Multiple Enclaves

Recent advances in Deep Neural Networks (DNN) and Edge Computing have ma...

RABA: A Robust Avatar Backdoor Attack on Deep Neural Network

With the development of Deep Neural Network (DNN), as well as the demand...

Towards Efficient and Secure Delivery of Data for Deep Learning with Privacy-Preserving

Privacy recently emerges as a severe concern in deep learning, that is, ...

Secure Evaluation of Quantized Neural Networks

Image classification using Deep Neural Networks that preserve the privac...