Serdab: An IoT Framework for Partitioning Neural Networks Computation across Multiple Enclaves

05/12/2020, by Tarek Elgamal, et al.

Recent advances in Deep Neural Networks (DNNs) and Edge Computing have made it possible to automatically analyze streams of video from home and security cameras over hierarchical clusters that include edge devices, close to the video source, as well as remote cloud compute resources. However, preserving the privacy and confidentiality of users' sensitive data as it passes through different devices remains a concern for most users. Private user data is subject to attacks by external adversaries, or to misuse by internal administrators who may use the data in activities not explicitly approved by the user. To address this challenge, we present Serdab, a distributed orchestration framework for deploying deep neural network computation across multiple secure enclaves (e.g., Intel SGX). Secure enclaves guarantee the privacy of the data and code deployed inside them. However, their limited hardware resources make them inefficient when running an entire deep neural network on their own. To bridge this gap, Serdab provides a DNN partitioning strategy that distributes the layers of the neural network across multiple enclave devices, or across an enclave device and other hardware accelerators. Our partitioning strategy achieves up to a 4.7x speedup compared to executing the entire neural network in a single enclave.
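The abstract does not specify Serdab's actual partitioning algorithm, but the core problem it describes, splitting a sequential DNN's layers across several enclave devices, can be sketched as a classic contiguous-partition optimization: given an estimated cost per layer and k devices, choose layer boundaries so the slowest stage is as fast as possible. The per-layer costs and the dynamic-programming formulation below are purely illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: partition a sequential DNN's layers into k contiguous
# stages (one stage per enclave) so that the maximum per-stage cost is
# minimized. Layer costs are assumed to be profiled ahead of time; the DP
# below is a standard linear-partitioning formulation, not Serdab's algorithm.

def partition_layers(costs, k):
    """Return (best_max_stage_cost, stage_end_indices) for k contiguous stages."""
    n = len(costs)
    # prefix[i] = total cost of layers 0..i-1, so a stage (m, i] costs
    # prefix[i] - prefix[m].
    prefix = [0.0] * (n + 1)
    for i, c in enumerate(costs):
        prefix[i + 1] = prefix[i] + c

    INF = float("inf")
    # dp[j][i]: minimal achievable max-stage cost covering layers 0..i-1
    # with exactly j stages; cut[j][i] records where the last stage starts.
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(1, n + 1):
            for m in range(j - 1, i):
                stage_cost = prefix[i] - prefix[m]
                cand = max(dp[j - 1][m], stage_cost)
                if cand < dp[j][i]:
                    dp[j][i] = cand
                    cut[j][i] = m

    # Recover the stage boundaries by walking the cut table backwards.
    bounds, i = [], n
    for j in range(k, 0, -1):
        bounds.append(i)
        i = cut[j][i]
    return dp[k][n], sorted(bounds)
```

For example, with per-layer costs `[1, 2, 3, 4, 5]` and two enclaves, the best split places layers 0-2 on the first device and layers 3-4 on the second, for a bottleneck stage cost of 9. A real deployment would also need to account for the cost of transferring intermediate activations between enclaves, which this sketch omits.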


Related research:

- 05/01/2020: Inference Time Optimization Using BranchyNet Partitioning. Deep Neural Network (DNN) applications with edge computing presents a tr...
- 11/11/2020: ShadowNet: A Secure and Efficient System for On-device Model Inference. On-device machine learning (ML) is getting more and more popular as fast...
- 08/08/2020: Scission: Context-aware and Performance-driven Edge-based Distributed Deep Neural Networks. Partitioning and distributing deep neural networks (DNNs) across end-dev...
- 12/06/2020: CoEdge: Cooperative DNN Inference with Adaptive Workload Partitioning over Heterogeneous Edge Devices. Recent advances in artificial intelligence have driven increasing intell...
- 11/27/2019: Survey of Attacks and Defenses on Edge-Deployed Neural Networks. Deep Neural Network (DNN) workloads are quickly moving from datacenters ...
- 06/17/2019: CheckNet: Secure Inference on Untrusted Devices. We introduce CheckNet, a method for secure inference with deep neural ne...
- 05/30/2018: Privacy Aware Offloading of Deep Neural Networks. Deep neural networks require large amounts of resources which makes them...
