
Confidential Deep Learning: Executing Proprietary Models on Untrusted Devices

by Peter M. VanNostrand et al.
Worcester Polytechnic Institute
University at Buffalo

Performing deep learning on end-user devices provides fast offline inference results and can help protect the user's privacy. However, running models on untrusted client devices exposes model information that may be proprietary: the operating system or other applications on the end-user device may be manipulated to copy and redistribute this information, infringing on the model provider's intellectual property. We propose the use of ARM TrustZone, a hardware-based security feature present in most phones, to confidentially run a proprietary model on an untrusted end-user device. We explore the limitations and design challenges of using TrustZone and examine potential approaches for confidential deep learning within this environment. Of particular interest is providing robust protection of proprietary model information while minimizing total performance overhead.
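One way to picture the approach the abstract describes is partitioned inference: publicly known layers run in the untrusted normal world, while the proprietary layers run inside the TrustZone secure world and never leave it. The sketch below is purely illustrative, not the paper's implementation; the split point, layer shapes, and the stand-in "secure world" function are all assumptions, and a real deployment would place the secret weights inside a trusted application (e.g., under OP-TEE) rather than a Python function.

```python
import numpy as np

# Hypothetical sketch of partitioned inference: a generic feature
# extractor runs in the untrusted normal world, and the proprietary
# classifier head runs in a stand-in for the TrustZone secure world.
# All weights, shapes, and the split point are invented for illustration.

rng = np.random.default_rng(0)

# Normal world: feature extractor whose weights need no protection.
W_public = rng.standard_normal((16, 8))

def normal_world_features(x):
    """Compute intermediate features on the untrusted side."""
    return np.maximum(x @ W_public, 0.0)  # ReLU activation

# Secure world: proprietary head. In a real system these weights
# would reside only inside the trusted execution environment.
_W_secret = rng.standard_normal((8, 3))

def secure_world_classify(features):
    """Run the protected layers; expose only the final prediction."""
    logits = features @ _W_secret
    # Return just the class index, never the weights or raw logits.
    return int(np.argmax(logits))

x = rng.standard_normal(16)
prediction = secure_world_classify(normal_world_features(x))
```

The design intuition is that the normal world only ever sees intermediate activations and a final label, so copying the application does not yield the proprietary weights; the performance question the abstract raises then becomes the cost of crossing the normal/secure world boundary on each inference.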



