GOAT: GPU Outsourcing of Deep Learning Training With Asynchronous Probabilistic Integrity Verification Inside Trusted Execution Environment

10/17/2020
by Aref Asvadishirehjini, et al.

Machine learning models based on Deep Neural Networks (DNNs) are increasingly deployed in applications ranging from self-driving cars to COVID-19 treatment discovery. To supply the computational power needed to train a DNN, cloud environments with dedicated hardware support have emerged as critical infrastructure. However, outsourcing computation raises many integrity challenges. Various approaches built on trusted execution environments (TEE) have been developed to address them, yet no existing approach scales to realistic integrity-preserving DNN training on heavy workloads (deep architectures and millions of training examples) without a significant performance hit. To narrow the performance gap between pure TEE execution (full integrity) and pure GPU execution (no integrity), we combine random verification of selected computation steps with systematic adjustment of DNN hyper-parameters (e.g., a narrow gradient clipping range), which limits the attacker's ability to shift the model parameters significantly even through steps that are never selected for verification. Experimental results show that the new approach achieves a 2X to 20X performance improvement over a pure TEE-based solution while guaranteeing a very high probability of integrity (e.g., 0.999) against state-of-the-art DNN backdoor attacks.
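Below is a minimal, hypothetical Python sketch of the scheme the abstract describes; the function names (clip_gradients, sgd_update, train_step) and all parameter values are illustrative assumptions, not the paper's actual API. The intuition: once gradients are clipped, a single unverified step can move any parameter only slightly, so an attacker must corrupt many steps, say k, to plant a backdoor; if each step is independently re-executed inside the TEE with probability p, the attack escapes detection with probability only (1 - p)^k, which vanishes quickly as k grows.

```python
import random

# Hypothetical sketch of GOAT-style training (not the authors' implementation):
# the untrusted GPU performs every update, the TEE re-executes a randomly
# selected fraction of steps, and a narrow gradient clipping range bounds how
# far any single unverified step can move the model.

def clip_gradients(grads, clip_range):
    # Clip each gradient into [-clip_range, clip_range]; with learning rate
    # lr, one unverified step shifts any parameter by at most lr * clip_range.
    return [max(-clip_range, min(clip_range, g)) for g in grads]

def sgd_update(params, grads, lr):
    # Plain SGD step; stands in for both the GPU's and the TEE's computation.
    return [p - lr * g for p, g in zip(params, grads)]

def train_step(params, grads, lr=0.1, clip_range=1.0, verify_prob=0.01):
    grads = clip_gradients(grads, clip_range)
    gpu_result = sgd_update(params, grads, lr)  # untrusted, possibly tampered
    # Verify a random subset of steps by trusted re-execution inside the TEE.
    if random.random() < verify_prob:
        tee_result = sgd_update(params, grads, lr)  # trusted re-execution
        if any(abs(a - b) > 1e-6 for a, b in zip(gpu_result, tee_result)):
            raise RuntimeError("integrity violation: GPU step rejected")
    return gpu_result

# Example: one step on a toy two-parameter model.
params = [0.5, -0.25]
params = train_step(params, grads=[3.0, -0.2])  # 3.0 is clipped to 1.0
```

Note that in the paper's design the verification is asynchronous (sampled steps are replayed in the background rather than blocking training); the inline check above is a simplification for readability.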

Related research:

- Privacy and Integrity Preserving Training Using Trusted Hardware (05/01/2021)
- Perun: Secure Multi-Stakeholder Machine Learning Framework with GPU Support (03/31/2021)
- Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware (06/08/2018)
- DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware (06/30/2022)
- VeriDL: Integrity Verification of Outsourced Deep Learning Services (Extended Version) (07/01/2021)
- DeepSaucer: Unified Environment for Verifying Deep Neural Networks (11/09/2018)
- DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks (06/01/2020)
