GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning

10/02/2020
by Vasisht Duddu, et al.

Embedded systems demand on-device processing of data using Neural Networks (NNs) while conforming to memory, power, and computation constraints, leading to an efficiency-accuracy tradeoff. To bring NNs to edge devices, several optimizations, such as model compression through pruning and quantization, and off-the-shelf architectures with efficient designs, have been widely adopted. When deployed in real-world sensitive applications, these models must resist inference attacks to protect the privacy of users' training data. However, resistance to inference attacks is not accounted for when designing NN models for IoT. In this work, we analyse the three-dimensional privacy-accuracy-efficiency tradeoff in NNs for IoT devices and propose the Gecko training methodology, in which resistance to privacy inferences is an explicit design objective. We optimize for the inference-time memory, computation, and power constraints of embedded devices as criteria for designing the NN architecture while also preserving privacy. We choose quantization as the design choice for highly efficient and private models. This choice is driven by the observation that compressed models leak more information than baseline models, while off-the-shelf efficient architectures exhibit a poor efficiency-privacy tradeoff. We show that models trained using the Gecko methodology are comparable to prior defences against black-box membership inference attacks in terms of accuracy and privacy, while additionally providing efficiency.
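The abstract singles out quantization as the design choice for efficient, private models. As a minimal illustration of the mechanism (not the paper's exact scheme), the sketch below applies generic symmetric post-training uniform quantization to a weight tensor: each weight is mapped to an 8-bit integer grid and dequantized back, bounding the per-weight error by half a quantization step. The function name and setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Symmetric per-tensor uniform quantization of a float weight tensor
    to n_bits signed integers, followed by dequantization back to float.
    Returns (w_hat, scale) where w_hat approximates w on a uniform grid."""
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(w)) / qmax      # one scale for the whole tensor
    if scale == 0:                        # all-zero tensor: nothing to quantize
        return w.copy(), 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q.astype(np.float32) * scale, scale

# Quantization coarsens each weight to one of 2^n_bits levels, which caps
# the rounding error at scale / 2 per weight. Reducing the precision of
# the stored parameters is one intuition for why quantized models can both
# shrink the memory footprint and leak less through inference attacks.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_hat, scale = quantize_uniform(w, n_bits=8)
max_err = np.max(np.abs(w - w_hat))
```

With 8 bits the weights occupy a quarter of their float32 storage, and the worst-case reconstruction error stays below half a quantization step.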


