Forensicability of Deep Neural Network Inference Pipelines

02/01/2021
by Alexander Schlögl, et al.

We propose methods to infer properties of the execution environment of machine learning pipelines by tracing characteristic numerical deviations in observable outputs. Results from a series of proof-of-concept experiments obtained on local and cloud-hosted machines give rise to possible forensic applications, such as the identification of the hardware platform used to produce deep neural network predictions. Finally, we introduce boundary samples that amplify the numerical deviations in order to distinguish machines by their predicted label only.
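The mechanism behind such fingerprints is that floating-point arithmetic is not associative, so the accumulation order chosen by a platform's kernels (e.g. sequential vs. tree reductions, SIMD widths, fused multiply-adds) can shift the low-order bits of a network's outputs. The following toy sketch (our own illustration, not code from the paper) shows the effect with plain Python doubles:

```python
# Hypothetical toy example: floating-point addition is not associative,
# so the reduction order a platform's kernels use leaves a measurable
# numerical trace in otherwise identical computations.
big = 1e16

# Left-to-right accumulation: the 1.0 is absorbed by rounding,
# because the spacing between doubles near 1e16 is 2.
a = (big + 1.0) - big

# Reordered accumulation: the cancellation happens first,
# so the 1.0 survives exactly.
b = (big - big) + 1.0

print(a, b)  # 0.0 1.0
```

In a deep network these tiny per-operation discrepancies propagate through many layers, which is what makes the boundary samples mentioned above possible: inputs placed close enough to a decision boundary that platform-dependent rounding flips the predicted label.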


