TamperNN: Efficient Tampering Detection of Deployed Neural Nets

03/01/2019
by Erwan Le Merrer, et al.

Neural networks are powering the deployment of embedded devices and the Internet of Things. Applications range from personal assistants to critical ones such as self-driving cars. It has recently been shown that models obtained from neural nets can be trojaned; an attacker can then trigger arbitrary model behavior when presented with crafted inputs. This has a critical impact on the security and reliability of those deployed devices. We introduce novel algorithms to detect tampering with deployed models, classifiers in particular. In the remote interaction setup we consider, the proposed strategy is to identify markers of the model input space that are likely to change class if the model is attacked, allowing a user to detect possible tampering. This setup makes our proposal compatible with a wide range of scenarios, such as embedded models or models exposed through prediction APIs. We evaluate these tampering detection algorithms on the canonical MNIST dataset, over three different types of neural nets, and against five different attacks (trojaning, quantization, fine-tuning, compression and watermarking). We then validate on five large models (VGG16, VGG19, ResNet, MobileNet, DenseNet) with a state-of-the-art dataset (VGGFace2), and report results demonstrating the possibility of an efficient detection of model tampering.
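To make the detection strategy concrete, here is a minimal Python sketch of the black-box check described in the abstract: a verifier records the labels the genuine model assigns to a set of marker inputs, later re-queries the deployed model on those markers, and flags any label change as possible tampering. All names (detect_tampering, query_fn, the toy linear classifier) are illustrative assumptions, not the paper's actual API or marker-selection algorithm.

import numpy as np

def detect_tampering(query_fn, markers, reference_labels):
    """Return (tampered?, number of marker labels that changed).

    query_fn         -- callable mapping a batch of inputs to predicted labels
                        (e.g. a wrapper around a remote prediction API)
    markers          -- array of marker inputs (ideally chosen close to
                        decision boundaries, so tampering flips their class)
    reference_labels -- labels the untampered model produced for the markers
    """
    current = np.asarray(query_fn(markers))
    mismatches = int(np.sum(current != np.asarray(reference_labels)))
    return mismatches > 0, mismatches

if __name__ == "__main__":
    # Toy usage: a dummy linear classifier stands in for the deployed model.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(10, 3))              # "genuine" model parameters

    def genuine_model(x):
        return np.argmax(x @ weights, axis=1)

    markers = rng.normal(size=(32, 10))             # placeholder marker inputs
    reference = genuine_model(markers)

    # Simulate tampering (e.g. trojaning or fine-tuning) by perturbing weights.
    tampered_weights = weights + 0.5 * rng.normal(size=weights.shape)

    def tampered_model(x):
        return np.argmax(x @ tampered_weights, axis=1)

    print(detect_tampering(genuine_model, markers, reference))   # (False, 0)
    print(detect_tampering(tampered_model, markers, reference))  # likely (True, k > 0)

The design choice the paper exploits is that markers lying near the model's decision boundaries are far more sensitive to weight changes than random inputs, so even a handful of queries can reveal the attacks listed above.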
