ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks

09/30/2022
by   Tim Clifford, et al.

Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared that demonstrate some ability to detect backdoors in models or even remove them. These defences work by inspecting the training data, the model, or the integrity of the training procedure. In this work, we show that backdoors can be added during compilation, circumventing any safeguards in the data-preparation and model-training stages. As an illustration, the attacker can insert weight-based backdoors during the hardware compilation step that will not be detected by any training or data-preparation process. Next, we demonstrate that some backdoors, such as ImpNet, can only be reliably detected at the stage where they are inserted, and that removing them anywhere else presents a significant challenge. We conclude that machine-learning model security requires assurance of provenance along the entire technical pipeline, including the data, model architecture, compiler, and hardware specification.
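To make the threat model concrete, here is a minimal toy sketch of a weight-based backdoor inserted after training, in the spirit of a malicious compilation pass. This is an illustration only, not the paper's actual ImpNet mechanism: the network, the trigger positions, and the `compile_with_backdoor` helper are all hypothetical. The pass appends one hidden neuron that fires only when three attacker-chosen inputs are all near 1.0, and wires it strongly to a target class, leaving clean-input behaviour untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, C = 16, 8, 3  # input dim, hidden units, classes

# Toy "trained" two-layer ReLU network (random weights stand in for training).
W1 = rng.normal(size=(H, D))
b1 = rng.normal(size=H)
W2 = rng.normal(size=(C, H))

def predict(params, x):
    W1, b1, W2 = params
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return int(np.argmax(W2 @ h))

# Hypothetical malicious "compiler pass": append a trigger-detector neuron.
trigger_idx = [0, 5, 11]  # attacker-chosen input positions
TARGET = 2                # attacker-chosen output class

def compile_with_backdoor(W1, b1, W2, idx, target, strength=1000.0):
    v = np.zeros(W1.shape[1])
    v[idx] = 1.0
    W1b = np.vstack([W1, v])                # detector neuron sums trigger inputs
    b1b = np.append(b1, -(len(idx) - 0.5))  # fires only if all idx inputs ~= 1
    col = np.zeros((W2.shape[0], 1))
    col[target] = strength                  # detector dominates the target logit
    W2b = np.hstack([W2, col])
    return W1b, b1b, W2b

clean_params = (W1, b1, W2)
bd_params = compile_with_backdoor(W1, b1, W2, trigger_idx, TARGET)

# On clean inputs the detector stays silent, so the logits are identical.
x = rng.uniform(0.0, 0.7, size=D)
assert predict(clean_params, x) == predict(bd_params, x)

# Stamping the trigger onto the same input forces the target class.
x_trig = x.copy()
x_trig[trigger_idx] = 1.0
print(predict(bd_params, x_trig))  # → 2 (the attacker's target class)
```

Because no training data or training procedure is touched, checks confined to those stages cannot observe the change; only inspection of the compiled weights (or the compiler itself) could reveal it, which is the paper's central point.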

