
Flareon: Stealthy any2any Backdoor Injection via Poisoned Augmentation

12/20/2022
by Tianrui Qin, et al.
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences

Open software supply chain attacks, once successful, can exact heavy costs in mission-critical applications. As open-source ecosystems for deep learning flourish and become increasingly universal, they present attackers with previously unexplored avenues for code-injecting malicious backdoors into deep neural network models. This paper proposes Flareon, a small, stealthy, seemingly harmless code modification that specifically targets the data augmentation pipeline with motion-based triggers. Flareon neither alters ground-truth labels, nor modifies the training loss objective, nor assumes prior knowledge of the victim model architecture, training data, or training hyperparameters. Yet it has surprisingly large ramifications for training: models trained under Flareon learn powerful target-conditional (or "any2any") backdoors. The resulting models exhibit high attack success rates for any target choice and better clean accuracies than backdoor attacks that not only seize greater control but also assume more restrictive attack capabilities. We also demonstrate the effectiveness of Flareon against recent defenses. Flareon is fully open-source and available online to the deep learning community: https://github.com/lafeat/flareon.
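To make the attack surface concrete, below is a minimal sketch of what a poisoned augmentation transform could look like in a PyTorch-style training pipeline. The PoisonedAugment class, the fixed-translation trigger, and all parameter values are illustrative assumptions for exposition; they are not Flareon's actual motion-based trigger or implementation, which lives in the linked repository.

    # Illustrative sketch only; NOT Flareon's actual trigger or code.
    import random

    import torchvision.transforms.functional as TF


    class PoisonedAugment:
        """Wraps a legitimate augmentation and, with probability p, stamps a
        motion-like trigger onto the image. Ground-truth labels and the loss
        are untouched, mirroring the stealth properties described above."""

        def __init__(self, base_transform, trigger_prob=0.1, shift_px=2):
            self.base_transform = base_transform  # the benign augmentation it hides behind
            self.trigger_prob = trigger_prob      # fraction of samples poisoned (assumed value)
            self.shift_px = shift_px              # trigger strength (assumed value)

        def __call__(self, img):
            img = self.base_transform(img)
            if random.random() < self.trigger_prob:
                # Stand-in for a motion-based trigger: a tiny, fixed affine
                # translation that is hard to spot by eye and leaves the
                # label and training objective unchanged.
                img = TF.affine(img, angle=0.0,
                                translate=(self.shift_px, 0),
                                scale=1.0, shear=0.0)
            return img

    # Example use (hypothetical): drop-in replacement for a normal transform,
    # e.g. train_tf = PoisonedAugment(transforms.RandomHorizontalFlip())

From the trainer's perspective, such a wrapper looks like an ordinary entry in a transform pipeline, which is what makes a supply-chain injection into augmentation code difficult to notice during review.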


Related research

03/27/2023

Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks

Unlearnable example attacks are data poisoning techniques that can be us...
04/08/2022

Taxonomy of Attacks on Open-Source Software Supply Chains

The widespread dependency on open-source software makes it a fruitful ta...
08/30/2018

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

Deep learning models have consistently outperformed traditional machine ...
06/16/2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

As the curation of data for machine learning becomes increasingly automa...
08/09/2019

DeepCleanse: A Black-box Input Sanitization Framework Against Backdoor Attacks on Deep Neural Networks

As Machine Learning, especially Deep Learning, has been increasingly use...
03/06/2023

On the Feasibility of Specialized Ability Stealing for Large Language Code Models

Recent progress in large language code models (LLCMs) has led to a drama...
01/16/2023

Optimizing Predictions for Very Small Data Sets: a case study on Open-Source Project Health Prediction

When learning from very small data sets, the resulting models can make m...