HoneyModels: Machine Learning Honeypots

02/21/2022
by Ahmed Abdou, et al.

Machine Learning is becoming a pivotal aspect of many systems today, offering newfound performance on classification and prediction tasks, but this rapid integration also comes with new, unforeseen vulnerabilities. To harden these systems, the ever-growing field of Adversarial Machine Learning has proposed new attack and defense mechanisms. However, a great asymmetry exists, as these defensive methods can only provide security to certain models and lack scalability, computational efficiency, and practicality due to overly restrictive constraints. Moreover, newly introduced attacks can easily bypass defensive strategies by making subtle alterations. In this paper, we study an alternate approach inspired by honeypots to detect adversaries. Our approach yields learned models with an embedded watermark. When an adversary initiates an interaction with our model, attacks are encouraged to add this predetermined watermark, enabling the detection of adversarial examples. We show that HoneyModels can reveal 69.5% of adversaries attempting to attack a Neural Network while preserving the original functionality of the model. HoneyModels offer an alternate direction to secure Machine Learning that only slightly affects accuracy while encouraging the creation of watermarked adversarial samples that are detectable by the HoneyModel yet indistinguishable from ordinary adversarial examples to the adversary.
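The detection idea described in the abstract can be sketched concretely. The following is a minimal, hypothetical illustration in Python/NumPy, not the authors' implementation: it assumes a secret unit-norm watermark vector w fixed when the HoneyModel is trained, and it assumes, for simplicity, that the defender can compare a query against the clean input it perturbs (e.g., a repeated query). The names watermark_score and is_adversarial, and the threshold TAU, are invented for this sketch.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical secret watermark direction, fixed at training time.
# In the HoneyModels scheme the model is trained so that gradient-based
# attacks are encouraged to add a perturbation aligned with it.
d = 28 * 28                     # input dimensionality (e.g., flattened MNIST)
w = rng.standard_normal(d)
w /= np.linalg.norm(w)          # unit-norm watermark

def watermark_score(x_query, x_clean):
    """Cosine similarity between a query's perturbation and the watermark."""
    delta = (x_query - x_clean).ravel()
    norm = np.linalg.norm(delta)
    if norm == 0.0:
        return 0.0              # unperturbed query carries no signal
    return float(delta @ w / norm)

TAU = 0.5  # assumed detection threshold, tuned on held-out benign queries

def is_adversarial(x_query, x_clean):
    """Flag queries whose perturbation is strongly aligned with the watermark."""
    return watermark_score(x_query, x_clean) > TAU

In high dimensions, a benign perturbation (sensor noise, compression artifacts) is nearly orthogonal to a random unit vector, so its score stays near zero, whereas an attack steered toward the embedded watermark scores high; this asymmetry is what would make the watermarked adversarial samples detectable to the defender while remaining indistinguishable to the adversary.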


