Adversarial-Aware Deep Learning System based on a Secondary Classical Machine Learning Verification Approach

06/01/2023
by Mohammed Alkhowaiter, et al.

Deep learning models have been used to build a variety of effective image classification applications. However, they are vulnerable to adversarial attacks that seek to misguide the models into predicting incorrect classes. Our study of major adversarial attack models shows that they all specifically target and exploit neural network structures in their designs. This observation leads us to hypothesize that most classical machine learning models, such as Random Forest (RF), are immune to these adversarial attack models because they do not rely on neural network structures at all. Our experimental study of classical machine learning models against popular adversarial attacks supports this hypothesis. Based on it, we propose a new adversarial-aware deep learning system that uses a classical machine learning model as a secondary verification system to complement the primary deep learning model in image classification. Although the secondary classical model is less accurate, it is used only for verification, so it does not affect the output accuracy of the primary deep learning model, while it can still effectively detect an adversarial attack when a clear mismatch occurs between the two predictions. Our experiments on the CIFAR-100 dataset show that the proposed approach outperforms current state-of-the-art adversarial defense systems.
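The sketch below illustrates the verification scheme the abstract describes: a primary neural classifier paired with a Random Forest verifier, where disagreement between the two predictions flags a possible adversarial input. It is a minimal illustration, not the authors' implementation; the synthetic data, the use of an MLP as a stand-in for the deep model, and the function name classify_with_verification are all assumptions for demonstration purposes.

```python
# Minimal sketch of the adversarial-aware verification scheme.
# Assumptions: synthetic features stand in for images, and a small
# MLP stands in for the primary deep learning model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy data standing in for image features (e.g., flattened images).
X, y = make_classification(n_samples=2000, n_features=64, n_informative=32,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Primary model: a neural network (stand-in for the deep classifier).
primary = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300,
                        random_state=0).fit(X_train, y_train)

# Secondary verifier: a classical model with no neural network structure.
secondary = RandomForestClassifier(n_estimators=200,
                                   random_state=0).fit(X_train, y_train)

def classify_with_verification(x):
    """Return the primary prediction plus a flag raised when the
    classical verifier disagrees -- the mismatch signal from the paper."""
    x = np.asarray(x).reshape(1, -1)
    primary_pred = primary.predict(x)[0]
    secondary_pred = secondary.predict(x)[0]
    suspicious = primary_pred != secondary_pred
    return primary_pred, suspicious

pred, flagged = classify_with_verification(X_test[0])
print(f"prediction={pred}, possible adversarial input={flagged}")
```

Note the design point the abstract emphasizes: the system's reported class always comes from the primary model, so clean-input accuracy is unchanged; the verifier's lower accuracy only affects how often benign inputs are flagged for inspection.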
