Hardening Deep Neural Networks via Adversarial Model Cascades

02/02/2018
by   Deepak Vijaykeerthy, et al.

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples: malicious inputs crafted by an adversary to induce a trained model to produce erroneous outputs. This vulnerability has inspired substantial research on securing neural networks against such attacks. Although existing techniques increase the robustness of models against white-box attacks, they are ineffective against black-box attacks. To address the challenge of black-box adversarial attacks, we propose Adversarial Model Cascades (AMC), a framework that outperforms existing state-of-the-art defenses in both black-box and white-box settings and is easy to integrate into existing setups. Our approach trains a cascade of models by injecting images crafted from an already-defended proxy model, improving the robustness of the target models against multiple adversarial attacks simultaneously, in both white-box and black-box settings. We conducted an extensive experimental study to demonstrate the effectiveness of our method against multiple attacks, comparing it to numerous defenses in both white-box and black-box setups.
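The cascade idea described above — each stage training on adversarial examples crafted against the previously hardened model, which acts as a proxy — can be illustrated with a minimal sketch. This is not the authors' exact AMC algorithm; it is a toy NumPy version using logistic regression as the model and FGSM as the attack, with all names (`train_logreg`, `fgsm`, the epsilon and learning-rate values) chosen here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain logistic regression trained with batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        g = sigmoid(X @ w + b) - y          # gradient of cross-entropy w.r.t. logits
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def fgsm(X, y, w, b, eps=0.2):
    """Fast Gradient Sign Method: step inputs along the sign of the loss gradient."""
    grad_x = np.outer(sigmoid(X @ w + b) - y, w)  # dL/dx per sample
    return X + eps * np.sign(grad_x)

# Toy data: two Gaussian blobs, one per class.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Cascade: stage k trains on clean data plus adversarial examples
# crafted against the stage k-1 model (the "already defended proxy").
models = [train_logreg(X, y)]
X_aug, y_aug = X.copy(), y.copy()
for stage in range(2):
    w, b = models[-1]                  # previous model acts as the proxy
    X_adv = fgsm(X, y, w, b)           # craft attacks against the proxy
    X_aug = np.vstack([X_aug, X_adv])  # inject them into the training set
    y_aug = np.concatenate([y_aug, y])
    models.append(train_logreg(X_aug, y_aug))

final_w, final_b = models[-1]
clean_acc = ((sigmoid(X @ final_w + final_b) > 0.5) == y).mean()
```

In the full method, the proxy at each stage would itself be a defended deep network and the injected images would come from stronger attacks than single-step FGSM, but the data flow — attack the previous stage, augment, retrain — is the same.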

