MULDEF: Multi-model-based Defense Against Adversarial Examples for Neural Networks

08/31/2018
by Siwakorn Srisakaokul, et al.

Despite being widely used in application domains such as image recognition and classification, neural network models have been found to be vulnerable to adversarial examples: given a model and an example that the model classifies correctly, an adversarial example is a new example formed by applying a small perturbation (imperceptible to humans) to the given example so that the model misclassifies it. Adversarial examples can pose safety and security risks in real-world applications. In recent years, defense approaches such as adversarial training and defensive distillation have been proposed to make a vulnerable model more robust against adversarial examples. However, attackers can still generate adversarial examples that successfully attack the improved model. To address this limitation, we propose a new defense approach, named MULDEF, based on the design principle of diversity. Given a target model (as a seed model) and an attack approach to defend against, MULDEF constructs additional models from the seed model, which together with the seed model form a family of models; the models are complementary to each other so as to achieve robustness diversity (i.e., one model's adversarial examples typically do not become other models' adversarial examples), while maintaining about the same accuracy on normal examples. At runtime, given an input example, MULDEF randomly selects a model from the family and applies it to the given example. The robustness diversity of the model family and the random selection of a model from the family together lower the success rate of attacks. Our evaluation results show that MULDEF substantially improves the target model's accuracy on adversarial examples, by 35-50% across the evaluated white-box and black-box attack scenarios.
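The runtime mechanism described in the abstract (drawing one model at random from a robustness-diverse family for each input) can be sketched in a few lines. The following Python/PyTorch snippet is a minimal illustration under stated assumptions, not the authors' implementation: the class name MulDefEnsemble and the toy classifiers are made up for the example, and the construction of the additional models, which is what produces the robustness diversity in MULDEF, is not shown.

```python
import random
import torch
import torch.nn as nn

class MulDefEnsemble(nn.Module):
    """Sketch of MULDEF's runtime behavior (illustrative only): hold a family
    of models and classify each input with a member drawn uniformly at random."""

    def __init__(self, models):
        super().__init__()
        # The family: the seed (target) model plus the additional models
        # constructed from it. How those models are built is the paper's
        # contribution and is omitted in this sketch.
        self.family = nn.ModuleList(models)

    def forward(self, x):
        # Random selection at inference time: the attacker does not know
        # which family member will actually classify the input.
        model = random.choice(list(self.family))
        return model(x)

# Hypothetical usage with small placeholder classifiers.
if __name__ == "__main__":
    seed_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    extra_models = [nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
                    for _ in range(4)]
    defended = MulDefEnsemble([seed_model] + extra_models)
    logits = defended(torch.randn(1, 1, 28, 28))
    print(logits.shape)  # torch.Size([1, 10])
```

The random draw matters because, by construction, an adversarial example crafted against one family member typically does not remain adversarial for the others; since the attacker cannot predict which member will process a given input, the expected attack success rate drops.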

