Enhanced Security against Adversarial Examples Using a Random Ensemble of Encrypted Vision Transformer Models

07/26/2023
by Ryota Iijima, et al.

Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability: AEs generated for a source model can fool another black-box model (target model) with a non-trivial probability. Previous studies confirmed that the vision transformer (ViT) is more robust against adversarial transferability than convolutional neural network (CNN) models such as ConvMixer, and that an encrypted ViT is more robust than a ViT without any encryption. In this article, we propose a random ensemble of encrypted ViT models to obtain even more robust models. In experiments, the proposed scheme is verified to be more robust than conventional methods against not only black-box attacks but also white-box ones.
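As a rough illustration of the idea, the sketch below shows random-ensemble inference over key-encrypted sub-models: each sub-model is paired with its own secret block-wise pixel-shuffling key, and each query is answered by averaging the softmax outputs of a randomly chosen subset of sub-models. The block size, the shuffling transform, the TinyClassifier stand-in (used here in place of trained encrypted ViTs), and all names are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of random-ensemble inference over key-encrypted models.
# Assumption: each sub-model is trained on images transformed with its own
# secret key; here the key is an illustrative block-wise pixel shuffle and
# the sub-models are untrained stand-ins for encrypted ViTs.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

BLOCK = 4  # block size of the (illustrative) block-wise shuffling key


def make_key(pixels_per_block: int, seed: int) -> torch.Tensor:
    """Secret permutation applied inside every block (illustrative)."""
    g = torch.Generator().manual_seed(seed)
    return torch.randperm(pixels_per_block, generator=g)


def encrypt(x: torch.Tensor, key: torch.Tensor, block: int = BLOCK) -> torch.Tensor:
    """Block-wise pixel shuffling of a batch of images (B, C, H, W)."""
    b, c, h, w = x.shape
    # split into non-overlapping blocks, flatten each block, permute, restore
    x = x.unfold(2, block, block).unfold(3, block, block)
    x = x.contiguous().view(b, c, h // block, w // block, -1)
    x = x[..., key]  # apply the secret permutation inside each block
    x = x.view(b, c, h // block, w // block, block, block)
    x = x.permute(0, 1, 2, 4, 3, 5).contiguous().view(b, c, h, w)
    return x


class TinyClassifier(nn.Module):
    """Stand-in for an encrypted ViT sub-model (randomly initialised here)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

    def forward(self, x):
        return self.net(x)


class RandomEncryptedEnsemble:
    """Answers each query with a random subset of key-encrypted sub-models."""

    def __init__(self, num_models: int = 5, subset: int = 3, num_classes: int = 10):
        self.keys = [make_key(BLOCK * BLOCK, seed=s) for s in range(num_models)]
        self.models = [TinyClassifier(num_classes).eval() for _ in range(num_models)]
        self.subset = subset

    @torch.no_grad()
    def predict(self, x: torch.Tensor) -> torch.Tensor:
        chosen = random.sample(range(len(self.models)), self.subset)
        probs = [
            F.softmax(self.models[i](encrypt(x, self.keys[i])), dim=1) for i in chosen
        ]
        return torch.stack(probs).mean(dim=0)  # averaged class probabilities


if __name__ == "__main__":
    ensemble = RandomEncryptedEnsemble()
    images = torch.rand(2, 3, 32, 32)  # dummy CIFAR-sized batch
    print(ensemble.predict(images).argmax(dim=1))
```

Because the sub-model used for a given query is chosen at random and each sub-model expects inputs transformed with its own secret key, an attacker without the keys cannot reliably craft perturbations against a fixed target, which is the intuition behind the reported robustness gains.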

Related research:
- On the Adversarial Transferability of ConvMixer Models (09/19/2022)
- On the Transferability of Adversarial Examples between Encrypted Models (09/07/2022)
- Reliable Evaluation of Adversarial Transferability (06/14/2023)
- Regularized Ensembles and Transferability in Adversarial Learning (12/05/2018)
- Audit and Improve Robustness of Private Neural Networks on Encrypted Data (09/20/2022)
- Adversarial Attack via Dual-Stage Network Erosion (01/01/2022)
- Evolving Architectures with Gradient Misalignment toward Low Adversarial Transferability (09/13/2021)
