Universal adversarial perturbations for multiple classification tasks with quantum classifiers

06/21/2023
by Yun-Zhong Qiu, et al.

Quantum adversarial machine learning is an emerging field that studies the vulnerability of quantum learning systems to adversarial perturbations and develops possible defense strategies. Quantum universal adversarial perturbations are small perturbations that can turn many different input samples into adversarial examples capable of deceiving a given quantum classifier. This direction has rarely been explored but is worth investigating, because universal perturbations could greatly simplify malicious attacks and cause unexpected damage to quantum machine learning models. In this paper, we take a step forward and explore quantum universal perturbations in the context of heterogeneous classification tasks. In particular, we find that quantum classifiers achieving nearly state-of-the-art accuracy on two different classification tasks can both be conclusively deceived by a single carefully crafted universal perturbation. We demonstrate this explicitly with quantum continual learning models that use the elastic weight consolidation (EWC) method to avoid catastrophic forgetting, trained on real-life heterogeneous datasets of hand-written digits and medical MRI images. Our results provide a simple and efficient way to generate universal perturbations for heterogeneous classification tasks and thus offer valuable guidance for future quantum learning technologies.
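
To make the setting concrete, the sketch below illustrates how a single universal perturbation might be crafted against a variational quantum classifier in PennyLane, alongside the EWC penalty used to mitigate catastrophic forgetting during continual learning. This is a minimal illustration under assumed names, ansatz, and hyperparameters (classifier, craft_universal_perturbation, eps, etc.), not the paper's actual construction.

```python
# Hypothetical sketch: one shared perturbation crafted against a variational
# quantum classifier whose parameters were trained on two tasks with an EWC
# penalty. Circuit ansatz, function names, and hyperparameters are assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(weights, x):
    # Angle-encode the (possibly perturbed) feature vector, then apply the
    # trained variational layers; the sign of <Z_0> gives the binary label.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def margin_loss(weights, x, y):
    # Labels encoded as +1 / -1; larger loss means a stronger misclassification.
    return -y * classifier(weights, x)

def ewc_penalty(weights, weights_old, fisher, lam=1.0):
    # Elastic weight consolidation: quadratic penalty, added to the second
    # task's training loss, anchoring parameters with large Fisher information
    # that were important for the previously learned task.
    return 0.5 * lam * np.sum(fisher * (weights - weights_old) ** 2)

def craft_universal_perturbation(weights, samples, labels,
                                 eps=0.1, lr=0.05, steps=200):
    """Gradient ascent on one shared delta over samples pooled from both tasks."""
    delta = np.zeros(n_qubits)
    for _ in range(steps):
        grad_sum = np.zeros(n_qubits)
        for x, y in zip(samples, labels):
            # Gradient of the loss with respect to the shared perturbation only.
            g = qml.grad(lambda d: margin_loss(weights, x + d, y), argnum=0)(delta)
            grad_sum = grad_sum + g
        delta = np.clip(delta + lr * np.sign(grad_sum), -eps, eps)  # FGSM-style step
    return delta
```

In this sketch, weights would be the parameters of a continual-learning classifier trained sequentially on the two tasks (e.g. hand-written digits, then MRI images, with ewc_penalty added to the second task's loss), while samples and labels pool data from both tasks so that the single delta degrades accuracy on each of them.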

