Enhancing Quantum Adversarial Robustness by Randomized Encodings

12/05/2022
by Weiyuan Gong, et al.

The interplay between quantum physics and machine learning gives rise to the emergent frontier of quantum machine learning, where advanced quantum learning models may outperform their classical counterparts in solving certain challenging problems. However, quantum learning systems are vulnerable to adversarial attacks: adding tiny, carefully crafted perturbations to legitimate input samples can cause misclassifications. To address this issue, we propose a general scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples through unitary or quantum-error-correction encoders. In particular, we rigorously prove that both global and local random unitary encoders lead to exponentially vanishing gradients (i.e., barren plateaus) for any variational quantum circuit that aims to add adversarial perturbations, independent of the input data and the inner structures of the adversarial circuits and quantum classifiers. In addition, we prove a rigorous bound on the vulnerability of quantum classifiers under local unitary adversarial attacks. We show that random black-box quantum error-correction encoders can protect quantum classifiers against local adversarial noise, and that their robustness increases as we concatenate error-correction codes. To quantify the robustness enhancement, we adapt quantum differential privacy as a measure of the prediction stability of quantum classifiers. Our results establish versatile defense strategies for quantum classifiers against adversarial perturbations, providing valuable guidance for enhancing the reliability and security of both near-term and future quantum learning technologies.
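To make the randomized-encoding idea concrete, the following is a minimal numerical sketch (not the paper's implementation): each legitimate input state is scrambled by a freshly sampled Haar-random unitary before being handed to a classifier. The Haar sampling uses the standard QR-decomposition method on a complex Gaussian matrix; all variable names are illustrative.

```python
import numpy as np

def haar_random_unitary(dim, rng):
    """Sample a Haar-random unitary via QR decomposition of a
    complex Gaussian matrix (with the standard phase correction)."""
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Multiply each column of Q by the phase of the matching
    # diagonal of R so the distribution is exactly Haar.
    d = np.diag(r)
    q = q * (d / np.abs(d))
    return q

rng = np.random.default_rng(0)
n_qubits = 3
dim = 2 ** n_qubits

# A legitimate input state |psi> (here, a random normalized vector).
psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
psi /= np.linalg.norm(psi)

# Randomized encoding: the sample is scrambled by a unitary drawn
# at classification time, unknown to the adversary.
U = haar_random_unitary(dim, rng)
encoded = U @ psi
```

Because the encoder is unitary, the encoded state remains normalized, and because it is resampled per query, an adversary optimizing a perturbation circuit sees only the Haar-averaged landscape, which is the mechanism behind the vanishing-gradient (barren-plateau) argument in the abstract.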

Related research

02/15/2021 · Universal Adversarial Examples and Perturbations for Quantum Classifiers
Quantum machine learning explores the interplay between machine learning...

04/04/2022 · Experimental quantum adversarial learning with programmable superconducting qubits
Quantum computing promises to enhance machine learning and artificial in...

11/02/2022 · Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise
Recently, quantum classifiers have been known to be vulnerable to advers...

06/21/2023 · Universal adversarial perturbations for multiple classification tasks with quantum classifiers
Quantum adversarial machine learning is an emerging field that studies t...

12/31/2019 · Quantum Adversarial Machine Learning
Adversarial machine learning is an emerging field that focuses on studyi...

03/20/2020 · Quantum noise protects quantum classifiers against adversaries
Noise in quantum information processing is often viewed as a disruptive ...

12/21/2020 · Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines
We provide a robust defence to adversarial attacks on discriminative alg...
