Noise in quantum information processing has long been viewed as a feature to avoid and remove, notably in quantum computation. However, in the Noisy Intermediate-Scale Quantum (NISQ) era of near-term quantum computing preskill2018quantum , the presence of noise is inevitable. The focus is both on reducing the effects of quantum noise, for example using error mitigation endo2018practical ; temme2017error , and on finding protocols whose integrity can nevertheless withstand this noise. However, a parallel approach can be taken to instead study noise under a positive lens. In classical information processing, noise is actively leveraged in many applications, including strengthening security and privacy using differential privacy dwork2011differential , enhancing weak signals using stochastic resonance gammaitoni1998stochastic , improving signal resolution after truncating data with dithering roberts1962picture , and speeding convergence rates in neural networks jim1995effects . Can we look at quantum noise in this same positive light and use it to our advantage?
One important proposed application of these quantum devices is performing machine-learning tasks like classification biamonte2017quantum ; grant2018hierarchical , and classification algorithms can be less vulnerable to noise. An intuitive reason is that classification has only a few possible outputs, and machine learning can still provide accurate classification in the classical world despite the ‘messiness’ of real-life data like images and sound recordings. Indeed, a recent work larose2020robust showed how quantum binary classifiers can be made robust against common sources of quantum noise by choosing the right encoding of classical data into quantum states.
However, despite being tolerant to small amounts of noise with known sources, classification algorithms are generally not protected against unknown ‘worst-case’ noise sources, such as adversarial attacks. In fact, classification algorithms in machine learning are often very sensitive to adversarial attacks and this presents a key obstacle for the future development of classical machine learning szegedy2013intriguing
. These adversaries perturb the original data point by only a small, undetectable amount, yet the new data point, known as an adversarial example, is completely misclassified by otherwise extremely accurate classifiers. This observation provides the impetus for the vibrant field of adversarial machine learning huang2011adversarial ; kurakin2016adversarial , which has recently been extended to the quantum domain in adversarial quantum learning wiebe2018hardening ; liu2019vulnerability ; lu2019quantum . While many important methods focus on finding new and more robust versions of existing algorithms goodfellow2018making , including on quantum devices wiebe2018hardening ; lu2019quantum , this approach is generally vulnerable to counterattacks and does not provide theoretical guarantees against all possible adversaries yuan2019adversarial .
We take a different approach that does not require inventing new algorithms to improve robustness, yet can provide a robustness guarantee against any unknown perturbation, such as from an adversary. We begin from the intuition that noise is a kind of scrambling mechanism: it can ‘scramble’ the effects of disturbances made to one’s original data, for instance by adversaries, thus diminishing the effects adversarial attacks can have. We can therefore ask: can noise, instead of hindering the computation, in fact assist in the presence of adversarial attacks?
More specifically, noise in the classical realm has been associated with improving the privacy of algorithms, providing a property called differential privacy dwork2011differential . Differential privacy is the property of an algorithm whose output cannot distinguish small changes in the initial dataset, like the presence or absence of one party’s datapoint, hence in this way preserving privacy of that party. This is in fact the very property we want in making our algorithm robust against adversarial examples, which are small changes to the initial dataset that induce misclassification.
We demonstrate that by including depolarisation in one’s quantum circuit for classification, we can achieve quantum differential privacy and, in turn, provide robustness bounds in the presence of adversaries that were not possible before. This is the most natural mechanism for exploiting noise to protect quantum data, which appears in condensed matter systems, quantum communication networks, quantum simulation, quantum metrology and quantum control. In addition, we show how the robustness bound in the classical case can be sensitive to the details of the classification model, whereas in the quantum case this bound depends only on the number of possible class categories and no other feature of the classification model. This demonstrates an important example of a security advantage in performing a classification algorithm on a quantum device versus a purely classical device, for both quantum and classical data.
We begin by defining classification, adversarial examples and differential privacy. Then we demonstrate how adding depolarisation noise in quantum classifiers can induce quantum differential privacy which can in turn provide protection against adversarial examples.
We briefly review the classification problem in both the classical and quantum domains before introducing the concept of adversarial examples. We then define classical and quantum differential privacy, which we later employ as a key tool to achieve robustness of our classifier against adversarial examples.
ii.1 Classification task
A classification task is a mapping from a set of classical or quantum input states to a label chosen from a finite set. If the size of this finite set is K, we have a K-multiclass classification problem goodfellow2016deep . K = 2 is the special case of binary classification, e.g., given images of only ants or cicadas, deciding which picture belongs to which insect.
Definition 1 (K-multiclass classification).
The algorithm is called a K-multiclass classification algorithm if it maps the set of input states onto the label set {1, …, K}. Let the state and . If , then is the predicted class label assigned to .
In machine learning, the algorithm does not need to be pre-defined and can instead be learned through a training dataset . This dataset consists of pairs of input states and their corresponding class labels, each label represented by a K-dimensional one-hot vector: the entry corresponding to the class label of the input is one, and every other entry is zero. To learn , we first define a parameterised function where are free parameters that can be tuned. The learning happens as is optimized to minimize the empirical risk
refers to a predefined loss function. The goal in learning is to minimise this empirical risk Eq. (1) for one’s given training dataset , where the optimized parameters are denoted . Given test state , we can define as the score vector among labels, where denotes the -norm and is the normalized vector of . Then the entry of the vector function
can be interpreted as the probability that is assigned the label . Then the learned classification algorithm outputs the class label for an input state using the condition
where the final class label is decided by identifying the class label with the highest corresponding probability.
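To make this decision rule concrete, the following minimal sketch (plain Python with NumPy; the function name and array representation are our own illustrative choices, not part of the paper's protocol) normalises a raw score vector and returns the argmax label:

```python
import numpy as np

def predict_label(scores):
    """Normalise a raw score vector into class probabilities and return
    the index of the highest-probability class, mirroring the decision
    rule described above."""
    scores = np.asarray(scores, dtype=float)
    probs = scores / np.linalg.norm(scores, ord=1)  # l1-normalised score vector
    return int(np.argmax(probs)), probs
```

For a non-negative score vector the normalised entries sum to one and can be read as class probabilities.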
For the quantum -multiclass classification task with quantum test state we can employ a quantum circuit, see Fig. 1(a), to compute instead of using a classical circuit. We can identify to be the probability of the final measurement outcome of the quantum circuit being ,
where is a POVM, is a quantum operation that contains information about the trained parameters benedetti2019parameterized and is an ancilla.
However, precise values of the probabilities can only be obtained in the infinite sampling regime. This means that if only measurements are allowed at the output of the circuit, we can only obtain an estimated value of the output probabilities.
ii.2 Adversarial examples
Adversarial examples are attacks on input examples to classification problems that lead to misclassification. In particular, these include worst-case attacks where the adversary can craft small imperceptible perturbations about a given correctly classified input that result in misclassification goodfellow2014explaining . This means that while the true labels and are identical, if is an adversarial example, will class them differently. We can define adversarial examples more formally as follows sharif2018suitability .
Definition 2 (Adversarial example).
Suppose we are given a well-trained classification function as defined in Eq. (2), an input example , a distance metric and a small enough threshold value . Then is said to be an adversarial example if the following is true
If are classical states, suitable distance metrics are the -norms, so . If are quantum states, we will use the trace distance .
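Both distance notions are straightforward to compute. The helpers below are illustrative sketches, not from the paper; the trace distance uses the fact that the trace norm of a Hermitian matrix equals the sum of its singular values:

```python
import numpy as np

def lp_distance(x, y, p=2):
    """l_p distance between classical feature vectors."""
    return np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float), ord=p)

def trace_distance(rho, sigma):
    """Trace distance T(rho, sigma) = (1/2) ||rho - sigma||_1, computed
    from the singular values of the difference of density matrices."""
    diff = np.asarray(rho, dtype=complex) - np.asarray(sigma, dtype=complex)
    return 0.5 * float(np.sum(np.linalg.svd(diff, compute_uv=False)))
```

For example, the orthogonal pure states |0⟩⟨0| and |1⟩⟨1| are at trace distance one, the maximum possible value.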
In the rest of this paper, we will use Greek letters to refer to quantum states and bold Roman letters to refer to classical states unless otherwise specified.
ii.3 Differential privacy
Differential privacy is an important concept in computer science that quantifies the sensitivity of the outputs of algorithms to changes in their input data. The less sensitive it is, the better the algorithm can preserve the privacy of the input data. Here we can formulate the definition of classical differential privacy as follows dwork2011differential .
Definition 3 (Classical differential privacy).
Suppose is a classical algorithm that takes as input entries of some classical database and outputs values belonging to the set . Then is said to satisfy classical -differential privacy if, for all , which are separated by a small distance, e.g., Hamming distance and all measurable sets ,
where denotes the probability of and . We call the privacy budget for the algorithm.
Informally, this definition says that for two input data points separated by a small distance, a small privacy budget means that the output of the algorithm differs very little, hence the input information is partially kept private. The choice of this distance varies depending on the task, e.g., Hamming distance or distance dwork2011differential . A natural distance for quantum data is the trace distance, which we can employ in a definition of quantum differential privacy zhou2017differential that we will use throughout this paper. An alternative definition of quantum differential privacy aaronson2019gentle does not require quantum data and to be close in trace distance, but rather that is obtainable by applying a quantum operation on only a single register of . See also arunachalam2020quantum for a related definition applied to PAC learning. However, for our purposes of working directly with quantum states and , the use of trace distance is the most appropriate.
Suppose is a quantum algorithm that takes an input state , applies a quantum operation before applying the POVM , where the set of final measurement results is . These outcomes are then observed with probability . By analogy with Definition 3, we can write a definition of quantum differential privacy following Zhou and Ying zhou2017differential .
Definition 4 (Quantum differential privacy).
The quantum algorithm satisfies -quantum differential privacy if for all input quantum states and with and for all measurable sets (equivalently, for every )
For the rest of the paper, we focus on the case , which is referred to as -quantum differential privacy. To illustrate with a simple example, suppose we have a binary classification problem where we choose the POVM . The probabilities that a quantum binary classifier assigns the two class labels are and respectively. Then if satisfies -quantum differential privacy, Definition 4 requires that we satisfy
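For intuition, the defining inequality can be checked numerically for a pair of output probabilities on neighbouring inputs. The helper below is an illustrative sketch (not part of the paper's protocol) that tests the e^ε ratio bound in both directions for a single measurable outcome set:

```python
import math

def satisfies_dp_bound(p_d, p_dprime, epsilon):
    """Check the defining epsilon-differential-privacy inequality for one
    outcome set S in both directions:
        P[A(D) in S] <= e^eps * P[A(D') in S]  and vice versa."""
    bound = math.exp(epsilon)
    return p_d <= bound * p_dprime and p_dprime <= bound * p_d
```

A small budget forces the two output probabilities to be nearly equal, which is exactly the insensitivity that later yields adversarial robustness.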
III Improving robustness of quantum classifiers against adversaries by adding noise
In this section, we show how the presence of depolarisation noise in quantum circuits for classification improves robustness against adversarial examples. We begin with our definition of adversarial robustness.
Definition 5 (Adversarial robustness).
Let the test state have the class label under a classification algorithm . Then is said to possess adversarial robustness of size if for all that is perturbed by an unknown source where , the class label of does not change, i.e., .
We must emphasise here the difference between robustness bounds against a known noise source versus an unknown adversary. Protection against an unknown adversary is a robustness guarantee against a worst-case scenario, whereas commonly-appearing known noise sources are usually far from the worst-case scenario.
Our goal is to demonstrate how a naturally-occurring known noise source can be used to protect a quantum classifier against worst-case adversarial perturbations. This can be done in three main steps. We first show the robustness of quantum classifiers to this known noise source, then demonstrate how this gives rise to quantum differential privacy for the classifier. Finally we prove how quantum differential privacy can be used to derive a theoretical bound against general adversaries.
One such naturally-occurring quantum noise source is the depolarisation noise channel , which acts on a -dimensional state like
where is the identity matrix and . Before the final measurement, we can represent our quantum classifier as a unitary gate acting on an input state , as represented in Fig. 1(a). We can then add after each unitary where and . Here is the total number of depolarisation channels with noise parameters . This noisy circuit is depicted in Fig. 1(b). The output of this noisy -multiclass classification circuit given test state can be written as
This leads to the interesting observation that the noisy test score is independent of where depolarisation channels are placed in the circuit. Furthermore, the effect of all depolarisation channels with parameters can be replaced by a single depolarisation channel with parameter . In the trivial case for all , . For the rest of this paper, we will for simplicity replace the effect of all noise parameters with unless stated otherwise.
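This composition property is easy to verify numerically. The sketch below (our own illustrative code, assuming the common convention E_p(ρ) = (1 − p)ρ + p I/D for the noise parameter p) implements the depolarisation channel; two channels with parameters p₁ and p₂ then act like a single channel with 1 − p = (1 − p₁)(1 − p₂):

```python
import numpy as np

def depolarise(rho, p):
    """Depolarisation channel sketch, assuming the convention
    E_p(rho) = (1 - p) * rho + p * I / D on a D-dimensional state."""
    D = rho.shape[0]
    return (1 - p) * rho + p * np.eye(D) / D
```

Composing `depolarise(·, p1)` and `depolarise(·, p2)` reproduces a single application with the combined parameter, matching the observation above that only the product of the (1 − p_i) factors matters.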
Before achieving our goal, we first need Eq. (10) to prove the following lemma, showing that the -multiclass classification algorithm performed by the noisy circuit is robust against depolarisation noise for any . This is a generalisation of a recent result from LaRose and Coyle (Theorem 2 in larose2020robust ; see also larose2020robust for a list of common types of noise that binary quantum classifiers are naturally robust against, as well as interesting encoding strategies to induce robustness when the classifiers are not naturally robust) to the case of -multiclass classification.
Let denote the output for the noiseless circuit in Fig. 1(a), i.e., for all . Then if the class label is assigned to by the noiseless circuit, i.e., , then the same label is also assigned by the noisy circuit, which has for at least one . This means for any and . Furthermore, if then .
The above result demonstrates robustness of quantum classifiers against depolarisation noise if one has access to the exact probabilities . However, this is only possible in the limit of infinite sampling. If one is only able to sample the circuit times, one instead obtains only the estimated values . Then, to guarantee robustness against depolarisation noise with high probability, we find that the required sampling complexity increases only with increasing depolarisation noise , but does not depend on the dimensionality of .
Let the predicted classification label of using the noiseless -multiclass classification circuit be . This means we can define where . In the corresponding circuit with depolarisation noise parameters , one samples the circuit times for each to obtain the estimates . Then is also labelled with probability at least if the sample complexity , where .
Proof of Proposition 1.
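The flavour of this sampling requirement can be sketched with a standard Hoeffding-style estimate. The constant below is illustrative only and differs from the exact bound in Proposition 1, which also involves the depolarisation parameters:

```python
import math

def shots_for_confidence(gap, delta=0.05):
    """Illustrative Hoeffding-style sample count: to resolve a probability
    to within gap/2 with failure probability at most delta, it suffices to
    take N >= 2 ln(2/delta) / gap**2 samples."""
    return math.ceil(2.0 * math.log(2.0 / delta) / gap ** 2)
```

As expected, halving the score gap one must resolve quadruples the required number of measurement shots.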
Now we show how adding depolarisation noise gives rise to quantum differential privacy for our algorithm. This is an application of a result from Zhou and Ying zhou2017differential for our quantum classifier.
Let the algorithm correspond to the -multiclass classification circuit defined in Fig. 1(b) with depolarisation noise channels , where and , and measurement operators . Then for two quantum test states and obeying with , satisfies -quantum differential privacy where
and is the dimension of the operators .
Proof of Lemma 2.
Lemma 2 states that the privacy budget in the presence of depolarisation noise decreases with increasing , hence higher depolarisation noise parameters give greater differential privacy. Furthermore, this privacy is independent of where one inserts depolarisation noise, because the product is invariant under permutation of its factors. It is also independent of any details of the classifier except , which serves as an upper bound on the number of class labels in our classifier. We will return to these points later.
Using the results of Lemmas 1 and 2, the following theorem demonstrates that by increasing the strength of depolarisation noise in our circuit, this also increases our -multiclass classifier’s robustness against adversarial examples.
Theorem 1 (Infinite sampling case).
We begin with our -multiclass classification circuit with depolarisation noise parameters where and . Let infinite sampling of the output be allowed, so that we can find for any given test state . Suppose holds, where , which implies that is assigned the class label , i.e., . Then is also labelled as , i.e., for any where .
This means that if a test state undergoes an arbitrary adversarial perturbation , the classification of will remain identical to that of for a larger range of if increases. Furthermore, if remains constant, then the extra condition required of the input state also becomes easier to satisfy as increases. A similar result holds for the finite sampling case.
Theorem 2 (Finite sampling case).
Suppose one samples the output of the circuit times for the estimation of each . Let where , which implies has the class label . Then the class label of is also , i.e., with probability at least for any where . This also implies with probability at least .
Proof of Theorem 2.
We employ Hoeffding’s inequality mohri2018foundations to show that holds with probability at least . This relates the finitely sampled estimates to from infinite sampling. We can then apply the results of Theorem 1 for infinite sampling to prove our results. Please see Appendix E for details of the proof. ∎
As special examples, we now explore the robustness properties of two discriminative learning models for binary classification: the quantum neural network and the quantum kernel classifier.
iii.1 Quantum neural network
The quantum neural network (QNN), proposed by farhi2018classification , is a building block for various quantum learning models schuld2018circuit ; huggins2018towards ; havlivcek2019supervised ; farhi2018classification ; benedetti2018generative ; dallaire2018quantum . The basic scheme of QNN is illustrated in Figure 2 (a), which is a special case of the circuit in Fig. 1(a). The -dimensional quantum input state is , where refers to either the training or test states and is an ancilla. The trainable unitary is then applied, which consists of trainable single-qubit gates and fixed two-qubit gates. Our protocol for QNN, as shown in Figure 2 (b), employs the depolarisation channels that can appear within the QNN circuit before final measurements with POVM .
The typical application of QNN is for binary classification, broadly used in farhi2018classification ; huggins2018towards ; benedetti2018generative ; dallaire2018quantum , where one makes single-qubit measurements using and . We can apply Theorem 1 directly to our scenario and we have the following corollary.
Let the given input be assigned the classification label ‘0’ and define . In binary classification, the QNN, with depolarisation channels and , is robust against any perturbations with and , if
Since for binary classification, we note that the privacy budget is now independent of the dimension of the problem. Therefore, even as the feature dimension of the input grows, it does not affect the robustness of the classifier against adversarial examples so long as some depolarisation noise with has been added to the circuit. This independence is an interesting contrast to the result in liu2019vulnerability , which states that robustness should decrease as the dimensionality of grows. This apparent contradiction is resolved by observing that, unlike liu2019vulnerability , which places no constraints on the distribution from which the input states are selected, here we have Eq. (12), which imposes such a constraint.
In the finite sampling limit, we can employ Theorem 2 to apply to our binary classifier and we have the following corollary.
Let the input be given the classification label ‘0’ and define , where the probabilities are estimated using samples of the quantum circuit. Then if
the binary classification performed by the QNN circuit, with depolarisation channels and , is robust to adversarial attacks with probability at least , where and .
iii.2 Quantum kernel classifier
The main idea of kernel methods is to map complex input data to a higher-dimensional feature space in which the data can be efficiently separated goodfellow2016deep . The generic form of a quantum kernel classifier mitarai2018quantum ; havlivcek2019supervised ; schuld2019quantum is shown in Figure 3. The output of the kernel classifier can be written as , where is identified with a classical kernel with test state and a weight vector captured by the trained values. Here contains the trainable parameters, with the aim of minimizing the predefined loss function , where the optimum occurs at , and refers to the kernel state that maps the input data into the higher-dimensional feature space. Thus the probability of obtaining the measurement values all ‘’ after applying in the noiseless circuit is given by .
For a binary classification problem, the class label of is if . In this case, , thus the privacy budget becomes , which grows with increasing dimensionality of the input state. Corollaries 1 and 2 then hold for the quantum kernel classifier with this modified .
IV Numerical simulations
We now conduct numerical simulations to illustrate our protocol for a binary QNN classifier. In particular, by leveraging the depolarisation channel, we show how a trained QNN binary classifier has the ability to achieve certified robustness under bounded-norm adversarial attacks at testing time. In this section, we first introduce our training dataset and the preprocessing step. We then explain the attack method that is used to evaluate the performance of our protocol. Lastly we analyse the performance of our proposed protocol.
iv.1 Preprocessing and training procedure
We choose to conduct our numerical simulations on the Iris dataset fisher1936use , which has been broadly used in classical machine learning. The Iris dataset consists of three different types of Iris flowers (Setosa, Versicolour, and Virginica), where examples (belonging to Setosa) with label are linearly separable with respect to examples (belonging to Versicolour) with label .
Next, we remove all examples belonging to Virginica and denote the dataset that only contains label and as , i.e., the cardinality of is . Then we set the fourth entry of all examples as . Afterwards, we apply normalization to each example, i.e., for any . Then we need to efficiently encode this classical data into quantum states schuld2017implementing . We can then carry out the amplitude encoding method mottonen2004transformation to encode the normalized into a quantum state.
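The last preprocessing step, amplitude encoding, can be sketched in a few lines (the function name is our own illustrative choice; mottonen2004transformation describes a circuit realising the encoding on hardware):

```python
import numpy as np

def amplitude_encode(x):
    """Amplitude-encoding sketch: l2-normalise a feature vector so its
    entries can be read as the amplitudes of a pure state. A length-4
    vector (as for the padded Iris examples) maps onto 2 qubits."""
    x = np.asarray(x, dtype=float)
    state = x / np.linalg.norm(x)
    n_qubits = int(np.log2(len(state)))
    return state, n_qubits
```

A four-feature Iris example thus becomes a normalised two-qubit amplitude vector.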
Given the preprocessed dataset , we randomly split it into a training dataset and a test dataset with , , and . In the training procedure, we randomly sample an example from and forward to a binary QNN classifier. For details on the circuit see Appendix H. We employ the squared loss function to train this QNN, i.e.,
where is the score vector of QNN as formulated in Subsection II.1 and denotes the ideal output of the QNN.
We use the zeroth-order gradient method mitarai2018quantum to optimize the trainable parameters of the QNN to minimize the loss function. We set the number of training epochs to . The learning rate is set to and the total number of trainable parameters is . Figure 4 illustrates the training loss, training accuracy and test accuracy. Both the training and test accuracy converge to after epochs (see Appendix H for more implementation details). Given the test dataset , we randomly select three test examples and explore how the maximum robustness changes with varied according to Eq. (11), which we can rewrite as
Figure 5 illustrates how scales with different for three different test examples with . Note that the constants are different for the three test examples and the test examples satisfy the condition in Eq. (12). In the same figure, we also plot how the test score varies with , coming from Eq. (10) for the case of binary classification
iv.2 Evaluation metrics and adversarial attack methods
To evaluate the performance of our protocol, we adopt an adversarial attack method that is widely employed in classical machine learning: the iterative fast gradient sign method (I-FGSM) with bounded -norm liu2016delving ; dong2018boosting ; madry2018towards , which aims to perturb the test dataset so that the trained classifier makes incorrect predictions. If we denote the original input by and the adversarial example at the updating step of the I-FGSM by , then
where is the learning rate with and is the loss function formulated in Eq. (14).
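A sketch of this attack loop in plain NumPy follows. The function name, the projection onto the l2 ball, and `grad_fn` (a hypothetical routine returning the loss gradient at a point) are our own illustrative choices, not the exact implementation used in the simulations:

```python
import numpy as np

def i_fgsm_l2(x, grad_fn, lr=0.05, steps=10, bound=0.5):
    """Iterative FGSM sketch with an l2-norm bound: take sign-gradient
    ascent steps on the loss, projecting the cumulative perturbation
    back onto the l2 ball of radius `bound` after each step."""
    x = np.asarray(x, dtype=float)
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + lr * np.sign(grad_fn(x_adv))
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > bound:  # project back onto the l2 ball around x
            x_adv = x + delta * bound / norm
    return x_adv
```

The projection step guarantees that the final adversarial example satisfies the stated norm bound, so the attack matches the threat model of Definition 2.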
iv.3 Adversarial attack at test time
Here we employ our trained classifier and the adversarial attack method formulated above to quantify the performance of our protocol. Recall that Corollaries 1 and 2 are the special cases of Theorems 1 and 2 when applied to binary QNN classifiers and work in the regime of using infinite and finite sampling of the output probabilities respectively. Here we explore how our protocol protects the binary QNN classifier against adversarial attacks under these two settings.
| Infinite precision case () | (label ) | (label ) | (label ) |
| Finite precision case () | 44.32% (label 1) | 55.80% (label 0) | 53.88% (label 0) |
The infinite sampling case. At testing time, we randomly sample an example from to investigate its robustness with respect to different levels of depolarisation noise . Without loss of generality, the original test example has label . We set three different values of and : ; and . From Eq. (11), their corresponding privacy budgets are , and . Given our input , the outputs of our trained classifier with added depolarisation noise are , and , where the corresponding constants defined in Corollary 1 are , , and , respectively. Following the condition for robustness in Eq. (12), we have confidence that the classifier is robust to adversarial attacks if . A simple comparison indicates that robustness is guaranteed when , since while and .
To validate the correctness of our theoretical results, we employ I-FGSM to attack our trained classifier, where we identify the -norm bound with its corresponding value. The left panel of Figure 6 shows the simulation results and Table 1 shows the final test score of the attacked input. The classifier with the first setting is robust to the bounded-norm adversarial attacks: the predicted label of is still ‘0’. For the third setting, when , the adversary can easily perturb the input and lead the classifier to give the wrong prediction; in particular, the adversary can easily perturb the input across the classification boundary, highlighted by the purple line. For the second setting, with , the classifier correctly predicts the label, but our protocol cannot provide any promises, since Theorem 1 and Corollary 1 provide only sufficient conditions for robustness. The three simulation results are thus in accordance with our theoretical results.
(Left) The thick purple line is the trained hyperplane of our QNN classifier. The dotted arrows indicate how an adversary iteratively attacks the input under three different settings of ; and , where the aim of the adversary is to induce the classifier to output the wrong prediction. The inner plot enlarges the part of the central figure near the test example. (Right) The right panel illustrates the bounded-norm attack in the finite precision case. The circular region indicates the robustness value and . The dotted arrows indicate the path of an adversary that iteratively attacks the input under and , where the adversary aims to induce the classifier to output the wrong prediction. The inner plot enlarges the part of the central figure near the test example.
The finite sampling case. The only difference in the finite sampling case is the acquisition of the output of our trained classifier. The same test example is employed. The hyperparameters are set as and, from Eq. (11), the privacy budget is fixed. We set three different sampling numbers to explore how affects the robustness guarantees, where , , and . The corresponding three approximated test scores are , and . The corresponding parameters are , , and with respect to , , and . Following the results of Theorem 2 and Corollary 2, with probability at least , the trained classifier with added depolarisation noise is robust to adversarial attacks if . By setting , a simple inspection shows that guarantees robustness. Analogously to the infinite sampling case, we employ a bounded-norm adversary to confirm the correctness of our theoretical results; the simulation results are shown in the right panel of Figure 6.
For more details on the implementation of the classifier and performance analysis of our protocol, please see Appendix H.
V Advantages of protocol
Adversarial settings naturally occur when data needs to be delegated to different parties, for instance in a client-server setting and in multiparty computing. When this data is in the form of quantum states before processing by a quantum classifier, our protocol currently provides the only existing method to protect a general quantum classifier against arbitrary adversarial examples, and it also includes a theoretically provable bound. Furthermore, it can take advantage of certain existing quantum noise in a quantum classifier, like depolarisation noise, to provide protection against adversarial examples, thus obviating the need for error correction or error mitigation if no other noise sources are present. Moreover, even if the test score is diminished in the presence of depolarisation noise, its original value in the absence of any quantum noise can be retrieved by simply increasing the number of times one samples from the classifier. This sample complexity increases with the amount of existing depolarisation noise and is independent of the dimension of the state itself.
Utilizing quantum noise like depolarisation noise also has certain advantages over classical methods for classical data in improving robustness against adversarial examples. We discuss this below.
v.1 Comparison to the best known classical protocol
While in the quantum case the theoretical bound on robustness is independent of the details of the classification model and is simple to compute, this is not true in the best known classical protocol. Before elaborating on this quantum advantage, we briefly review the classical results.
Following the results of lecuyer2019certified , classical $\epsilon$-differential privacy of a classification algorithm is obtained by adding noise sampled from the Laplacian distribution to the trained classifier. This is commonly known as the Laplace mechanism for numerical functions (for non-numerical functions, the exponential mechanism is employed). The only other common method to attain differential privacy is the Gaussian mechanism, which adds noise sampled from the Gaussian distribution. However, this leads to classical $(\epsilon, \delta)$-differential privacy with $\delta > 0$, so it cannot be directly compared to our quantum scenario, where $\delta = 0$. The Laplacian distribution used in the Laplace mechanism can be written as
\[
\mathrm{Lap}(z; b) = \frac{1}{2b} \exp\!\left(-\frac{|z|}{b}\right),
\]
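As a concrete illustration, a minimal sketch of the Laplace mechanism is shown below. The query value, sensitivity, and privacy budget are hypothetical; the only substantive choice is drawing the noise from a Laplacian with scale set by the ratio of sensitivity to privacy budget, which is the standard calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

def laplace_mechanism(value, sensitivity, epsilon):
    """Release `value` with classical epsilon-differential privacy by adding
    noise drawn from a Laplacian with scale b = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical query: sensitivity 1, privacy budget epsilon = 0.5
noisy = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
```

A smaller privacy budget $\epsilon$ means a larger noise scale $b$, i.e. stronger privacy at the cost of accuracy, mirroring the robustness-accuracy trade-off discussed in the text.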
Here $b$ refers to the scale of the Laplacian distribution, and $\tau$ is the upper bound on the norm between the original input and the attacked input such that classical $\epsilon$-differential privacy is preserved. The sensitivity $\Delta$ of the function $g$ applied at the layer of the neural network classifier just before the Laplacian noise is injected is defined as
\[
\Delta = \max_{x \neq x'} \frac{\| g(x) - g(x') \|_1}{\| x - x' \|_1}.
\]
The classical protocol runs in the following way. In the testing phase, the adversarial example $\tilde{x} = x + \delta$, where $\|\delta\|_1 \le \tau$ and $x$ is the original test example, is inserted into the trained classifier $f$. The predicted label for $\tilde{x}$ is obtained by invoking $f$ a total of $N$ times. For every run of $f$, the noise is independently sampled from the Laplacian distribution and applied to the input of some layer of the neural network realising the classifier. Let $N_k$ denote the number of times that the predicted label is $k$, so the probability of the predicted label being $k$ is given by $P_k = N_k / N$. Then, similarly to Theorem 2, we can write the following condition for robustness of the $K$-class classifier under the Laplace mechanism.
Lemma 3 (modified from lecuyer2019certified ).
Let $x$ be the input to the $K$-multiclass classifier, which is endowed with classical $\epsilon$-differential privacy under the Laplace mechanism with scale $b$, as formulated in Eq. (18). Let $k$ be the label of $x$. Then, with probability at least $1 - \eta$, the classifier is robust to any adversarial example $\tilde{x}$ with $\|\tilde{x} - x\|_1 \le \tau$ if
\[
P_k > e^{2\epsilon} \max_{k' \neq k} P_{k'}.
\]
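The testing-phase procedure and the certification check can be sketched as follows. This is a toy illustration, not the paper's implementation: the three-class score function, noise scale, and privacy budget are hypothetical, and the certification test is assumed to take the standard differential-privacy form in which the top label's frequency must exceed the runner-up's by a factor of $e^{2\epsilon}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_predict(g, x, scale, n_runs, n_classes):
    """Run the classifier n_runs times with fresh Laplacian noise on the
    layer output and return the empirical label frequencies P_k."""
    counts = np.zeros(n_classes)
    for _ in range(n_runs):
        scores = g(x) + rng.laplace(scale=scale, size=n_classes)
        counts[np.argmax(scores)] += 1
    return counts / n_runs

def is_robust(probs, epsilon):
    """Assumed robustness check: the top label k is certified if
    P_k > exp(2*epsilon) * max_{k' != k} P_k'."""
    k = np.argmax(probs)
    runner_up = np.max(np.delete(probs, k))
    return probs[k] > np.exp(2 * epsilon) * runner_up

# Hypothetical 3-class score function with a confident margin
g = lambda x: np.array([4.0, 0.0, -1.0])
probs = noisy_predict(g, None, scale=0.5, n_runs=5000, n_classes=3)
certified = is_robust(probs, epsilon=0.3)
```

A classifier with a wide score margin, as above, keeps its top label under almost every noise draw and passes the check; a marginal classifier would see its label frequencies spread out and fail certification.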
This means that this best available classical theoretical bound depends on the sensitivity $\Delta$, which in general depends both on the details of the classification model used and on the layer of the neural network in which the Laplacian noise is injected. In the quantum scenario with depolarisation noise, by contrast, the robustness bound is independent of the circuit realising the quantum classifier as well as of the location or locations of noise injection. This means that the adversarial robustness bound is universal for all quantum classifiers.
We can see this from the fact that the final state of the quantum circuit after applying depolarisation noise with probabilities $p_1, \dots, p_L$ in $L$ layers depends only on the product $\prod_{i=1}^{L} (1 - p_i)$, which is independent of where the noise is injected and invariant under any re-ordering of the layers. This simplicity in the quantum case results from two facts: the 'noisy' part of depolarisation noise consists in injecting the maximally mixed state with a certain probability, and the unitary operations $U$ realising any quantum classifier are unital (i.e., the identity operator remains invariant, $U \mathbb{1} U^{\dagger} = \mathbb{1}$). On the other hand, there is no known classical equivalent of this property that also gives rise to differential privacy.
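This composition property is easy to verify numerically. The sketch below interleaves two depolarising channels (hypothetical probabilities $p_1, p_2$) with two random unitaries acting on a random pure state, and checks that the result equals a single depolarising channel with effective probability $1 - (1-p_1)(1-p_2)$ applied after the combined unitary.

```python
import numpy as np

rng = np.random.default_rng(3)

def depolarize(rho, p):
    """Depolarising channel on a d-dimensional state rho."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

def random_unitary(d):
    """Random unitary via QR decomposition (illustration only)."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

d = 4
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

p1, p2 = 0.1, 0.2
U1, U2 = random_unitary(d), random_unitary(d)

# Interleave noise with unitaries: because U (I/d) U† = I/d (unitality),
# the net effect depends only on the product (1 - p1)(1 - p2).
out = depolarize(U2 @ depolarize(U1 @ rho @ U1.conj().T, p1) @ U2.conj().T, p2)
p_eff = 1 - (1 - p1) * (1 - p2)
expected = depolarize(U2 @ U1 @ rho @ U1.conj().T @ U2.conj().T, p_eff)
print(np.allclose(out, expected))  # True
```

Swapping $p_1$ and $p_2$, or moving either channel to a different layer, leaves `p_eff` and therefore the output state unchanged, which is exactly the layer-independence used in the argument above.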
The dependence of the sensitivity $\Delta$ on the details of the classifier in the most general case also makes $\Delta$ difficult to compute; it is often intractable except in the simplest cases lecuyer2019certified . This means that, unlike in the quantum case, the corresponding classical bound on robustness cannot be derived in closed form from Eq. (20) in the most general case.
However, in special simple cases we can provide quantitative examples of this quantum advantage. As a simple illustration, we can look at the binary classifier for the kernel perceptron, which can be written as
\[
f(x) = \mathrm{sign}\!\left( \sum_{i} \alpha_i y_i \, k(x_i, x) \right),
\]
where $(x_i, y_i)$ are the training examples and $\alpha_i$ are trained parameters of the classifier. We can consider the polynomial kernel
\[
k(x, x') = (x \cdot x')^d,
\]
where $d$ is the kernel degree and $d = 1$ is the special case of the linear kernel. We now have the following theorem.
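For concreteness, a minimal sketch of kernel-perceptron training and prediction with the polynomial kernel is given below. The XOR-style toy data set is hypothetical; it is chosen because it is not linearly separable ($d = 1$ fails) but is separated by the degree-2 polynomial kernel.

```python
import numpy as np

def poly_kernel(x, y, d=2):
    """Polynomial kernel k(x, y) = (x . y)^d; d = 1 is the linear kernel."""
    return np.dot(x, y) ** d

def train_kernel_perceptron(X, y, d=2, epochs=20):
    """Standard kernel-perceptron training: alpha_i counts mistakes on x_i."""
    alpha = np.zeros(len(X))
    K = np.array([[poly_kernel(a, b, d) for b in X] for a in X])
    for _ in range(epochs):
        for i in range(len(X)):
            # f(x_i) = sign( sum_j alpha_j y_j k(x_j, x_i) )
            if y[i] * np.sign(alpha * y @ K[:, i]) <= 0:
                alpha[i] += 1
    return alpha

def predict(X_train, y_train, alpha, x, d=2):
    scores = np.array([poly_kernel(xi, x, d) for xi in X_train])
    return np.sign(alpha * y_train @ scores)

# Toy XOR-like data, separable with a degree-2 polynomial kernel
X = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
y = np.array([1, 1, -1, -1])
alpha = train_kernel_perceptron(X, y)
preds = [predict(X, y, alpha, x) for x in X]
```

Increasing the degree $d$ makes the decision function more nonlinear, which is precisely the regime where, per the theorem below, the classical certified radius shrinks.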
Theorem 3. We have a binary classifier $f(x) = \mathrm{sign}(g(x))$ with the polynomial kernel $k$ of degree $d$. Let $T$ denote the set of all correctly labelled test examples. We now implement the Laplace mechanism in this classifier, where the sensitivity is $\Delta$ and the privacy budget is $\epsilon$. Let us choose a failure probability $\eta$ and define the empirical label probabilities $P_k = N_k / N$ for our noisy classifier. Then, with probability at least $1 - \eta$, the classifier is robust under any adversarial example $\tilde{x}$ with $\|\tilde{x} - x\|_1 \le \tau$, where the bound on $\tau$ decreases as the kernel degree $d$ increases.
Proof of Theorem 3.
We compute an upper bound for the sensitivity $\Delta$ in terms of the parameters of the classification model and apply Lemma 3. Please see Appendix C for details. ∎
From this we see that we can guarantee only a smaller robustness bound for a more nonlinear kernel (i.e., higher degree $d$). We can instead use a quantum classifier to realise the same polynomial kernel and find a robustness bound that is independent of the degree of nonlinearity of the kernel.
We have a kernel perceptron binary classifier that, in the absence of noise, is realised by a quantum circuit of the form in Figure 2. Without loss of generality, we can assume the class label of the test example is $k$. Now we add depolarisation noise channels with probability $p$ to the classifier to create a noisy classifier. Let us choose a failure probability $\eta$ and define the measured label probabilities $P_k$ as before. Then, with probability at least $1 - \eta$, the noisy classifier is robust under any adversarial perturbation whose size is bounded by a quantity depending only on $p$ and the measured probabilities, independent of the kernel degree $d$.