Hardening Quantum Machine Learning Against Adversaries

11/17/2017 ∙ by Nathan Wiebe, et al.

Security for machine learning has begun to become a serious issue for present day applications. An important question remaining is whether emerging quantum technologies will help or hinder the security of machine learning. Here we discuss a number of ways that quantum information can be used to help make quantum classifiers more secure or private. In particular, we demonstrate a form of robust principal component analysis that, under some circumstances, can provide an exponential speedup relative to robust methods used at present. To demonstrate this approach we introduce a linear combinations of unitaries Hamiltonian simulation method that we show functions when given an imprecise Hamiltonian oracle, which may be of independent interest. We also introduce a new quantum approach for bagging and boosting that can use quantum superposition over the classifiers or splits of the training set to aggregate over many more models than would be possible classically. Finally, we provide a private form of k-means clustering that can be used to prevent an all-powerful adversary from learning more than a small fraction of a bit from any user. These examples show the role that quantum technologies can play in the security of ML and vice versa. This illustrates that quantum computing can provide useful advantages to machine learning apart from speedups.


I Introduction

There is a huge uptick in the use of machine learning for mission-critical industrial applications, from self-driving cars pomerleau1989alvinn to detecting malignant tumors cruz2006applications to detecting fraudulent credit card transactions xiao2015feature : machine learning is crucial in decision making. However, classical machine learning algorithms such as principal component analysis (used heavily in anomaly detection scenarios), clustering (used in unsupervised learning), and support vector machines (used in classification scenarios), as commonly implemented, are extremely vulnerable to changes to the input data, the features, and the final model parameters/hyperparameters that have been learned. Essentially, an attacker can exploit any of these vulnerabilities to subvert ML algorithms, and in doing so can achieve a variety of goals: increasing the false negative rate and thus going undetected (for instance, in the case of spam, junk emails are classified as normal) wittel2004attacking ; increasing the false positive rate (for instance, in the case of intrusion detection systems, attacks get drowned in a sea of “noise” which causes the system to shift its baseline activity) lowd2005good ; stern2004linguistics ; alfeld2016data ; stealing the underlying model itself by exploiting membership queries tramer2016stealing ; and even recovering the underlying training data, breaching privacy contracts tramer2016stealing .

In the machine learning literature, this study is referred to as adversarial machine learning and has largely been applied to security-sensitive areas such as intrusion detection biggio2010multiple ; biggio2013evasion and spam filtering wittel2004attacking . To combat the problem of adversarial machine learning, solutions have been studied from different vantage points. From a statistical standpoint, adversaries are treated as noise and the models are hardened using robust statistics to overcome the malicious outliers hampel2011robust . Adversarial training is a promising trend in this space, wherein defenders train the system with adversarial examples from the start so that the model is acclimatized to such threats. From a security standpoint, there has been substantial work surrounding threat modeling of machine learning systems ringberg2007sensitivity ; papernot2016limitations and frameworks for anticipating different kinds of attacks papernot2016cleverhans .

Quantum computing has experienced a similar surge of interest of late, and this has led to a synergistic relationship wherein quantum computers have been found to have profound implications for machine learning biamonte2017quantum ; aimeur2006machine ; lloyd2013quantum ; pudenz2013quantum ; rebentrost2014quantum ; WKS15 ; wiebe2016quantum ; rebentrost2016quantum ; kerenidis2016quantum ; amin2016quantum ; schuld2017quantum ; gilyen2017optimizing ; kieferova2016tomography and machine learning has been shown to be invaluable for characterizing, controlling and error correcting such quantum computers assion1998control ; granade2012robust ; torlai2017neural . However, the question of what quantum computers can do to protect machine learning from adversaries remains relatively underdeveloped. This is perhaps surprising given that applications of quantum computing to security have a long history.

The typical nexus of quantum computing and security is studied in the light of quantum cryptography and its ramifications for key exchange and management. This paper takes a different tack. We explore this intersection from the perspective of quantum subroutines that analyze patterns in data and the security assurances they can provide, specifically asking the question “Are quantum machine learning systems secure?”

Our aim is to address this question by investigating the role that quantum computers may have in protecting machine learning models from attackers. In particular, we introduce a robust form of quantum principal component analysis that provides exponential speedups while making the learning protocol much less sensitive to noise introduced by an adversary into the training set. We then discuss bagging and boosting, which are popular methods for making models harder to extract by adversaries and also serve to make better classifiers by combining the predictions of weak classifiers. Finally, we discuss how to use ideas originally introduced to secure quantum money to boost the privacy of k-means clustering by obfuscating the private data of participants from even an all-powerful adversary. From this we conclude that quantum methods can have an impact on security, and that while we have shown defences against some classes of attacks, more work is needed before we can claim to have a fully developed understanding of the breadth and depth of adversarial quantum machine learning.

II Adversarial Quantum Machine Learning

As machine learning becomes more ubiquitous in applications, so too do attacks on the learning algorithms that those applications are based on. The key assumption usually made in machine learning is that the training data is independent of the model and the training process. For tasks such as classification of images from ImageNet such assumptions are reasonable because the user has complete control over the data. For other applications, such as developing spam filters or intrusion detection, this may not be reasonable because in such cases training data is provided in real time to the classifier, and the agents that provide the information are likely to be aware that their actions are being used to inform a model.

Perhaps one of the most notable examples of this is the Tay chatbot incident. Tay was a chatbot designed to learn from users that it could freely interact with in a public chat room. Since the bot was programmed to learn from human interactions it could be subjected to what is known as a wolf pack attack, wherein a group of users in the chat room collaborated to purposefully change Tay’s speech patterns to become increasingly offensive. Within hours, the bot was pulled from the chat room. This incident underscores the need to build models that can learn while resisting malfeasant interactions on the part of a small fraction of the users of a system.

A major aim of adversarial machine learning is to characterize and address such problems by making classifiers more robust to attacks or by giving better tools for identifying when an attack is taking place. There are several broad classes of attacks that can be considered, but perhaps the two most significant in the taxonomy of attacks against classifiers are exploratory attacks and causative attacks. Exploratory attacks are not designed to explicitly impact the classifier but instead are intended to give an adversary information about the classifier. Such attacks work by the adversary feeding test examples to the classifier and then inferring information about it from the classes (or metadata) returned. The simplest such attack is known as an evasion attack, which aims to find test vectors that, when fed into a classifier, get misclassified in a way that benefits the adversary (such as spam being misclassified as ordinary email). More sophisticated exploratory attacks may even try to identify the model used to assign the class labels or in extreme cases may even try to identify the training set for the classifier. Such attacks can be deployed as a precursor to causative attacks or can simply be used to violate privacy assumptions that the users who supplied data to train the classifier may have had.

Causative attacks are more akin to the Tay example. The goal of such attacks is to change the model by providing it with training examples. Again there is a broad taxonomy of causative attacks, but one of the most ubiquitous is the poisoning attack. A poisoning attack seeks to control a classifier by introducing malicious training data into the training set so that the adversary can force an incorrect classification for a subset of test vectors. One particularly pernicious type of attack is the boiling frog attack, wherein the amount of malicious data introduced in the training set is slowly increased over time. This makes it much harder to identify whether the model has been compromised because the impact of the attack, although substantial, is hard to notice on the timescale of days.
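The boiling frog dynamic can be illustrated with a toy simulation (all numbers below are hypothetical, chosen only to make the drift visible): an anomaly detector re-baselines daily on recent traffic, and the attacker increases malicious volume slowly enough that no single day looks anomalous, yet the learned baseline drifts far from the truth.

```python
# Toy "boiling frog" poisoning attack: the detector flags sudden >10%
# deviations from its learned baseline, then retrains on what it saw.
baseline = 100.0            # learned "normal" traffic level
flagged = False

for day in range(30):
    observed = 100.0 + 0.5 * day            # slowly growing poisoned traffic
    if observed > 1.10 * baseline:          # flag sudden >10% deviations
        flagged = True
    baseline = 0.9 * baseline + 0.1 * observed   # detector retrains daily

print(flagged)          # False: the slow ramp never trips the detector
print(baseline > 105)   # True: yet the baseline has drifted well above 100
```

A sudden jump of the same total magnitude would have tripped the 10% rule immediately; spread over a month, the poisoned baseline simply follows the attacker upward.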

While a laundry list of attacks are known against machine learning systems, the defences that have been developed thus far are somewhat limited. A commonly used tactic is to replace classifiers with robust versions of the same classifier. For example, consider k-means classification. Such classifiers are not necessarily robust because the intra-cluster variance is used to decide the quality of a clustering. This means that an adversary can provide a small number of training vectors with large norm that can radically impact the cluster assignment. On the other hand, if robust statistics, such as the median, are used then the impact that the poisoned data can have on the cluster centroids is minimal. Similarly, bagging can also be used to address these problems by replacing a singular classifier that is trained on the entire training set with an ensemble of classifiers that are trained on subsets of the training set. By aggregating over the class labels returned by this process we can similarly limit the impact of an adversary that controls a small fraction of the data set.
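A small numerical sketch makes the mean-versus-median point concrete (the data set is hypothetical): one large-norm poisoned vector drags a mean-based centroid arbitrarily far, while a componentwise-median centroid barely moves.

```python
import numpy as np

# One-cluster toy example: an adversary injects a single huge-norm point
# into a cluster of benign training vectors. The mean centroid is dragged
# far away; the componentwise median is nearly unchanged.
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(99, 2))
poisoned = np.vstack([benign, [[1e6, 1e6]]])   # one malicious vector

mean_shift = np.linalg.norm(poisoned.mean(axis=0) - benign.mean(axis=0))
median_shift = np.linalg.norm(np.median(poisoned, axis=0)
                              - np.median(benign, axis=0))
print(mean_shift > 1e3)    # True: mean centroid moved enormously
print(median_shift < 1.0)  # True: median centroid is essentially unaffected
```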

Given such examples, two natural questions arise: 1) “How should we model threats in a quantum domain?” and 2) “Can quantum computing be used to make machine learning protocols more secure?”. In order to address the former problem it is important to consider the access model used for the training data for the quantum ML algorithm. Perhaps the simplest model to consider is a QRAM, wherein the training data is accessed using a binary access tree that allows, in logarithmic depth but linear size, the training data to be accessed as bit strings. For example, up to isometries, O|j⟩|0⟩ = |j⟩|v_j⟩, where |v_j⟩ is a qubit string used to encode the j-th training vector. Similarly, such an oracle can be used to implement a second type of oracle which outputs the vector as a quantum state vector (we address the issue of non-unit-norm training examples below): O|j⟩|0⟩ = |j⟩ ∑_i [v_j]_i |i⟩ / ‖v_j‖. Alternatively, one can consider a density matrix query for use in an LMR algorithm for density matrix exponentiation lloyd2014quantum ; kimmel2017hamiltonian , wherein a query to the training data yields ρ, where ρ is a density operator that is equivalent to the distribution over training vectors. For example, if we had a uniform distribution over training vectors in a training set of N_train training vectors then ρ = (1/N_train) ∑_j |v_j⟩⟨v_j|.
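The three access models can be mirrored classically for intuition. The sketch below (names and the toy data set are illustrative only) shows a bit-string query, a normalized state-preparation query, and the uniform density matrix an LMR-style query would yield.

```python
import numpy as np

# Classical sketch of the three data-access models: QRAM bit strings,
# amplitude-encoded unit vectors, and a uniform training-set mixture.
V = np.array([[3.0, 4.0],
              [1.0, 0.0],
              [0.0, 2.0]])          # N_train = 3 vectors in R^2

def bit_string_query(j):
    """QRAM-style query: return the j-th training vector verbatim."""
    return V[j]

def state_query(j):
    """State-preparation query: the j-th vector as a unit amplitude vector."""
    return V[j] / np.linalg.norm(V[j])

def density_query():
    """LMR-style query: uniform mixture rho = (1/N) sum_j |v_j><v_j|."""
    states = [state_query(j) for j in range(len(V))]
    return sum(np.outer(s, s) for s in states) / len(V)

rho = density_query()
print(np.isclose(np.trace(rho), 1.0))   # True: a valid density matrix
```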

With such quantum access models defined we can now consider what it means to perform a quantum poisoning attack. A poisoning attack involves an adversary purposefully altering a portion of the training data in the classical case, so in the quantum case it is natural to define a poisoning attack similarly. In this spirit, a quantum poisoning attack takes some fraction η of the training vectors and replaces them with training vectors of the adversary’s choosing. That is to say, if without loss of generality an adversary replaces the first ⌈ηN_train⌉ training vectors in the data set then the new oracle that the algorithm is provided is of the form

O′|j⟩|0⟩ = |j⟩|x_j⟩ if j ≤ ⌈ηN_train⌉, and O′|j⟩|0⟩ = |j⟩|v_j⟩ otherwise, (1)

where x_j is a bit string of the adversary’s choice. Such attacks are reasonable if a QRAM is stored on an untrusted machine that the adversary has partial access to, or alternatively if the data provided to the QRAM is partially under the adversary’s control. The case where the queries provide training vectors as quantum states, rather than bit strings, is exactly the same. The case of a poisoning attack on the density matrix input is much more subtle; in such a case we define the poisoning attack to take the form ρ → (1 − η)ρ + ησ for an adversarially chosen state σ, although other definitions are possible.
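Under a convex-mixture poisoning model of this kind (an assumption consistent with the discussion above, with η and σ as described), the adversary's reach is naturally bounded: mixing in an η-fraction can move the input by at most η in trace distance, as the toy computation below checks.

```python
import numpy as np

# Density-matrix poisoning sketch: rho -> (1 - eta) rho + eta sigma.
eta = 0.1
rho = np.diag([0.5, 0.5])                  # honest uniform mixture of two states
sigma = np.outer([1.0, 0.0], [1.0, 0.0])   # adversary's chosen pure state
rho_poisoned = (1 - eta) * rho + eta * sigma

# Trace distance (half the sum of absolute eigenvalues of the difference)
# quantifies how far the adversary moved the training input.
diff = rho_poisoned - rho
trace_distance = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(diff)))
print(trace_distance <= eta + 1e-12)       # True: bounded by eta
```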

Such an example is a quantum causative attack since it seeks to cause a change in the quantum classifier. An example of a quantum exploratory attack could be an attack that tries to identify the training data or the model used in the classification process. For example, consider a quantum nearest-neighbor classifier. Such classifiers search through a large database of training examples in order to find the closest training example to a test vector that is input. By repeatedly querying the classifier with adversarially chosen test examples it is possible to find the decision boundaries between the classes, and from this even the raw training data can, to some extent, be extracted. Alternatively one could consider cases where an adversary has access to the network on which a quantum learning protocol is being carried out and seeks to learn compromising information about data that is being provided for the training process, which may be anonymized at a later stage in the algorithm.

Quantum methods can help, to some extent, with both of these issues. Poisoning attacks can be addressed by building robust classifiers, that is, classifiers that are insensitive to changes in individual vectors. We show that quantum technologies can help with this by illustrating how quantum principal component analysis and quantum bootstrap aggregation can be used to make decisions more robust to poisoning attacks, by proposing new variants of these algorithms that are amenable to fast quantum algorithms for medians inserted in the place of the expectation values typically used in such cases. We also illustrate how ideas from quantum communication can be used to thwart exploratory attacks by proposing a private version of k-means clustering that allows the protocol to be performed without (substantially) compromising the private data of any participants and also without requiring that the individual running the experiment be authenticated.

III Robust Quantum PCA

The idea behind principal component analysis is simple. Imagine you have a training set composed of a large number of vectors that live in a high dimensional space. However, often even high dimensional data can have an effective low-dimensional approximation. Finding such representations in general is a fine art, but principal component analysis provides a prescriptive way of doing this. The idea of principal component analysis is to examine the eigenvectors of the covariance matrix for the data set. These eigenvectors are the principal components, which give the directions of greatest and least variance, and their eigenvalues give the magnitude of the variation. By transforming the feature space to this eigenbasis and then projecting out the components with small eigenvalues one can reduce the dimensionality of the feature space.

One common use for this, outside of feature compression, is to detect anomalies in training sets for supervised machine learning algorithms. Imagine that you are the administrator of a network and you wish to determine whether your network has been compromised by an attacker. One way to detect an intrusion is to look at the packets moving through the network and use principal component analysis to determine whether the traffic patterns are anomalous based on previous logs. The detection of an anomalous result can be performed automatically by projecting the traffic patterns onto the principal components of the traffic data. If the new traffic is consistent with the historical data it should have high overlap with the eigenvectors with large eigenvalues, and if it is not then it should have high overlap with the eigenvectors with small eigenvalues.
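This projection-based detection scheme can be sketched classically (the traffic data below is synthetic and purely illustrative): score a new traffic vector by how much of its energy falls outside the dominant principal subspace of the historical logs.

```python
import numpy as np

# PCA-based anomaly detection: historical traffic has its variance
# concentrated along one direction; anything orthogonal to it is suspect.
rng = np.random.default_rng(1)
logs = rng.normal(size=(500, 1)) @ np.array([[5.0, 5.0, 0.1]])
logs += 0.1 * rng.normal(size=(500, 3))        # small measurement noise

centered = logs - logs.mean(axis=0)
cov = centered.T @ centered / len(logs)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
top = eigvecs[:, -1:]                           # dominant principal component

def anomaly_score(x):
    """Norm of the residual after projecting onto the top component."""
    x = x - logs.mean(axis=0)
    return np.linalg.norm(x - top @ (top.T @ x))

normal_traffic = np.array([5.0, 5.0, 0.1])      # aligned with the usual pattern
odd_traffic = np.array([0.0, 0.0, 5.0])         # off-pattern direction
print(anomaly_score(odd_traffic) > anomaly_score(normal_traffic))  # True
```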

While this technique can be used in practice it can have a fatal flaw. The flaw is that an adversary can inject spikes of usage in directions that align with particular principal components of the classifier. This allows them to increase the variance of the traffic along principal components that can be used to detect their subsequent nefarious actions. To see this, let us restrict our attention to poisoning attacks wherein an adversary controls some constant fraction of the training vectors.

Robust statistics can be used to help mitigate such attacks. The way robust statistics can help make PCA secure is by replacing the mean with a statistic like the median. Because the median is insensitive to rare but intense events, an adversary needs to control much more of the traffic flowing through the system in order to fool a classifier built to detect them. For this reason, switching to robust PCA is a widely used tactic for making principal component analysis more secure. To formalize this, let us first define the robust PCA matrix.

Definition 1.

Let {v_j : j = 1, …, N_train} be a set of real vectors and let {e_a} be the basis vectors in the computational basis. Then we define the robust PCA matrix M via M_ab = median_j(⟨e_a, v_j⟩⟨v_j, e_b⟩).

This is very similar to the PCA matrix, which we can express as Σ_ab = mean_j(⟨e_a, v_j⟩⟨v_j, e_b⟩). Because of the similarity that it has with the PCA matrix, switching from a standard PCA algorithm to a robust PCA algorithm in classical classifiers is a straightforward substitution.
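Classically the substitution really is one line, as the following sketch shows (toy data; the entrywise median form follows the definition above): the same routine builds either matrix, differing only in whether component products are aggregated by mean or by median over the training index.

```python
import numpy as np

# Entry (a, b) of the (robust) PCA matrix is the mean (median) over the
# training vectors of the product of their a-th and b-th components.
def pca_matrix(V, stat):
    prods = np.einsum('ja,jb->jab', V, V)   # all componentwise products
    return stat(prods, axis=0)              # aggregate over the index j

rng = np.random.default_rng(2)
V = rng.normal(size=(200, 3))
V[0] = [1e4, 1e4, 1e4]                      # one adversarial large-norm vector

mean_pca = pca_matrix(V, np.mean)
robust_pca = pca_matrix(V, np.median)
print(np.abs(mean_pca).max() > 1e5)    # True: mean matrix dominated by outlier
print(np.abs(robust_pca).max() < 10)   # True: median matrix barely affected
```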

In quantum algorithms, building such robustness into the classifier is anything but easy. The challenge we face in doing so is that the standard approach uses the fact that quantum mechanical state operators can be viewed as a covariance matrix for a data set lloyd2014quantum ; kimmel2017hamiltonian . By using this similarity in conjunction with the quantum phase estimation algorithm, the method provides an expedient way to project onto the eigenvectors of the density operator and in turn the principal components. However, the robust PCA matrix given in Definition 1 does not have such a natural quantum analogue. Moreover, the fact that quantum mechanics is inherently linear makes it more challenging to apply an operation like the median, which is not a linear function of its inputs. This means that if we want to use quantum PCA in an environment where an adversary controls a part of the data set, we will need to rethink our approach to quantum principal component analysis.

The challenges faced by directly applying density matrix exponentiation also suggest that we should consider quantum PCA within a different cost model than the one usually used in quantum principal component analysis. We wish to examine whether data obtained from a given source, which we model as a black box oracle, is typical of a well understood training set or not. We are not interested in the principal components of the vector, but rather in its projection onto the low variance subspace of the training data. We also assume that the training vectors are accessed in an oracular fashion and that nothing a priori is known about them.

Definition 2.

Let O be a self-adjoint unitary operator acting on a finite dimensional Hilbert space such that O|j⟩|0⟩ = |j⟩|v_j⟩, where v_j is the j-th training vector.

Lemma 3.

Let v_j for integer j in [1, N_train] obey ‖v_j‖ ≤ 1. There exists a set of unit vectors {u_j} in a Hilbert space of dimension N(N_train + 1) such that ⟨u_j, u_k⟩ = ⟨v_j, v_k⟩ for any j ≠ k within this set.

Proof.

The proof is constructive. Let us assume that ‖v_j‖ ≤ 1 for all j. First let us define, for any v_j ≠ 0,

v̂_j = v_j / ‖v_j‖. (2)

We then encode

u_j = v̂_j ⊗ (‖v_j‖ e_0 + √(1 − ‖v_j‖²) e_j). (3)

If v_j = 0 then

u_j = e_1 ⊗ e_j. (4)

It is then easy to verify that ⟨u_j, u_k⟩ = ⟨v_j, v_k⟩ for all pairs of u_j and u_k with j ≠ k, even if v_j = 0. The resultant vectors are defined on a tensor product of two vector spaces. The first is of dimension N and the second is of dimension at least N_train + 1. Since the dimension of a tensor product of Hilbert spaces is the product of the subsystem dimensions, the dimension of the overall Hilbert space is as claimed. ∎

For the above reason, we can assume without loss of generality that all training vectors are unit vectors. Also, for simplicity we will assume that the training vectors are real; the complex case follows similarly and only differs in that both the real and imaginary components of the inner products need to be computed.
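One standard construction realizing a guarantee of this kind (a sketch under the assumption that all vectors have norm at most 1; slot names are illustrative) pads each vector with a flag component in a vector-specific slot, so that every embedded vector is a unit vector while all pairwise inner products are preserved.

```python
import numpy as np

# Embed vectors of norm <= 1 as unit vectors u_j so that
# <u_j, u_k> = <v_j, v_k> for all j != k.
def embed(V):
    n, d = V.shape
    U = np.zeros((n, d * (n + 1)))
    for j, v in enumerate(V):
        norm = np.linalg.norm(v)
        direction = v / norm if norm > 0 else np.eye(d)[0]
        tail = np.zeros(n + 1)
        tail[0] = norm                          # shared "data" slot
        tail[j + 1] = np.sqrt(1.0 - norm**2)    # private "padding" slot
        U[j] = np.kron(direction, tail)         # tensor-product encoding
    return U

V = np.array([[0.3, 0.4], [0.0, 0.5], [0.0, 0.0]])
U = embed(V)
print(np.allclose(np.linalg.norm(U, axis=1), 1.0))   # True: all unit vectors
# Off-diagonal Gram-matrix entries agree with the original vectors:
print(np.allclose(U @ U.T - np.eye(3),
                  V @ V.T - np.diag(np.diag(V @ V.T))))   # True
```

Because the private padding slots of distinct vectors are orthogonal, the cross terms vanish and only the shared data slot contributes to ⟨u_j, u_k⟩.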

Threat Model 1.

Assume that the user is provided an oracle, O, that allows the user to access real-valued vectors of dimension N as quantum states yielded by an oracle of the form in Lemma 3. This oracle could represent an efficient quantum subroutine or it could represent a QRAM. The adversary is assumed to control, with probability η, the vector yielded by the oracle for a given coherent query, subject to the constraint that the poisoned vectors are also unit vectors. The aim of the adversary is to affect, through these poisoned samples, the principal components yielded from the data set accessed through O.

Within this threat model, the maximum impact that the adversary could have on any of the expectation values that form the principal component matrix scales as η r_max², where r_max is the maximum vector length allowed within the algorithm. Thus any given component of the principal component matrix can be controlled by the adversary provided that η r_max² is comparable to μ, where μ is an upper bound on the componentwise means of the data set. In other words, if η is sufficiently large and the maximum vector length allowed within the algorithm is also large then the PCA matrix can be compromised. This vulnerability comes from the fact that the mean can be dominated by inserting a small number of vectors with large norm.

Switching to the median can dramatically help with this problem, as shown in the following lemma, whose proof is trivial but which we present for completeness.

Lemma 4.

Let P be a probability distribution on ℝ with invertible cumulative distribution function F. Further, let F⁻¹ be its inverse cumulative distribution function and let F⁻¹ be Lipschitz continuous with Lipschitz constant L on the interval [1/2 − ε, 1/2 + ε]. Assume that an adversary replaces P with a distribution Q that is within total variational distance ε from P. Under these assumptions |median(P) − median(Q)| ≤ Lε.

Proof.

Let x_P = F⁻¹(1/2) be the median of P and let x_Q = G⁻¹(1/2) be the median of Q, where G is the cumulative distribution function of Q. We then have from the triangle inequality that

|F(x_Q) − 1/2| ≤ |F(x_Q) − G(x_Q)| + |G(x_Q) − 1/2| ≤ ε. (5)

Thus we have that F(x_Q) ∈ [1/2 − ε, 1/2 + ε]. The inverse cumulative distribution function is a monotonically increasing function on [0, 1], which implies, along with our assumption of Lipschitz continuity with constant L,

x_Q = F⁻¹(F(x_Q)) ≤ F⁻¹(1/2 + ε) ≤ x_P + Lε. (6)

Using the exact same argument but applying the reverse triangle inequality in the place of the triangle inequality gives x_Q ≥ x_P − Lε, which completes the proof. ∎

Thus under reasonable continuity assumptions on the unperturbed probability distribution, Lemma 4 shows that the maximum impact that an adversary can have on the median of a distribution is negligible. In contrast, the mean does not enjoy such stability, and as such using robust statistics like the median can help limit the impact of adversarially placed training vectors within a QRAM, or alternatively make the classifiers less sensitive to outliers that might ordinarily be present within the training data.
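The contrast between the two statistics is easy to verify numerically (synthetic data; the contamination level is illustrative): replacing an ε-fraction of samples with arbitrarily large adversarial values moves the empirical median by O(ε), while the mean shifts without bound.

```python
import numpy as np

# Median stability vs. mean fragility under eps-contamination.
rng = np.random.default_rng(3)
eps = 0.01
n = 100_000
clean = rng.normal(size=n)
poisoned = clean.copy()
poisoned[: int(eps * n)] = 1e9          # adversarial eps-fraction of samples

median_shift = abs(np.median(poisoned) - np.median(clean))
mean_shift = abs(np.mean(poisoned) - np.mean(clean))
print(median_shift < 0.1)   # True: shift is O(eps), as Lemma 4 predicts
print(mean_shift > 1e6)     # True: mean grows with the injected values
```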

Theorem 5.

Let the minimum eigenvalue gap of the d-sparse Hermitian matrix M within the support of the input state be γ, let each training vector be a unit vector provided by the oracle defined in Lemma 3, and assume that the underlying data distribution for the components of the training data has a continuous inverse cumulative distribution function with constant Lipschitz constant L. We have that

  1. The number of queries to the oracle given in Lemma 3 needed to sample from a distribution over ε-approximate eigenvalues of the robust PCA matrix M is polynomial in the sparsity, the inverse eigenvalue gap and the inverse precision, and polylogarithmic in the dimension.

  2. Assume an adversary replaces the data distribution used for the PCA task by one within variational distance δ from the original, with the above assumptions holding for both distributions. If M′ is the poisoned robust PCA matrix then max_{a,b} |M_ab − M′_ab| ≤ Lδ.

The proof is somewhat involved. It makes use of linear-combinations-of-unitaries simulation techniques and introduces a generalization of these methods that shows that they continue to function when probabilistic oracles are used for the matrix elements of the matrix (see Lemma 16), without entanglement to the control register creating problems for the algorithm. Furthermore, we need to show that the errors in simulation do not have a major impact on the output probability distributions of the samples drawn from the eigenvalues/vectors of the matrix. We do this using an argument based on perturbation theory under the assumption that all relevant eigenvalues have multiplicity 1. For these reasons, we encourage the interested reader to look at the appendix for all technical details regarding the proof.

From Theorem 5 we see that the query complexity required to sample from the eigenvectors of the robust PCA matrix is exponentially lower in some cases than what would be expected from classical methods. While this opens up the possibility of profound quantum advantages for robust principal component analysis, a number of caveats exist that limit the applicability of this method:

  1. The cost of preparing the relevant input data will often be exponential unless the data takes a special form.

  2. The proofs we provide here require that the gaps in the relevant portion of the eigenvalue spectrum are large, in order to ensure that the perturbations in the eigenvalues of the matrix are not so large as to distort the support of the input test vector.

  3. The matrix must be polynomially sparse.

  4. The desired precision for the eigenvalues of the robust PCA matrix is inverse polynomial in the problem size.

  5. The eigenvalue gaps are polynomially large in the problem size.

These assumptions preclude the method from providing exponential advantages in many cases; however, they do not preclude exponential advantages in quantum settings where the input is given by an efficient quantum procedure. Some of these caveats can be relaxed by using alternative simulation methods or exploiting stronger promises about the structure of the matrix. We leave a detailed discussion of this for future work. The final two assumptions may in practice be the strongest restrictions for our method, or at least for the analysis we provide for it, because the complexity diverges rapidly with both quantities. It is our hope that subsequent work will improve upon this, perhaps by using more advanced methods based on LCU techniques to eschew the use of amplitude estimation as an intermediate step.

Looking at the problem from a broader perspective, we have shown that quantum PCA can be applied to defend against Threat Model 1. Specifically, replacing the mean with the median confers resilience to poisoned training data, which allows the resulting classifiers to be made robust to adversaries who control a small fraction of the training data. This shows that ideas from adversarial machine learning can be imported into quantum classifiers. In contrast, before this work it was not clear how to do this because existing quantum PCA methods cannot be easily adapted from using a mean to a median. Additionally, this robustness can also be valuable in non-adversarial settings because it makes estimates yielded by PCA less sensitive to outliers which may not necessarily be added by an adversary. We will see this theme repeated in the following section, where we discuss how to perform quantum bagging and boosting.

IV Bootstrap Aggregation and Boosting

Bootstrap aggregation, otherwise known as bagging for short, is another approach that is commonly used to increase the security of machine learning as well as improve the quality of the decision boundaries of the classifier. The idea behind bagging is to replace a single classifier with an ensemble of classifiers. This ensemble of classifiers is constructed by randomly choosing portions of the data set to feed to each classifier. A common way of doing this is bootstrap aggregation, wherein the training vectors are chosen randomly with replacement from the original training set. Since each vector is chosen with replacement, with high probability each training set will exclude a constant fraction of the training examples. This can make the resultant classifiers more robust to outliers and also make it more challenging for an adversary to steal the model.
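The classical procedure can be sketched in a few lines (the nearest-centroid learner and the data below are illustrative stand-ins for any weak classifier): train each ensemble member on a bootstrap resample and aggregate class labels by majority vote.

```python
import numpy as np

# Minimal bootstrap aggregation: each resample omits, with high
# probability, a constant fraction of the data, so a few poisoned
# vectors only reach some of the ensemble members.
rng = np.random.default_rng(4)

def train(X, y):
    """Per-class centroids: a stand-in for any weak learner."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    """Assign the class whose centroid is nearest to x."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(+2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

votes = []
for _ in range(25):                        # 25 bootstrap resamples
    idx = rng.integers(0, len(X), len(X))  # sample with replacement
    votes.append(predict(train(X[idx], y[idx]), np.array([2.0, 2.0])))
print(max(set(votes), key=votes.count))    # majority vote for the test point
```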

We can abstract this process by instead looking at an ensemble of classifiers that are trained using some quantum subroutine. These classifiers may be trained with legitimate data or instead may be trained using data from an adversary. We can envision a classifier, in the worst case, as being compromised if it receives the adversary’s training set. As such, for our purposes we will mainly look at bagging through the lens of boosting, which uses an ensemble of different classifiers, each of which may be trained using the same or different data sets, to assign a class to a test vector. Quantum boosting has already been discussed in schuld2017quantum ; however, our approach to boosting differs considerably from this treatment.

The type of threat that we wish to address with our quantum boosting protocol is given below.

Threat Model 2.

Assume that the user is provided a programmable oracle, C, such that C|j⟩|ψ⟩ = |j⟩C_j|ψ⟩, where each C_j is a quantum classifier that acts on the test vector |ψ⟩. The adversary is assumed to control a fraction η of all classifiers that the oracle uses to classify data and wishes to affect the classes reported by the user of the boosted quantum classifier that implements C. The adversary is assumed to have no computational restrictions and complete knowledge of the classifiers / training data used in the boosting protocol.

While the concept of a classifier has a clear meaning in classical computing, if we wish to apply the same ideas to quantum classifiers we run into certain challenges. The first such challenge lies with the encoding of the training vectors. In classical machine learning, training vectors are typically bit vectors. A binary classifier can then be thought of as a map from the space of test vectors to {−1, +1}, corresponding to the two classes. That is, if C were a classifier then Cv = ±v for all vectors v, depending on the membership of the vector. Thus every training vector can be viewed as an eigenvector with eigenvalue ±1. This is illustrated for unit vectors in Figure 1.

Now imagine instead that our test vector is a quantum state |ψ⟩ in ℂ². Unlike the classical setting, it may be physically impossible for the classifier to know precisely what |ψ⟩ is because measurement inherently damages the state. This makes the classification task fundamentally different from classical settings, where the test vector is known in its entirety. Since such vectors live in a two-dimensional vector space and comprise an infinite set, not all of them can be eigenvectors of the classifier C. However, if we let C be a classifier with eigenvalues ±1 then we can always express |ψ⟩ = a|φ₊⟩ + b|φ₋⟩, where |φ₊⟩ and |φ₋⟩ are (+1)- and (−1)-eigenvectors of C and |a|² + |b|² = 1. Thus we can still classify in the same manner, but now we must measure the state repeatedly to determine whether |ψ⟩ has more projection onto the positive or the negative eigenspace of the classifier. This notion of classification generalizes the classical idea and highlights the importance of thinking about the eigenspaces of a classifier within a quantum framework.
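This repeated-measurement notion of classification can be illustrated with a small numerical sketch (the classifier below is a made-up Hermitian, unitary reflection, not one from the paper):

```python
import numpy as np

# A made-up single-qubit "classifier": a Hermitian, unitary reflection whose
# +1/-1 eigenspaces encode the two classes.
theta = np.pi / 6
C = np.array([[np.cos(theta), np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])   # C = C^dag and C @ C = I

psi = np.array([np.cos(0.2), np.sin(0.2)])        # unknown test state

# Probability of projecting onto the positive eigenspace of C; repeated
# measurements estimate this probability and the majority sign gives the class.
evals, evecs = np.linalg.eigh(C)
P_plus = sum(np.outer(v, v) for v, e in zip(evecs.T, evals) if e > 0)
p_plus = float(psi @ P_plus @ psi)
label = +1 if p_plus > 0.5 else -1
print(p_plus, label)
```

Here the state is nearly aligned with the positive eigenvector, so almost every measurement reports class +1.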

Our approach to boosting and bagging embraces this idea. The idea is to combine an ensemble of classifiers C_k to form a weighted sum C = Σ_k w_k C_k, where w_k ≥ 0 and Σ_k w_k = 1. Let v be a simultaneous (+1)-eigenvector of each C_k, i.e.

(7)  C v = Σ_k w_k C_k v = Σ_k w_k v = v.

The same obviously holds (with eigenvalue −1) for any v that is a negative eigenvector of each C_k. That is, any vector that is in the simultaneous positive or negative eigenspace of every classifier will be deterministically classified by C.

Figure 1:

Eigenspaces for two possible linear classifiers for linearly separated unit vectors. Each trained model provides a different separating hyperplane and different eigenvectors. Bagging aggregates over the projections of the test vector onto these eigenvectors and outputs a class based on that aggregation.

The simplest way to construct a classifier out of an ensemble is to project the test vector onto the eigenvectors of the sum of the classifiers and compute the projection of the state onto the positive and negative eigenvalue subspaces of that sum. This gives us an additional freedom not seen in classical machine learning: while the positive and negative eigenspaces are fixed for each classifier in the classical analogues of our scheme, here they are not. We also wish our algorithm to function for states that are fed in a streaming fashion. That is to say, we do not assume that when a state is provided we can prepare a second copy of it. This prevents us from straightforwardly measuring the expectation of each of the constituent classifiers to obtain a classification for the state. It also prevents us from using quantum state tomography to learn the state and then applying a classifier to it. We provide a formal definition of this form of quantum boosting or bagging below.
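A numerical sketch of this ensemble construction, with made-up component classifiers and weights, looks as follows; the class is read off from the probability of projecting onto the positive eigenspace of the weighted sum:

```python
import numpy as np

def reflection(a):
    """Hermitian, unitary toy classifier with +1/-1 eigenspaces."""
    return np.array([[np.cos(a), np.sin(a)], [np.sin(a), -np.cos(a)]])

# Hypothetical ensemble members, e.g. trained on different data splits.
classifiers = [reflection(a) for a in (0.1, 0.25, 0.4)]
weights = np.array([0.5, 0.3, 0.2])                # w_k >= 0, sum to 1

C = sum(w * Ck for w, Ck in zip(weights, classifiers))  # Hermitian, ||C|| <= 1

psi = np.array([np.cos(0.15), np.sin(0.15)])       # unknown test state

# Project onto the positive eigenspace of the *sum* of the classifiers.
evals, evecs = np.linalg.eigh(C)
p_plus = sum(float(v @ psi) ** 2 for v, e in zip(evecs.T, evals) if e > 0)
label = +1 if p_plus > 0.5 else -1
print(p_plus, label)
```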

Definition 6.

Two-class quantum boosting or bagging is defined to be a process by which an unknown test vector |ψ⟩ is classified by projecting it onto the eigenspaces of C = Σ_k w_k C_k and assigning the class +1 if the probability of projecting onto the positive eigenspace is greater than 1/2, and −1 otherwise. Here each C_k is unitary with eigenvalues ±1, w_k ≥ 0, Σ_k w_k = 1, and at least two w_k that correspond to distinct C_k are positive.

We realize such a classifier via phase estimation. But before discussing this, we need to abstract the input to the problem for generality. We do this by assuming that each classifier C_k being included is specified only by a weight vector w_k. If the individual classifiers were neural networks then the weights could correspond to edge and bias weights for the different networks that we would find by training on different subsets of the data. Alternatively, if we were forming a committee of neural networks of different structures then we simply envision taking the register used to store w_k to include a tag that tells the system which classifier to use. This allows the same data structure to be used for arbitrary classifiers.

One might wonder why we choose to assign the class based on the sign structure of the eigenvalues of C rather than the class associated with the mean of the C_k. In other words, we could have defined our quantum analogue of boosting such that the class is given by the sign of the expectation value of C in the test state, ⟨ψ|C|ψ⟩. In such a case, we would have no guarantee that the quantum classifier is protected against the adversary, because the mean that is computed is not robust. For example, assume that the ensemble consists of N classifiers whose expectation values in the test state are each small and positive, and assume the weights are uniform (if they are not uniform then the adversary can choose to replace the most significant classifier and have an even greater effect). Then an adversary who controls even a single classifier and knows the test example can replace their classifier with one whose expectation value on the test state is −1. The assigned class is then

(8)  class = sign( Σ_k w_k ⟨ψ|C_k|ψ⟩ ).

In such an example, even a single compromised classifier can flip the class returned, because by picking the replacement's expectation value the adversary gains control over the sign of the mean. In contrast, we show that compromising a single quantum classifier does not have a substantial impact on the class if our proposal is used, assuming the eigenvalue gap between positive and negative eigenvalues of C is sufficiently large in the absence of an adversary.
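The fragility of the expectation-value rule can be checked directly. In this toy example (all numbers made up) a single compromised classifier flips the sign of the mean, while Weyl's inequality caps every eigenvalue shift at twice the compromised weight:

```python
import numpy as np

# Toy ensemble: ten classifiers, each a Hermitian unitary with expectation
# <psi|C_k|psi> = 0.05 on the test state |0>.
s = np.sqrt(1 - 0.05 ** 2)
Ck = np.array([[0.05, s], [s, -0.05]])
psi = np.array([1.0, 0.0])

honest = [Ck] * 10
weights = np.full(10, 0.1)

# The adversary controls one classifier (fraction 0.1) and swaps in -Z,
# whose expectation on psi is -1.
adversarial = [np.diag([-1.0, 1.0])] + honest[1:]

C_good = sum(w * c for w, c in zip(weights, honest))
C_bad = sum(w * c for w, c in zip(weights, adversarial))

mean_good = psi @ C_good @ psi     #  0.05: class +1 under the mean rule
mean_bad = psi @ C_bad @ psi       # -0.055: the mean's sign has flipped

# Weyl's inequality: eigenvalues shift by at most ||C_bad - C_good|| <= 2*0.1.
shift = np.max(np.abs(np.linalg.eigvalsh(C_bad) - np.linalg.eigvalsh(C_good)))
print(mean_good, mean_bad, shift)
```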

We formalize these ideas in a quantum setting by introducing three quantum blackboxes. The first takes an index and outputs the corresponding weight vector. The second prepares the superposition of weights for the different classifiers in the ensemble. The final blackbox applies, programmed via the weight vector, the corresponding classifier to the data vector in question. We formally define these blackboxes below.

Definition 7.

Let W be a unitary operator such that W |k⟩|0⟩ = |k⟩|w_k⟩, where w_k represents the weight vector that specifies the k-th classifier, for any index state |k⟩. Let T be a unitary operator that, up to local isometries, performs the same map by generating the weights for classifier k (potentially via training on a subset of the data). Finally, define the unitary B : |0⟩ ↦ Σ_k √(w_k) |k⟩ for non-negative weights w_k that sum to 1.

In order to apply phase estimation on C for a fixed and unknown input vector |ψ⟩, we need to be able to simulate e^{−iCt}. Fortunately, because each C_k is unitary with eigenvalues ±1, it is easy to see that C_k† = C_k and hence C = Σ_k w_k C_k is Hermitian. Thus C is a Hamiltonian, and we can apply Hamiltonian simulation ideas to implement this operator, as we formally demonstrate in the following lemma.

Lemma 8.

Let C = Σ_{k=1}^{M} w_k C_k, where each C_k is Hermitian and unitary and has a corresponding weight vector, with w_k ≥ 0 and Σ_k w_k = 1. Then for every t ≥ 0 and ε > 0 there exists a protocol for implementing a non-unitary operator V such that ‖(V − e^{−iCt})|ψ⟩‖ ≤ ε for all |ψ⟩, using a number of queries to the oracles of Definition 7 that is in O(t log(t/ε)/log log(t/ε)).

Proof.

The proof follows from the truncated Taylor series simulation algorithm. By definition we have that C = Σ_k w_k C_k with each C_k unitary. Now let us consider the operation select(C). This operation has the effect that, up to isometries, select(C) : |k⟩|w_k⟩|ψ⟩ ↦ |k⟩|w_k⟩ C_k|ψ⟩. The remaining operations take the exact same form as their analogues in the Taylor-series simulation algorithm. The Taylor series simulation algorithm requires an expansion up to an order K ∈ O(log(r/ε)/log log(r/ε)), and the use of robust oblivious amplitude amplification requires r ∈ O(t) such segments. Thus the total number of queries to select(C) made in the simulation is O(rK), and the number of queries to the weight-preparation oracle is of the same order.

Next, by assumption we have that Σ_k w_k = 1, and therefore ‖C‖ ≤ 1. Thus the number of queries to select(C) and to the weight-preparation oracle in the algorithm is in O(t log(t/ε)/log log(t/ε)). The result then follows after noting that a single call to select(C) requires O(1) calls to the weight oracle and the classifier oracle. ∎

Note that in general purpose simulation algorithms the application of select(C) would require a complex series of controls to execute; here, by contrast, the classifier is made programmable via the weight oracle, so the procedure does not explicitly depend on the number of terms in C (although in practice it will often depend on it through the cost of implementing the oracle as a circuit). For this reason the query complexity cited above can be deceptive if used as a surrogate for the time-complexity in a given computational model.
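A purely classical sketch of the truncated-Taylor idea behind Lemma 8, with a made-up two-term C, illustrates why a modest truncation order suffices when the weights sum to 1 (so ‖C‖ ≤ 1):

```python
import numpy as np

# C is a convex combination of Hermitian unitaries (here Pauli Z and X with
# made-up weights), so ||C|| <= 1 and a low-order Taylor expansion of
# e^{-iCt} per time segment already achieves very small error.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
C = 0.6 * Z + 0.4 * X              # weights w = (0.6, 0.4), summing to 1

t, K = 1.0, 12                      # one segment of length t, truncation order K

approx = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)
for k in range(K + 1):
    approx += term                  # term = (-iCt)^k / k!
    term = term @ (-1j * C * t) / (k + 1)

# Exact evolution via the eigendecomposition of the Hermitian matrix C.
evals, evecs = np.linalg.eigh(C)
exact = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

print(np.linalg.norm(exact - approx))
```

The residual norm is on the order of ‖Ct‖^{K+1}/(K+1)!, i.e. negligible here; the quantum algorithm implements the same truncated series with amplitude amplification rather than matrix arithmetic.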

With the result of Lemma 8 we can now proceed to finding the cost of performing two-class bagging or boosting where the sign of the eigenvalue of C is used to perform the classification. The main claim is given below.

Theorem 9.

Under the assumptions of Lemma 8, the number of queries to the oracles of Definition 7 needed to project |ψ⟩ onto an eigenvector of C with probability at least 1 − δ, and to estimate the corresponding eigenvalue within error ε, is in O((log(1/δ)/ε) · log(1/(εδ))/log log(1/(εδ))).

Proof.

Since ‖C‖ ≤ 1, it follows that we can apply phase estimation on e^{−iC} in order to project the unknown state onto an eigenvector. The number of applications of e^{−iC} needed to achieve accuracy ε with failure probability at most δ is O(log(1/δ)/ε) using coherent phase estimation WKS15 . Therefore, after taking the simulation error in Lemma 8 to be a sufficiently small multiple of εδ and using the result of Lemma 8 to simulate e^{−iC}, the overall query complexity for the simulation is as claimed. ∎

Finally, there is the question of how large an impact an adversary can have on the eigenvalues and eigenvectors of the classifier. The answer can be found using perturbation theory to estimate the derivatives of the eigenvalues and eigenvectors of the classifier as a function of the fraction of the classifiers that the adversary has compromised. This leads to the following result.

Corollary 10.

Assume that C is a classifier defined as per Definition 6 that is described by a Hermitian matrix whose eigenvalues all have absolute value bounded below by Δ, and that one wishes to classify a data set based on the mean sign of the eigenvalues in the support of a test vector. Given that an adversary controls a fraction Γ < Δ/2 of the classifiers, the adversary cannot affect the classes output by this protocol in the limit where ε → 0 and δ → 0.

Proof.

If an adversary controls a fraction of the classifiers equal to Γ then we can view the perturbed classifier as

(9)  C(Γ) = Σ_{k∉A} w_k C_k + Σ_{k∈A} w_k C′_k,  with Σ_{k∈A} w_k = Γ,

where A is the set of compromised classifiers and the C′_k are the adversary's replacements. This means that we can view C(Γ) as a perturbation of the original matrix by a small amount. In particular, because each C_k and C′_k is Hermitian and unitary,

(10)  ‖C(Γ) − C(0)‖ ≤ 2Γ.

Using perturbation theory (see Eq. (47) in the appendix), we show that the maximum rate of change of any eigenvalue due to this perturbation is at most 2. By this we mean that if E_j(0) is the j-th eigenvalue of C(0) then the corresponding eigenvalue E_j(Γ) obeys |∂_Γ E_j(Γ)| ≤ 2. By integrating this we find that |E_j(Γ) − E_j(0)| ≤ 2Γ.

Given that |E_j(0)| ≥ Δ, the above argument shows that E_j(Γ) and E_j(0) have the same sign if Γ < Δ/2. Thus if Γ is sufficiently small then none of the eigenvectors returned will have the incorrect sign. This implies that, if we neglect errors in the estimation of the eigenvalues of C, the adversary can only impact the class output by the protocol if they control a fraction greater than Δ/2. In the limit as ε → 0 and δ → 0 these estimation errors become negligible and the result follows. ∎
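The eigenvalue-sign stability promised by Corollary 10 is easy to verify numerically; here a synthetic ensemble with a large spectral gap survives an adversary who flips 20% of the classifiers:

```python
import numpy as np

def reflection(a):
    """Hermitian unitary with eigenvalues +1 and -1 (a toy classifier)."""
    return np.array([[np.cos(a), np.sin(a)], [np.sin(a), -np.cos(a)]])

rng = np.random.default_rng(7)
N = 20
honest = [reflection(a) for a in rng.uniform(-0.3, 0.3, N)]
C = sum(honest) / N                            # uniform weights w_k = 1/N
delta = np.min(np.abs(np.linalg.eigvalsh(C)))   # spectral gap, ~0.95 here

# Adversary flips Gamma = 4/20 = 0.2 of the ensemble to the opposite class.
compromised = [-h for h in honest[:4]] + honest[4:]
C_bad = sum(compromised) / N

# Weyl's inequality: each eigenvalue moves by at most ||C_bad - C|| <= 2*Gamma,
# so with 2*Gamma < delta no eigenvalue can change sign.
shift = np.max(np.abs(np.linalg.eigvalsh(C_bad) - np.linalg.eigvalsh(C)))
print(delta, shift)
```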

From this we see that adversaries acting in accordance with Threat Model 2 who control a small fraction of the quantum classifiers can be thwarted using boosting or bagging. This illustrates that by generalizing the concept of boosting to a quantum setting we can not only make our classifiers better but also make them more resilient to adversaries who control a small fraction of the classifiers provided. This result, combined with our previous discussion of robust quantum PCA, shows that quantum techniques can be used to defend quantum classifiers against causative attacks by quantum adversaries.

However, we have not yet discussed exploratory attacks, which often serve as precursors to such attacks or may in some cases be the goal in and of themselves. We show below that quantum methods can be used in a strong way to defend against some classes of these attacks.

V Quantum Enhanced Privacy for Clustering

Since privacy is one of the major applications of quantum technologies, it should come as no surprise that quantum computing can help boost privacy in machine learning as well. As a toy example, we first discuss how quantum computing can be used to allow k-means clustering to be performed without leaking substantial information about any of the training vectors.

The k-means clustering algorithm is perhaps the most ubiquitous algorithm used to cluster data. While there are several variants of the algorithm, the most common variant attempts to break a data set into k clusters such that the sum of the intra-cluster variances is minimized. In particular, let μ_1, …, μ_k be the cluster centroids and let x_1, …, x_M be the vectors in the data set. The k-means clustering algorithm then seeks to minimize Σ_j min_i |x_j − μ_i|². Formally, this problem is NP-hard, which means that no efficient clustering algorithm (quantum or classical) is likely to exist. However, most clustering problems are not hard instances, which means that clustering generically is typically not prohibitively challenging.

The k-means algorithm for clustering is simple. First assign the centroids μ_i to k data points sampled at random. Then, for each x_j, find the μ_i that is closest to it and assign that vector to the i-th cluster. Next, set each μ_i to be the centroid of its cluster and repeat the previous steps until the cluster centroids converge.
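The update just described is ordinary Lloyd iteration; a minimal classical sketch on synthetic two-blob data is:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd iteration: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        for i in range(k):
            if np.any(labels == i):       # leave empty clusters in place
                mu[i] = X[labels == i].mean(axis=0)
    return mu, labels

# Synthetic data: two tight, well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(3.0, 0.1, (50, 2))])
mu, labels = kmeans(X, 2)
print(np.sort(mu[:, 0]))
```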

A challenge in applying this algorithm to settings such as clustering medical data is that the party running the algorithm typically needs access to the users' information. For sensitive data of this kind it is difficult to apply such methods to understand structure in data sets that are not already anonymized. Quantum mechanics can be used to address this.

Imagine a scenario where an experimenter wishes to collaboratively cluster a private data set. The experimenter is comfortable broadcasting information about the model to the owners of the data, but the users are not comfortable leaking more than a small fraction of a bit of information about their private data in the process of clustering. Our approach, which is related to quantum voting strategies hillery2006towards , is to share an entangled quantum state between the participants and use it to anonymously learn the cluster means of the data.

Threat Model 3.

Assume that a group of users wish to apply k-means clustering to their data and that an adversary is present who wishes to learn the private data held by at least one of the users as an exploratory attack, with no prior knowledge about any user's private data before the protocol starts. The adversary is assumed to have control over any and all parties that partake in the protocol, authentication is impossible in this setting, and the adversary is computationally unbounded.

The protocol that we propose to thwart such attacks is given below.

  1. The experimenter broadcasts k cluster centroids over a public classical channel.

  2. For each cluster centroid μ_i, the experimenter sends each participant one qubit out of an entangled state of the form (|00⋯0⟩ + |11⋯1⟩)/√2.

  3. Each participant that decides to contribute applies a small, fixed phase rotation to the qubit corresponding to the cluster that is closest to their vector. If two clusters are equidistant, the closer cluster is chosen randomly between them.

  4. The experimenter repeats the above two steps within a phase estimation protocol to learn the fraction of participants assigned to each cluster within error ε. Note that because some participants may choose not to contribute, these fractions need not sum to 1.

  5. Next the experimenter performs the same procedure, except phase estimation is now performed for each of the d components of the cluster means, a total of d phase estimations for each cluster.

  6. From these values the experimenter updates the centroids and repeats the above steps until convergence or until the privacy of the users' data can no longer be guaranteed.

Intuitively, the idea behind the above method is that each time a participant interacts with their qubits they do so by performing a tiny rotation. These rotations are so small that, individually, they cannot be easily distinguished from performing no rotation, even when the best test permitted by quantum mechanics is used. Collectively, however, the rotation angles contributed by the participants sum to a non-negligible rotation angle when a large number of participants are pooled. By encoding their private data as rotation angles, the experimenter can apply this trick to learn cluster means without learning more than a small fraction of a bit of information about any individual participant's private data.
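This accumulation trick can be illustrated with a small classical simulation of the ideal, noiseless statistics (the angle θ and the participation pattern below are made up; the real protocol distributes entangled qubits rather than tracking angles directly):

```python
import numpy as np

# M participants each either apply a tiny phase rotation theta to "their"
# cluster qubit or do nothing; theta is chosen so the total stays below pi/2.
M = 1000
theta = np.pi / (2 * M)
participates = np.random.default_rng(3).random(M) < 0.37   # hidden choices

total_phase = theta * participates.sum()

# The experimenter's measurement statistics depend only on the *total* phase,
# i.e. on the participation count, not on who participated.
p_one = np.sin(total_phase / 2) ** 2
est_count = 2 * np.arcsin(np.sqrt(p_one)) / theta

# Toggling any single participant changes the outcome distribution by O(theta),
# which is why no individual choice can be reliably detected.
p_without_one = np.sin((total_phase - theta) / 2) ** 2
print(est_count, abs(p_one - p_without_one))
```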

Lemma 11.

Steps 1-5 of the above protocol take a set of d-dimensional vectors held by M potential participants and a set of k cluster centroids, and compute an iteration of k-means clustering with the output cluster centroids computed within error ε in the infinity norm. Each participant requires a number of single-qubit rotations that scales as O(d/ε), under the assumption that each participant follows the protocol precisely.

Proof.

Let us examine the protocol from the perspective of the experimenter. From this perspective the participants' actions can be viewed collectively as enacting blackbox transformations that apply an appropriate phase on the shared state. Let us consider the first phase of the algorithm, corresponding to steps 2 through 4, where the experimenter attempts to learn the probabilities of users being in each of the clusters.

First, when the cluster centroids are announced to participant j, they can classically compute the distance between their vector and each of the cluster centroids efficiently; no quantum operations are needed for this step. Next, for the qubit corresponding to their closest cluster, user j performs a single-qubit phase rotation and uses a swap operation to send that qubit back to the experimenter, which requires O(1) quantum operations. The collective phase incurred on the shared state from these rotations results in (up to a global phase) a state of the form (|00⋯0⟩ + e^{iφ}|11⋯1⟩)/√2, where the total phase φ is proportional to the number of participants assigned to the cluster.

It is then easy to see that after querying this black box n times, so that each participant applies their rotation n times, the state obtained is (|00⋯0⟩ + e^{inφ}|11⋯1⟩)/√2. Upon receiving this, the experimenter performs a series of controlled-NOT operations to reduce this state, up to local isometries, to (|0⟩ + e^{inφ}|1⟩)/√2 on a single qubit. Then, after applying a Hadamard transform, the probability of measuring this qubit to be 1 exactly matches that of phase estimation, wherein φ corresponds to the unknown phase. This inference problem is well understood, and solutions exist such that the total number of repetitions needed scales as O(1/ε) if we wish to estimate φ within error ε.

While it may naïvely seem that the users require O(kd) rotations to perform this protocol, each user in fact only needs to perform O(d) rotations per repetition. This is because each vector is assigned to only one cluster using the above protocol, so a participant only contributes rotations for their own cluster.

In the next phase of the protocol, the experimenter follows the same strategy to learn the means of the clustered data from each of the participants. In this case, however, the blackbox function encodes in the phase the components of the participants' vectors: the rotation angle applied to the ℓ-th qubit for a cluster is proportional to the ℓ-th component of the participant's vector. By querying this black box repeatedly and performing the exact same transformations used above, we can apply phase estimation to learn the sum of the components of the vectors assigned to a cluster within error ε using O(1/ε) queries to the participants. Similarly, we can estimate each component of the cluster mean within error ε using O(1/ε) queries if the cluster membership probability is known with no error.

If the membership probability is itself known only to within error O(ε), then it is straightforward to see that the resulting estimate of the cluster mean remains accurate to within O(ε), because the quotient of the two estimated quantities inherits an error at most proportional to the errors in the numerator and the denominator. Each component of the centroid can therefore be learned using O(1/ε) rotations, and so the total number of rotations that each user needs to contribute to provide information about their cluster centroid is O(d/ε).

While this lemma shows that the protocol is capable of updating the cluster centroids in a k-means clustering protocol, it does not show that the users' data is kept secret. In particular, we need to show that even an all-powerful adversarial experimenter cannot learn whether or not a user actually participated in the protocol. The proof of this is actually quite simple and is given in the following theorem.

Theorem 12.

Let u be a participant in a clustering task that repeats the above protocol for a number of rounds, with the cluster centroids at each round announced as in the protocol. Assume that an eavesdropper E assigns a prior probability of 1/2 that u participated in the above protocol and wishes to determine whether u contributed to the clustering (which may not be known to E). Then the maximum probability, over all quantum strategies, of E successfully deciding this exceeds 1/2 by an amount that vanishes as the rotation angles used in the protocol are made small.

Proof.

The proof of the theorem proceeds as follows. Since u does not communicate classically with the experimenter, the only way that E can solve this problem is by feeding an optimal probe state into the protocol to learn u's data. Specifically, imagine E usurps the protocol and provides a state ρ chosen to maximize the information that E can learn. The state ρ could be entangled over the multiple qubits that would be sent over the protocol and could also be mixed in general. When this state is passed to u, a unitary transformation U is enacted by the protocol if u participates. Formally, the task that E faces is then to distinguish between ρ and UρU†.

Fortunately, this state discrimination problem is well studied. The optimal probability of correctly distinguishing the two states, given a prior probability of 1/2, is fuchs1999cryptographic

(11)  P = 1/2 + (1/4) ‖ρ − UρU†‖_tr.

Let us assume without loss of generality that u's vector is located in the first cluster. Then U takes the form

(12)  U = R^{n₁+n₂} ⊗ 𝟙,

where 𝟙 refers to the identity acting on the qubits used to learn data about clusters 2 through k (as well as any ancillary qubits that E chooses to use to process the qubits), n₁ and n₂ are the numbers of rotations used in the first and second phases discussed in Lemma 11, and R refers to the small phase rotation applied to the first cluster's qubit.

Using Hadamard's lemma and the fact that the trace norm of a commutator with a density operator is bounded by a constant, we then see that, provided the total rotation angle is small,

(13)  ‖ρ − UρU†‖_tr ∈ O((n₁ + n₂)θ),

where θ is the single-step rotation angle.
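The Helstrom bound of Eq. (11) can be evaluated directly in a single-qubit caricature of the eavesdropper's problem (θ, n, and the probe state below are made up): the trace distance, and hence the distinguishing advantage, grows only linearly in the accumulated angle nθ:

```python
import numpy as np

theta = 0.01                          # one tiny rotation step
n = 5                                 # rotations accumulated across rounds
psi = np.array([1, 1]) / np.sqrt(2)   # eavesdropper's probe state
rho = np.outer(psi, psi).astype(complex)

U = np.diag([1.0, np.exp(1j * n * theta)])   # net effect of participating
sigma = U @ rho @ U.conj().T

# Trace norm of a Hermitian matrix = sum of absolute eigenvalues.
trace_dist = np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

# Helstrom: with prior 1/2, optimal success is 1/2 + (trace distance)/4.
p_success = 0.5 + trace_dist / 4
print(p_success)
```

The success probability exceeds the blind guess 1/2 by roughly nθ/4, so shrinking θ (or bounding n) pins the eavesdropper near chance level.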

Let the cluster centroids in each round of the protocol be those announced by the experimenter. From Lemma 11 we see that the number of rotations used in the rounds of the first phase and the number of rotations used in the rounds of the second phase both scale inversely with the error tolerance. We therefore have from Eq. (13) that the bias achievable by the eavesdropper is bounded by a quantity proportional to the total rotation angle contributed by u across all rounds.

If the minimum probability of membership over all clusters and rounds is bounded away from zero, which is what we expect in typical cases, the probability of a malevolent experimenter discerning whether participant u contributed data to the clustering algorithm exceeds 1/2 only negligibly. It then follows that if a fixed total bias for the eavesdropper identifying whether the user partook in the algorithm can be tolerated, then a correspondingly bounded number of rounds of clustering can be carried out. This shows that if the number of participants M is sufficiently large then this protocol for k-means clustering can be carried out without compromising the privacy of any of the participants.

An important drawback of the above approach is that the scheme does not protect the learned model from the users. Protocols that do so could be enabled by having the experimenter reveal only hypothetical cluster centroids and then infer the most likely cluster centroids from the results; however, this diverges from the k-means approach to clustering and would likely require a larger number of participants to guarantee privacy, given that the information from each round is unlikely to be as useful as it is in k-means clustering.

Also, while the scheme is private it is not secure. This can easily be seen from the fact that an eavesdropper could intercept a qubit and apply a random phase to it. Because the protocol assumes that the phases contributed by the participants sum to at most a bounded amount, this can ruin the information sent. While the approach is not secure against an all-powerful adversary, the impact that individual malfeasant participants can have on the protocol can be mitigated. One natural way is to divide the participants into a constant number of smaller groups and compute the median of the cluster centroids returned by each group. While such strategies will be successful at thwarting a constant number of such attacks, finding more general secure and private quantum methods for clustering data remains an important goal for future work.
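The group-median mitigation just described can be sketched numerically; the group estimates and the corruption below are synthetic:

```python
import numpy as np

# Five groups each return an estimate of one centroid coordinate; one group
# has been sabotaged by a participant applying a random phase.
rng = np.random.default_rng(0)
true_coord = 2.5
estimates = true_coord + rng.normal(0, 0.01, size=5)   # honest group results
estimates[0] = -40.0                                   # phase-bombed group

robust = np.median(estimates)   # unaffected by the single bad group
naive = np.mean(estimates)      # dragged far from the truth
print(robust, naive)
```

As the text notes, this only tolerates a constant number of corrupted groups: once a majority of groups are attacked, the median offers no protection.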

VI Conclusion

We have surveyed here the impacts that quantum computing can have on the security and privacy of quantum machine learning algorithms. We have shown that robust versions of quantum principal component analysis exist that retain the exponential speedups observed for the non-robust analogue (modulo certain assumptions about data access and output). We have also shown how bootstrap aggregation and boosting can be performed on a quantum computer, and that it is quadratically easier to generate statistics over a large number of weak classifiers using these methods. Finally, we have shown that quantum methods can be used to perform a private form of k-means clustering wherein no eavesdropper can determine with high probability that any participant contributed data, let alone learn that data.

These results show that quantum technologies hold promise for helping secure machine learning and artificial intelligence. Going forward, an important task is to better understand the impact that technologies such as blind quantum computing arrighi2006blind ; broadbent2009universal may have, both for securing quantum machine learning algorithms and for blinding the underlying data sets from any adversary. Another important question is whether tools from quantum information theory could be used to bound the maximum information about the models used by quantum classifiers, such as quantum Boltzmann machines or quantum PCA, that adversaries can learn by querying a quantum classifier made publicly available in the cloud. Given the important contributions that quantum information theory has made to our understanding of privacy and security in a quantum world, we have every confidence that these same lessons will one day have an equal impact on the security and privacy of machine learning.

There are of course many open questions left in this field, and we have only attempted to give a taste here of the sorts of questions that arise when one looks at quantum machine learning in adversarial settings. One important question surrounds the problem of generating adversarial examples for quantum classifiers. Due to the unitarity of quantum computing, many classifiers can be easily inverted from the output quantum states. This allows adversarial examples to be created that would generate false positives (or negatives) when exposed to the classifier. The question of whether quantum computing offers significant countermeasures against such attacks remains an open problem. Similarly, gaining a deeper understanding of the limitations that sample lower bounds on state and process tomography place on stealing quantum models could be an interesting question for further research into the security of quantum machine learning solutions. Understanding how quantum computing can make machine learning solutions more robust to adversarial noise brings more than security: it helps us understand how to train quantum computers to grasp concepts in a robust fashion, similar to how we understand them. Thus thinking about such issues may help us address what is perhaps the biggest open question in quantum machine learning: "Can we train a quantum computer to think like we think?"

Appendix A Proof of Theorem 1 and Robust LCU Simulation

Our goal in this approach to robust PCA is to examine the eigendecomposition of a vector in terms of the principal components of the data. This has a major advantage over standard PCA in that it is manifestly robust to outliers. We will see that, within this oracle query model, quantum approaches can provide great advantages for robust PCA if we assume that the data is sparse.

Now that we have an algorithm for coherently computing the median, we can apply this subroutine to compute the components needed for robust PCA. First we need a method for computing a representation of the inner product between the test vector and any given training vector.

Lemma 13.

For any ε > 0 there exists a coherent quantum algorithm that maps, up to isometries, |j⟩|0⟩ ↦ |j⟩|a_j⟩, where a_j is the inner product between the j-th training vector and the test vector computed to precision ε and represented as a fixed-point bit string, using O(1/ε) queries to the controlled state-preparation oracle.

Proof.

First let us assume that the training vectors are not all unit vectors. Under such circumstances we can use Lemma 3 to embed these vectors as unit vectors in a higher-dimensional space. Therefore we can assume without loss of generality that the training vectors are all unit vectors.

Since the test vector can be taken to be a unit vector, we can use quantum approaches to compute the inner product. Specifically, the following circuit can estimate the inner product between the test vector and a training vector.

The circuit implements the Hadamard test on a unitary which prepares the training state and computes the bitwise exclusive-or of the resultant vector and the relevant basis vector. Specifically, the probability of measuring the top-most qubit to be 0 is

(16)  P(0) = (1 + v_j · u)/2,

where recall that we have assumed that the training vectors satisfy |v_j| = 1. To see this, note that the circuit performs the following transformation

(17)  |0⟩|u⟩ ↦ (1/2) |0⟩(|u⟩ + |v_j⟩) + (1/2) |1⟩(|u⟩ − |v_j⟩),

up to the action on some ancillary state vector. The probability of measuring the first qubit to be 0 is ‖(|u⟩ + |v_j⟩)/2‖², which after some elementary simplifications is

(18)  P(0) = (1 + v_j · u)/2,

given that v_j · u is real valued.

Following the argument for coherent amplitude estimation in WKS15 we can use this process to produce a circuit that maps |j⟩|0⟩ to |j⟩|P(0)⟩, where P(0) is computed to precision ε using O(1/ε) invocations of the above circuit. Each invocation requires O(1) uses of the controlled state-preparation oracle, and hence the query complexity is O(1/ε). By subtracting 1/2 from the resulting bit string and multiplying the result by 2 we obtain the desired answer, computed to precision ε because we demanded that P(0) be computed with accuracy ε/2. The arithmetic requires no additional queries, so the overall query complexity is as claimed. Note that several junk registers invoked by this algorithm necessarily extend the space beyond that claimed in the lemma statement; however, since we only require the result to hold up to isometries, we neglect such registers above for simplicity. ∎
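The interference statistics underlying Lemma 13 can be checked with a small state-vector simulation. This sketch uses a simplified variant of the circuit (prepare (|0⟩|u⟩ + |1⟩|v⟩)/√2, then a Hadamard on the ancilla), which reproduces P(0) = (1 + v · u)/2 for real unit vectors:

```python
import numpy as np

def p0_after_hadamard(u, v):
    """Simulate: prepare (|0>|u> + |1>|v>)/sqrt(2), apply a Hadamard to the
    ancilla, and return the probability that the ancilla reads 0."""
    state = np.concatenate([u, v]) / np.sqrt(2)    # ancilla (x) data register
    d = len(u)
    top, bot = state[:d], state[d:]
    state = np.concatenate([top + bot, top - bot]) / np.sqrt(2)  # Hadamard
    return float(np.sum(state[:d] ** 2))

u = np.array([0.6, 0.8])    # made-up real unit test vector
v = np.array([1.0, 0.0])    # made-up real unit training vector
p0 = p0_after_hadamard(u, v)
inner = 2 * p0 - 1          # invert P(0) = (1 + v.u)/2
print(p0, inner)            # 0.8 0.6
```

On a quantum device P(0) is not read directly; amplitude estimation recovers it to precision ε with O(1/ε) repetitions, as in the proof above.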

Theorem 14.

Let U be a unitary operation that prepares the data, let F⁻¹ be an inverse cumulative distribution function for the inner products, and assume that F⁻¹ is Lipschitz with constant L. For any ε > 0 a unitary V can be implemented that approximates, to within ε, the ideal transformation preparing a quantum state whose flag register is set if and only if the corresponding value lies below the approximate median. Furthermore, this state can be prepared using a number of queries to U that scales polynomially in L/ε.

Proof.

Similar to the proof of Grover's algorithm for the median grover1996fast and that of Nayak and Wu nayak1999quantum , our approach reduces to coherently applying binary search. Their algorithms cannot be directly used here because they utilize measurement, which prevents their use within a coherent amplitude estimation algorithm. Furthermore, the algorithms of nayak1999quantum solve a harder problem, namely that of outputting an approximate median from the list itself rather than simply providing a value that approximately separates the data into two equal-length halves. The value we seek need not actually be contained in the list, unlike the output of the algorithm of nayak1999quantum . For this reason we propose a slight variation on Nayak and Wu's ideas here.
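Before giving the coherent version, it may help to see the classical skeleton being quantized: a binary search over candidate split points driven only by "fraction below" queries (synthetic data here; in the quantum algorithm this fraction comes from amplitude estimation over inner products):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.uniform(-1, 1, size=1000)   # stand-in for the hidden inner products

def frac_below(m):
    """Oracle: what fraction of the hidden values lies below m? In the
    quantum algorithm this comes from coherent amplitude estimation."""
    return float((data < m).mean())

lo, hi = -1.0, 1.0
for _ in range(30):                    # ~log2(1/precision) iterations
    mid = (lo + hi) / 2
    if frac_below(mid) < 0.5:
        lo = mid                       # median lies above the trial point
    else:
        hi = mid                       # median lies at or below it
median_estimate = (lo + hi) / 2
print(median_estimate)
```

Note that the returned value need not be an element of the list; it merely splits the data into two nearly equal halves, which is exactly the relaxation exploited below.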

Consider the following algorithm, for some fixed number of iterations:

  1. Prepare a register encoding the current search interval, with one value corresponding to the lower endpoint and one to the upper endpoint of the range of possible medians.

  2. Repeat steps 3 through 8 for each iteration of the binary search.

  3. Set the trial median to the midpoint of the current search interval.

  4. Repeat the following step within a coherent amplitude estimation, with suitably chosen error and error probability, on the fifth register, using a projector that marks all states in that register whose value lies below the trial median, and store the estimated probability in the sixth register.

  5. Apply coherent amplitude estimation on the inner product circuit of Lemma 13 to prepare a state that, for each data index, encodes a bit indicating whether the corresponding inner product lies below the trial median.

  6. Use a reversible circuit to set, conditioned on the probability computed in the above steps being less than 1/2, the lower endpoint of the search interval to the trial median.

  7. Use a reversible circuit to set, conditioned on the probability computed in the above steps being greater than