Multi-party Poisoning through Generalized p-Tampering

09/10/2018 · by Saeed Mahloujifar, et al.

In a poisoning attack against a learning algorithm, an adversary tampers with a fraction of the training data T with the goal of increasing the classification error of the constructed hypothesis/model over the final test distribution. In the distributed setting, T might be gathered gradually from m data providers P_1,...,P_m who generate and submit their shares of T in an online way. In this work, we initiate a formal study of (k,p)-poisoning attacks in which an adversary controls k∈[m] of the parties, and even for each corrupted party P_i, the adversary submits some poisoned data T'_i on behalf of P_i that is still "(1-p)-close" to the correct data T_i (e.g., a 1-p fraction of T'_i is still honestly generated). For k=m, this model becomes the traditional notion of poisoning, and for p=1 it coincides with the standard notion of corruption in multi-party computation. We prove that if there is an initial constant error for the generated hypothesis h, there is always a (k,p)-poisoning attacker who can decrease the confidence of h (to have a small error), or alternatively increase the error of h, by Ω(p · k/m). Our attacks can be implemented in polynomial time given samples from the correct data, and they use no wrong labels if the original distributions are not noisy. At a technical level, we prove a general lemma about biasing bounded functions f(x_1,...,x_n)∈[0,1] through an attack model in which each block x_i might be controlled by an adversary with marginal probability p in an online way. When the probabilities are independent, this coincides with the model of p-tampering attacks; thus we call our model generalized p-tampering. We prove the power of such attacks by incorporating ideas from the context of coin-flipping attacks into the p-tampering model, and we generalize the results in both of these areas.


1 Introduction

Learning from a set of training examples in a way that the predictions generalize to instances beyond the training set is a fundamental problem in learning theory. The goal here is to produce a hypothesis h (also called a model) in such a way that h, with high probability, predicts the "correct" label y of a given instance x, where the pair (x, y) is sampled from the target (test) distribution D. In the most natural setting, the examples in the training data set T are also generated from the same distribution D; however, this is not always the case. For example, the examples in T could be gathered under noisy conditions. Even more, the difference between the distribution producing T and the test distribution D could be adversarial. Indeed, in his seminal work, Valiant [Val85] initiated a formal study of this phenomenon by defining the framework of learning under malicious noise, that is, a model in which an all-powerful adversary is allowed to change each of the generated training examples with some fixed independent probability, in an online way. Subsequently, it was shown [KL93, BEK02] that PAC learning of even simple problems in this model could be impossible, at least for specific pathological distributions.

Poisoning attacks.

A more modern interpretation of the problem of learning under adversarial noise is the framework of so-called poisoning (aka causative) attacks (e.g., see [ABL14, XBB15, STS16, PMSW16]), in which the adversary's goal is not necessarily to completely prevent the learning; perhaps it simply wants to increase the risk of the hypothesis produced by the learning process. A poisoning attack could also be defined in settings that are not at all covered by the malicious noise model; for example, a poisoning attacker might even have a particular test example in mind while carrying out the whole attack, making the final attack a targeted one [STS16]. Mahloujifar, Mahmoody and Diochnos [MM17, MDM18] initiated a study of poisoning attacks in a model that closely follows Valiant's malicious noise model and showed that such attacks can indeed increase the error of any classifier for any learning problem by a constant probability, even without using wrong labels, so long as there is an initial constant error probability. The attack model used in [MM17, MDM18], called p-tampering, was a generalization of a similar model introduced by Austrin et al. [ACM14] in the bitwise setting in cryptographic contexts.

Multi-party poisoning.

In a distributed learning procedure [MR17, MMR16, BIK17, KMY16], the training data T might be coming from various sources; e.g., it can be generated by m data providers P_1, …, P_m in an online way, while at the end a fixed algorithm, called the aggregator, generates the hypothesis h based on T. The goal of the parties is to eventually help construct a hypothesis h that does well in predicting the label y of a given instance x, where (x, y) is sampled from the final test distribution D. The data provided by each party might even be of a "different type", so we cannot simply assume that the data provided by P_i is necessarily sampled from the same distribution D. Rather, we let D_i model the distribution from which the training data T_i (of P_i) is sampled. Poisoning attacks can naturally be defined in the distributed setting as well (e.g., see [FYB18, BVH18, BGS17]) to model adversaries who partially control the training data with the goal of decreasing the quality of the generated hypothesis. The central question of our work is then as follows.

What is the inherent power of poisoning attacks in the multi-party setting? How much can they increase the risk of the final trained hypothesis, if they have only "limited" power?

Using multi-party coin-tossing attacks?

The question above could be studied in various settings, but the most natural way to model it from a cryptographic perspective is to allow the adversary to control k out of the m parties. So, a natural idea here is to use techniques from attacks in the context of multi-party coin-tossing protocols [BOL89, HO14]. Indeed, the adversary in that context wants to change the outcome of a random bit generated by m parties by corrupting k of them. So, at a high level, if we interpret the output bit 1 to be the case that the hypothesis h makes a mistake on its test and interpret 0 to be the other case, we might be able to use such attacks to increase the risk of h by increasing the probability of the output being 1. At a high level, the issues with using this idea are that: (1) the attacks in multi-party coin tossing are not polynomial time, while we need polynomial-time attacks; (2) they only apply to Boolean output, while here we might want to increase the loss function of h, which is potentially real-valued; and finally (3) coin-tossing attacks, like other cryptographic attacks in the multi-party setting, completely change the messages of the corrupted parties, while here we might want to keep the corrupted distributions "close" to the original ones, perhaps with the goal of not raising suspicion, or simply because the attack only gets partial control over the process by which a party generates its data. Indeed, we would like to model such milder forms of attacks as well.

Using p-tampering attacks?

Now, let us try a different approach, assuming that the adversary's corrupted parties have a randomized pattern. In particular, let us assume that the adversary gets to corrupt and control k randomly selected parties. In this case, it is easy to see that, at the end, every single message in the protocol between the parties is controlled by the adversary with probability exactly k/m (even though these events are correlated). Thus, at a high level, it seems that we should be able to use the p-tampering attacks of [MM17, MDM18] to degrade the quality of the produced hypothesis. However, the catch is that the proofs of the p-tampering attacks of [MM17, MDM18] (and of the bitwise version of [ACM17]) crucially rely on the assumption that each message (which in our context corresponds to a training example) is tamperable with independent probability p, while by corrupting k random parties, the set of messages controlled by the adversary is highly correlated.

A new attack model: (k,p)-poisoning attacks.

To get the best of p-tampering attacks and the coin-tossing attacks with k corrupted parties, we combine these models and define a new model, called (k,p)-poisoning, that generalizes the corruption pattern in both of these settings. A (k,p)-poisoning attacker can first choose to corrupt k of the parties, but even after doing so, it has only limited control over the training examples generated by a corrupted party. More formally, if a corrupted party P_i is supposed to send the next message, then the adversary will sample it from a maliciously chosen distribution that is guaranteed to be "close" to the original distribution D_i, where their distance is controlled by a parameter p. In particular, we require that the statistical distance between the original and the tampered distribution is at most p. It is easy to see that (k,p)-poisoning attacks include p-tampering attacks when k = m (where m is the number of parties). Moreover, (k,p)-poisoning attacks trivially include k-corrupting attacks by letting p = 1. Our main result in this work is to prove the following general theorem about the inherent power of (k,p)-poisoning attacks.

Theorem 1.1 (Power of (k,p)-poisoning attacks–informal).

Let Π be an m-party learning protocol for an m-party learning problem. There is a polynomial-time (k,p)-poisoning attack that, given oracle access to the data distributions of the parties, can decrease the confidence of the learning process by Ω(p · k/m), where the confidence parameter is the probability that the produced hypothesis h has risk at most ε, for a fixed parameter ε. (The confidence parameter here is what is usually known as 1−δ in (ε,δ)-PAC learning, where ε takes the role of our error threshold.) Alternatively, for any target example x, there is a similar polynomial-time (k,p)-poisoning attack that can increase the average error of the final hypothesis on x by Ω(p · k/m), where this average is also over the generated hypothesis h.

(For the formal version of Theorem 1.1 above, see Theorem 2.4.)

We prove the above theorem by first proving a general result about the power of "biasing" adversaries whose goal is to increase the expected value of a random process by controlling each block of the random process with probability p (think of this probability as p · k/m in the poisoning setting). As these biasing attacks generalize p-tampering attacks, we simply call them generalized p-tampering attacks. We now describe this attack model and clarify how it can be used to achieve our goals stated in Theorem 1.1.

Generalized p-tampering biasing attacks.

Generalized p-tampering attacks could be defined for any random process (x_1, …, x_n) and a function f defined over this process. In order to explain the attack model, first consider the setting where there is no attacker. Now, given a prefix x_1, …, x_{i−1} of the blocks, the next block x_i is simply sampled from its conditional probability distribution given that prefix. (Looking ahead, think of x_i as the i'th training example provided by one of the parties in the interactive learning protocol.) Now, imagine an adversary who enters the game and whose goal is to increase the expected value of a function f defined over the random process by tampering with the block-by-block sampling process described above. Before the attack starts, there will be a list S ⊆ [n] of "tamperable" blocks that is not necessarily known to the adversary in advance, but will become clear to him as the game goes on. Indeed, this set S itself will be first sampled according to some fixed distribution over subsets of [n], and the crucial condition we require is that Pr[i ∈ S] ≥ p holds for all i ∈ [n]. After S is sampled, the sequence of blocks will be sampled block-by-block as follows. Assuming (inductively) that x_1, …, x_{i−1} are already sampled so far, if i ∈ S, then the adversary gets to fully control x_i and determine its value, but if i ∉ S, then x_i is simply sampled from its original conditional distribution given x_1, …, x_{i−1}. At the end, the function f is computed over the (adversarially) sampled sequence.

We now explain the intuitive connection between generalized p-tampering attacks and (k,p)-poisoning attacks. The main idea is that we will use a generalized p-tampering attack, with tampering probability p · k/m, over the random process that lists the sequence of training data provided by the parties during the protocol. Let S be the distribution over subsets of the message indices that picks its members through the following algorithm. First choose a set of k random parties, and then, for each message that belongs to one of those parties, include the corresponding index in the final sampled set with independent probability p. It is easy to see that S eventually picks every message with (marginal) probability p · k/m, but it is also the case that these inclusions are not independent events. Finally, to use the power of generalized p-tampering attacks over the described S and the random process of messages coming from the parties to get the results of Theorem 1.1, roughly speaking, we let a function f model the loss function applied to the produced hypothesis. Therefore, to prove Theorem 1.1 it is sufficient to prove Theorem 1.2 below, which focuses on the power of generalized p-tampering biasing attacks.
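For concreteness, the following Python sketch samples such a correlated set of tamperable message indices; the function and argument names here are ours (hypothetical), not the paper's notation.

    import random

    def sample_tamperable_set(m, k, p, owner):
        """Sample a set S of tamperable message indices as described above.

        m     -- number of parties
        k     -- number of corrupted parties
        p     -- per-message tampering probability for a corrupted party
        owner -- owner[i] is the party (in 0..m-1) sending the i-th message

        Each index lands in S with marginal probability p*k/m, but the
        inclusion events are correlated through the shared choice of parties.
        """
        corrupted = set(random.sample(range(m), k))             # corrupt k uniformly random parties
        return {i for i, party in enumerate(owner)              # keep each of their messages
                if party in corrupted and random.random() < p}  # independently with probability p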

Theorem 1.2 (Power of generalized p-tampering attacks–informal).

Suppose (x_1, …, x_n) is a joint distribution such that, given any prefix, the remaining blocks can be efficiently sampled in polynomial time. Also let f(x_1, …, x_n) ∈ [0,1], and let μ and v², in order, be the expected value and variance of f. Then, for any set distribution S for which Pr[i ∈ S] ≥ p for all i ∈ [n], there is a polynomial-time generalized p-tampering attack (over tampered blocks in S) that increases the average of f over its input from μ to μ + Ω(p · v²).

(The formal statement of Theorem 1.2 above follows from Theorem 3.6 and Lemma 3.7.)

Remark 1.3.

It is easy to see that in the definition of generalized p-tampering attacks, it does not matter whether we define the attack bit-by-bit or block-by-block. The reason is that, even if we break down each block into smaller bits, then still each bit shall eventually fall into the set of tamperable bits, and the model allows correlation between the inclusion and exclusion of each block/bit in the final tamperable set. This is in contrast to the p-tampering model, for which this equivalence is not true. In fact, optimal bounds achievable by bitwise p-tampering, as proved in [ACM17], are impossible to achieve in the blockwise p-tampering setting [MM17]. Despite this simplification, we still prefer to use a blockwise presentation of the random process, as this way of modeling the problem allows better tracking of the attacker's sample complexity.

Before describing ideas behind the proof of Theorem 1.2 above and its formalization through Theorem 3.6 and Lemma 3.7, we discuss some related work.

Related previous work.

Any biasing attack can also be interpreted as some form of impossibility result for a class of imperfect randomness sources; this line of work started with [SV86] and expanded to more complicated models (e.g., see [SV86, RVW04, DOPS04, CG88, Dod01, DR06, BEG17, DY15, BBEG18]), and in that regard, our work is no exception. In particular, the work of Beigi, Etesami, and Gohari [BEG17] defined the notion of generalized SV sources, which generalize SV sources; but, due to the correlated nature of tamperable blocks in our model, the two models are incomparable.

In addition, as explained above in more detail, our work is related to the area of attacks on coin-tossing protocols (e.g., see [LLS89, CI93, GKP15, MPS10, CI93, BHT14, HO14, BHLT17, BHT18]); in the following we discuss some of these works in more depth. Perhaps the most relevant is the model posed in the open questions section of Lichtenstein, Linial, and Saks [LLS89]. They ask about the power of biasing attackers in coin-flipping protocols in which the adversary has a bounded corruption budget, and after using this budget on any party/message, the corruption will indeed happen with probability p. The main difference to our model is that the adversary does not get to pick the exact tampering locations in our model (which makes our results stronger). Moreover, in their model each party sends exactly one message, and the attacker can corrupt the parties in an adaptive way. Despite the similarities, the difference in the two models makes our results not tight for their setting. Finally, the work of Dodis [Dod01] also defines a tampering model that bears similarities to the generalized p-tampering model. In the model of [Dod01], again an adversary has a bounded budget to use in its corruption, but even when he does not use his budget, he can still try to tamper with the parties/messages and still succeed with some probability. The latter makes the model of [Dod01] quite different from ours.

1.1 Ideas Behind Our Generalized p-Tampering Attack

Since generalized p-tampering already bears similarities to the model of p-tampering attacks, our starting point is the blockwise p-tampering attack of [MM17]. It was shown in [MM17] that, by using a so-called "one-rejection sampling" (1RS) attack, the adversary can achieve the desired bias. So, here we recall the 1RS attack of [MM17].

  • In the 1RS attack, for any prefix of already sampled blocks x_1, …, x_{i−1}, suppose the adversary is given the chance of controlling the next (i'th) block. In that case, the 1RS adversary first samples a full random continuation (x_i, …, x_n) from the marginal distribution of the remaining blocks conditioned on x_1, …, x_{i−1}. Let λ = f(x_1, …, x_n). Then, the 1RS attack keeps the sample x_i with probability λ, and changes it into a fresh new sample (drawn from the original conditional distribution of the i'th block) with probability 1 − λ.

The great thing about the 1RS attack is that it is already polynomial time, assuming that we give the adversary access to a sampling oracle that provides a random continuation for any prefix. However, the problem with the above attack is that the inductive analysis of [MM17] crucially depends on the tampering probabilities of the blocks being independent of each other.
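As an illustration of the 1RS step above, here is a minimal Python sketch; it assumes access to a random-continuation oracle, and the helper names (sample_continuation, f) are hypothetical stand-ins rather than the paper's notation.

    import random

    def one_rejection_sampling_step(prefix, sample_continuation, f):
        """One 1RS tampering step for the next block, given the sampled prefix.

        sample_continuation(prefix) -- returns a random continuation (list of the
                                       remaining blocks) conditioned on the prefix
        f                           -- maps a complete sequence of blocks to [0, 1]
        """
        cont = sample_continuation(prefix)      # candidate continuation
        keep_prob = f(prefix + cont)            # value of f on the completed sequence
        if random.random() < keep_prob:         # keep the candidate block w.p. f(...)
            return cont[0]
        return sample_continuation(prefix)[0]   # otherwise, resample the block once, honestly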

The next idea is to modify the 1RS attack of [MM17] based on attacks in the context of coin-tossing protocols and, in particular, the two works of Ben-Or and Linial [BOL89] and Haitner and Omri [HO14]. Indeed, in [BOL89] it was shown that, if the adversary corrupts k out of m parties in an interactive coin-flipping protocol, then it is indeed able to increase the probability of obtaining the output 1 as follows. Let μ be the probability of the output being 1 without any attack, and for a subset S of size k of the parties, let μ_S be the probability of the output being 1 if the parties in S use their "optimal" strategy (which is not polynomial-time computable). Then, the result of [BOL89] could be interpreted as follows:

GM({μ_S : S ⊆ [m], |S| = k}) ≥ μ^{1−k/m}, (1)

where GM denotes the geometric mean (of the elements in the multi-set). Then, by an averaging argument, one can show that there is at least one set S of size k such that corrupting the players in S and using their optimal strategy achieves expected value at least μ^{1−k/m}, so the bias is at least μ^{1−k/m} − μ, which is large enough. However, the proof of [BOL89] does not give a polynomial-time attack and uses a rather complicated induction. So, to make it polynomial time, and to even make it handle generalized p-tampering attacks, we use one more idea from the follow-up work of [HO14]. In [HO14], it was shown that for the case of two parties, the biasing bound proved in [BOL89] could be achieved by the following simple (repeated) rejection sampling (RRS) strategy.

  • In the RRS attack, for any prefix of already sampled blocks x_1, …, x_{i−1}, suppose the adversary is given the chance of controlling the next (i'th) block. The good thing about the RRS attack is that it achieves the bounds of [BOL89] while it can also be made polynomial time in any model where random continuation can be done efficiently. The RRS tampering then works as follows:

    1. Let (x_i, …, x_n) be a random continuation of the random process, conditioned on the prefix x_1, …, x_{i−1}.

    2. With probability f(x_1, …, x_n), output x_i; with the remaining probability 1 − f(x_1, …, x_n), go to Step 1 and repeat the sampling process again.

Indeed, we shall point out that for the Boolean case of [BOL89, HO14], the acceptance probability f(x_1, …, x_n) is either zero or one (so the attack keeps resampling until it finds a continuation on which f evaluates to one), and the above RRS attack is the adaptation of rejection sampling to the real-output case, as done in the context of p-tampering.
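A corresponding Python sketch of one RRS tampering step is given below; again the helper names are hypothetical, and the explicit cap on the number of retries is our addition for practicality (the polynomial-time version of the attack controls its running time through an approximation parameter).

    import random

    def repeated_rejection_sampling_step(prefix, sample_continuation, f, max_tries=1000):
        """One RRS tampering step: resample continuations until one is accepted.

        sample_continuation(prefix) -- random continuation of the remaining blocks
        f                           -- maps a complete sequence of blocks to [0, 1]
        """
        for _ in range(max_tries):
            cont = sample_continuation(prefix)      # Step 1: fresh random continuation
            if random.random() < f(prefix + cont):  # Step 2: accept with probability f(...)
                return cont[0]
        return sample_continuation(prefix)[0]       # give up and output an honest sample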

Putting things together, our main contribution is to take the following steps to prove Theorem 1.2.

  1. We show that the RRS attack of [HO14] does indeed extend to the multi-party case. Interestingly, to prove this, we avoid the inductive proofs of both [BOL89, HO14] and give a direct proof based on the arithmetic-mean vs. geometric-mean inequality (Lemma A.1). However, our proof has a downside: we only get a lower bound on the arithmetic mean of the μ_S in Inequality 1. But that is good enough for us, as we indeed want to lower bound the bias achieved when we corrupt a randomly selected set of parties, and the arithmetic mean gives exactly that.

  2. We show that the above argument extends even to the case of generalized p-tampering when the weights in the arithmetic mean are proportional to the probabilities of choosing each set S. For doing this, we use an idea from [HO14] that analyzes an imaginary attack in which the adversary makes the tampering effort over every block, and then we compare this to the arithmetic mean of the actual attacks.

  3. We show that our proof extends to the real-output case, and we achieve a bound that generalizes the bound of the Boolean case. As pointed out above, the inductive proofs of [BOL89, HO14] seem to be tightly tailored to the Boolean case, but our direct proof based on the AM-GM inequality scales nicely to the real-output RRS attack described above.

    The lower bound, proved only for the arithmetic mean of the μ_S, is equal to a quantity that we denote by μ_p (see Theorem 3.6). However, it is not clear that the bound μ_p is any better than the original μ! Yet, it can be observed that μ_p ≥ μ always holds due to Jensen's inequality. Therefore, a natural tool for lower bounding the bias μ_p − μ is to use lower bounds on the "gap" of Jensen's inequality. Indeed, we use one such result due to [LB17] (see Lemma A.2) and obtain the desired lower bound of Ω(p · v²) by simple optimization calculations.

2 Multi-party Poisoning Attacks: Definitions and Main Results

Basic probabilistic notation.

We use bold font (e.g., 𝐱) to represent random variables, and usually use the same non-bold letters to denote samples from these distributions. We use x ← 𝐱 to denote the process of sampling x from the random variable 𝐱. By E[𝐱] we mean the expected value of 𝐱 over its randomness, and by V[𝐱] we denote the variance of the random variable 𝐱. We might also use a "processed" version f(𝐱) of 𝐱, and use E[f(𝐱)] and V[f(𝐱)] to denote the expected value and variance, respectively, of f(𝐱) over the randomness of 𝐱.

Notation for learning problems.

A learning problem is specified by the following components. The set X is the set of possible instances, Y is the set of possible labels, and D is a distribution over X × Y. (By using joint distributions over X × Y, we jointly model a set of distributions over X and a concept class mapping X to Y, perhaps with noise and uncertainty.) The set H is called the hypothesis space or hypothesis class. We consider loss functions loss(y', y), where loss(y', y) measures how different the 'prediction' y' (of some possible hypothesis h) is from the true outcome y. We call a loss function bounded if it always takes values in [0, 1]. A natural loss function for classification tasks is to let loss(y', y) = 0 if y' = y and loss(y', y) = 1 otherwise. The risk of a hypothesis h is the expected loss of h with respect to D, namely the expectation of loss(h(x), y) over (x, y) sampled from D. An example is a pair (x, y) where x ∈ X and y ∈ Y. An example is usually sampled from a distribution such as D. A sample set (or sequence) of size n is a set (or sequence) of n examples. We assume that instances, labels, and hypotheses are encoded as strings over some alphabet such that, given a hypothesis h and an instance x, h(x) is computable in polynomial time.
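For intuition, the risk of a hypothesis under the 0-1 loss can be estimated by sampling labeled examples from the test distribution, as in the following Python sketch (the sampling helper is a hypothetical stand-in for drawing from D).

    def empirical_risk(h, sample_test, n=10_000):
        """Monte-Carlo estimate of Risk(h), the expected 0-1 loss of h on D.

        h            -- hypothesis mapping an instance x to a predicted label
        sample_test  -- callable returning one labeled example (x, y) drawn from D
        n            -- number of samples used for the estimate
        """
        mistakes = 0
        for _ in range(n):
            x, y = sample_test()            # (x, y) <- D
            mistakes += int(h(x) != y)      # 0-1 loss of the prediction h(x)
        return mistakes / n                 # average loss approximates the risk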

2.1 Basic Definitions for Multi-party Learning and Poisoning

Multi-party learning problems.

An m-party learning problem is defined similarly to the (single-party) learning problem (without the instance and label sets denoted explicitly, for a reason that will become clear shortly), with the following difference. This time, there are m distributions D_1, …, D_m (possibly all the same) such that party P_i gets samples from D_i, and the parties jointly want to learn the test distribution D. So, each distribution D_i might have its own instance space and label space. The loss function is still defined for the specific target test distribution D. Also, even though the training data T is an actual sequence, for simplicity we sometimes treat it as a set and write statements like (x, y) ∈ T.

Definition 2.1 (Multi-party learning protocols).

An m-party learning protocol Π for an m-party learning problem consists of an aggregator function and m (interactive) data providers P_1, …, P_m. For each data provider P_i, there is a distribution D_i that models the (honest) distribution of labeled samples generated by P_i, and there is a final (test) distribution D that the parties want to learn jointly. The protocol runs in rounds, and at each round, based on the protocol Π, one particular data provider broadcasts a single labeled example (x, y). (We can directly model settings where more data is exchanged in one round; however, we stick to the simpler definition as it loses no generality.) In the last round, the aggregator function maps the transcript of the messages to an output hypothesis h. For a protocol Π designed for a multi-party problem, we define the following functions.

  • The confidence function for a given error threshold ε is defined as the probability, over the randomness of the protocol, that the output hypothesis h has risk at most ε.

  • The average error (or average loss) for a specific example (x, y) is defined as the expected loss of the output hypothesis h on (x, y), over the randomness of the protocol,

    based on which the total error of the protocol is defined by taking the expectation of this average error over (x, y) sampled from the test distribution D.

Now, we define poisoning attackers that target multi-party protocols. We formalize a more general notion that covers both p-tampering attackers as well as attackers who (statically) corrupt k parties.

Definition 2.2 (Multi-party (k,p)-poisoning attacks).

A (k,p)-poisoning attack against an m-party learning protocol Π is defined by an adversary A who can control a subset of the parties of size at most k. The attacker shall pick this set at the beginning. At each round of the protocol, if a corrupted data provider P_i is supposed to broadcast the next example from its distribution D_i, the adversary can partially control this sample by using a tampered distribution D'_i such that the statistical distance between D'_i and D_i is at most p. Note that the distribution D'_i can depend on the history of examples broadcast so far, but the requirement is that, conditioned on this history, the malicious message of the adversary, modeled by the distribution D'_i, is at most p-statistically far from D_i. We use Π_A to denote the protocol in the presence of A. We also define the following notions.

  • We call A a plausible adversary if every example it broadcasts on behalf of a corrupted party P_i lies in the support of the honest distribution D_i.

  • A is efficient if it runs in polynomial time in the total length of the messages exchanged during the protocol (from the beginning till the end).

  • The confidence function in the presence of A is defined, analogously to the non-adversarial case, with respect to the attacked protocol Π_A,

    and the confidence of the learning protocol without any attacks can be formally defined using an attacker who does not change any of the distributions.

  • The average error for a specific example (x, y) in the presence of A is defined as the expected loss of the output hypothesis of Π_A on (x, y),

    based on which the total error of the protocol under attack is defined by taking the expectation over (x, y) sampled from the test distribution D.

Note that the standard (non-adversarial) confidence and average error functions could also be defined as adversarial ones using a trivial adversary who simply outputs its input.
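One simple way for a corrupted party to meet the p-closeness requirement of Definition 2.2 is to forward its honest sample with probability 1−p and substitute an adversarially crafted example with probability p: the resulting mixture is within statistical distance p of the honest distribution. The Python sketch below illustrates this; the helper names are hypothetical.

    import random

    def tampered_sample(sample_honest, craft_poisoned, p):
        """Draw one example from a tampered distribution that is p-close to D_i.

        sample_honest()  -- draws (x, y) from the honest distribution D_i
        craft_poisoned() -- returns an adversarially chosen labeled example
        p                -- tampering budget in [0, 1]
        """
        if random.random() < p:
            return craft_poisoned()     # adversarial example, at most a p-fraction of the time
        return sample_honest()          # honest example, exactly as in D_i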

Remark 2.3 (Static vs. adaptive corruption).

Definition 2.2 focuses on corrupting parties statically. One can also naturally define an extension of this definition in which the set of corrupted parties is chosen adaptively [CFGN96] while the protocol is being executed. In this work, however, we focus on static corruption, and leave the possibility of improving our results in the adaptive case for future work.

2.2 Power of Multi-party Poisoning Attacks

We now formally state our result about the power of (k,p)-poisoning attacks.

Theorem 2.4 (Power of efficient multi-party poisoning).

In any m-party protocol Π for parties P_1, …, P_m, for any k ∈ [m] and p ∈ [0, 1], the following hold, where N is the total length of the messages exchanged during the protocol.

  1. For any error threshold ε, there is a plausible (k,p)-poisoning attack that runs in time poly(N) and decreases the confidence of the protocol by Ω(p · k/m), assuming the initial error probability of the protocol is a constant.

  2. If the (normalized) loss function is bounded (i.e., it outputs values in [0, 1]), then there is a plausible (k,p)-poisoning attack that runs in time poly(N) and increases the average error of the protocol by Ω(p · k/m · v²),

    where v² denotes the variance of the loss of the output hypothesis (and V denotes the variance).

  3. If the loss function is Boolean (e.g., as in classification problems), then for any final test example (x, y), there is a plausible (k,p)-poisoning attack that runs in time poly(N) and increases the average error on the test example (x, y) by Ω(p · k/m), assuming this error is initially bounded away from zero and one.

Before proving Theorem 2.4, we need to develop our main result about the power of generalized p-tampering attacks; in Section 3, we develop such tools, and then in Section 4.1 we prove Theorem 2.4.

Remark 2.5 (Allowing different distributions in different rounds).

In Definition 2.2, we restrict the adversary to remain "close" to D_i for each message sent out by one of the corrupted parties. A natural question is: what happens if we allow the parties' distributions to be different in different rounds? For example, in a round j, a party P_i might send multiple training examples, and we want to limit the total statistical distance between the distribution of this larger message and the honest one (i.e., iid samples from D_i). (Note that, even if each block of such a message remains p-close to D_i, their joint distribution could be quite far from iid samples of D_i.) We emphasize that our results extend to this more general setting as well. In particular, the proof of Theorem 2.4 directly extends to a more general setting where we can allow the honest distribution of each party to also depend on the round in which its messages are sent. Thus, we can use a round-specific distribution to model the joint distribution of the multiple samples that are sent out in the j'th round by the party P_i. This way, we can obtain the stronger form of attacks that remain statistically close to the joint (correct) distribution of the (multi-sample) messages sent in a round. In fact, as we will discuss shortly, these messages might even be of a completely different type, e.g., just some shared random bits.

Remark 2.6 (Allowing randomized aggregation).

The aggregator is a simple function that maps the transcript of the exchanged messages to a hypothesis h. A natural question is: what happens if we generalize this to the setting where the aggregator is allowed to be randomized? We note that in Theorem 2.4, Part 2 can allow the aggregator to be randomized, but Parts 1 and 3 need deterministic aggregation. The reason is that for those parts, we need the transcript to determine the confidence and average error functions. One general way to make up for randomized aggregation is to allow the parties to inject randomness into the transcript as they run the protocol, by sending messages that are not necessarily learning samples from their distributions D_i. As described in Remark 2.5, our attacks extend to this more general setting as well. Otherwise, we will need the adversary to also depend on the randomness of the aggregator, but that is also a reasonable assumption if the aggregation uses a public beacon that could be obtained by the adversary as well.

3 Generalized p-Tampering Biasing Attacks: Definitions and Main Results

In this section, we formally state our main result about the power of generalized p-tampering attacks. We start by formalizing some notation and basic definitions.

3.1 Preliminary Notation and Basic Definitions for Tampering with Random Processes

Notation.

By x ≡ y we denote that the random variables x and y have the same distribution. Unless stated otherwise, by using a bar over a variable, we emphasize that it is a vector. By x̄ ≡ (x_1, …, x_n) we refer to a joint distribution over vectors with n components. For a joint distribution x̄ ≡ (x_1, …, x_n), we use x̄_{≤i} to denote the joint distribution of the first i variables (x_1, …, x_i). Also, for a vector x̄ = (x_1, …, x_n), we use x_{≤i} to denote the prefix (x_1, …, x_i). For a randomized algorithm A, by y ← A(x) we denote the randomized execution of A on input x outputting y. For a joint distribution (x, y), by (x | y) we denote the conditional distribution of x given y. By Supp(x) we denote the support set of x. By A^x we denote an algorithm A with oracle access to a sampler for x that upon every query returns a fresh sample from x. By x^n we denote the distribution that returns n iid samples from x.

Definition 3.1 (Valid prefixes).

Let x̄ ≡ (x_1, …, x_n) be an arbitrary joint distribution. We call (x_1, …, x_i) a valid prefix for x̄ if there exist x_{i+1}, …, x_n such that (x_1, …, x_n) is in the support of x̄. We denote the set of all valid prefixes of x̄ by ValPref(x̄).

Definition 3.2 (Tampering with random processes).

Let x̄ ≡ (x_1, …, x_n) be an arbitrary joint distribution. We call a (potentially randomized and possibly computationally unbounded) algorithm an (online) tampering algorithm for x̄ if, given any valid prefix (x_1, …, x_{i−1}), its output extends it to a valid prefix with probability one. Namely, the algorithm outputs some x_i such that (x_1, …, x_{i−1}, x_i) is again a valid prefix. We call it an efficient tampering algorithm for x̄ if it runs in time poly(n · ℓ), where ℓ is the maximum bit length needed to represent any block x_i.

Definition 3.3 (Online samplers).

We call an algorithm an online sampler for x̄ if, for all i ∈ [n] and every valid prefix (x_1, …, x_{i−1}), its output on that prefix is distributed according to the conditional distribution of x_i given x_1, …, x_{i−1}. Moreover, we call x̄ online samplable if it has an online sampler that runs in time poly(n · ℓ), where ℓ is the maximum bit length of any x_i.

Definition 3.4 (Notation for tampering distributions).

Let x̄ ≡ (x_1, …, x_n) be an arbitrary joint distribution and let A be a tampering algorithm for x̄. For any subset S ⊆ [n], we define the tampered distribution of A over the set S to be the joint distribution that results from the online tampering of A over S, sampled inductively as follows. For every i ∈ [n], suppose (x_1, …, x_{i−1}) is the previously sampled prefix. If i ∈ S, then the block x_i is generated by the tampering algorithm A on this prefix, and otherwise, x_i is sampled from its original conditional distribution given x_1, …, x_{i−1}. For any distribution S over subsets of [n], the tampered distribution of A over S is the random variable that can be sampled by first sampling a set S ← S and then sampling from the tampered distribution of A over S.
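The inductive sampling in Definition 3.4 can be summarized by the following Python sketch (the callable names are ours): the tamperable set is drawn first, and then each block is produced either by the tampering algorithm or by the honest online sampler, always conditioned on the prefix sampled so far. Plugging in the set sampler sketched in the introduction and a rejection-sampling step as the tampering algorithm gives an end-to-end simulation of a generalized p-tampering attack.

    def sample_tampered(n, tamper, sample_block, sample_set):
        """Sample one sequence (x_1, ..., x_n) from the tampered distribution.

        n                    -- number of blocks
        tamper(prefix)       -- tampering algorithm: adversarial next block given the prefix
        sample_block(prefix) -- online sampler: honest next block given the prefix
        sample_set()         -- draws the set S of tamperable indices (e.g., from a p-covering)
        """
        S = sample_set()                       # first, sample the tamperable set S
        seq = []
        for i in range(n):
            if i in S:
                seq.append(tamper(seq))        # block controlled by the adversary
            else:
                seq.append(sample_block(seq))  # block sampled honestly given the prefix
        return seq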

3.2 Power of Generalized -Tampering Attacks

Having the definitions above, we finally describe our main result about the power of generalized p-tampering attacks. We first formalize the way tampering blocks are chosen in such attacks.

Definition 3.5 (p-covering).

Let S be a distribution over the subsets of [n]. We call S a p-covering distribution on [n] (or simply p-covering, when [n] is clear from the context), if Pr_{S←S}[i ∈ S] ≥ p for all i ∈ [n].

Theorem 3.6 (Biasing of bounded functions through generalized p-tampering).

Let S be a p-covering distribution on [n], let x̄ ≡ (x_1, …, x_n) be a joint distribution, let f: Supp(x̄) → [0, 1], and let μ = E[f(x̄)]. Then,

  1. Computationally unbounded attack. There is a (computationally unbounded) tampering algorithm such that, if we let x̄' denote its tampered distribution over the set distribution S, then E[f(x̄')] ≥ μ_p, for a quantity μ_p that is determined by p and the distribution of f(x̄).

  2. Polynomial-time attack. For any δ > 0, there exists a tampering algorithm that, given oracle access to f and any online sampler for x̄, runs in time poly(n · ℓ / δ), where ℓ is the bit length of any x_i, and for its tampered distribution x̄' over the set distribution S, it holds that E[f(x̄')] ≥ μ_p − δ.

Special case of Boolean functions.

When the function f is Boolean, we get μ_p = μ^{1−p}, which matches the bound proved in [BOL89] for the special case of p = k/m for integer k and for S that is a uniformly random subset of [m] of size k. (The same bound for the case of 2 parties was proved in [HO14] with extra properties.) Even for this case, compared to [BOL89, HO14] our result is more general, as we can allow arbitrary p-covering set distributions S and achieve a polynomial-time attack given oracle access to an online sampler for x̄. The work of [HO14] also deals with polynomial-time attackers for the special case of 2 parties, but their efficient attackers use a different oracle (i.e., a OWF inverter), and it is not clear whether or not their attack extends to the case of more than 2 parties. Finally, both [BOL89, HO14] prove their bound for the geometric mean of the averages μ_S for different sets S, while we do so for their arithmetic mean, but we emphasize that this is enough for all of our applications.

The bounds of Theorem 3.6 for both cases rely on the quantity μ_p. A natural question is: how large is μ_p compared to μ? As discussed above, for the case of Boolean f, we already know that μ_p = μ^{1−p} ≥ μ, but that argument does not apply to real-output f. A simple application of Jensen's inequality shows that μ_p ≥ μ in general, but that still does not mean that the gap μ_p − μ is large enough.

General case of real-output functions: relating the bias to the variance.

If v² = 0, then no tampering attack can achieve any bias, so any gap achieved between μ_p and μ shall somehow depend on the variance of f. In the following, we show that this gap does exist and that μ_p − μ = Ω(p · v²). Similar results (relating the bias to the variance of the original distribution) were previously proved [MDM18, MM17, ACM14] for the special case of p-tampering attacks (i.e., when S chooses every i ∈ [n] independently with probability p). Here we obtain a more general statement that holds for any p-covering set structure S.

Using Lemma 3.7 below for the random variable f(x̄), we immediately get lower bounds on the bias achieved by (both versions of) the attackers of Theorem 3.6 for the general case of real-valued functions and an arbitrary p-covering set distribution S.

Lemma 3.7.

Let f be any real-valued random variable over [0, 1], and let p ∈ [0, 1]. Let μ be the expected value of f, let v² be the variance of f, and let μ_p be the corresponding quantity from Theorem 3.6. Then, it holds that

(2)

4 Proofs of the Main Results

In the following subsections, we will first prove Theorem 2.4 using Theorem 3.6 and Lemma 3.7, and then we will prove Theorem 3.6 and Lemma 3.7.

4.1 Obtaining -Poisoning Attacks: Proof of Theorem 2.4

In this subsection, we formally prove Theorem 2.4 using Theorem 3.6 and Lemma 3.7.

For a subset of the parties, consider the set of rounds in which one of the parties in that subset sends an example. For such a set of rounds, we define a distribution over all of its subsets in which each round is included independently with probability p. Now, consider the covering S of the set of all rounds that is distributed equivalently to the following process. First sample a uniform subset of the parties of size k. Then sample and output a subset of that subset's rounds as described above. S is clearly a (p · k/m)-covering. We will use this covering to prove all three statements of the theorem. Before proving the statements, we fix, for every round, the index of the data provider speaking at that round and the designated (honest) distribution of that round's message. Now we prove the first part of the theorem. We define a Boolean function over the transcript that is 0 if the output of the protocol has risk less than or equal to ε and 1 otherwise (note that increasing the expected value of this function is equivalent to decreasing the confidence of the protocol). This function can be approximated in polynomial time; however, here for simplicity, we assume that it can be exactly computed in polynomial time. Now we use Theorem 3.6. We know that S is a (p · k/m)-covering for the set of rounds. Therefore, by Part 2 of Theorem 3.6, there exists an efficient tampering algorithm that increases the expected value of this function by the amount guaranteed there.

By an averaging argument, we can conclude that there exists a set of k parties for which the induced tampering distribution produces a bias at least as large as this guarantee. Note that each message of a corrupted party is selected for tampering with probability only p, which means that with probability 1 − p the adversary does not tamper with that block at all; therefore, the statistical distance between the tampered and the honest message distribution is at most p. This concludes the proof of the first part.

Now we prove the second part. The second part is very similar to the first part, except that the function we define here is real-valued. Consider the function that is defined to be the risk of the output hypothesis. Now, by Theorem 3.6 and Lemma 3.7, we know that there is a tampering algorithm that increases the expected value of this function, namely the average error of the protocol.

By a similar averaging argument we can conclude the proof.

Now we prove Part 3. Again, we define a Boolean function, this time one that outputs the loss of the final hypothesis on the example (x, y). Note that it is Boolean since the loss function is Boolean. It is also computable by the adversary, because the adversary knows the target example (x, y). Again, by a similar use of Theorem 3.6 and an averaging argument, we can conclude the proof.

4.2 Computationally Unbounded Attacks: Proving Part 1 of Theorem 3.6

Construction 4.1 (Rejection-sampling tampering).

Let x̄ ≡ (x_1, …, x_n) be a joint distribution and let f: Supp(x̄) → [0, 1]. The rejection-sampling tampering algorithm works as follows. Given the valid prefix (x_1, …, x_{i−1}), the tampering algorithm does the following:

  1. Sample a continuation (x_i, …, x_n) by using the online sampler for x̄, conditioned on the given prefix.

  2. With probability f(x_1, …, x_n), output x_i; otherwise, go to Step 1 and repeat the process.

We will first prove a property of the rejection-sampling algorithm when it is applied to every block.

Definition 4.2 (Notation for partial expectations of functions).

Suppose f is defined over a joint distribution x̄ ≡ (x_1, …, x_n), i ∈ [n], and (x_1, …, x_i) is a valid prefix. Then, using a small hat, we define f̂(x_1, …, x_i) to be the expected value of f(x̄) conditioned on the prefix (x_1, …, x_i). (In particular, for i = n, we have f̂(x_1, …, x_n) = f(x_1, …, x_n).)

Claim 4.3.

Let x̄' be the result of applying the rejection-sampling tampering of Construction 4.1 to every block of x̄. Then, for every valid prefix (x_1, …, x_i),

Proof.

Based on the description of the tampering algorithm, for any x_i the following equation holds for the probability of sampling x_i conditioned on the prefix (x_1, …, x_{i−1}).

The first term in this equation corresponds to the probability of selecting and accepting x_i in the first round of sampling, and the second term corresponds to the probability of selecting and accepting x_i in any round except the first. Therefore, we have

which implies that

Now, we prove two properties for any tampering algorithm (not just rejection sampling) over a p-covering set distribution.

Lemma 4.4.

Let S be a p-covering distribution for [n] and let x̄ ≡ (x_1, …, x_n) be a joint distribution. For any i ∈ [n] and an arbitrary tampering algorithm for x̄, consider the corresponding tampered distribution as in Definition 3.4. Then,

Proof.

For every define as