# Stochastic Linear Optimization with Adversarial Corruption

We extend the model of stochastic bandits with adversarial corruption (Lykouris et al., 2018) to the stochastic linear optimization problem (Dani et al., 2008). Our algorithm is agnostic to the amount of corruption chosen by the adaptive adversary, and its regret increases only linearly in the amount of corruption. The algorithm uses the Löwner-John ellipsoid for exploration and divides the time horizon into epochs of exponentially increasing size to limit the influence of corruption.


## 1 Introduction

The multi-armed bandit problem has been extensively studied in computer science, operations research and economics since the seminal work of Robbins (1952). It is a model designed for sequential decision-making in which a player chooses at each time step amongst a finite set of available arms and receives a reward for the chosen decision. The player's objective is to minimize the difference, called regret, between the rewards she receives and the rewards accumulated by the best arm. In the stochastic multi-armed bandit problem, the reward of each arm is drawn from a probability distribution; in adversarial multi-armed bandit models, by contrast, typically no assumption is imposed on the sequence of rewards received by the player.

In recent work, Lykouris et al. (2018) introduce a model in which an adversary may corrupt the stochastic reward generated by an arm pull. They provide an algorithm and show that the regret in this “middle ground” scenario degrades smoothly with the amount of corruption injected by the adversary. Gupta et al. (2019) present an alternative algorithm that gives a significant improvement.

With real-world applications such as fake reviews and the effects of employing celebrity brand ambassadors in mind (Kapoor et al., 2019), we complement the literature by incorporating the notion of corruption into the stochastic linear optimization problem, in the framework of Dani et al. (2008), and hence answer an open question suggested in Gupta et al. (2019). In our finite-horizon model, the player chooses at each time step $t$ a vector (i.e., an arm) $x_t$ in a fixed decision set $\mathcal{D}$. To obtain a problem-dependent bound, we assume that $\mathcal{D}$ is a $d$-dimensional polytope, as in Abbasi-Yadkori et al. (2011). The regret of our algorithm is $O\big(\frac{d^2 C\log T}{\Delta}+\frac{d^5\log(d\log T/\delta)\log T}{\Delta^2}\big)$, where $\Delta$ corresponds to the gap between the highest and second-highest expected rewards, $C$ to the amount of corruption, and $\delta$ to the level of confidence. In contrast to the stochastic multi-armed bandit model with corruption, our regret suffers an extra multiplicative loss, caused by the separation of exploration and exploitation.

### 1.1 Related works

The finite-arm version of the stochastic linear optimization problem was introduced in Auer (2002). When the number of arms is infinite, the CONFIDENCEBALL algorithm (Dani et al., 2008) obtains a worst-case regret bound of $\widetilde O(d\sqrt T)$; Li et al. (2019) improve the dependence on the dimension in this result. For the problem-dependent bound, Abbasi-Yadkori et al. (2011) show that the regret of their OFUL algorithm grows logarithmically in $T$, and our algorithm achieves at least the same asymptotic dependence on $T$ when corruption is present. Similar to the result of Lykouris et al. (2018), both the CONFIDENCEBALL algorithm and the OFUL algorithm suffer linear regret even when the amount of corruption is small.

There have also been works that strive to achieve good regret guarantees in both stochastic multi-armed bandit models and their adversarial counterparts, commonly known as “best of both worlds” results (e.g., Bubeck and Slivkins (2012) and Zimmert and Seldin (2018)). In those algorithms, however, the regret does not degrade smoothly as the amount of adversarial corruption increases. Kapoor et al. (2019) consider the corruption setting in the linear contextual bandit problem under the strong assumption that at each time step the adversary corrupts the data with a constant probability.

Our algorithm builds on Gupta et al. (2019). To eliminate the effect of corruption, we borrow the idea of dividing the time horizon into epochs that increase exponentially in length and of using only the estimate from the previous epoch to conduct exploitation in the current one. This approach weakens the dependence of the current estimate on earlier corruption, so the negative impact of the adversary fades away over time. The main challenge of our paper is that we cannot simply adopt the widely used ordinary least squares estimator, since the correlation between different time steps of estimation impedes the application of concentration inequalities. We thus conduct exploration on each coordinate independently.

## 2 Preliminaries

Let $\mathcal{D}\subseteq\mathbb{R}^d$ be a $d$-polytope. At each time step $t=1,\dots,T$, the algorithm chooses an action $x_t\in\mathcal{D}$. Let $\theta\in\mathbb{R}^d$ be an unknown hidden vector and $(\eta_t)_{t=1}^T$ a sequence of sub-Gaussian random noises with mean 0 and variance proxy 1. For a given time step, $t$, and a chosen action, $x_t$, we define the reward as $r_t=\langle x_t,\theta\rangle+\eta_t$, where the first term is the inner product of $x_t$ and $\theta$. We assume without loss of generality that $\lVert\theta\rVert_2\le 1$ and $\lVert x\rVert_2\le 1$ for all $x\in\mathcal{D}$.

At each time step $t$, there is an adaptive adversary who may corrupt the observed reward by choosing a corruption function $c_t:\mathcal{D}\to\mathbb{R}$. The algorithm first chooses $x_t$, then observes the corrupted reward $r_t+c_t(x_t)$, and finally receives the actual reward $r_t$. We denote by $C=\sum_{t=1}^{T}\max_{x\in\mathcal{D}}|c_t(x)|$ the total corruption generated by the adversary. The value of $C$ is unknown to the algorithm, which is, in turn, evaluated by the pseudo-regret:

$$R(T)=\sum_{t=1}^{T}\langle x^*-x_t,\theta\rangle,$$

where $x^*\in\arg\max_{x\in\mathcal{D}}\langle x,\theta\rangle$ is an action that maximizes the expected reward. In this paper, we assume that $x^*$ is unique (this assumption is without loss of generality, because the best action is unique with probability 1 when the action set is perturbed with a random noise). Let $\mathcal{E}$ be the set of extreme points of $\mathcal{D}$. The extreme point that generates the second-highest expected reward is denoted $x^{(2)}$; i.e., $x^{(2)}\in\arg\max_{x\in\mathcal{E}\setminus\{x^*\}}\langle x,\theta\rangle$. Thus the corresponding expected reward gap between $x^*$ and $x^{(2)}$ is given by

$$\Delta=\langle x^*-x^{(2)},\theta\rangle.$$
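As a concrete illustration of the quantities just defined, the following toy sketch computes the pseudo-regret and the gap $\Delta$ on a made-up two-dimensional instance (the decision set, hidden vector, and play sequence are illustrative, not from the paper):

```python
import numpy as np

# Toy illustration (made-up data): pseudo-regret and gap for a tiny instance.
theta = np.array([1.0, 0.5])                      # hidden vector
actions = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.6, 0.6]])                  # extreme points of the decision set
rewards = actions @ theta                         # expected reward of each extreme point
x_star = actions[np.argmax(rewards)]              # best action x*
delta = np.sort(rewards)[-1] - np.sort(rewards)[-2]   # gap between best and second best
plays = [0, 1, 2, 0]                              # actions chosen over T = 4 rounds
regret = sum((x_star - actions[i]) @ theta for i in plays)
assert regret >= 0                                # pseudo-regret is nonnegative
```

Here the best arm contributes zero regret, and each suboptimal pull contributes its own expected-reward gap.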

We now introduce the so-called Löwner-John ellipsoid (see Grötschel et al. (1988) for a detailed discussion), which plays a key role in the construction of our algorithm.

###### Theorem 2.1 (Löwner-John’s Ellipsoid Theorem).

For any bounded convex body $K\subseteq\mathbb{R}^d$, there exists an ellipsoid $E$ satisfying

$$E\subseteq K\subseteq dE.$$

A discussion of how to find the Löwner-John ellipsoid efficiently is deferred to Section 6. Let $E$ be a Löwner-John ellipsoid of $\mathcal{D}$ guaranteed by Theorem 2.1. Let $o$ be the center of $E$ and $s_j$ its $j$-th principal semi-axis, $j=1,\dots,d$. Without loss of generality, we assume that $o$ is the origin; otherwise we could shift the origin toward $o$ so that the new decision set is $\mathcal{D}-o$. The reward of each action is then shifted by the same constant, and therefore the problem remains unchanged. In what follows, we dub $S=\{s_1,\dots,s_d\}$ the exploration set. It is worth noting that $S$ corresponds to an orthogonal basis of $\mathbb{R}^d$. From Theorem 2.1, we obtain the following result.
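To make Theorem 2.1 concrete, here is a small numerical sanity check, illustrative only: for the square $K=[-1,1]^2$, whose Löwner-John ellipsoid is the unit disc, points of the disc lie in $K$ and every point of $K$ lies in $dE$ with $d=2$:

```python
import numpy as np

# Sanity check of E ⊆ K ⊆ dE for K = [-1, 1]^2 and E the unit disc (d = 2).
rng = np.random.default_rng(0)
d = 2
pts = rng.uniform(-1.0, 1.0, size=(1000, d))        # random points of K
disc = pts[np.linalg.norm(pts, axis=1) <= 1.0]      # those that also lie in E
assert np.all(np.abs(disc) <= 1.0)                  # E ⊆ K
assert np.all(np.linalg.norm(pts, axis=1) <= d)     # K ⊆ dE: norms ≤ sqrt(2) ≤ 2
```

For a symmetric body such as this square the containment is in fact tighter ($\sqrt d$ instead of $d$), but the general bound of Theorem 2.1 is what the algorithm uses.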

###### Corollary 2.2.

For each $x\in\mathcal{D}$, we have $|\alpha_j|\le d$ for all $j$, where $x=\sum_{j=1}^{d}\alpha_j s_j$.

## 3 The SBE algorithm

In this section, we introduce our Support Basis Exploration (SBE) algorithm for the stochastic linear optimization problem with adversarial corruption (see Algorithm 1).

The algorithm runs in epochs that increase exponentially in length; the total number of epochs $M$ is therefore bounded above by $O(\log T)$. The choice of the current action depends only on information received during the previous epoch, so earlier corruption has a decreasing effect on later epochs. Unlike other algorithms for stochastic linear optimization models, we separate exploration from exploitation so that we can decrease the correlation between vector pulls in each epoch and thus minimize the influence of adversarial corruption on the estimate. This approach will inevitably increase the regret by a multiplicative factor.
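The epoch schedule can be sketched as follows. This is a minimal illustration, not the paper's Algorithm 1; the doubling base and initial epoch length `n0` are placeholders:

```python
import math

def epoch_lengths(T, n0=1):
    """Split a horizon of T rounds into epochs of exponentially increasing
    length, so the number of epochs is O(log T)."""
    lengths, total, m = [], 0, 0
    while total < T:
        n = min(n0 * 2 ** m, T - total)   # epoch m has (up to) n0 * 2^m rounds
        lengths.append(n)
        total += n
        m += 1
    return lengths

lengths = epoch_lengths(1000)
assert sum(lengths) == 1000
assert len(lengths) <= math.ceil(math.log2(1000)) + 1   # O(log T) epochs
```

Because each epoch roughly doubles the previous one, corruption injected in early (short) epochs is diluted over the much longer later epochs.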

Given the exploration set $S$ defined in Section 2, we can represent each vector in the decision set $\mathcal{D}$ in terms of the elements of $S$. By Corollary 2.2, the coefficient on each coordinate in this new representation is bounded by $d$. It follows that the maximal projection of $\mathcal{D}$ on the basis vector $s_j$ is at most $d\lVert s_j\rVert_2$. In other words, $s_j$ captures the maximum information, up to a constant, in its own direction. Since the basis vectors are orthogonal to each other, there is no information loss from using the exploration set in the algorithm. Thus, we obtain a better concentration in each round of estimation. Note that our algorithm can take as input any basis with performance similar to that in Corollary 2.2, and in Section 6 we provide an efficient algorithm that finds such a set at the cost of a multiplicative loss in regret. The construction of the other parameters of the algorithm is explained in the next section.
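The change of basis described above can be sketched as follows (an illustrative snippet, not part of the paper's pseudocode): since the exploration set is orthogonal, the coefficient of a vector $x$ on $s_j$ is $\langle x,s_j\rangle/\lVert s_j\rVert_2^2$, and $x$ is recovered exactly from its coefficients, so no information is lost:

```python
import numpy as np

def basis_coefficients(x, S):
    """Coefficients of x in the orthogonal (not necessarily orthonormal)
    basis given by the rows s_1, ..., s_d of S: <x, s_j> / ||s_j||^2."""
    return S @ x / np.sum(S ** 2, axis=1)

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
S = Q * np.array([[1.0], [2.0], [0.5]])            # orthogonal rows, unequal lengths
x = rng.standard_normal(3)
coef = basis_coefficients(x, S)
assert np.allclose(coef @ S, x)                    # exact reconstruction
```

The unequal row lengths mimic the unequal principal semi-axes of the Löwner-John ellipsoid; orthogonality is all the reconstruction needs.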

## 4 Parameter estimation

We now know that the hidden vector $\theta$ can be represented in terms of the exploration set $S$; that is, $\theta=\sum_{j=1}^{d}b_j s_j$. For any $j$, let $\xi_t^j$ be the indicator of the event that the basis vector $s_j$ is chosen in time step $t$. Let $n_e^{(m)}$ be the expected number of time steps used to explore each basis vector in epoch $m$. Since the explored basis vector is sampled uniformly, it follows that $\mathbf{E}[\xi_t^j]$ is independent of $j$. Then, the “average reward” for exploring $s_j$ in epoch $m$ is (this is not the actual average reward, as $n_e^{(m)}$ is not the realized number of time steps used to explore $s_j$)

$$r_j^{(m)}=\frac{1}{n_e^{(m)}}\sum_{t=T_{m-1}+1}^{T_m}\xi_t^j\cdot\big(\langle s_j,\theta\rangle+\eta_t+c_t(s_j)\big).$$

Note that $\xi_t^j$ is independent of the noise, $\eta_t$, as well as of the amount of corruption, $c_t(s_j)$; taking expectations over the randomness of the independent variables on both sides yields

$$\mathbf{E}\big[r_j^{(m)}\big]=\langle s_j,\theta\rangle+\frac{1}{N_m}\sum_{t=T_{m-1}+1}^{T_m}\mathbf{E}[c_t(s_j)]\le b_j\lVert s_j\rVert_2^2+\frac{C_m}{N_m},$$

where $C_m=\sum_{t=T_{m-1}+1}^{T_m}\max_{x\in\mathcal{D}}|c_t(x)|$ denotes the total corruption injected in epoch $m$ and $N_m=T_m-T_{m-1}$ is the length of epoch $m$. At the end of each epoch $m$, we have $\hat b_j^{(m)}=r_j^{(m)}/\lVert s_j\rVert_2^2$ as the estimate of $b_j$ and $\hat\theta^{(m)}=\sum_{j=1}^{d}\hat b_j^{(m)}s_j$ as the estimate of $\theta$. Before giving a uniform bound for the error in the expected reward, $\langle x,\hat\theta^{(m)}-\theta\rangle$, we first provide an upper bound for the error of $\hat b_j^{(m)}$ in each dimension $j$.
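The coordinate-wise estimator can be sketched as follows. This is a hedged toy version with illustrative names: uniform pulls of basis vectors, an empirical average per coordinate, and $\hat b_j=\bar r_j/\lVert s_j\rVert_2^2$; the paper's epoch bookkeeping and corruption handling are omitted:

```python
import numpy as np

def estimate_theta(S, pulls, rewards):
    """Estimate theta = sum_j b_j s_j from exploration pulls of the rows of S.
    pulls[i] is the basis index explored at step i, rewards[i] the observation."""
    d = S.shape[0]
    sums, counts = np.zeros(d), np.zeros(d)
    for j, r in zip(pulls, rewards):
        sums[j] += r
        counts[j] += 1
    avg = sums / np.maximum(counts, 1)          # average reward per basis vector
    b_hat = avg / np.sum(S ** 2, axis=1)        # since <s_j, theta> = b_j ||s_j||^2
    return b_hat @ S

rng = np.random.default_rng(1)
S = np.eye(3)                                   # simplest orthogonal exploration set
theta = np.array([0.3, -0.2, 0.5])
pulls = rng.integers(0, 3, size=6000)           # uniform exploration
rewards = S[pulls] @ theta + 0.1 * rng.standard_normal(6000)
theta_hat = estimate_theta(S, pulls, rewards)
assert np.max(np.abs(theta_hat - theta)) < 0.05
```

Because each coordinate is estimated from its own independent pulls, the estimate avoids the cross-step correlations that block the usual least-squares concentration argument.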

### 4.1 Error of estimated reward

###### Lemma 4.1.

With probability at least $1-\delta$, the estimate $\hat b_j^{(m)}$ is such that

$$\big|\hat b_j^{(m)}-b_j\big|\,\lVert s_j\rVert_2^2\le\frac{2C_m}{N_m}+\frac{\hat\Delta^{(m-1)}}{32d^2}$$

for all $j=1,\dots,d$ and for all epochs $m$.

###### Proof.

Since the indicator $\xi_t^j$ and the noise $\eta_t$ are independent random variables, by a form of the Chernoff-Hoeffding bound in Hoeffding (1963), we have for any deviation $\kappa>0$

$$\mathbf{Pr}\left[\left|\frac{1}{n_e^{(m)}}\sum_{t=T_{m-1}+1}^{T_m}\xi_t^j\eta_t\right|\ge\frac{\kappa}{2}\right]\le 2\exp\left\{-\frac{\kappa^2 n_e^{(m)}}{16}\right\}. \tag{1}$$

For any $j$, let $Y_t=\xi_t^j c_t(s_j)$ for all $t$. Denote by $\mathcal{F}_t$ the filtration generated by the random variables $\{\xi_\tau^j\}_{\tau\le t}$ and $\{c_\tau\}_{\tau\le t}$, and define $X_t=Y_t-\mathbf{E}[Y_t\mid\mathcal{F}_{t-1}]$. Since $\xi_t^j$ is independent of the corruption level conditional on $\mathcal{F}_{t-1}$, the sequence of partial sums of $(X_t)$ yields a martingale with respect to the filtration $(\mathcal{F}_t)$. The conditional variance can be bounded as

$$V=\sum_{t=T_{m-1}+1}^{T_m}\mathbf{E}\big[X_t^2\mid\mathcal{F}_{t-1}\big]\le\sum_{t=T_{m-1}+1}^{T_m}\big|c_t(s_j)\big|\,\mathbf{Var}\big[\xi_t^j\big]\le\frac{n_e^{(m)}}{N_m}\sum_{t=T_{m-1}+1}^{T_m}\big|c_t(s_j)\big|. \tag{2}$$

The first inequality holds because $|c_t(s_j)|\le 1$, and the second inequality holds because $\mathbf{Var}[\xi_t^j]\le\mathbf{E}[\xi_t^j]=n_e^{(m)}/N_m$. Using a Freedman-type concentration inequality for martingales (Beygelzimer et al., 2011), we have for any $\nu>0$,

$$\mathbf{Pr}\left[\frac{1}{n_e^{(m)}}\sum_{t=T_{m-1}+1}^{T_m}X_t\ge\frac{V+\ln(4/\nu)}{n_e^{(m)}}\right]\le\frac{\nu}{4}.$$

Note that $\frac{1}{n_e^{(m)}}\sum_{t=T_{m-1}+1}^{T_m}\mathbf{E}\big[\xi_t^j c_t(s_j)\mid\mathcal{F}_{t-1}\big]\le\frac{C_m}{N_m}$. Combining it with Inequality (2), which gives $V\le n_e^{(m)}C_m/N_m$, for any $\nu>0$ we have

$$\mathbf{Pr}\left[\frac{\sum_{t=T_{m-1}+1}^{T_m}\xi_t^j c_t(s_j)}{n_e^{(m)}}\ge\frac{2C_m}{N_m}+\frac{\ln(4/\nu)}{n_e^{(m)}}\right]\le\mathbf{Pr}\left[\frac{1}{n_e^{(m)}}\sum_{t=T_{m-1}+1}^{T_m}X_t\ge\frac{V+\ln(4/\nu)}{n_e^{(m)}}\right]\le\frac{\nu}{4}.$$

For any $\kappa>0$, substituting $\nu=4\exp\{-\kappa n_e^{(m)}/2\}$, we get

$$\mathbf{Pr}\left[\frac{\sum_{t=T_{m-1}+1}^{T_m}\xi_t^j c_t(s_j)}{n_e^{(m)}}\ge\frac{\kappa}{2}+\frac{2C_m}{N_m}\right]\le\exp\left\{-\frac{\kappa n_e^{(m)}}{2}\right\}.$$

Similarly, considering the sequence $(-X_t)$, we obtain, for any $0<\kappa\le 8$,

$$\mathbf{Pr}\left[\left|\frac{\sum_{t=T_{m-1}+1}^{T_m}\xi_t^j c_t(s_j)}{n_e^{(m)}}\right|\ge\frac{\kappa}{2}+\frac{2C_m}{N_m}\right]\le 2\exp\left\{-\frac{\kappa n_e^{(m)}}{2}\right\}\le 2\exp\left\{-\frac{\kappa^2 n_e^{(m)}}{16}\right\}. \tag{3}$$

Combining Inequalities (1) and (3) yields

$$\mathbf{Pr}\left[\big|r_j^{(m)}-\langle s_j,\theta\rangle\big|\ge\kappa+\frac{2C_m}{N_m}\right]\le 4\exp\left\{-\frac{\kappa^2 n_e^{(m)}}{16}\right\}.$$

Let $\kappa=\frac{\hat\Delta^{(m-1)}}{32d^2}$ and $\zeta=2^{14}d^3\log(4d\log T/\delta)$. Then

$$n_e^{(m)}=\zeta d\big(\hat\Delta^{(m-1)}\big)^{-2}=2^{14}d^4\big(\hat\Delta^{(m-1)}\big)^{-2}\log\!\left(\frac{4d\log T}{\delta}\right),$$

and $4\exp\{-\kappa^2 n_e^{(m)}/16\}=\delta/(d\log T)$. It follows that

$$\mathbf{Pr}\left[\big|\hat b_j^{(m)}-b_j\big|\,\lVert s_j\rVert_2^2\ge\frac{2C_m}{N_m}+\frac{\hat\Delta^{(m-1)}}{32d^2}\right]=\mathbf{Pr}\left[\big|r_j^{(m)}-\langle s_j,\theta\rangle\big|\ge\frac{2C_m}{N_m}+\frac{\hat\Delta^{(m-1)}}{32d^2}\right]\le\frac{\delta}{d\log T},$$

where the first equality holds because, by the definition of $\hat b_j^{(m)}$ and $r_j^{(m)}$, $\hat b_j^{(m)}\lVert s_j\rVert_2^2=r_j^{(m)}$ and $b_j\lVert s_j\rVert_2^2=\langle s_j,\theta\rangle$. By applying the union bound over all $j=1,\dots,d$ and all epochs $m$, we obtain the desired result. ∎

###### Lemma 4.2.

With probability at least $1-\delta$, we have

$$\big|\langle x,\hat\theta^{(m)}-\theta\rangle\big|\le\frac{4d^2C_m}{N_m}+\frac{\hat\Delta^{(m-1)}}{16}$$

for all epochs $m$ and all $x\in\mathcal{D}$.

###### Proof.

Since the exploration set $S$ is an orthogonal set, for any $x\in\mathcal{D}$, there exist multipliers $\alpha_1,\dots,\alpha_d$ such that $x=\sum_{j=1}^{d}\alpha_j s_j$. Then Corollary 2.2 and Lemma 4.1 together imply, with probability at least $1-\delta$, that

$$\big|\langle x,\hat\theta^{(m)}-\theta\rangle\big|\le\frac{4d^2C_m}{N_m}+\frac{\hat\Delta^{(m-1)}}{16}.\qquad\qed$$

For simplicity, we denote

$$\beta_m=\frac{4d^2C_m}{N_m}+\frac{\hat\Delta^{(m-1)}}{16} \tag{4}$$

and let $\mathcal{A}$ be the event that $|\langle x,\hat\theta^{(m)}-\theta\rangle|\le\beta_m$ for all epochs $m$ and all $x\in\mathcal{D}$. Note that event $\mathcal{A}$ happens with probability at least $1-\delta$.

### 4.2 Bound analysis for estimated gap

Let us now turn to providing upper and lower bounds for the estimated gap $\hat\Delta^{(m)}$. Let $x_*^{(m)}$ be one of the actions that maximizes the expected reward given the estimate $\hat\theta^{(m)}$. We also define $\mathcal{E}^{(m)}=\mathcal{E}\setminus\{x_*^{(m)}\}$, and let the second-best action given $\hat\theta^{(m)}$ be $x_{(2)}^{(m)}\in\arg\max_{x\in\mathcal{E}^{(m)}}\langle x,\hat\theta^{(m)}\rangle$. Since $x_*^{(m)}$ may not be unique, the estimated rewards for $x_*^{(m)}$ and $x_{(2)}^{(m)}$ may coincide. Then the estimated gap in epoch $m$ corresponds to $\hat\Delta^{(m)}=\langle x_*^{(m)}-x_{(2)}^{(m)},\hat\theta^{(m)}\rangle$.

###### Lemma 4.3 (Upper Bound for $\hat\Delta^{(m)}$).

Suppose that event $\mathcal{A}$ happens. Then for all epochs $m$,

$$\hat\Delta^{(m)}\le 2\left[\Delta+2^{-m}+4d^2\sum_{s=1}^{m}\left(\frac{1}{8}\right)^{m-s}\frac{C_s}{N_s}\right].$$
###### Proof.

First note that $\hat\Delta^{(m)}=0$ whenever $x_*^{(m)}$ is not unique; otherwise we have a unique expected-reward-maximizing action for the estimate $\hat\theta^{(m)}$. By the uniqueness of $x^*$, we have $\langle x_*^{(m)}-x^*,\theta\rangle\le 0$, which implies that

$$\langle x_*^{(m)}-x^{(2)},\theta\rangle=\langle x_*^{(m)}-x^*,\theta\rangle+\langle x^*-x^{(2)},\theta\rangle\le\Delta,$$

because $\langle x_*^{(m)}-x^*,\theta\rangle\le 0$. For the case $x_*^{(m)}\ne x^{(2)}$, we have $x^{(2)}\in\mathcal{E}^{(m)}$. It follows that $\langle x^{(2)}-x_{(2)}^{(m)},\hat\theta^{(m)}\rangle\le 0$, and therefore

$$\hat\Delta^{(m)}=\langle x_*^{(m)},\hat\theta^{(m)}-\theta\rangle+\langle x_*^{(m)}-x^{(2)},\theta\rangle+\langle x^{(2)},\theta-\hat\theta^{(m)}\rangle+\langle x^{(2)}-x_{(2)}^{(m)},\hat\theta^{(m)}\rangle\le\Delta+2\beta_m.$$

The last inequality follows from Lemma 4.2, because when the event $\mathcal{A}$ occurs, both $\langle x_*^{(m)},\hat\theta^{(m)}-\theta\rangle\le\beta_m$ and $\langle x^{(2)},\theta-\hat\theta^{(m)}\rangle\le\beta_m$ are satisfied.

Now for the case $x_*^{(m)}=x^{(2)}$, it is straightforward to see that $x^*\in\mathcal{E}^{(m)}$. This implies that the estimated reward of $x_{(2)}^{(m)}$ is at least as large as that of $x^*$; i.e., $\langle x_{(2)}^{(m)},\hat\theta^{(m)}\rangle\ge\langle x^*,\hat\theta^{(m)}\rangle$. Therefore

$$\hat\Delta^{(m)}\le\langle x_*^{(m)}-x^*,\hat\theta^{(m)}\rangle=\langle x_*^{(m)},\hat\theta^{(m)}-\theta\rangle+\langle x_*^{(m)}-x^*,\theta\rangle+\langle x^*,\theta-\hat\theta^{(m)}\rangle\le-\Delta+2\beta_m.$$

Combining all cases yields

$$\hat\Delta^{(m)}\le\Delta+2\beta_m+2^{-m}. \tag{5}$$

Given the initial assignment $\hat\Delta^{(0)}=1$, the claimed bound holds for $m=0$, satisfying Lemma 4.3. Applying Inequality (5) recursively, we thus obtain

$$\hat\Delta^{(m)}\le\Delta+\frac{8d^2C_m}{N_m}+2^{-m}+\frac{1}{8}\hat\Delta^{(m-1)}\le\Delta+\frac{8d^2C_m}{N_m}+2^{-m}+\frac{1}{4}\left[\Delta+2^{-(m-1)}+4d^2\sum_{s=1}^{m-1}\left(\frac{1}{8}\right)^{m-1-s}\frac{C_s}{N_s}\right]\le 2\left[\Delta+2^{-m}+4d^2\sum_{s=1}^{m}\left(\frac{1}{8}\right)^{m-s}\frac{C_s}{N_s}\right].\qquad\qed$$
###### Lemma 4.4 (Lower Bound for $\hat\Delta^{(m)}$).

Suppose that event $\mathcal{A}$ happens. Then for all epochs $m$,

$$\hat\Delta^{(m)}\ge\frac{\Delta}{2}-2^{-m-1}-8d^2\sum_{s=1}^{m}\left(\frac{1}{8}\right)^{m-s}\frac{C_s}{N_s}.$$
###### Proof.

We consider first the case that the best action $x_*^{(m)}$ given $\hat\theta^{(m)}$ is unique.

If $x_*^{(m)}\ne x^*$, we know that $\langle x^*-x^{(2)},\theta\rangle\le\langle x^*-x_*^{(m)},\theta\rangle$, and thus

$$\Delta=\langle x^*-x^{(2)},\theta\rangle\le\langle x^*-x_*^{(m)},\theta\rangle=\langle x^*-x_*^{(m)},\hat\theta^{(m)}\rangle+\langle x^*-x_*^{(m)},\theta-\hat\theta^{(m)}\rangle\le 2\beta_m,$$

as the term $\langle x^*-x_*^{(m)},\hat\theta^{(m)}\rangle$ is always non-positive.

If $x_*^{(m)}=x^*$, then $\mathcal{E}^{(m)}=\mathcal{E}\setminus\{x^*\}$, so $\langle x_{(2)}^{(m)},\theta\rangle\le\langle x^{(2)},\theta\rangle$. It follows that

$$\Delta=\langle x_*^{(m)}-x^{(2)},\theta\rangle\le\langle x_*^{(m)}-x_{(2)}^{(m)},\theta\rangle=\langle x_*^{(m)}-x_{(2)}^{(m)},\hat\theta^{(m)}\rangle+\langle x_*^{(m)}-x_{(2)}^{(m)},\theta-\hat\theta^{(m)}\rangle\le\hat\Delta^{(m)}+2\beta_m,$$

where the last inequality holds because $|\langle x,\theta-\hat\theta^{(m)}\rangle|\le\beta_m$ for every $x\in\mathcal{D}$ under event $\mathcal{A}$. When the best action given $\hat\theta^{(m)}$ is unique, we thus have

$$\hat\Delta^{(m)}\ge\Delta-2\beta_m.$$

For the case that the best action is not unique, let $x_*^{(m)}\ne x^*$ be such a best action given $\hat\theta^{(m)}$. Then $\langle x^*-x_*^{(m)},\hat\theta^{(m)}\rangle\le 0$, giving that

$$\Delta\le\langle x^*-x_*^{(m)},\theta\rangle=\langle x^*,\theta-\hat\theta^{(m)}\rangle+\langle x^*-x_*^{(m)},\hat\theta^{(m)}\rangle+\langle x_*^{(m)},\hat\theta^{(m)}-\theta\rangle\le 2\beta_m.$$

Now

$$\hat\Delta^{(m)}\ge 0\ge\Delta-2\beta_m.$$

By applying the upper bound for $\hat\Delta^{(m-1)}$ in Lemma 4.3, we thus get

$$\hat\Delta^{(m)}\ge\Delta-2\left(\frac{4d^2C_m}{N_m}+\frac{\hat\Delta^{(m-1)}}{16}\right)\ge\Delta-\frac{8d^2C_m}{N_m}-\frac{1}{4}\left[\Delta+2^{-(m-1)}+4d^2\sum_{s=1}^{m-1}\left(\frac{1}{8}\right)^{m-1-s}\frac{C_s}{N_s}\right]\ge\frac{\Delta}{2}-2^{-m-1}-8d^2\sum_{s=1}^{m}\left(\frac{1}{8}\right)^{m-s}\frac{C_s}{N_s}.\qquad\qed$$

## 5 Regret estimation

###### Theorem 5.1.

With probability at least $1-\delta$, the regret is bounded by

$$R=O\!\left(\frac{d^2C\log T}{\Delta}+\frac{d^5\log(d\log T/\delta)\log T}{\Delta^2}\right).$$
###### Proof.

Let $R_1^{(m)}$ and $R_2^{(m)}$ be the pseudo-regret for exploitation and exploration in epoch $m$, respectively. By Lemma 4.2, the event $\mathcal{A}$ occurs with probability at least $1-\delta$. We first derive the pseudo-regret bound for exploitation given the occurrence of $\mathcal{A}$.

Exploitation: The pseudo-regret for exploitation in epoch $m$ is $R_1^{(m)}=n_m\Delta^{(m)}$, where $n_m$ denotes the number of exploitation steps in epoch $m$.

Let $\Delta^{(m)}=\langle x^*-x_*^{(m-1)},\theta\rangle$ be the pseudo-regret for the action $x_*^{(m-1)}$ exploited in epoch $m$. Given that the event $\mathcal{A}$ happens, we have

$$\Delta^{(m)}=\langle\theta-\hat\theta^{(m-1)},x^*\rangle+\langle\hat\theta^{(m-1)},x^*-x_*^{(m-1)}\rangle+\langle\hat\theta^{(m-1)}-\theta,x_*^{(m-1)}\rangle\le 2\beta_{m-1}, \tag{6}$$

because $\langle\hat\theta^{(m-1)},x^*-x_*^{(m-1)}\rangle\le 0$. Define $\rho_m=d^2\sum_{s=1}^{m}\left(\frac{1}{8}\right)^{m-s}\frac{C_s}{N_s}$. Then we can get

$$\Delta^{(m)}\le\frac{8d^2C_{m-1}}{N_{m-1}}+\frac{\hat\Delta^{(m-2)}}{8}\le\frac{8d^2C_{m-1}}{N_{m-1}}+\frac{1}{4}\left(\Delta+2^{-m+2}+4\rho_{m-2}\right)=\frac{\Delta}{4}+2^{-m}+8\rho_{m-1}, \tag{7}$$

where the first inequality holds by the definition of $\beta_{m-1}$ and Inequality (6), and the second inequality holds by Lemma 4.3.

If $\Delta^{(m)}=0$, then the regret for exploitation in epoch $m$ is zero; otherwise, we have $\Delta^{(m)}>0$. Now we consider two different cases.

In the first case, combining Inequality (7) with Inequality (5), the pseudo-regret is

$$R_1^{(m)}=n_m\Delta^{(m)}\le 32\zeta\cdot 4^m\rho_{m-1}.$$

In the second case, by Inequality (5), it follows that

$$R_1^{(m)}=n_m\Delta^{(m)}\le 8\zeta\cdot 4^m\rho_{m-1}+\zeta\cdot 2^{m+1}\le 8\zeta\cdot 4^m\rho_{m-1}+4\zeta\Delta.$$

Thus, for each epoch $m$,

$$R_1^{(m)}\le 32\zeta\cdot 4^m\rho_{m-1}+4\zeta\Delta.$$

Summing over all epochs yields

$$R_1\le 4\zeta M\Delta+32\zeta\sum_{m=1}^{M}\rho_{m-1}4^m\le 4\zeta M\Delta+32\zeta d^2\sum_{m=1}^{M}\sum_{s=1}^{m}\frac{C_s\,4^m}{8^{m-1-s}N_s}\le 4\zeta M\Delta+32\sum_{m=1}^{M}\sum_{s=1}^{m}C_s\cdot\frac{4^{m-s}}{8^{m-1-s}}=4\zeta M\Delta+32\sum_{s=1}^{M}C_s\sum_{m=s}^{M}\frac{4^{m-s}}{8^{m-1-s}}\le 4\zeta M\Delta+512\,C, \tag{8}$$

where the third inequality holds because $N_s\ge\zeta d^2 4^s$ by the construction of our algorithm.

Exploration: We now turn to the exploration part and propose a bound for the pseudo-regret in each epoch $m$. Note that the expected number of time steps in which exploration is conducted in epoch $m$ is $d\,n_e^{(m)}$, and the pseudo-regret of each such time step is bounded above by 1.

When $\hat\Delta^{(m)}\ge\Delta/2$, since each exploration step incurs pseudo-regret at most 1, we have

$$R_2^{(m)}\le\frac{\zeta}{(\hat\Delta^{(m)})^2}\le\frac{4\zeta}{\Delta^2}.$$

When $\hat\Delta^{(m)}<\Delta/2$, we again consider two cases. In the first case, applying the lower bound of Lemma 4.4 yields $\Delta/64\le\rho_m$, and

$$\frac{\Delta}{64}\le\rho_m=d^2\sum_{s=1}^{m}\left(\frac{1}{8}\right)^{m-s}\frac{C_s}{N_s}\le\frac{2d^2\sum_{s=1}^{m}C_s}{N_m}\le\frac{2d^2C}{N_m}$$