# Adaptive and Optimal Online Linear Regression on L1-balls

We consider the problem of online linear regression on individual sequences. The goal in this paper is for the forecaster to output sequential predictions which are, after T time rounds, almost as good as the ones output by the best linear predictor in a given L1-ball in R^d. We consider both the cases where the dimension d is small and large relative to the time horizon T. We first present regret bounds with optimal dependencies on the sizes U, X and Y of the L1-ball, the input data and the observations. The minimax regret is shown to exhibit a regime transition around the point d = sqrt(T) U X / (2 Y). Furthermore, we present efficient algorithms that are adaptive, i.e., they do not require the knowledge of U, X, and Y, but still achieve nearly optimal regret bounds.


## 1 Introduction

In this paper, we consider the problem of online linear regression against arbitrary sequences of input data and observations, with the objective of being competitive with respect to the best linear predictor in an $\ell_1$-ball of arbitrary radius. This extends the task of convex aggregation. We consider both low- and high-dimensional input data. Indeed, in a large number of contemporary problems, the available data can be high-dimensional: the dimension of each data point is larger than the number of data points. Examples include analysis of DNA sequences, collaborative filtering, astronomical data analysis, and cross-country growth regression. In such high-dimensional problems, performing linear regression on an $\ell_1$-ball of small diameter may be helpful if the best linear predictor is sparse. Our goal is, in both low and high dimensions, to provide online linear regression algorithms together with regret bounds on $\ell_1$-balls that characterize their robustness to worst-case scenarios.

### 1.1 Setting

We consider the online version of linear regression, which unfolds as follows. First, the environment chooses a sequence of observations $(y_t)_{t \ge 1}$ in $\mathbb{R}$ and a sequence of input vectors $(x_t)_{t \ge 1}$ in $\mathbb{R}^d$, both initially hidden from the forecaster. At each time instant $t \ge 1$, the environment reveals the data $x_t \in \mathbb{R}^d$; the forecaster then gives a prediction $\hat{y}_t \in \mathbb{R}$; the environment in turn reveals the observation $y_t \in \mathbb{R}$; and finally, the forecaster incurs the square loss $(y_t - \hat{y}_t)^2$. The dimension $d$ can be either small or large relative to the number $T$ of time steps: we consider both cases.

In the sequel, $u \cdot x$ denotes the standard inner product between $u, x \in \mathbb{R}^d$, and we set $\|x\|_\infty \triangleq \max_{1 \le j \le d} |x_j|$ and $\|u\|_1 \triangleq \sum_{j=1}^{d} |u_j|$. The $\ell_1$-ball of radius $U > 0$ is the following bounded subset of $\mathbb{R}^d$:

$$B_1(U) \triangleq \bigl\{ u \in \mathbb{R}^d : \|u\|_1 \le U \bigr\}\,.$$

Given a fixed radius $U > 0$ and a time horizon $T \ge 1$, the goal of the forecaster is to predict almost as well as the best linear forecaster in the reference set $B_1(U)$, i.e., to minimize the regret on $B_1(U)$ defined by

$$\sum_{t=1}^{T} (y_t - \hat{y}_t)^2 \;-\; \min_{u \in B_1(U)} \left\{ \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \right\}\,.$$
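To make the regret definition concrete, here is a small numerical sketch (our own illustrative code, not from the paper): the benchmark $\min_{u \in B_1(U)}$ is a convex problem, which we approximate by projected gradient descent using the standard sorting-based Euclidean projection onto the $\ell_1$-ball.

```python
import numpy as np

def project_l1(v, U):
    """Euclidean projection of v onto the l1-ball of radius U (sorting-based method)."""
    if np.abs(v).sum() <= U:
        return v.copy()
    mu = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(mu)
    rho = np.nonzero(mu - (cssv - U) / (np.arange(v.size) + 1) > 0)[0][-1]
    theta = (cssv[rho] - U) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def regret(y, X_mat, y_hat, U, steps=2000):
    """Regret of the predictions y_hat against the best u in B_1(U),
    the inner minimum being approximated by projected gradient descent."""
    T, d = X_mat.shape
    lr = 1.0 / (2.0 * np.linalg.norm(X_mat, 2) ** 2)  # step size from the smoothness of the loss
    u = np.zeros(d)
    for _ in range(steps):
        grad = -2.0 * X_mat.T @ (y - X_mat @ u)  # gradient of the cumulative square loss
        u = project_l1(u - lr * grad, U)
    return np.sum((y - y_hat) ** 2) - np.sum((y - X_mat @ u) ** 2)
```

For instance, with $T = d = 2$, inputs $x_1 = (1,0)$ and $x_2 = (0,1)$, observations $y_1 = y_2 = 1$, and the null forecaster, the best point of $B_1(1)$ is $(1/2, 1/2)$, so the regret is $2 - 1/2 = 3/2$.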

We shall present algorithms along with bounds on their regret that hold uniformly over all sequences $(x_t, y_t)_{1 \le t \le T}$ such that $\|x_t\|_\infty \le X$ and $|y_t| \le Y$ for all $t = 1, \ldots, T$, where $X, Y > 0$. (Actually, our results hold whether the sequence is generated by an oblivious environment or a non-oblivious opponent, since we consider deterministic forecasters.) These regret bounds depend on four important quantities: $U$, $X$, $Y$, and $T$, which may be known or unknown to the forecaster.

### 1.2 Contributions and related works

In the next paragraphs we detail the main contributions of this paper in view of related works in online linear regression.

Our first contribution (Section 2) consists of a minimax analysis of online linear regression on $\ell_1$-balls in the arbitrary-sequence setting. We first provide a refined regret bound expressed in terms of $d$, $Y$, and the intrinsic quantity $\kappa \triangleq \sqrt{T}\, U X / (2 d Y)$. This quantity is used to distinguish two regimes: we show a distinctive regime transition at $\kappa = 1$, i.e., at $d = \sqrt{T}\, U X / (2 Y)$. (In high dimensions, i.e., when $d$ is at least of the order of $T$, we do not observe this transition; cf. Figure 1.) Namely, for $\kappa \le 1$, the regret is of the order of $U X Y \sqrt{T}$ up to a logarithmic factor, whereas it is of the order of $d Y^2$ up to a logarithmic factor for $\kappa > 1$.

The derivation of this regret bound partially relies on a Maurey-type argument used under various forms with i.i.d. data, e.g., in Nem-00-TopicsNonparametric ; Tsy-03-OptimalRates ; BuNo08SeqProcedures ; ShSrZh-10-Sparsifiability (see also Yan-04-BetterPerformance ). We adapt it in a straightforward way to the deterministic setting. Therefore, this is yet another technique that can be applied to both the stochastic and individual sequence settings.

Unsurprisingly, the refined regret bound mentioned above matches the optimal risk bounds of the stochastic setting (for example, when the pairs $(x_t, y_t)$ are i.i.d., or when the $x_t$ are deterministic and $y_t = f(x_t) + \varepsilon_t$ for an unknown function $f$ and an i.i.d. sequence $(\varepsilon_t)$ of Gaussian noise) BiMa-01-GaussianMS ; Tsy-03-OptimalRates (see also RaWaYu-09-MinimaxSparseRegression ). Hence, linear regression is just as hard in the stochastic setting as in the arbitrary sequence setting. Using the standard online-to-batch conversion, we make the latter statement more precise by establishing a lower bound on the minimax regret that holds for a wide range of values of $U$. This lower bound extends those of CB99AnalysisGradientBased ; KiWa97EGvsGD , which only hold for small values of $U$.

The algorithm achieving our minimax regret bound is both computationally inefficient and non-adaptive (i.e., it requires prior knowledge of the quantities $U$, $X$, $Y$, and $T$ that may be unknown in practice). Those two issues were first overcome by AuCeGe02Adaptive via an automatic tuning termed self-confident (since the forecaster somehow trusts itself when tuning its parameters). They indeed proved that the self-confident $p$-norm algorithm, with $p$ of the order of $\ln d$ and tuned with the radius $U$, has a cumulative loss $\hat{L}_T \triangleq \sum_{t=1}^{T} (y_t - \hat{y}_t)^2$ bounded by

$$\hat{L}_T \;\le\; L^*_T + 8 U X \sqrt{(e \ln d)\, L^*_T} + (32\, e \ln d)\, U^2 X^2 \;\le\; L^*_T + 8 U X Y \sqrt{e\, T \ln d} + (32\, e \ln d)\, U^2 X^2\,,$$

where $L^*_T \triangleq \min_{\|u\|_1 \le U} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \le T Y^2$. This algorithm is efficient, and our lower bound in terms of $\kappa$ shows that it is optimal up to logarithmic factors in the regime $\kappa \le 1$, without prior knowledge of $X$, $Y$, and $T$.

Our second contribution (Section 3) is to show that similar adaptivity and efficiency properties can be obtained via exponential weighting. We consider a variant of the EG± algorithm of KiWa97EGvsGD . The latter has a manageable computational complexity, and our lower bound shows that it is nearly optimal in the regime $\kappa \le 1$. However, it requires prior knowledge of $U$, $X$, $Y$, and $T$. To overcome this adaptivity issue, we study a modification of the EG± algorithm that relies on the variance-based automatic tuning of CeMaSt07SecOrder . The resulting algorithm, called the adaptive EG± algorithm, can be applied to general convex and differentiable loss functions. When applied to the square loss, it yields an algorithm with the same computational complexity as the EG± algorithm that also achieves a nearly optimal regret, but without needing to know $X$, $Y$, and $T$ beforehand.

Our third contribution (Section 3.3) is a generic technique called loss Lipschitzification. It transforms the loss functions $u \mapsto (y_t - u \cdot x_t)^2$ (or $u \mapsto |y_t - u \cdot x_t|^\alpha$ if the predictions are scored with the $\alpha$-loss for a real number $\alpha \ge 2$) into Lipschitz continuous functions. We illustrate this technique by applying the generic adaptive EG± algorithm to the modified loss functions. When the predictions are scored with the square loss, this yields an algorithm (the LEG algorithm) whose main regret term slightly improves on that derived for the adaptive EG± algorithm without Lipschitzification. The benefits of this technique are clearer for loss functions with higher curvature: for the $\alpha$-loss with $\alpha > 2$, the resulting regret bound grows much more slowly with $T$ than the naive bound obtained without Lipschitzification.
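As an illustration of the idea (a sketch under our own simplifying choices, not necessarily the exact construction of Section 3.3), the square loss, viewed as a function of the prediction $p = u \cdot x_t$, can be made Lipschitz by keeping the quadratic on $[-Y, Y]$ and extending it linearly outside: this preserves convexity and caps the slope at $2(|y_t| + Y)$, instead of a slope that can grow with $U X$.

```python
import numpy as np

def lipschitzified(p, y, Y):
    """Square loss in the prediction p, extended linearly outside [-Y, Y]:
    it is convex, coincides with (y - p)^2 on [-Y, Y], and is 2(|y|+Y)-Lipschitz."""
    c = np.clip(p, -Y, Y)
    return (y - c) ** 2 - 2.0 * (y - c) * (p - c)

Y, y = 1.0, 0.6
ps = np.linspace(-5.0, 5.0, 2001)
lip = lipschitzified(ps, y, Y)
inside = np.abs(ps) <= Y
assert np.allclose(lip[inside], (y - ps[inside]) ** 2)  # agrees with the square loss on [-Y, Y]
slopes = np.abs(np.diff(lip) / np.diff(ps))
assert slopes.max() <= 2.0 * (abs(y) + Y) + 1e-6        # Lipschitz in the prediction
assert np.all(np.diff(np.diff(lip)) >= -1e-9)           # convexity is preserved
```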

Finally, in Section 4, we provide a simple way to achieve the minimax regret uniformly over all $\ell_1$-balls $B_1(U)$ for $U > 0$. This method aggregates instances of an algorithm that requires prior knowledge of $U$. For the sake of simplicity, we assume that $X$, $Y$, and $T$ are known, but we explain in the discussions how to extend the method to a fully adaptive algorithm that requires the knowledge of none of $U$, $X$, $Y$, and $T$.

This paper is organized as follows. In Section 2, we establish our refined upper and lower bounds in terms of the intrinsic quantity $\kappa$. In Section 3, we present an efficient and adaptive algorithm (the adaptive EG± algorithm, with or without loss Lipschitzification) that achieves the optimal regret on $B_1(U)$ when $U$ is known. In Section 4, we use an aggregating strategy to achieve an optimal regret uniformly over all $\ell_1$-balls $B_1(U)$, for $U > 0$, when $X$, $Y$, and $T$ are known. Finally, in Section 5, we discuss as an extension a fully automatic algorithm that requires no prior knowledge of $U$, $X$, $Y$, or $T$. Some proofs and additional tools are postponed to the appendix.

## 2 Optimal rates

In this section, we first present a refined upper bound on the minimax regret on $B_1(U)$ for an arbitrary $U > 0$. In Corollary 1, we express this upper bound in terms of an intrinsic quantity $\kappa$. The optimality of the latter bound is shown in Section 2.2.

We consider the following definition to avoid any ambiguity. We call online forecaster any sequence $F = (f_t)_{t \ge 1}$ of functions such that $f_t$ maps, at time $t$, the new input $x_t$ and the past data $(x_s, y_s)_{1 \le s \le t-1}$ to a prediction $\hat{y}_t$. Depending on the context, the latter prediction may be simply denoted by $\hat{y}_t$ or by $f_t(x_t)$.

### 2.1 Upper bound

###### Theorem 1 (Upper bound).

Let $U, X, Y > 0$, $T \ge 1$, and $d \ge 1$. The minimax regret on $B_1(U)$ for bounded base predictions and observations satisfies

$$\inf_{F}\; \sup_{\|x_t\|_\infty \le X,\; |y_t| \le Y} \left\{ \sum_{t=1}^{T} (y_t - \hat{y}_t)^2 - \inf_{\|u\|_1 \le U} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \right\} \;\le\; \begin{cases} 3\, U X Y \sqrt{2 T \ln(2d)} & \text{if } U \le \dfrac{Y}{X} \sqrt{\dfrac{\ln(1+2d)}{T \ln 2}}\,,\\[10pt] 22\, U X Y \sqrt{T \ln\!\left(1 + \dfrac{2 d Y}{U X \sqrt{T}}\right)} & \text{if } \dfrac{Y}{X} \sqrt{\dfrac{\ln(1+2d)}{T \ln 2}} < U \le \dfrac{2 d Y}{X \sqrt{T}}\,,\\[10pt] 32\, d Y^2 \ln\!\left(1 + \dfrac{U X \sqrt{T}}{d Y}\right) + d Y^2 & \text{if } U > \dfrac{2 d Y}{X \sqrt{T}}\,, \end{cases}$$

where the infimum is taken over all forecasters $F$ and where the supremum extends over all sequences $(x_t, y_t)_{1 \le t \le T}$ such that $\|x_t\|_\infty \le X$ and $|y_t| \le Y$ for all $t = 1, \ldots, T$.

Theorem 1 improves the bound of (KiWa97EGvsGD, Theorem 5.11) for the EG± algorithm. First, our bound depends logarithmically, as opposed to linearly, on the dimension $d$. Secondly, it is smaller when

$$\frac{Y}{X} \sqrt{\frac{\ln(1+2d)}{T \ln 2}} \;\le\; U \;\le\; \frac{2 d Y}{\sqrt{T}\, X}\,. \tag{1}$$

Hence, Theorem 1 provides a partial answer to a question raised in KiWa97EGvsGD about the gap between the upper and lower bounds. (The authors of KiWa97EGvsGD asked: "For large $d$ there is a significant gap between the upper and lower bounds. We would like to know if it is possible to improve the upper bounds by eliminating the $d$ factors.")

Before proving the theorem (see below), we state the following immediate corollary. It expresses the upper bound of Theorem 1 in terms of an intrinsic quantity $\kappa$ that relates $U$, $X$, $Y$, and $T$ to the ambient dimension $d$.

###### Corollary 1 (Upper bound in terms of an intrinsic quantity).

Let $U, X, Y > 0$, $T \ge 1$, and $d \ge 1$. The upper bound of Theorem 1, expressed in terms of $d$, $Y$, and the intrinsic quantity $\kappa \triangleq \sqrt{T}\, U X / (2 d Y)$, reads:

$$\inf_{F}\; \sup_{\|x_t\|_\infty \le X,\; |y_t| \le Y} \left\{ \sum_{t=1}^{T} (y_t - \hat{y}_t)^2 - \inf_{\|u\|_1 \le U} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \right\} \;\le\; \begin{cases} 6\, d Y^2 \kappa \sqrt{2 \ln(2d)} & \text{if } \kappa \le \dfrac{\sqrt{\ln(1+2d)}}{2 d \sqrt{\ln 2}}\,,\\[10pt] 44\, d Y^2 \kappa \sqrt{\ln\!\left(1 + 1/\kappa\right)} & \text{if } \dfrac{\sqrt{\ln(1+2d)}}{2 d \sqrt{\ln 2}} < \kappa \le 1\,,\\[10pt] 32\, d Y^2 \ln(1 + 2\kappa) + d Y^2 & \text{if } \kappa > 1\,. \end{cases}$$

The upper bound of Corollary 1 is shown in Figure 1. Observe that, in low dimension (Figure 1(b)), a clear transition from a regret of the order of $d Y^2 \kappa \sqrt{\ln(1+1/\kappa)}$ to one of the order of $d Y^2 \ln(1+2\kappa)$ occurs at $\kappa = 1$. This transition is absent in high dimensions: when $d$ is at least of the order of $T$, the regret bound is worse than the trivial bound of $T Y^2$ (guaranteed by the null forecaster) as soon as $\kappa$ is of order $1$.

We now prove Theorem 1. The main part of the proof relies on a Maurey-type argument. Although this argument was used in the stochastic setting Nem-00-TopicsNonparametric ; Tsy-03-OptimalRates ; BuNo08SeqProcedures ; ShSrZh-10-Sparsifiability , we adapt it to the deterministic setting. This is yet another technique that can be applied to both the stochastic and individual sequence settings.

Proof (of Theorem 1): First note from Lemma 5 in Appendix B that the minimax regret on $B_1(U)$ is upper bounded by

$$\min\left\{ 3\, U X Y \sqrt{2 T \ln(2d)},\;\; 32\, d Y^2 \ln\!\left(1 + \frac{\sqrt{T}\, U X}{d Y}\right) + d Y^2 \right\}\,. \tag{2}$$

(As proved in Lemma 5, the regret bound (2) is achieved either by the EG± algorithm, by the algorithm of Ger-11colt-SparsityRegretBounds (we could also get a slightly worse bound with the sequential ridge regression forecaster AzWa01RelativeLossBounds ; Vo01CompetitiveOnline ), or by the trivial null forecaster.)

The first and third cases of the theorem thus follow directly from (2). We therefore assume in the sequel that

$$\frac{Y}{X} \sqrt{\frac{\ln(1+2d)}{T \ln 2}} \;\le\; U \;\le\; \frac{2 d Y}{X \sqrt{T}}\,.$$

We use a Maurey-type argument to refine the regret bound (2). This technique was used in various forms in the stochastic setting, e.g., in Nem-00-TopicsNonparametric ; Tsy-03-OptimalRates ; BuNo08SeqProcedures ; ShSrZh-10-Sparsifiability . It consists of discretizing $B_1(U)$ and looking at a random point in this discretization to study its approximation properties. We also clip the predictions to $[-Y, Y]$ so that the regret bound scales with $Y$ rather than with the possibly much larger quantity $U X$.

More precisely, we first use the fact that, to be competitive against $B_1(U)$, it is sufficient to be competitive against its finite subset

$$\widetilde{B}_{U,m} \;\triangleq\; \left\{ \left( \frac{k_1 U}{m}, \ldots, \frac{k_d U}{m} \right) : (k_1, \ldots, k_d) \in \mathbb{Z}^d,\; \sum_{j=1}^{d} |k_j| \le m \right\} \;\subset\; B_1(U)\,,$$

where $m \triangleq \lceil \alpha \rceil$ with $\alpha \triangleq \dfrac{U X}{Y} \sqrt{\dfrac{T \ln 2}{\ln\bigl(1 + 2 d Y / (U X \sqrt{T})\bigr)}}\,$.
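For small values of $d$ and $m$, the set $\widetilde{B}_{U,m}$ can be enumerated by brute force, which makes it easy to check the inclusion $\widetilde{B}_{U,m} \subset B_1(U)$ and the cardinality bound $|\widetilde{B}_{U,m}| \le \bigl( e (2d + m)/m \bigr)^m$ used below (our own illustrative snippet; the function name is not from the paper):

```python
import itertools
import math

def maurey_grid(d, U, m):
    """Enumerate B~_{U,m} = {(k_1 U/m, ..., k_d U/m) : k in Z^d, sum_j |k_j| <= m}."""
    return [
        tuple(k * U / m for k in ks)
        for ks in itertools.product(range(-m, m + 1), repeat=d)
        if sum(abs(k) for k in ks) <= m
    ]

d, U, m = 3, 2.0, 4
grid = maurey_grid(d, U, m)
# Every grid point lies in the l1-ball B_1(U) ...
assert all(sum(abs(c) for c in p) <= U + 1e-12 for p in grid)
# ... and the cardinality is at most (e(2d+m)/m)^m, as used in the proof below.
assert len(grid) <= (math.e * (2 * d + m) / m) ** m
```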

By Lemma 7 in Appendix C, and since $\alpha \ge 1$ (see below), we indeed have

$$\inf_{u \in \widetilde{B}_{U,m}} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \;\le\; \inf_{u \in B_1(U)} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 + \frac{T U^2 X^2}{m} \;\le\; \inf_{u \in B_1(U)} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 + 2 \sqrt{\ln 2}\; U X Y \sqrt{T \ln\!\left(1 + \frac{2 d Y}{U X \sqrt{T}}\right)}\,, \tag{3}$$

where (3) follows from $m = \lceil \alpha \rceil \ge \alpha$ since $\alpha \ge 1$ (in particular, $m \ge 1$, as stated above).

To see why $\alpha \ge 1$, note that it suffices to show that $\ln(1+x) \le (U X / Y)^2\, T \ln 2$, where we set $x \triangleq 2 d Y / (U X \sqrt{T})$. But from the assumption $U \ge (Y/X) \sqrt{\ln(1+2d)/(T \ln 2)}$, we have $x \le 2d$, so that, by monotonicity of $\ln$, $\ln(1+x) \le \ln(1+2d) \le (U X / Y)^2\, T \ln 2$.

Therefore it only remains to exhibit an algorithm which is competitive against $\widetilde{B}_{U,m}$ at an aggregation price of the same order as the last term in (3). This is the case for the standard exponentially weighted average forecaster applied to the clipped predictions

$$[u \cdot x_t]_Y \;\triangleq\; \min\bigl\{ Y,\; \max\{ -Y,\; u \cdot x_t \} \bigr\}$$

and tuned with the inverse temperature parameter $\eta \triangleq 1/(8 Y^2)$. More formally, this algorithm predicts at each time $t$ as

$$\hat{y}_t \;\triangleq\; \sum_{u \in \widetilde{B}_{U,m}} p_t(u)\, [u \cdot x_t]_Y\,,$$

where $p_1(u) \triangleq 1 / \bigl|\widetilde{B}_{U,m}\bigr|$ (denoting by $\bigl|\widetilde{B}_{U,m}\bigr|$ the cardinality of the set $\widetilde{B}_{U,m}$), and where the weights $p_t(u)$ are defined for all $t \ge 2$ and $u \in \widetilde{B}_{U,m}$ by

$$p_t(u) \;\triangleq\; \frac{\exp\left( -\eta \sum_{s=1}^{t-1} \bigl( y_s - [u \cdot x_s]_Y \bigr)^2 \right)}{\sum_{v \in \widetilde{B}_{U,m}} \exp\left( -\eta \sum_{s=1}^{t-1} \bigl( y_s - [v \cdot x_s]_Y \bigr)^2 \right)}\,.$$
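The clipped exponentially weighted average forecaster is easy to implement for any finite set of linear experts. A minimal sketch (illustrative names, assuming bounded observations $|y_t| \le Y$), with the tuning $\eta = 1/(8Y^2)$:

```python
import numpy as np

def clipped_ewa(experts, xs, ys, Y):
    """Exponentially weighted average forecaster over the rows of `experts`,
    with predictions clipped to [-Y, Y] and inverse temperature eta = 1/(8 Y^2)."""
    eta = 1.0 / (8.0 * Y ** 2)
    cum_loss = np.zeros(len(experts))   # cumulative clipped square losses
    preds = []
    for x, y in zip(xs, ys):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shifted for numerical stability
        p = w / w.sum()
        clipped = np.clip(experts @ x, -Y, Y)
        preds.append(p @ clipped)       # weighted average of the clipped predictions
        cum_loss += (y - clipped) ** 2
    return np.array(preds)
```

On a finite expert set of size $N$, the cumulative loss of this forecaster exceeds that of the best expert by at most $8 Y^2 \ln N$, which is the guarantee used below (Lemma 6).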

By Lemma 6 in Appendix B, the above forecaster tuned with $\eta = 1/(8 Y^2)$ satisfies

$$\sum_{t=1}^{T} (y_t - \hat{y}_t)^2 - \inf_{u \in \widetilde{B}_{U,m}} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \;\le\; 8 Y^2 \ln \bigl|\widetilde{B}_{U,m}\bigr| \;\le\; 8 Y^2\, m \ln\!\left( \frac{e\,(2d+m)}{m} \right) \tag{4}$$

$$\le\; 8 Y^2 \alpha + 8 Y^2 \alpha \ln\!\left( 1 + \frac{2 d Y}{U X \sqrt{T}} \sqrt{\frac{\ln\bigl(1 + 2 d Y / (U X \sqrt{T})\bigr)}{\ln 2}} \right) \tag{5}$$

$$\le\; 8 Y^2 \alpha + 16 Y^2 \alpha \ln\!\left( 1 + \frac{2 d Y}{U X \sqrt{T}} \right) \tag{6}$$

$$\le\; \bigl( 8 \sqrt{\ln 2} + 16 \sqrt{\ln 2} \bigr)\, U X Y \sqrt{T \ln\!\left( 1 + \frac{2 d Y}{U X \sqrt{T}} \right)}\,. \tag{7}$$

To get (4) we used Lemma 8 in Appendix C. Inequality (5) follows by the definitions of $m$ and $\alpha$ and the fact that $x \mapsto x \ln\bigl( e (2d + x) / x \bigr)$ is nondecreasing on $(0, +\infty)$ for all $d \ge 1$. Inequality (6) follows from the assumption $U \le 2 d Y / (X \sqrt{T})$ and the elementary inequality $\ln\bigl( 1 + x \sqrt{\ln(1+x)/\ln 2} \bigr) \le 2 \ln(1+x)$, which holds for all $x \ge 1$ and was used, e.g., at the end of the proof of (BuNo08SeqProcedures, Theorem 2-a). Finally, elementary manipulations combined with the assumption that $U \ge (Y/X) \sqrt{\ln(1+2d)/(T \ln 2)}$ lead to (7).

Putting Eqs. (3) and (7) together, the previous algorithm has a regret on $B_1(U)$ which is bounded from above by

$$\bigl( 10 \sqrt{\ln 2} + 16 \sqrt{\ln 2} \bigr)\, U X Y \sqrt{T \ln\!\left( 1 + \frac{2 d Y}{U X \sqrt{T}} \right)}\,,$$

which concludes the proof since $10 \sqrt{\ln 2} + 16 \sqrt{\ln 2} = 26 \sqrt{\ln 2} \le 22$. ∎

### 2.2 Lower bound

Corollary 1 gives an upper bound on the regret in terms of the quantities $d$, $Y$, and $\kappa$. We now show that for all $d$, $Y$, and $\kappa$, the upper bound cannot be improved up to logarithmic factors. (For $d$ sufficiently large, we may overlook the case $\kappa < \sqrt{\ln(1+2d)} / (2 d \sqrt{\ln 2})$, i.e., very small radii $U$: observe that in this case, the minimax regret is already known to be of the order of $U X Y \sqrt{T \ln(2d)}$; cf. Figure 1.)

###### Theorem 2 (Lower bound).

For all $d \ge 1$, $Y > 0$, and $\kappa \ge \sqrt{\ln(1+2d)} / (2 d \sqrt{\ln 2})$, there exist $T \ge 1$ and $U, X > 0$ such that $\sqrt{T}\, U X / (2 d Y) = \kappa$ and

$$\inf_{F}\; \sup_{\|x_t\|_\infty \le X,\; |y_t| \le Y} \left\{ \sum_{t=1}^{T} (y_t - \hat{y}_t)^2 - \inf_{\|u\|_1 \le U} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \right\} \;\ge\; \begin{cases} \dfrac{c_1}{\ln(2 + 16 d^2)}\, d Y^2\, \kappa \sqrt{\ln(1 + 1/\kappa)} & \text{if } \dfrac{\sqrt{\ln(1+2d)}}{2 d \sqrt{\ln 2}} \le \kappa \le 1\,,\\[10pt] \dfrac{c_2}{\ln(2 + 16 d^2)}\, d Y^2 & \text{if } \kappa > 1\,, \end{cases}$$

where $c_1, c_2 > 0$ are absolute constants. The infimum is taken over all forecasters $F$ and the supremum is taken over all sequences $(x_t, y_t)_{1 \le t \le T}$ such that $\|x_t\|_\infty \le X$ and $|y_t| \le Y$ for all $t = 1, \ldots, T$.

The above lower bound extends those of CB99AnalysisGradientBased ; KiWa97EGvsGD , which hold only for small values of $U$. The proof is postponed to Appendix A.1. We perform a reduction to the stochastic batch setting, via the standard online-to-batch conversion, and employ a version of a lower bound of Tsy-03-OptimalRates .

## 3 Adaptation to unknown X, Y and T via exponential weights

Although the proof of Theorem 1 already gives an algorithm that achieves the minimax regret, the latter takes as inputs $U$, $X$, $Y$, and $T$, and it is inefficient in high dimensions. In this section, we present a new method that achieves the minimax regret both efficiently and without prior knowledge of $X$, $Y$, and $T$, provided that $U$ is known. Adaptation to an unknown $U$ is considered in Section 4. Our method consists of modifying an underlying efficient linear regression algorithm such as the EG± algorithm KiWa97EGvsGD or the sequential ridge regression forecaster Vo01CompetitiveOnline ; AzWa01RelativeLossBounds . Below, we show that automatically tuned variants of the EG± algorithm nearly achieve the minimax regret in the regime $\kappa \le 1$. A similar modification could be applied to the ridge regression forecaster, without however retaining the same computational efficiency, to achieve a nearly optimal regret bound of the order of $d Y^2$ (up to logarithmic factors) in the regime $\kappa > 1$. The latter analysis is more technical and hence is omitted.

### 3.1 An adaptive EG± algorithm for general convex and differentiable loss functions

The second algorithm used in the proof of Theorem 1 is computationally inefficient because it aggregates the exponentially many experts of the discretization $\widetilde{B}_{U,m}$. In contrast, the EG± algorithm has a manageable computational complexity, linear in $d$ at each time $t$. Next we introduce a version of the EG± algorithm, called the adaptive EG± algorithm, that does not require prior knowledge of $X$, $Y$, and $T$ (as opposed to the original EG± algorithm of KiWa97EGvsGD ). This version relies on the automatic tuning of CeMaSt07SecOrder . We first present a generic version suited for general convex and differentiable loss functions. The application to the square loss and to other $\alpha$-losses will be dealt with in Sections 3.2 and 3.3.

The generic setting with arbitrary convex and differentiable loss functions corresponds to the online convex optimization setting Zin-03-GradientAscent ; ShShSrSr-09-StochasticConvexOptimization and unfolds as follows: at each time $t$, the forecaster chooses a point $\hat{u}_t \in B_1(U)$, then the environment chooses and reveals a convex and differentiable loss function $\ell_t : \mathbb{R}^d \to \mathbb{R}$, and the forecaster incurs the loss $\ell_t(\hat{u}_t)$. In online linear regression under the square loss, the loss functions are given by $\ell_t(u) = (y_t - u \cdot x_t)^2$.

The adaptive EG± algorithm for general convex and differentiable loss functions is defined in Figure 2. We denote by $(e_1, \ldots, e_d)$ the canonical basis of $\mathbb{R}^d$, by $\nabla \ell_t(\hat{u}_t)$ the gradient of $\ell_t$ at $\hat{u}_t$, and by $\nabla_j \ell_t(\hat{u}_t)$ the $j$-th component of this gradient. The adaptive EG± algorithm uses as a blackbox the exponentially weighted majority forecaster of CeMaSt07SecOrder on $2d$ experts, namely, the vertices $\pm U e_j$ of $B_1(U)$, as in KiWa97EGvsGD . It adapts to the unknown gradient amplitudes by the particular choice of $\eta_t$ due to CeMaSt07SecOrder and defined for all $t \ge 1$ by

$$\eta_t \;=\; \min\left\{ \frac{1}{\hat{E}_{t-1}},\;\; C \sqrt{\frac{\ln(2d)}{V_{t-1}}} \right\}\,, \tag{8}$$

where $C > 0$ is the absolute constant of CeMaSt07SecOrder and where we set, for all $t \ge 1$,

$$z^{+}_{j,s} \;\triangleq\; U\, \nabla_j \ell_s(\hat{u}_s) \quad\text{and}\quad z^{-}_{j,s} \;\triangleq\; -U\, \nabla_j \ell_s(\hat{u}_s)\,, \qquad j = 1, \ldots, d\,,\quad s = 1, \ldots, t\,,$$

$$\hat{E}_t \;\triangleq\; \inf_{k \in \mathbb{Z}} \left\{ 2^k : 2^k \ge \max_{1 \le s \le t}\; \max_{\substack{1 \le j, k \le d\\ \gamma, \mu \in \{+,-\}}} \bigl| z^{\gamma}_{j,s} - z^{\mu}_{k,s} \bigr| \right\}\,,$$

$$V_t \;\triangleq\; \sum_{s=1}^{t}\; \sum_{\substack{1 \le j \le d\\ \gamma \in \{+,-\}}} p^{\gamma}_{j,s} \left( z^{\gamma}_{j,s} - \sum_{\substack{1 \le k \le d\\ \mu \in \{+,-\}}} p^{\mu}_{k,s}\, z^{\mu}_{k,s} \right)^{2}\,.$$

Note that $\hat{E}_t$ approximates the range of the loss vectors $\bigl( z^{\gamma}_{j,s} \bigr)$ up to time $t$, while $V_t$ is the corresponding cumulative variance of the forecaster.
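A compact sketch of this scheme (our illustrative implementation; in particular, we plug in a generic value for the constant $C$, and the dyadic update of $\hat{E}_t$ is simplified): exponential weights are maintained over the $2d$ vertices $\pm U e_j$, and $\eta_t$ follows (8).

```python
import numpy as np

def adaptive_eg(grad, U, d, T, C=1.0):
    """Adaptive EG+- sketch: exponential weights over the 2d vertices {+-U e_j} of B_1(U).
    `grad(t, u)` must return the gradient of the convex loss l_t at u."""
    cum_z = np.zeros(2 * d)   # cumulative loss vectors (z^+_{j,s}, z^-_{j,s})
    V, E = 0.0, 1.0           # cumulative variance V_t and dyadic range estimate E_t
    eta = 1.0 / E
    iterates = []
    for t in range(T):
        w = np.exp(-eta * (cum_z - cum_z.min()))
        p = w / w.sum()
        u = U * (p[:d] - p[d:])              # u_t = sum_{j,gamma} p^gamma_{j,t} gamma U e_j
        iterates.append(u)
        g = grad(t, u)
        z = np.concatenate([U * g, -U * g])  # the z^+ and z^- coordinates
        while E < z.max() - z.min():         # smallest power of 2 above the observed range
            E *= 2.0
        V += p @ (z - p @ z) ** 2
        cum_z += z
        eta = min(1.0 / E, C * np.sqrt(np.log(2 * d) / max(V, 1e-12)))  # tuning rule (8)
    return np.array(iterates)
```

For instance, on the square-loss sequence $\ell_t(u) = (1 - u \cdot e_1)^2$ with $U = 1$, the iterates stay in $B_1(1)$ and drift towards the best vertex $+e_1$.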

###### Proposition 1 (The adaptive EG± algorithm for general convex and differentiable loss functions).

Let $U > 0$. Then, the adaptive EG± algorithm on $B_1(U)$ defined in Figure 2 satisfies, for all $T \ge 1$ and all sequences of convex and differentiable loss functions $\ell_1, \ldots, \ell_T$ (gradients can be replaced with subgradients if the loss functions are convex but not differentiable),

$$\sum_{t=1}^{T} \ell_t(\hat{u}_t) - \min_{u \in B_1(U)} \sum_{t=1}^{T} \ell_t(u) \;\le\; 4 U \sqrt{\ln(2d) \sum_{t=1}^{T} \bigl\| \nabla \ell_t(\hat{u}_t) \bigr\|_\infty^2} \;+\; c\, U \ln(2d) \max_{1 \le t \le T} \bigl\| \nabla \ell_t(\hat{u}_t) \bigr\|_\infty$$

for an absolute constant $c > 0$. In particular, if $\| \nabla \ell_t(\hat{u}_t) \|_\infty \le G$ for all $t$, the regret is bounded by $4 U G \sqrt{T \ln(2d)}$ up to an additive term of the order of $U G \ln(2d)$.

Proof: The proof follows straightforwardly from a linearization argument and from a regret bound of CeMaSt07SecOrder applied to appropriately chosen loss vectors. Indeed, first note that, by convexity and differentiability of the $\ell_t$, we get that

$$\sum_{t=1}^{T} \ell_t(\hat{u}_t) - \min_{u \in B_1(U)} \sum_{t=1}^{T} \ell_t(u) \;\le\; \max_{\substack{1 \le j \le d\\ \gamma \in \{+,-\}}} \sum_{t=1}^{T} \nabla \ell_t(\hat{u}_t) \cdot \bigl( \hat{u}_t - \gamma U e_j \bigr) \tag{9}$$

$$=\; \sum_{t=1}^{T}\; \sum_{\substack{1 \le j \le d\\ \gamma \in \{+,-\}}} p^{\gamma}_{j,t}\, \gamma U\, \nabla_j \ell_t(\hat{u}_t) \;-\; \min_{\substack{1 \le j \le d\\ \gamma \in \{+,-\}}} \sum_{t=1}^{T} \gamma U\, \nabla_j \ell_t(\hat{u}_t)\,, \tag{10}$$

where (9) follows by linearity of $u \mapsto \sum_{t=1}^{T} \nabla \ell_t(\hat{u}_t) \cdot (\hat{u}_t - u)$ on the polytope $B_1(U)$ (whose vertices are the points $\gamma U e_j$), and where (10) follows from the particular choice of $\hat{u}_t \triangleq \sum_{j, \gamma} p^{\gamma}_{j,t}\, \gamma U e_j$ in Figure 2.

To conclude the proof, note that our choices of the weight vectors $p_t = \bigl( p^{\gamma}_{j,t} \bigr)$ in Figure 2 and of the time-varying parameter $\eta_t$ in (8) correspond to the exponentially weighted average forecaster of (CeMaSt07SecOrder, Section 4.2) when it is applied to the loss vectors $\bigl( z^{\gamma}_{j,t} \bigr)_{1 \le j \le d,\, \gamma \in \{+,-\}}$, $t = 1, \ldots, T$. Since at time $t$ the coordinates of the last loss vector lie in an interval of length at most $\hat{E}_t$, we get from (CeMaSt07SecOrder, Corollary 1) that

$$\sum_{t=1}^{T}\; \sum_{\substack{1 \le j \le d\\ \gamma \in \{+,-\}}} p^{\gamma}_{j,t}\, \gamma U\, \nabla_j \ell_t(\hat{u}_t) \;-\; \min_{\substack{1 \le j \le d\\ \gamma \in \{+,-\}}} \sum_{t=1}^{T} \gamma U\, \nabla_j \ell_t(\hat{u}_t) \;\le\; 4 \sqrt{V_T \ln(2d)} + c\, \hat{E}_T \ln(2d)$$

for an absolute constant $c > 0$.

Substituting the last upper bound in (10) concludes the proof. ∎

### 3.2 Application to the square loss

In the particular case of the square loss $\ell_t(u) = (y_t - u \cdot x_t)^2$, the gradients are given by $\nabla \ell_t(u) = -2\, (y_t - u \cdot x_t)\, x_t$ for all $u \in \mathbb{R}^d$. Applying Proposition 1, we get the following regret bound for the adaptive EG± algorithm.
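As a quick sanity check of this gradient formula (an illustrative snippet of ours), a central finite difference recovers $\nabla \ell_t(u) = -2 (y_t - u \cdot x_t)\, x_t$ essentially exactly, the loss being quadratic in $u$:

```python
import numpy as np

rng = np.random.default_rng(0)
x, u = rng.normal(size=3), rng.normal(size=3)
y = 0.7
grad = -2.0 * (y - u @ x) * x            # closed-form gradient of (y - u.x)^2
eps = 1e-6
num = np.array([
    ((y - (u + eps * e) @ x) ** 2 - (y - (u - eps * e) @ x) ** 2) / (2.0 * eps)
    for e in np.eye(3)
])
assert np.allclose(grad, num, atol=1e-5)
```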

###### Corollary 2 (The adaptive EG± algorithm under the square loss).

Let $U > 0$. Consider the online linear regression setting defined in the introduction. Then, the adaptive EG± algorithm (see Figure 2), tuned with (8) and applied to the loss functions $\ell_t(u) = (y_t - u \cdot x_t)^2$, satisfies, for all individual sequences $(x_t, y_t)_{1 \le t \le T}$,

$$\sum_{t=1}^{T} (y_t - \hat{u}_t \cdot x_t)^2 - \min_{\|u\|_1 \le U} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \;\le\; 8 U X \sqrt{\left( \min_{\|u\|_1 \le U} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2 \right) \ln(2d)} + \bigl( 137 \ln(2d) + 24 \bigr) \bigl( U X Y + U^2 X^2 \bigr)$$

$$\le\; 8 U X Y \sqrt{T \ln(2d)} + \bigl( 137 \ln(2d) + 24 \bigr) \bigl( U X Y + U^2 X^2 \bigr)\,,$$

where the quantities $X$, $Y$, and $T$ are unknown to the forecaster.

Using the terminology of cesa-bianchi06prediction ; CeMaSt07SecOrder , the first bound of Corollary 2 is an improvement for small losses: it yields a small regret when the optimal cumulative loss $\min_{\|u\|_1 \le U} \sum_{t=1}^{T} (y_t - u \cdot x_t)^2$ is small. As for the second regret bound, it indicates that the adaptive EG± algorithm achieves approximately the regret bound of Theorem 1 in the regime $\kappa \le 1$, i.e., $U \le 2 d Y / (X \sqrt{T})$. In this regime, our algorithm thus has a manageable computational complexity (linear in $d$ at each time $t$) and it is adaptive in $X$, $Y$, and $T$.

In particular, the above regret bound is similar to that of the original EG± algorithm (KiWa97EGvsGD, Theorem 5.11), but it is obtained without prior knowledge of $X$, $Y$, and $T$. (By Theorem 5.11 of KiWa97EGvsGD , the original EG± algorithm satisfies a regret bound of the same form, stated in terms of a known upper bound on the optimal cumulative loss; our main regret term is larger by a small multiplicative factor, but, contrary to KiWa97EGvsGD , our algorithm does not require this prior knowledge.) Note also that this bound is similar to that of the self-confident $p$-norm algorithm of AuCeGe02Adaptive with $p$ of the order of $\ln d$ (see Section 1.2). The fact that we were able to get similar adaptivity and efficiency properties via exponential weighting corroborates the similarity that was already observed in a non-adaptive context between the original EG± algorithm and the $p$-norm algorithm (in the limit $p \to \infty$ with an appropriate initial weight vector, or for $p$ of the order of $\ln d$ with a zero initial weight vector, cf. Ge-03-pNorm ).

Proof (of Corollary 2): We apply Proposition 1 with the square loss $\ell_t(u) = (y_t - u \cdot x_t)^2$. It yields

$$\sum_{t=1}^{T} \ell_t(\hat{u}_t) - \min_{\|u\|_1 \le U} \sum_{t=1}^{T} \ell_t(u) \;\le\; 4 U \sqrt{\ln(2d) \sum_{t=1}^{T} \bigl\| \nabla \ell_t(\hat{u}_t) \bigr\|_\infty^2} + c\, U \ln(2d) \max_{1 \le t \le T} \bigl\| \nabla \ell_t(\hat{u}_t) \bigr\|_\infty\,. \tag{11}$$

Using the equality $\nabla \ell_t(u) = -2\, (y_t - u \cdot x_t)\, x_t$ for all $u \in \mathbb{R}^d$, we get that, on the one hand, by the upper bound $\|x_t\|_\infty \le X$,

$$\bigl\| \nabla \ell_t(\hat{u}_t) \bigr\|_\infty^2 \;\le\; 4 X^2\, \ell_t(\hat{u}_t)\,, \tag{12}$$

and, on the other hand, $\max_{1 \le t \le T} \bigl\| \nabla \ell_t(\hat{u}_t) \bigr\|_\infty \le 2 X (Y + U X)$ (indeed, by Hölder's inequality, $|\hat{u}_t \cdot x_t| \le \|\hat{u}_t\|_1 \|x_t\|_\infty \le U X$). Substituting the last two inequalities in (11), setting