# Efficient learning with robust gradient descent

Minimizing the empirical risk is a popular training strategy, but for learning tasks where the data may be noisy or heavy-tailed, one may require many observations in order to generalize well. To achieve better performance under less stringent requirements, we introduce a procedure which constructs a robust approximation of the risk gradient for use in an iterative learning routine. We provide high-probability bounds on the excess risk of this algorithm, by showing that it does not deviate far from the ideal gradient-based update. Empirical tests show that in diverse settings, the proposed procedure can learn more efficiently, using fewer resources (iterations and observations) while generalizing better.


## 1 Introduction

Any successful machine learning application depends both on procedures for reliable statistical inference, and a computationally efficient implementation of these procedures. This can be formulated using a risk $R(w) := \mathbb{E}_{\mu}\, l(w; z)$, induced by a loss $l$, where $w$ is the parameter (vector, function, set, etc.) to be specified, and expectation is taken with respect to $\mu$, namely the underlying data distribution. Given data $z_1, \ldots, z_n$, if an algorithm outputs $\hat{w}$ such that $R(\hat{w})$ is small with high probability over the random draw of the sample, this is formal evidence for good generalization, up to assumptions on the distribution. Performance-wise, the statistical side is important because $\mu$ is always unknown, and the method of implementation is important since the only $\hat{w}$ we ever have in practice is one we can actually compute.

Empirical risk minimization (ERM), which admits any minimizer of the empirical risk $\hat{R}(w) := \frac{1}{n}\sum_{i=1}^{n} l(w; z_i)$, is the canonical strategy for machine learning problems, and there exists a rich body of literature on its generalization ability [20, 4, 2, 5]. In recent years, however, some severe limitations of this technique have come to light. ERM can be implemented by numerous methods, but its performance is sensitive to this implementation [11, 13], showing sub-optimal guarantees on tasks as simple as multi-class pattern recognition, let alone tasks with unbounded losses. A related issue is highlighted in recent work by Lin and Rosasco [25], where we see that ERM implemented using a gradient-based method only has appealing guarantees when the data is distributed sharply around the mean in a sub-Gaussian sense. These results are particularly important due to the ubiquity of gradient descent (GD) and its variants in machine learning. They also carry the implication that ERM under typical implementations is liable to become highly inefficient whenever the data has heavy tails, requiring a potentially infinitely large sample to achieve a small risk. Since tasks with such “inconvenient” data are common [14], it is of interest to investigate and develop alternative procedures which can be implemented as readily as the GD-based ERM (henceforth, ERM-GD), but which have desirable performance for a wider class of learning problems. In this paper, we introduce and analyze an iterative routine which takes advantage of robust estimates of the risk gradient.

#### Review of related work

Here we review some of the technical literature related to our work. As mentioned above, the analysis of Lin and Rosasco [25] includes the generalization of ERM-GD for sub-Gaussian observations. ERM-GD provides a key benchmark to be compared against; it is of particular interest to find a technique that is competitive with ERM-GD when it is optimal, but which behaves better under less congenial data distributions. Other researchers have investigated methods for distribution-robust learning. One notable line of work looks at generalizations of the “median of means” procedure, in which one constructs candidates on disjoint partitions of the data, and aggregates them such that anomalous candidates are effectively ignored. These methods can be implemented and have theoretical guarantees, ranging from the one-dimensional setting [24, 29] to multi-dimensional and even functional models [28, 17, 23]. Their main limitation is practical: when the sample size $n$ is small relative to the complexity of the model, very few subsets can be created, and robustness is poor; conversely, when $n$ is large enough to make many candidates, cheaper and less sophisticated methods often suffice.

An alternative approach is to use all the observations to construct robust estimates $\hat{R}(w)$ of the risk $R(w)$ for each $w$ to be checked, and subsequently minimize $\hat{R}$ as a surrogate. An elegant strategy using M-estimates of $R$ was introduced by Brownlees et al. [6], based on fundamental results due to Catoni [7, 8]. While the statistical guarantees are near-optimal under very weak assumptions on the data, the proxy objective is defined implicitly, introducing many computational roadblocks. In particular, even if the loss is convex, the estimate $\hat{R}$ need not be, and the non-linear optimization required by this method can be both unstable and costly in high dimensions.

Finally, conceptually the closest recent work to our research is that also analyzing novel “robust gradient descent” algorithms, namely steepest descent procedures which utilize a robust estimate of the gradient vector of the underlying (unknown) objective of interest. The first works in this line are due to Holland and Ikeda [16] (a preliminary version of our work) and Chen et al. [9] (later updated as Chen et al. [10]), which appeared as pre-prints almost simultaneously. While the problem setting of Chen et al. [10] and the technical approach to robustification are completely different from ours, the underlying motivation of replacing the empirical mean gradient estimate with a more robust alternative is shared. We utilize an M-estimator of the gradient coordinates which can be approximated using fixed-point iterative updates. On the other hand, Chen et al. [9] utilize the geometric median to robustly aggregate multiple candidates constructed on subsets after partitioning the data. They consider a federated learning setting with many low-cost machines susceptible to arbitrarily bad performance, running in parallel, and provide rigorous learning guarantees within that problem setting. We on the other hand consider a single learning machine, with potentially heavy-tailed data, within a general risk-minimization framework. While the theoretical guarantees are not directly comparable, the dependence on sample size $n$, confidence $\delta$, and dimension $d$ is essentially the same, up to minor differences in log factors. The key advantage of our approach is the ease of computation. While the geometric median used by Chen et al. [10] can indeed be computed using well-known iterative routines [38], these suffer from substantial overhead in computing pairwise distances over all partitions at each iteration, and as mentioned above in reference to the work of Minsker [28] and Hsu and Sabato [17], can run into significant bias when the number of partitions cannot be made large enough.
A more recent entry into this line of research comes from Prasad et al. [33], who follow the exact same strategy as Chen et al. [10], but consider a more general learning setting, very close to the general setting of our paper. They also provide new results for several concrete models under heavy-tailed data, although the practical weaknesses of their procedure are exactly the same as those inherent in the procedure of Chen et al. [10].

#### Our contributions

To deal with these limitations of ERM-GD and its existing robust alternatives, the key idea here is to use robust estimates of the risk gradient, rather than the risk itself, and to feed these estimates into a first-order steepest descent routine. In doing so, at the cost of minor computational overhead, we get formal performance guarantees for a wide class of data distributions, while enjoying the computational ease of a gradient descent update. Our main contributions:

• A learning algorithm which addresses the vulnerabilities of ERM-GD, is easily implemented, and can be adapted to stochastic sub-sampling for big problems.

• High-probability bounds on excess risk of this procedure, which hold under mild moment assumptions on the data distribution, and suggest a promising general methodology.

• Using both tightly controlled simulations and real-world benchmarks, we compare our routine with ERM-GD and other cited methods, obtaining results that reinforce the practical utility and flexibility suggested by the theory.

#### Content overview

In section 2, we introduce the key components of the proposed algorithm, and provide an intuitive example meant to highlight the learning principles taken advantage of. Theoretical analysis of algorithm performance is given in section 3, including a sketch of the proof technique and discussion of the main results. Empirical analysis follows in section 4, in which we elucidate both the strengths and limits of the proposed procedure, through a series of tightly controlled numerical tests. Finally, concluding remarks and a look ahead are given in section 5. Proofs and extra information regarding computation are given in appendix A. Additional empirical test results are provided in appendix B.

## 2 Robust gradient descent

Before introducing the proposed algorithm in more detail, we motivate the practical need for a procedure which deals with the weaknesses of the traditional sample-mean-based gradient descent strategy.

### 2.1 Why robustness?

Recall that since ERM admits any minimizer of $\hat{R}$, the simplest implementation of gradient descent (for $t \ge 0$) results in the update

$$\hat{w}_{(t+1)} = \hat{w}_{(t)} - \alpha_{(t)} \frac{1}{n} \sum_{i=1}^{n} l'(\hat{w}_{(t)}; z_i) \tag{1}$$

where $\alpha_{(t)} > 0$ are scaling parameters. Taking the derivative under the integral, we have $\nabla R(w) = \mathbb{E}_{\mu}\, l'(w; z)$, meaning ERM-GD uses the sample mean as an estimator of each coordinate of the risk gradient, in pursuit of a solution minimizing the unknown $R$. Without rather strong assumptions on the tails and moments of the distribution of $l'(w; z)$ for each $w$, it has become well-known that the sample mean fails to provide sharp estimates [8, 28, 12, 27]. Intuitively, the issue is that we expect bad gradient estimates to imply bad approximate minima. Does this formal sub-optimality indeed manifest itself in natural settings? Can principled modifications improve performance at a tolerable cost?

A simple example suggests affirmative answers to both questions. The plot on the left of Figure 1 shows contour lines of a strongly convex quadratic risk to be minimized, as well as the trajectory of 10 iterations of ERM-GD, given four independent samples from a common distribution, initiated at a common $\hat{w}_{(0)}$. With data $z = (x, y)$ generated by a linear model $y = \langle w^{\ast}, x \rangle + \varepsilon$, losses take the squared-error form $l(w; z) = (y - \langle w, x \rangle)^2/2$. We consider the case where the “noise” $\varepsilon$ is heavy-tailed (log-Normal). Half of the samples saw relatively good solutions after ten iterations, and half saw rather stark deviation from the optimal procedure. When the sample contains errant observations, the empirical mean estimate is easily influenced by such points.

To deal with this, a classical idea is to re-weight the observations in a principled manner, and then carry out gradient descent as normal. That is, in the gradient estimate of (1), we replace the summands $l'(w; z_i)$ with weighted analogues $\omega_i\, l'(w; z_i)$, where $\omega_i \ge 0$ and $\sum_{i=1}^{n} \omega_i = 1$. For example, we could set

$$\omega_i := \frac{\tilde{\omega}_i}{\sum_{k=1}^{n} \tilde{\omega}_k}, \qquad \tilde{\omega}_i := \frac{\psi(\langle w, x_i \rangle - y_i)}{\langle w, x_i \rangle - y_i}$$

where $\psi$ is an odd function of sigmoid form (see A.1 and A.3). The idea is that for observations which induce inordinately large errors, the weight $\omega_i$ will be correspondingly small, reducing the impact. In the right-hand plot of Figure 1, we give analogous results for this procedure, run under the exact same settings as ERM-GD above. The modified procedure at least appears to be far more robust to random idiosyncrasies of the sample; indeed, if we run many trials, the average risk is far better than that of the ERM-GD procedure, and the variance smaller. The fragility observed here appeared in an elementary low-dimensional setting with ample data; it follows a fortiori that we can only expect things to get worse for ERM-GD in higher dimensions and under smaller samples. In what follows, we develop a robust gradient-based minimization method based directly on the principles illustrated here.
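To make the reweighting concrete, the following is a minimal sketch for the squared-error linear regression example above, assuming the Gudermannian function as the sigmoid-type $\psi$; the step size, sample size, and noise parameters are illustrative, not those used in Figure 1.

```python
import numpy as np

def psi(u):
    # Gudermannian function: odd, sigmoid-shaped, psi(u) ~ u near zero,
    # saturating at +/- pi/2 for large |u|.
    return 2.0 * np.arctan(np.tanh(u / 2.0))

def reweighted_grad(w, X, y):
    """Gradient of the squared error with weights omega_i = psi(r_i)/r_i, normalized."""
    r = X @ w - y                                   # residuals <w, x_i> - y_i
    r_safe = np.where(np.abs(r) < 1e-12, 1e-12, r)  # psi(u)/u -> 1 as u -> 0
    wt = psi(r_safe) / r_safe
    wt = wt / wt.sum()                              # normalize so the weights sum to one
    return X.T @ (wt * r)                           # sum_i omega_i * l'(w; z_i)

rng = np.random.default_rng(0)
n, d = 500, 2
X = rng.normal(size=(n, d))
w_star = np.ones(d)
sig = 1.25                                          # log-scale of the noise (illustrative)
noise = rng.lognormal(0.0, sig, size=n) - np.exp(sig**2 / 2)  # centered, heavy-tailed
y = X @ w_star + noise

w = np.zeros(d)
for _ in range(200):
    w -= 1.0 * reweighted_grad(w, X, y)             # normalized weights absorb the 1/n
print(np.round(w, 2))
```

Replacing the weights with the constant $\omega_i = 1/n$ recovers plain ERM-GD, which tends to reproduce the qualitative contrast of Figure 1: stable versus occasionally errant trajectories.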

### 2.2 Outline of proposed procedure

Were the risk $R$ itself known, we could update using

$$w^{\ast}_{(t+1)} := w^{\ast}_{(t)} - \alpha_{(t)}\, g(w^{\ast}_{(t)}) \tag{2}$$

where $g(w) := \nabla R(w)$, an idealized procedure. Any learning algorithm in practice will not have access to $R$ or $g$, and thus must approximate this update with

$$\hat{w}_{(t+1)} := \hat{w}_{(t)} - \alpha_{(t)}\, \hat{g}(\hat{w}_{(t)}), \tag{3}$$

where $\hat{g}$ represents some sample-based estimate of $g$. Setting $\hat{g}$ to the sample mean of the loss gradients reduces to ERM-GD, and conditioned on the sample, this estimate is unbiased, a property used throughout the literature [34, 22, 19, 35, 15, 30]. While convenient from a technical standpoint, there is no conceptual necessity for $\hat{g}$ to be unbiased. More realistically, as long as $\hat{g}$ is sharply distributed around $g$, then an approximate first-order procedure should not deviate too far from the ideal, even if these estimators are biased. An outline of such a routine is given in Algorithm 1.
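The skeleton of such an approximate first-order routine might look as follows; this is a sketch with names of our own choosing, not the paper's Algorithm 1 itself. Passing the sample mean as the estimator recovers ERM-GD, while any robust estimator can be swapped in.

```python
import numpy as np

def robust_gd(grads, w0, step=0.1, T=50, estimate=None):
    """Iterate the approximate update (3): w <- w - alpha * g_hat(w).

    grads(w) returns an (n, d) array of per-observation loss gradients
    l'(w; z_i); `estimate` collapses it into a d-vector g_hat(w).
    """
    if estimate is None:
        estimate = lambda G: G.mean(axis=0)  # sample mean: recovers ERM-GD
    w = np.array(w0, dtype=float)
    for _ in range(T):
        w = w - step * estimate(grads(w))
    return w

# Toy usage: noiseless least squares, where l'(w; z_i) = (<w, x_i> - y_i) x_i.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5])
grads = lambda w: (X @ w - y)[:, None] * X
w_hat = robust_gd(grads, np.zeros(3), step=0.5, T=200)
print(np.round(w_hat, 4))
```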

Let us flesh out the key sub-routines used in a single iteration, for the $d$-dimensional case. When the data is prone to outliers, a “soft” truncation of errant values is a prudent alternative to discarding valuable data. This can be done systematically using a convenient class of M-estimators of location and scale [37, 18]. The locate sub-routine entails taking a convex, even function $\rho$, and for each coordinate $j$, computing the location estimate $\hat{\theta}_j$ as

$$\hat{\theta}_j \in \operatorname*{arg\,min}_{\theta \in \mathbb{R}} \sum_{i=1}^{n} \rho\!\left(\frac{l'_j(w; z_i) - \theta}{s_j}\right), \quad j = 1, \ldots, d. \tag{4}$$

Note that if $\rho(u) = u^2$, then $\hat{\theta}_j$ reduces to the sample mean of the $l'_j(w; z_i)$; thus, to reduce the impact of extreme observations, it is useful to take $\rho$ such that $\rho(u) = o(u^2)$ as $|u| \to \infty$. Here the scaling factors $s_j > 0$ are used to ensure that consistent estimates take place irrespective of the order of magnitude of the observations. We set the scaling factors in two steps. First is rescale, in which a rough dispersion estimate of the data is computed for each $j$ using

$$\hat{\sigma}_j \in \left\{ \sigma > 0 : \sum_{i=1}^{n} \chi\!\left(\frac{l'_j(w; z_i) - \gamma_j}{\sigma}\right) = 0 \right\}. \tag{5}$$

Here $\chi$ is an even function, satisfying $\chi(0) < 0$ and $\chi(u) > 0$ as $u \to \pm\infty$, to ensure that the resulting $\hat{\sigma}_j$ is an adequate measure of the dispersion of $l'_j(w; z)$ about a pivot point $\gamma_j$. Second, we adjust this estimate based on the available sample size and desired confidence level, as

$$s_j = \hat{\sigma}_j \sqrt{\frac{n}{\log(2\delta^{-1})}} \tag{6}$$

where $\delta \in (0,1)$ specifies the desired confidence level $1 - \delta$, and $n$ is the sample size. This last step appears rather artificial, but can be derived from a straightforward theoretical argument, given in section 3.1. This concludes all the steps in one full iteration of Algorithm 1. (For concreteness, in all empirical tests to follow we use the Gudermannian function [1] to construct $\rho$. General conditions on $\rho$ and $\chi$, as well as standard methods for computing the M-estimates $\hat{\theta}_j$ and $\hat{\sigma}_j$, are given in appendix A.1.)
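A sketch of the rescale and locate sub-routines for a single coordinate follows, under illustrative assumptions: $\psi = \rho'$ is taken to be the Gudermannian function as in the parenthetical above, $\chi$ is a simple truncated-quadratic form with a rough calibration constant (the analysis only requires the scale estimates to be moderately accurate), and (4) is solved by the standard fixed-point iteration for M-estimates of location.

```python
import numpy as np

def psi(u):
    # Gudermannian function: odd, bounded, increasing, with psi'(0) = 1.
    return 2.0 * np.arctan(np.tanh(u / 2.0))

def rescale(x, pivot, c=9.0, beta=1.0, iters=30):
    """Dispersion about `pivot` via chi(u) = min(u^2, c) - beta, cf. eq. (5).

    beta = 1.0 is a rough calibration (assumption); the text only asks for
    moderately accurate scale estimates.
    """
    sigma = np.std(x) + 1e-12                    # crude initial guess
    for _ in range(iters):
        u = (x - pivot) / sigma
        sigma *= np.sqrt(np.mean(np.minimum(u**2, c)) / beta)
    return sigma

def locate(x, s, iters=30):
    """M-estimate of location, eq. (4), via fixed-point iteration."""
    theta = np.median(x)                         # robust initial guess
    for _ in range(iters):
        theta += s * np.mean(psi((x - theta) / s))
    return theta

def robust_mean(x, delta=0.01):
    sigma = rescale(x, pivot=np.median(x))       # rescale step
    s = sigma * np.sqrt(x.size / np.log(2.0 / delta))  # eq. (6)
    return locate(x, s)                          # locate step

rng = np.random.default_rng(2)
x = rng.standard_t(df=2.5, size=2000)            # heavy-tailed, true mean 0
print(robust_mean(x))
```

In Algorithm 1 this pair of steps would be applied coordinate-wise to the loss gradients $l'_j(w; z_i)$ rather than to raw data.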

In the remainder of this paper, we shall investigate the learning properties of this procedure, through analysis of both a theoretical (section 3) and empirical (section 4) nature. As an example, in the strongly convex risk case, our formal argument yields excess risk bounds of the form

$$R(\hat{w}_{(T)}) - R^{\ast} \le O\!\left(\frac{d\left(\log(d\delta^{-1}) + d\log(n)\right)}{n}\right) + O\!\left((1 - \alpha\beta)^{T}\right)$$

with probability no less than $1 - \delta$, for a small enough fixed step size $\alpha$ over $T$ iterations. Here $\beta$ is a constant that depends only on the risk $R$, and analogous results hold without strong convexity. Of the underlying distribution, all that is assumed is a bound on the variance of $l'(w; z)$, suggesting formally that the procedure should be competitive over a diverse range of data distributions.

## 3 Theoretical analysis

Here we analyze the performance of Algorithm 1 on a hypothesis class $\mathcal{W}$, as measured by the risk achieved, which we estimate using upper bounds that depend on key parameters of the learning task. A general sketch is given, followed by some key conditions, representative results, and discussion. All proofs are relegated to appendix A.2.

#### Notation

For integer $k$, write $[k] := \{1, \ldots, k\}$ for the positive integers from $1$ to $k$. Let $\mu$ denote the data distribution, with $z_1, \ldots, z_n$ independent observations from $\mu$, and $z \sim \mu$ an independent copy. Risk is then $R(w) := \mathbb{E}_{\mu}\, l(w; z)$, its gradient $g(w) := \nabla R(w)$, and $R^{\ast} := \inf_{w \in \mathcal{W}} R(w)$. $\mathbb{P}$ denotes a generic probability measure, typically the product measure induced by the sample. We write $\|\cdot\|$ for the usual ($\ell_2$) norm on $\mathbb{R}^d$. For a function $F$ on $\mathbb{R}^d$ with partial derivatives defined, write the gradient as $F'(u) := (F'_1(u), \ldots, F'_d(u))$, where for short we write $F'_j(u) := \partial F(u)/\partial u_j$.

### 3.1 Sketch of the general argument

The analysis here requires only two steps: (i) a good gradient estimate $\hat{g} \approx g$ implies that the approximate update (3) is near the optimal update (2); (ii) under variance bounds, coordinate-wise M-estimation yields such a good gradient estimate. We are then able to conclude that with enough samples and iterations, the output of Algorithm 1 can achieve an arbitrarily small excess risk. Here we spell out the key facts underlying this approach.

For the first step, let $w^{\ast}$ be a minimizer of $R$. When the risk is strongly convex, then using well-established convex optimization theory [31], we can easily control $\|w^{\ast}_{(t+1)} - w^{\ast}\|$ as a function of $\|w^{\ast}_{(t)} - w^{\ast}\|$ for any step $t$. Thus to control $\|\hat{w}_{(t+1)} - w^{\ast}\|$, in comparing the approximate case and the optimal case, all that matters is the difference between $\hat{g}(\hat{w}_{(t)})$ and $g(\hat{w}_{(t)})$ (Lemma 4). For the general case of convex $R$, since we cannot easily control the distance of the optimal update from any potential minimum, one can directly compare the trajectories of $\hat{w}_{(t)}$ and $w^{\ast}_{(t)}$ over $t = 0, \ldots, T$, which once again amounts to a comparison of $\hat{g}$ and $g$. This inevitably leads to more error propagation and thus a stronger dependence on $T$, but the essence of the argument is identical to the strongly convex case.

For the second step, since both $\hat{g}$ and the iterates $\hat{w}_{(t)}$ are based on a random sample $z_1, \ldots, z_n$, we need an estimation technique which admits guarantees for any step, with high probability over the random draw of this sample. A basic requirement is that

$$\mathbb{P}\left\{ \max_{t \le T} \left\| \hat{g}(\hat{w}_{(t)}) - g(\hat{w}_{(t)}) \right\| \le \varepsilon \right\} \ge 1 - \delta. \tag{7}$$

Of course this must be proved (see Lemmas 3 and 8), but if valid, then running Algorithm 1 for $T$ steps, we can invoke (7) to get a high-probability event on which the output closely approximates the optimal GD output, up to the accuracy specified by $\varepsilon$. Naturally this accuracy depends on the confidence level: to retain $1 - \delta$ confidence as $\delta$ gets smaller, the upper bound $\varepsilon$ in (7) must increase.

In the locate sub-routine of Algorithm 1, we construct a more robust estimate of the risk gradient than can be provided by the empirical mean, using an ancillary estimate of the gradient variance. This is conducted using a smooth truncation scheme, as follows. One important property of $\rho$ in (4) is that its derivative satisfies, for any $u \in \mathbb{R}$,

$$-\log(1 - u + Cu^2) \le \rho'(u) \le \log(1 + u + Cu^2) \tag{8}$$

for a fixed constant $C > 0$, a simple generalization of the key property utilized by Catoni [8]. For the Gudermannian function (section 2 parenthetical), this holds with a modest value of $C$, with the added benefit that $\rho'$ is bounded and increasing. As to the quality of these estimates, note that they are distributed sharply around the risk gradient, as follows.

###### Lemma 1 (Concentration of M-estimates).

For each coordinate $j \in [d]$, the estimates $\hat{\theta}_j$ of (4) satisfy

$$\frac{1}{2}\left|\hat{\theta}_j - g_j(w)\right| \le \frac{C \operatorname{var}_{\mu} l'_j(w; z)}{s_j} + \frac{s_j \log(2\delta^{-1})}{n} \tag{9}$$

with probability no less than $1 - \delta$, given large enough $n$ and $s_j$.

To get the tightest possible confidence interval as a function of $s_j$, we must set

$$s_j^2 = \frac{C\, n \operatorname{var}_{\mu} l'_j(w; z)}{\log(2\delta^{-1})},$$

from which we derive (6), with $\hat{\sigma}_j$ corresponding to a computable estimate of the scaled standard deviation $\sqrt{C \operatorname{var}_{\mu} l'_j(w; z)}$. If the variance over all choices of $w \in \mathcal{W}$ is bounded by some $V < \infty$, then up to the accuracy of the variance estimates, plugging these settings into (9) gives a coordinate-wise deviation bound of order $\sqrt{V \log(2\delta^{-1})/n}$, yielding a bound for (7) free of $w$.

###### Remark 2 (Comparison with ERM-GD).

As a reference example, assume we were to run ERM-GD, namely using an empirical mean estimate of the gradient. Using Chebyshev’s inequality, with probability $1 - \delta$ all we can guarantee is a deviation bound of order $\sqrt{V/(n\delta)}$. On the other hand, using the location estimate of Algorithm 1 provides guarantees with $\log(2\delta^{-1})$ dependence on the confidence level, realizing an exponential improvement over the $\delta^{-1/2}$ dependence of ERM-GD, and an appealing formal motivation for using M-estimates of location as a novel strategy.
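This gap can be seen in a small simulation comparing the sample mean with a Catoni-style soft-truncated mean; the influence function below, $\psi(u) = \operatorname{sign}(u)\log(1 + |u| + u^2/2)$, satisfies property (8) with $C = 1/2$, while the Pareto noise and trial counts are arbitrary illustrative choices.

```python
import numpy as np

def catoni_mean(x, delta=0.01):
    """Catoni-style M-estimate; a one-coordinate stand-in for locate."""
    s = (np.std(x) + 1e-12) * np.sqrt(x.size / np.log(2.0 / delta))
    theta = np.median(x)
    for _ in range(30):
        u = (x - theta) / s
        # psi(u) = sign(u) * log(1 + |u| + u^2/2), cf. property (8)
        theta += s * np.mean(np.sign(u) * np.log1p(np.abs(u) + u**2 / 2.0))
    return theta

rng = np.random.default_rng(3)
trials, n = 400, 200
true_mean = 1.0 / 1.1                 # mean of numpy's pareto(a) draw is 1/(a-1)
errs_mean = np.empty(trials)
errs_cat = np.empty(trials)
for t in range(trials):
    x = rng.pareto(2.1, size=n) - true_mean   # heavy-tailed, centered at zero
    errs_mean[t] = abs(np.mean(x))
    errs_cat[t] = abs(catoni_mean(x))
print(errs_mean.max(), errs_cat.max())        # worst-case deviations over trials
```

The worst-case deviation of the sample mean over many trials reflects the polynomial $\delta$-dependence, while the truncated estimate stays comparatively tame.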

### 3.2 Conditions and results

On the learning task, we make the following assumptions.

1. Minimize the risk $R$ over a closed, convex set $\mathcal{W}$ with diameter $\Delta < \infty$.

2. The risk $R$, and the loss $l(\cdot; z)$ for all $z$, are $\lambda$-smooth, convex, and continuously differentiable on $\mathcal{W}$.

3. There exists a $w^{\ast} \in \mathcal{W}$ at which $g(w^{\ast}) = 0$.

4. The distribution $\mu$ satisfies $\operatorname{var}_{\mu} l'_j(w; z) \le V < \infty$ for all $w \in \mathcal{W}$ and $j \in [d]$.

Algorithm 1 is run following (4), (5), and (6) as specified in section 2. For rescale, the choice of $\chi$ is only important insofar as the scale estimates (the $\hat{\sigma}_j$) should be moderately accurate. To make the dependence on this accuracy precise, take constants $c_{\min}, c_{\max} > 0$ such that

$$c_{\min}^2 \le \frac{\hat{\sigma}_j^2}{\operatorname{var}_{\mu} l'_j(w; z)} \le c_{\max}^2, \quad j \in [d] \tag{10}$$

for all choices of $w \in \mathcal{W}$, and write $c_0$ for a constant depending only on $c_{\min}$ and $c_{\max}$. For $1 - \delta$ confidence, we need a large enough sample; more precisely, it is sufficient if for each $j \in [d]$,

$$\frac{1}{4} \ge \frac{C \log(2\delta^{-1})}{n}\left(1 + \frac{C \operatorname{var}_{\mu} l'_j(w; z)}{\hat{\sigma}_j^2}\right). \tag{11}$$

For simplicity, fix a small enough step size,

$$\alpha_{(t)} = \alpha, \quad \forall\, t \in \{0, \ldots, T-1\}, \qquad \alpha \in (0, 2/\lambda). \tag{12}$$

Dependence on initialization is captured by two related factors: the initial excess risk $R_0 := R(\hat{w}_{(0)}) - R^{\ast}$, and the initial distance $D_0 := \|\hat{w}_{(0)} - w^{\ast}\|$. Under this setup, we can control the estimation error.

###### Lemma 3 (Uniform accuracy of gradient estimates).

For all steps $t = 0, \ldots, T-1$ of Algorithm 1, we have

$$\left\| \hat{\theta}_{(t)} - g(\hat{w}_{(t)}) \right\| \le \frac{\tilde{\varepsilon}}{\sqrt{n}}, \qquad \tilde{\varepsilon} := \lambda(\sqrt{d} + 1) + 2 c_0 \sqrt{d V \left( \log(2d\delta^{-1}) + d \log\!\left(3\Delta\sqrt{n}/2\right) \right)}$$

with probability no less than $1 - \delta$.

#### Under strongly convex risk

In addition to the assumptions of section 3.2, assume that $R$ is $\beta$-strongly convex. In this case, the $w^{\ast}$ of assumption 3 is the unique minimum. First, we control the estimation error by showing that the approximate update (3) does not differ much from the optimal update (2).

###### Lemma 4 (Minimizer control).

Consider the general approximate GD update (3), run with a fixed step size $\alpha_{(t)} = \alpha$ small enough that the ideal update (2) is a contraction. Assume that (7) holds with bound $\varepsilon$. Write $D_0 := \|\hat{w}_{(0)} - w^{\ast}\|$. Then, with probability no less than $1 - \delta$, we have

$$\left\| \hat{w}_{(T)} - w^{\ast} \right\| \le (1 - \alpha\beta)^{T/2} D_0 + \frac{2\varepsilon}{\beta}.$$

Since Algorithm 1 indeed satisfies (7), as proved in Lemma 3, we can use the control over the parameter deviation provided by Lemma 4 and the smoothness of $R$ to prove a finite-sample excess risk bound.

###### Theorem 5 (Excess risk bounds).

Write $\hat{w}_{(T)}$ for the output of Algorithm 1 after $T$ iterations, run such that (11)–(12) hold, with step size $\alpha$ as in Lemma 4. It follows that

$$R(\hat{w}_{(T)}) - R^{\ast} \le \lambda (1 - \alpha\beta)^{T} D_0^2 + \frac{4\lambda\tilde{\varepsilon}^{2}}{\beta^{2} n}$$

with probability no less than $1 - \delta$, where $\tilde{\varepsilon}$ is as given in Lemma 3.

###### Remark 6 (Interpretation of bounds).

There are two terms in the upper bound of Theorem 5: an optimization term decreasing in $T$, and an estimation term decreasing in $n$. The optimization error decreases at the usual gradient descent rate, and due to the uniformity of the bounds obtained, the statistical error is not hurt by taking $T$ arbitrarily large; thus with enough samples we can guarantee arbitrarily small excess risk. Finally, the most important assumption on the distribution is weak: finite second-order moments. If we assume finite kurtosis, the argument of Catoni [8] can be used to create analogous guarantees for an explicit scale estimation procedure, yielding guarantees whether the data is sub-Gaussian or heavy-tailed, an appealing robustness to the data distribution.

###### Remark 7 (Doing projected descent).

The above analysis proceeds on the premise that $\hat{w}_{(t)} \in \mathcal{W}$ holds after all the updates, $t \in [T]$. To enforce this, a standard variant of Algorithm 1 is to update as

$$\hat{w}_{(t+1)} \leftarrow \pi_{\mathcal{W}}\left( \hat{w}_{(t)} - \alpha_{(t)}\, \hat{\theta}_{(t)} \right), \quad t \in \{0, \ldots, T-1\}$$

where $\pi_{\mathcal{W}}(u) := \operatorname*{arg\,min}_{v \in \mathcal{W}} \|u - v\|$. Since $\mathcal{W}$ is closed and convex by assumption, this projection is well-defined [26, Sec. 3.12, Thm. 3.12]. Using the fact that the projection is nonexpansive, i.e., $\|\pi_{\mathcal{W}}(u) - \pi_{\mathcal{W}}(v)\| \le \|u - v\|$ for all $u, v$, we can immediately show that Lemma 4 holds for the projected robust gradient descent version of Algorithm 1.
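As a concrete instance, suppose $\mathcal{W}$ is a Euclidean ball, for which $\pi_{\mathcal{W}}$ has a closed form; the radius and test points below are arbitrary illustrative choices.

```python
import numpy as np

def project_ball(u, center, radius):
    """Euclidean projection pi_W onto W = {v : ||v - center|| <= radius}."""
    diff = u - center
    dist = np.linalg.norm(diff)
    if dist <= radius:
        return u.copy()
    return center + (radius / dist) * diff

def projected_step(w, theta_hat, alpha, center, radius):
    """One projected update: w <- pi_W(w - alpha * theta_hat), cf. Remark 7."""
    return project_ball(w - alpha * theta_hat, center, radius)

# Nonexpansiveness, the property used to carry Lemma 4 over:
# ||pi_W(u) - pi_W(v)|| <= ||u - v||.
rng = np.random.default_rng(4)
u = 5.0 * rng.normal(size=3)
v = 5.0 * rng.normal(size=3)
pu = project_ball(u, np.zeros(3), 1.0)
pv = project_ball(v, np.zeros(3), 1.0)
print(np.linalg.norm(pu - pv) <= np.linalg.norm(u - v) + 1e-12)
```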

#### With prior information

An interesting concept in machine learning is that of the relationship between learning efficiency and the task-related prior information available to the learner. In the previous results, the learner is assumed to have virtually no information beyond the data available and the ability to set a small enough step size. What if, for example, just the gradient variance were known? A classic example from decision theory is the dominance of the James-Stein estimator over the maximum likelihood estimator, in multivariate Normal mean estimation using prior variance information. In our more modern and non-parametric setting, the impact of rough, data-driven scale estimates was made explicit by the factor $c_0$. Here we give complementary results that show how partial prior information on the distribution can improve learning.

###### Lemma 8 (Accuracy with variance information).

Condition on $\hat{w}_{(t)}$ and run one scale-location sequence of Algorithm 1, with the scale settings modified to make direct use of the known gradient variances, for each $j \in [d]$. It follows that

$$\left\| \hat{\theta}_{(t)} - g(\hat{w}_{(t)}) \right\| \le 4\left( \frac{C \operatorname{tr}(\Sigma_{(t)}) \log(2d\delta^{-1})}{n} \right)^{1/2}$$

with probability no less than $1 - \delta$, where $\Sigma_{(t)}$ denotes the covariance matrix of $l'(\hat{w}_{(t)}; z)$.

One would expect that with sharp gradient estimates, the variance of the updates should be small with a large enough sample. Here we show that the procedure stabilizes quickly as the estimates get closer to an optimum.

###### Theorem 9 (Control of update variance).

Run Algorithm 1 as in Lemma 8, with arbitrary step sizes $\alpha_{(t)}$. Then, for any $t$, taking expectation with respect to the sample $z_1, \ldots, z_n$, conditioned on $\hat{w}_{(t)}$, we have

$$\mathbb{E}\left\| \hat{w}_{(t+1)} - \hat{w}_{(t)} \right\|^2 \le 2\alpha_{(t)}^2 \left( \frac{32\, C\, d \operatorname{tr}(\Sigma_{(t)})}{n} + \left\| g(\hat{w}_{(t)}) \right\|^2 \right).$$

In addition to these results, one can prove an improved version of Theorem 5 in a perfectly analogous fashion, using Lemma 8.

## 4 Empirical analysis

The chief goal of our experiments is to elucidate the relationship between factors of the learning task (e.g., sample size, model dimension, initial value, underlying data distribution) and the behavior of the robust gradient procedure proposed in Algorithm 1. We are interested in how these factors influence performance, both in an absolute sense and relative to the key competitors cited in section 1.

We have carried out three classes of experiments. The first considers a concrete risk minimization task given noisy function observations, and takes an in-depth look at how each experimental factor influences algorithm behavior, in particular the trajectory of performance over time (as we iterate). Second is an application of the proposed algorithm to the corresponding regression task under a large variety of data distributions, meant to rigorously evaluate the practical utility and robustness in an agnostic learning setting. Finally, we consider applications to classification tasks using real-world data sets.

### 4.1 Controlled tests

#### Experimental setup

Our first set of controlled numerical experiments uses a “noisy convex minimization” model, designed as follows. We construct a risk function taking a canonical quadratic form, determined by a pre-fixed positive definite matrix, shift vector, and constant offset, so that $R$ is strongly convex. The task is to minimize $R$ without knowledge of $R$ itself, but rather only access to random function observations. These are generated independently from a common distribution whose mean at each $w$ coincides with $R(w)$; concretely, each observation is the true risk value corrupted by additive noise, with the noise terms independent of each other. The inputs follow an isotropic $d$-dimensional Gaussian distribution throughout all the following experiments.

We consider three main performance metrics in this section: the average excess empirical risk (based on the observed losses), the average excess risk (based on the true risk $R$), and the variance of the risk. Averages and variances are computed over independent trials, with each trial corresponding to a new random sample. For all tests, the number of trials is 250.

For these first tests, we run three procedures. First is ideal gradient descent, denoted oracle, which has access to the true objective function $R$. This corresponds to (2). Second, as a standard approximate procedure (3) when $R$ is unknown, we use ERM-GD, denoted erm and discussed at the start of section 2, which approximates the optimal procedure using the empirical risk. Against these two benchmarks, we compare our Algorithm 1, denoted rgd, as a robust alternative for (3).
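A stylized, miniature version of this comparison can be scripted as follows. The paper's experiments observe noisy function values; here, for brevity, we perturb the per-observation gradients directly with centered log-Normal noise, and a coordinate-wise Catoni-type estimate stands in for the full locate/rescale routine of rgd. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n, T, alpha = 2, 100, 25, 0.3
A = np.array([[2.0, 0.5], [0.5, 1.0]])        # positive definite => strongly convex risk
w_star = np.array([1.0, -1.0])
risk = lambda w: 0.5 * (w - w_star) @ A @ (w - w_star)
grad = lambda w: A @ (w - w_star)

# One fixed sample: per-observation gradients = true gradient + heavy-tailed noise.
sig = 1.75
noise = rng.lognormal(0.0, sig, size=(n, d)) - np.exp(sig**2 / 2)
sample_grads = lambda w: grad(w)[None, :] + noise

def catoni(G, delta=0.05):
    """Coordinate-wise soft truncation, a stand-in for locate/rescale."""
    s = (G.std(axis=0) + 1e-12) * np.sqrt(len(G) / np.log(2.0 / delta))
    theta = np.median(G, axis=0)
    for _ in range(20):
        u = (G - theta) / s
        theta += s * np.mean(np.sign(u) * np.log1p(np.abs(u) + u**2 / 2.0), axis=0)
    return theta

w_or, w_erm, w_rgd = (np.zeros(d) for _ in range(3))
for _ in range(T):
    w_or -= alpha * grad(w_or)                        # oracle: update (2)
    w_erm -= alpha * sample_grads(w_erm).mean(axis=0)  # erm: sample-mean gradient
    w_rgd -= alpha * catoni(sample_grads(w_rgd))       # rgd: robust gradient
print(risk(w_or), risk(w_erm), risk(w_rgd))
```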

#### Impact of heavy-tailed noise

Let us examine the results. We begin with a simple question: are there natural learning settings in which rgd outperforms ERM-GD? How does the same algorithm fare in situations where ERM is optimal? Under Gaussian noise, ERM-GD is effectively optimal [25, Appendix C]. We thus consider the case of centered Gaussian noise as a baseline, and use centered log-Normal noise as an archetype of asymmetric heavy-tailed data. Risk results for the two routines are given alongside training error in Figure 2.

In the situation favorable to erm, differences in performance are basically negligible. On the other hand, in the heavy-tailed setting, the performance of rgd is superior in terms of the quality of the solution found and the variance of the estimates. Furthermore, we see that, at least for the sample sizes and dimensions considered here, taking the number of iterations $T$ beyond numerical convergence has minimal negative effect on rgd performance; on the other hand, erm is more sensitive. Comparing true risk with sample error, we see that while there is some unavoidable overfitting, in the heavy-tailed setting rgd departs from the ideal routine at a slower rate, a desirable trait.

At this point, we still have little more than a proof of concept, with rather arbitrary choices of $n$, $d$, noise distribution, and initialization method. We proceed to investigate how each of these experimental parameters independently impacts performance.

#### Impact of initialization

Given a fixed data distribution and sample size, how does the quality of the initial guess impact learning performance? We consider three initializations, generated by perturbing the true minimizer with random noise whose magnitude ranges over three increasing levels, where larger magnitudes naturally correspond to potentially worse initialization. Relevant results are displayed in Figure 3.

Some interesting observations can be made. That rgd matches erm when the latter is optimal is clear, but more importantly, we see that under heavy-tailed noise, rgd is far more robust to poor initial value settings. Indeed, while a bad initialization leads to a much worse solution in the limit for erm, we see that rgd is able to achieve the same performance as if it were initialized at a better value.

#### Impact of distribution

It is possible for very distinct distributions to have exactly the same risk functions. Learning efficiency naturally depends heavily on the process generating the sample: the underlying optimization problem is the same, but the statistical inference task changes. Here we run the two algorithms of interest from common initial values, as in the first experimental setting, and measure performance changes as the noise distribution is modified. We consider six situations: three noise levels for Normal noise, and three for log-Normal noise, with the scale parameters increasing across levels in each case. Results are given in Figure 4.

Looking first at the Normal case, where we expect ERM-based methods to perform well, we see that rgd is able to match erm in all settings. In the log-Normal case, as our previous example suggested, the performance of erm degrades rather dramatically, and a clear gap in performance appears, which grows wider as the variance increases. This flexibility of rgd in dealing with both symmetric and asymmetric noise, and with both exponential and heavy tails, is indicative of the robustness suggested by the weak conditions of section 3.2. In addition, it suggests that our simple dispersion-based technique ( settings in 2.2) provides tolerable accuracy, implying a small enough factor, and reinforcing the insights from the proof of concept case seen in Figure 2.

#### Impact of sample size

Since the true risk is unknown, the size and quality of the sample is critical to the output of all learners. To evaluate learning efficiency, we examine how performance depends on the available sample size, with dimension and all algorithm parameters fixed. Figure 5 gives the accuracy of erm and rgd in tests analogous to those above, using common initial values across methods, and .

Both algorithms naturally show monotonic performance improvements as the sample size grows, but the most salient feature of these figures is the performance of rgd under heavy-tailed noise, especially when sample sizes are small. When the data may be heavy-tailed, this provides clear evidence that the proposed rgd methods can generalize better than ERM-GD with less data and in fewer iterations.

#### Impact of dimension

Given a fixed number of observations, the dimension , namely the number of parameters to be determined, plays an important role from both practical and theoretical standpoints, as seen in the error bounds of section 3.2. Fixing the sample size and all algorithm parameters as above, we investigate the relative difficulty each algorithm has as the dimension increases. Figure 6 shows the risk of erm and rgd in tests just as above, with .

As the dimension increases, since the sample size is fixed, both non-oracle algorithms tend to require more iterations to converge. The key difference is that under heavy tails, the excess risk achieved by our proposed method is clearly superior to ERM-GD over all settings, while matching it in the case of Gaussian noise. In particular, erm hits bottom very quickly in higher dimensions, while rgd continues to improve for more iterations, presumably due to updates which are closer to that of the optimal (2).

#### Comparison with robust loss minimizer

Another interesting question: instead of paying the overhead to robustify gradient estimates ( dimensions to handle), why not simply make robust estimates of the risk itself, and use those estimates to fuel an iterative optimizer? Just such a procedure is analyzed by Brownlees et al. [6] (denoted bjl henceforth). To compare our gradient-centric approach with their loss-centric approach, we implement bjl using the non-linear conjugate gradient method of Polak and Ribière [32], which is provided by fmin_cg in the optimize module of the SciPy scientific computation library (default maximum number of iterations is ). This gives us a standard first-order general-purpose optimizer for minimizing the bjl objective. To see how well our procedure can compete with a pre-fixed max iteration number, we set for all settings. Computation time is measured using the Python time module. To give a simple comparison between bjl and rgd, we run multiple independent trials of the same task, starting both routines at the same (random) initial value each time, generating a new sample, and repeating the whole process for different settings of . Median times taken over all trials (for each setting) are recorded, and presented in Figure 7 alongside performance results.
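As a rough, self-contained sketch of the Polak–Ribière update underlying fmin_cg (SciPy's actual implementation differs in its line-search and stopping rules, so this is illustrative only):

```python
import math

def pr_cg(f, grad, x0, max_iter=200, tol=1e-8):
    """Minimize f via Polak-Ribiere nonlinear conjugate gradient,
    using a simple backtracking (Armijo) line search. Illustrative
    sketch; not SciPy's fmin_cg implementation."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                      # initial direction: steepest descent
    for _ in range(max_iter):
        gnorm2 = sum(gi * gi for gi in g)
        if math.sqrt(gnorm2) < tol:
            break
        # Backtracking line search along direction d.
        step, fx = 1.0, f(x)
        slope = sum(gi * di for gi, di in zip(g, d))
        while f([xi + step * di for xi, di in zip(x, d)]) > fx + 1e-4 * step * slope:
            step *= 0.5
            if step < 1e-16:
                break
        x = [xi + step * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # Polak-Ribiere coefficient, clipped at zero for stability.
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g)) / gnorm2)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Usage on a simple smooth objective (a quadratic bowl).
c = [1.0, -2.0, 3.0]
f = lambda x: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
grad = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]
x_hat = pr_cg(f, grad, [0.0, 0.0, 0.0])
```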

From the results, we can see that while the performance of both methods is similar in low dimensions and under Gaussian noise, in higher dimensions and under heavy-tailed noise, our proposed rgd realizes much better performance in much less time. Regarding excess empirical risk, random deviations in the sample cause the minimum of the empirical risk function to deviate away from , which draws the rgd solution closer to the ERM solution in higher dimensions. On the other hand, bjl is minimizing a different objective function. It should be noted that there are assuredly other ways of approaching the bjl optimization task, but all of them require minimizing an implicitly defined objective which need not be convex. We believe that rgd provides a simple and easily implemented alternative, while still utilizing the same statistical principles.

#### Regression application

In this experiment, we apply our algorithm to a general regression task, under a wide variety of data distributions, and compare its performance against standard regression algorithms, both classical and modern. For each experimental condition, and for each trial, we generate training observations of the form . Distinct experimental conditions are delimited by the setting of and . Inputs are assumed to follow a -dimensional isotropic Gaussian distribution, and thus our setting of will be determined by the distribution of noise . In particular, we look at several families of distributions, and within each family look at 15 distinct noise levels, namely parameter settings designed such that monotonically increases over the range 0.3–20.0, approximately linearly over the levels.

To capture a range of signal/noise ratios, for each trial, is randomly generated as follows. Defining the sequence and uniformly sampling with , we set . Computing , we have . Noise families: log-logistic (denoted llog in figures), log-Normal (lnorm), Normal (norm), and symmetric triangular (tri_s). Even with just these four, we capture both bounded and unbounded sub-Gaussian noise, and heavy-tailed data both with and without finite higher-order moments. Results for many more noise distributions are given in appendix B.
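A single trial's training data can be sketched as below. The exact parameter values for each noise level are elided in the text, so the scale and shape settings here (Normal sd 2.0, log-Normal log-scale 1.5, log-logistic shape 3, triangular on [-2, 2]) are illustrative assumptions only; each noise family is centered so the noise has zero mean.

```python
import math
import random

def make_regression_sample(n, d, w_star, noise="lnorm", rng=None):
    """Draw n observations y = <w_star, x> + eps with isotropic Gaussian
    inputs and centered noise from one of the four families in the text.
    Parameter values are illustrative, not those of the paper."""
    rng = rng or random.Random(0)
    X, y = [], []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in range(d)]
        if noise == "norm":
            eps = rng.gauss(0.0, 2.0)
        elif noise == "lnorm":  # heavy-tailed, asymmetric; centered by subtracting the mean
            eps = rng.lognormvariate(0.0, 1.5) - math.exp(1.5 ** 2 / 2.0)
        elif noise == "llog":   # log-logistic via inverse CDF; shape 3 gives finite variance
            u = rng.random()
            eps = (u / (1.0 - u)) ** (1.0 / 3.0)
            eps -= (math.pi / 3.0) / math.sin(math.pi / 3.0)  # subtract the distribution mean
        else:                   # "tri_s": symmetric triangular, already zero-mean
            eps = rng.triangular(-2.0, 2.0, 0.0)
        X.append(x)
        y.append(sum(wi * xi for wi, xi in zip(w_star, x)) + eps)
    return X, y
```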

Our key performance metric of interest is off-sample prediction error, here computed as excess RMSE on an independent large testing set, averaged over trials. For each condition and each trial, an independent test set of observations is generated identically to the corresponding -sized training set. All competing methods use common sample sets for training and are evaluated on the same test data, for all conditions/trials. For each method, in the th trial, some estimate is determined. To approximate the risk, we compute the root mean squared test error and report prediction error as the average of normalized errors taken over all trials. While values vary, in all experiments we fix and test size .
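For a single trial, the excess-RMSE metric just described amounts to the following (the per-trial normalization and averaging over trials are omitted here for brevity):

```python
import math

def rmse(w_hat, X_test, y_test):
    """Root mean squared prediction error of the estimate w_hat on a test set."""
    se = 0.0
    for x, yv in zip(X_test, y_test):
        pred = sum(wi * xi for wi, xi in zip(w_hat, x))
        se += (pred - yv) ** 2
    return math.sqrt(se / len(y_test))

def excess_rmse(w_hat, w_star, X_test, y_test):
    """Excess RMSE relative to the true parameter vector w_star:
    zero when w_hat predicts as well as the truth, positive otherwise."""
    return rmse(w_hat, X_test, y_test) - rmse(w_star, X_test, y_test)
```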

Regarding the competing methods, classical choices are ordinary least squares (-ERM, denoted OLS) and least absolute deviations (-ERM, LAD). We also look at two recent methods of practical and theoretical importance described in section 1, namely the robust regression routines of Hsu and Sabato [17] (HS) and Minsker [28] (Minsker). For the former, we used the source code published online by the authors. For the latter, on each subset the OLS solution is computed, and solutions are aggregated using the geometric median (in norm), computed using the well-known algorithm of Vardi and Zhang [38, Eqn. 2.6], with the number of partitions set to . For comparison, we also initialize rgd to the OLS solution, with confidence , and for all iterations. The maximum number of iterations is ; the routine finishes after hitting this maximum or when the absolute value of the gradient falls below , for all conditions. Illustrative results are given in Figure 8.
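The geometric-median aggregation step can be sketched with a Weiszfeld-type fixed-point iteration. Note this is a simplification: the Vardi–Zhang algorithm cited above adds a correction when the iterate lands exactly on a data point, whereas the sketch below simply clamps the distance.

```python
def geometric_median(points, iters=100, tol=1e-9):
    """Weiszfeld-type iteration for the geometric median (l2 sense) of a
    list of equal-length tuples/lists. Simplified relative to Vardi-Zhang:
    coincidence with a data point is handled by clamping the distance."""
    d = len(points[0])
    # Start from the coordinate-wise mean.
    z = [sum(p[j] for p in points) / len(points) for j in range(d)]
    for _ in range(iters):
        num = [0.0] * d
        denom = 0.0
        for p in points:
            dist = sum((pj - zj) ** 2 for pj, zj in zip(p, z)) ** 0.5
            if dist < tol:      # iterate coincides with a data point
                dist = tol
            w = 1.0 / dist
            denom += w
            for j in range(d):
                num[j] += w * p[j]
        z_new = [nj / denom for nj in num]
        move = sum((a - b) ** 2 for a, b in zip(z_new, z)) ** 0.5
        z = z_new
        if move < tol:
            break
    return z
```

Unlike the coordinate-wise median, the geometric median treats the candidate solutions as whole vectors, which is what makes it a natural aggregator for the partition-based routine described above.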

First we fix the model dimension , and evaluate performance as sample size ranges from very small to quite large (top row of Figure 8). We see that regardless of distribution, rgd effectively matches the optimal convergence of OLS in the norm and tri_s cases, and is resilient to the remaining two scenarios where ols breaks down. There are clear issues with the median of means based methods at very small sample sizes, though the geometric median based method does eventually at least surpass OLS in the llog and lnorm cases. Essentially the same trends can be observed at all noise levels.

Next, we look at performance over noise settings, from negligible noise to significant noise with potentially infinite higher-order moments (middle row of Figure 8). We see that rgd generalizes well, in a manner which is effectively uniform across the distinct noise families. We note that even in such diverse settings with pre-fixed step-size and iteration numbers, very robust performance is shown. It appears that under small sample size, rgd reduces the variance due to errant observations, while incurring a smaller bias than the other robust methods. When ols (effectively ERM-GD) is optimal, note that rgd follows it closely, with virtually negligible bias. When the former breaks down, rgd remains stable.

Finally, we fix the ratio and look at the role played by increasingly large dimension (bottom row of Figure 8). We see that for all distributions, the performance of rgd is essentially constant. This coincides with the theory of section 3.2 and with our intuition, since Algorithm 1 is run in a coordinate-wise fashion. On the other hand, competing methods show sensitivity to the number of free parameters, especially in the case of asymmetric data with heavy tails.

### 4.2 Application to real-world benchmarks

To close out this section, and to gain some additional perspective on algorithm performance, we shift our focus to some nascent applications to real-world benchmark data sets.

Having already paid close attention to regression models in the previous section, here we consider applications of robust gradient descent to classification tasks, under both binary and multi-class settings. The model assumed is standard multi-class logistic regression: if the number of classes is , and the number of input features is , then the total number of parameters to be determined is . The loss function is convex in the parameters, and its partial derivatives all exist, so the model aligns well with our problem setting of interest. In addition, a squared -norm regularization term is added to the loss, with varying over datasets (see below). All learning algorithms are given a fixed budget of gradient computations, set here to , where is the size of the training set made available to the learner.

We use three well-known data sets for benchmarking: the CIFAR-10 data set of tiny images, the MNIST data set of handwritten digits, and the protein homology dataset made popular by its inclusion in the KDD Cup. For all data sets, we carry out 10 independent trials, with training and testing sets randomly sampled as will be described shortly. For all datasets, we normalize the input features to the unit interval in a per-dimension fashion. For CIFAR-10, we average the RGB color channels to obtain a single greyscale channel. As a result, . There are ten classes, so , meaning . We take a sample size of for training, with the rest for testing, and set . For MNIST, we have and once again , so . As with the previous dataset, we set , and . Note that both of these datasets have all classes in equal proportions, so with uniform random sampling, class frequencies are approximately equal in each trial. On the other hand, the protein homology dataset (binary classification) has highly unbalanced labels, with only 1296 positive labels out of over 145,000 observations. We thus take random samples such that the training and test sets are balanced. For each trial, we randomly select 296 positively labeled examples, and the same number of negatively labeled examples, yielding a test set of 592 examples. As for the training set size, we use the rest of the positive labels (1000 examples) plus a random selection of 1000 negatively labeled examples, so , and with and , we have . The regularization parameter is . For all datasets, the parameter weights are initialized uniformly over the interval .
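The two preprocessing steps named above (per-dimension scaling to the unit interval, and channel averaging for CIFAR-10) can be sketched as:

```python
def normalize_unit_interval(X):
    """Per-dimension min-max scaling of features to [0, 1],
    as applied to all benchmark datasets in the text."""
    d = len(X[0])
    lo = [min(x[j] for x in X) for j in range(d)]
    hi = [max(x[j] for x in X) for j in range(d)]
    # Guard against constant dimensions (zero span).
    span = [h - l if h > l else 1.0 for l, h in zip(lo, hi)]
    return [[(x[j] - lo[j]) / span[j] for j in range(d)] for x in X]

def rgb_to_grey(img_rgb):
    """Average the three color channels into one greyscale channel;
    img_rgb is a list of (r, g, b) pixel tuples."""
    return [(r + g + b) / 3.0 for (r, g, b) in img_rgb]
```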

Regarding the competing methods used, we test a random mini-batch version of the robust gradient descent given in Algorithm 1, with mini-batch sizes ranging over , roughly on the order of for the largest datasets. We also consider mini-batching in the sense of randomly selecting coordinates to robustify: select indices randomly at each iteration, and run the rgd sub-routine on just these coordinates, using the sample mean for all the rest. Furthermore, we considered several minor alterations to the original routine, including using instead of the Gudermannian function for , updating the scale much less frequently (compared to every iteration), and different choices of for re-scaling. We compare our proposed algorithm with stochastic gradient descent (SGD) and stochastic variance-reduced gradient descent (SVRG) as proposed by Johnson and Zhang [19]. For each method, pre-fixed step sizes ranging over are tested. SGD uses mini-batches of size 1, as does the inner loop of SVRG. The outer loop of SVRG continues until the budget is spent, with the inner loop repeating times.
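The SVRG baseline just described can be sketched as follows. This is a simplified rendering (fixed step size, uniform sampling, inner loop length defaulting to the sample size) of the Johnson–Zhang scheme, assuming access to per-example gradients through a user-supplied function:

```python
import random

def svrg(grads, n, w0, step=0.1, outer=30, inner=None, rng=None):
    """Sketch of SVRG: each outer loop computes the full gradient mu at a
    snapshot w_tilde; each inner step moves along the variance-reduced
    direction g_i(w) - g_i(w_tilde) + mu. `grads(i, w)` returns the
    gradient of the i-th per-example loss at parameter vector w."""
    rng = rng or random.Random(0)
    d = len(w0)
    w = list(w0)
    inner = inner or n
    for _ in range(outer):
        w_tilde = list(w)
        # Full gradient at the snapshot.
        mu = [0.0] * d
        for i in range(n):
            gi = grads(i, w_tilde)
            for j in range(d):
                mu[j] += gi[j] / n
        # Inner loop: cheap stochastic steps with reduced variance.
        for _ in range(inner):
            i = rng.randrange(n)
            g_w = grads(i, w)
            g_t = grads(i, w_tilde)
            for j in range(d):
                w[j] -= step * (g_w[j] - g_t[j] + mu[j])
    return w
```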

Representative results are given in Figure 9. For each of the three methods of interest, and each dataset, we chose the top two performance settings, displayed as *_1 and *_2 respectively. Here “top performance” is measured by the median value of the last five iterations. We see that in general, robust gradient descent is competitive with the best settings of these well-known routines, shows minimal divergence between the performance of its first- and second-best settings, and in the case of smaller data sets (protein homology) indeed significantly outperforms the competitors. While these are simply nascent applications of rgd, the strong initial performance suggests that further investigation of efficient strategies under high-dimensional data is a promising direction.

## 5 Concluding remarks

In this work, we introduced and analyzed a learning algorithm which takes advantage of robust estimates of the unknown risk gradient, integrating statistical estimation and practical implementation into a single routine. Doing so allows us to deal with the statistical vulnerabilities of ERM-GD and partition-based methods, while circumventing computational issues posed by minimizers of robust surrogate objectives. The price to be paid is new computational overhead and potentially biased estimates. Is this price worth paying? Bounds on the excess risk are available under very weak assumptions on the data distribution, and we find empirically that the proposed algorithm has desirable learning efficiency, in that it can generalize competitively, with fewer samples, over more distributions than its competitors.

Moving forward, a more careful analysis of the role that prior knowledge can play in learning efficiency, starting with the first-order optimizer setting, is of significant interest. Characterizing the learning efficiency enabled by sharper estimates could lead to useful insights in the context of larger-scale problems, where a small overhead might save countless iterations and dramatically reduce budget requirements, while simultaneously leading to more consistent performance across samples. Another natural line of work is to look at alternative strategies which operate on the data vector as a whole (rather than coordinate-wise), integrating information across coordinates, in order to infer more efficiently.

## Appendix A Technical appendix

### a.1 Preliminaries

Our generic data shall be denoted by . Let denote a probability measure on , equipped with an appropriate σ-field. Data samples shall be assumed independent and identically distributed (iid), written . We shall work with loss function throughout, with assumed differentiable for each . Write for a generic probability measure, most commonly the product measure induced by the sample. Let be a measurable function. Expectation is written , with variance defined analogously. For -dimensional Euclidean space , the usual () norm shall be denoted unless otherwise specified. For function on with partial derivatives defined, write the gradient as where for short, we write . For integer , write for all the positive integers from to . Risk shall be denoted , and its gradient . We make a running assumption that we can differentiate under the integral sign in each coordinate [3, 36], namely that

 \gvec(\ww) = \left( \exx_{\mu}\,\frac{\partial l(\ww;\zz)}{\partial w_{1}}, \ldots, \exx_{\mu}\,\frac{\partial l(\ww;\zz)}{\partial w_{d}} \right). (13)

Smoothness and convexity of functions shall also be utilized. For convex function on convex set , say that is -Lipschitz if, for all we have . We say that is -smooth if is -Lipschitz. Finally, is strongly convex with parameter if for all ,

 F(\ww_{1}) - F(\ww_{2}) \geq \langle F'(\ww_{2}), \ww_{1} - \ww_{2} \rangle + \frac{\kappa}{2}\|\ww_{1} - \ww_{2}\|^{2}

for any norm on , though we shall be assuming . If there exists such that , then it follows that is the unique minimum of on . Let be a continuously differentiable, convex, -smooth function. The following basic facts will be useful: for any choice of , we have

 f(\uu) - f(\vv) \leq \frac{\lambda}{2}\|\uu - \vv\|^{2} + \langle f'(\vv), \uu - \vv \rangle (14)
 \frac{1}{2\lambda}\|f'(\uu) - f'(\vv)\|^{2} \leq f(\uu) - f(\vv) - \langle f'(\vv), \uu - \vv \rangle. (15)

Proofs of these results can be found in any standard text on convex optimization, e.g. [31].
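To make facts (14) and (15) concrete, here is a small numerical check on an assumed example: f(x) = x₁² + 2x₂², a convex function whose gradient (2x₁, 4x₂) is Lipschitz with λ = 4 (twice the largest Hessian eigenvalue's coefficient). This is an illustration of the inequalities, not part of the proofs.

```python
# Numerical sanity check of the smoothness facts (14) and (15) for a
# simple lambda-smooth convex function: f(x) = x1^2 + 2*x2^2, whose
# gradient (2*x1, 4*x2) is Lipschitz with constant LAM = 4.
LAM = 4.0
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
fp = lambda x: (2.0 * x[0], 4.0 * x[1])

def check_smoothness_facts(u, v):
    du = [ui - vi for ui, vi in zip(u, v)]
    inner = sum(g * d for g, d in zip(fp(v), du))
    gap = f(u) - f(v) - inner                       # RHS of (15)
    # Fact (14): f(u) - f(v) <= (LAM/2)||u - v||^2 + <f'(v), u - v>.
    ok_14 = f(u) - f(v) <= (LAM / 2.0) * sum(d * d for d in du) + inner + 1e-12
    # Fact (15): (1/(2*LAM))||f'(u) - f'(v)||^2 <= gap.
    lhs_15 = sum((a - b) ** 2 for a, b in zip(fp(u), fp(v))) / (2.0 * LAM)
    return ok_14 and lhs_15 <= gap + 1e-12
```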

We shall leverage a special type of M-estimator here, built using the following convenient class of functions.

###### Definition 10 (Function class for location estimates).

Let ρ be an even function (ρ(−u) = ρ(u)) with the following properties, and denote its derivative by ψ := ρ′.

1. as .

2. as .

3. , and for some , and all ,

 -\log(1 - u + Cu^{2}) \leq \psi(u) \leq \log(1 + u + Cu^{2}).

Of particular importance in the analysis that follows is the fact that ψ is bounded, monotonically increasing, and Lipschitz on , plus the upper/lower bounds which let us generalize the technique of Catoni [8].

###### Example 11 (Valid ρ choices).

In addition to the Gudermannian function (section 2 footnote), functions such as and are well-known examples that satisfy the desired criteria. Note that the wide/narrow functions of Catoni do not meet all these criteria, nor does the classic Huber function.
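As a concrete illustration, the sketch below numerically checks that the Gudermannian function ψ(u) = 2·atan(tanh(u/2)) satisfies the two-sided bounds of Definition 10 for the constant C = 1 (the admissible constant is not stated here, so C = 1 is an assumption that the grid check supports), and then uses this ψ to compute the location M-estimate by bisection, with an illustrative scale s = 3 on contaminated data.

```python
import math
import random

# Catoni-type bound constant; C = 1 is an assumed value, checked numerically.
C = 1.0
# Gudermannian function, a valid psi per Example 11.
psi = lambda u: 2.0 * math.atan(math.tanh(u / 2.0))

def bounds_hold(u):
    """Check -log(1 - u + C*u^2) <= psi(u) <= log(1 + u + C*u^2)."""
    upper = math.log(1.0 + u + C * u * u)
    lower = -math.log(1.0 - u + C * u * u)
    return lower - 1e-12 <= psi(u) <= upper + 1e-12

def m_estimate(xs, s, lo=-1e6, hi=1e6, iters=100):
    """Location M-estimate: the root in u of sum_i psi((x_i - u)/s) = 0,
    found by bisection (the sum is monotone decreasing in u)."""
    def score(u):
        return sum(psi((x - u) / s) for x in xs)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

The boundedness of ψ is exactly what keeps a handful of gross outliers from dragging the estimate away, in contrast with the sample mean.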

### a.2 Proofs

###### Proof of Lemma 1.

For cleaner notation, write for our iid observations. Here is assumed to satisfy the conditions of Definition 10. A high-probability concentration inequality follows by direct application of the specified properties of and , following the general technique laid out by Catoni [7, 8]. For and , writing , and taking expectation over the random draw of the sample,

 \exx \exp\left( \sum_{i=1}^{n} \psi_{s}(x_{i} - u) \right) \leq \left( 1 + \frac{1}{s}(\exx x - u) + \frac{C}{s^{2}}\,\exx(x^{2} + u^{2} - 2xu) \right)^{n} \leq \exp\left( \frac{n}{s}(\exx x - u) + \frac{Cn}{s^{2}}\left( \vaa x + (\exx x - u)^{2} \right) \right).

The inequalities above are due to an application of the upper bound on ψ, and the elementary inequality 1 + a \leq e^{a}. Now, letting

 A \defeq \frac{1}{n}\sum_{i=1}^{n} \psi_{s}(x_{i} - u), \qquad B \defeq \frac{1}{s}(\exx x - u) + \frac{C}{s^{2}}\left( \vaa x + (\exx x - u)^{2} \right)

we have the bound \exx \exp(nA) \leq \exp(nB). By Chebyshev’s inequality, we then have

 \prr\{A > B + \varepsilon\} = \prr\{\exp(nA) > \exp(nB + n\varepsilon)\} \leq \frac{\exx \exp(nA)}{\exp(nB + n\varepsilon)} \leq \exp(-n\varepsilon).

Setting \varepsilon = \log(\delta^{-1})/n for confidence level \delta, and for convenience writing

 b(u) \defeq \exx x - u + \frac{C}{s}\left( \vaa x + (\exx x - u)^{2} \right),

we have with probability no less than that

 \frac{s}{n}\sum_{i=1}^{n} \psi_{s}(x_{i} - u) \leq b(u) + \frac{s \log(\delta^{-1})}{n}. (16)

The right-hand side of this inequality, viewed as a function of u, is a polynomial of order 2, and if

 1 \geq D \defeq 4\left( \frac{C^{2}\,\vaa x}{s^{2}} + \frac{C \log(\delta^{-1})}{n} \right),

then this polynomial has two real solutions. In the hypothesis, we stated that the result holds “for large enough n and s.” By this we mean that we require n and s to satisfy the preceding inequality (for each coordinate in the multi-dimensional case). The notation is for notational simplicity. The solutions take the form

 u = \frac{1}{2}\left( 2\,\exx x + \frac{s}{C} \pm \frac{s}{C}(1 - D)^{1/2} \right).

Looking at the smaller of the two solutions, note that it can be simplified as

 u_{+} \defeq \exx x + \frac{s}{2C} \cdot \frac{(1 - \sqrt{1-D})(1 + \sqrt{1-D})}{1 + \sqrt{1-D}} = \exx x + \frac{s}{2C} \cdot \frac{D}{1 + \sqrt{1-D}} \leq \exx x + \frac{sD}{2C}, (17)

where the last inequality is via taking the term in the previous denominator as small as possible. Now, writing as the M-estimate using and as in (4), note that equivalently satisfies . Using (