1. Introduction and main result
Let $A$ be an $m \times n$ matrix and let $Ax = b$ be a consistent linear system of equations. Suppose that $b^\epsilon$ is a corrupted version of $b$ defined by
$$b^\epsilon = b + \epsilon, \qquad (1)$$
where $\epsilon \in \mathbb{R}^m$ has independent mean zero random entries. Given an initial vector $x_0$, we consider the relaxed Kaczmarz algorithm
$$x_{k+1} = x_k + \alpha_k \frac{b^\epsilon_{i_k} - \langle a_{i_k}, x_k \rangle}{\|a_{i_k}\|_2^2}\, a_{i_k}, \qquad (2)$$
where $\alpha_k$ is the learning rate (or relaxation parameter), $a_{i_k}$ is the $i_k$-th row of $A$, $i_k$ is the row index for iteration $k$, $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product, and $\|\cdot\|_2$ is the $\ell^2$-norm. When the rows are chosen randomly, (2) is an instance of stochastic gradient descent, whose performance in practice depends on the definition of the learning rate. In this paper, we derive a scheduled learning rate for a randomized Kaczmarz algorithm, which optimizes a bound on the expected error; our main result proves an associated convergence result, see Theorem 1.1 and Figure 1.
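For concreteness, iteration (2) can be sketched in NumPy as follows. This is an illustrative implementation, not the paper's experimental code; here rows are sampled with replacement, whereas the main result of the paper samples without replacement, see §1.3.

```python
import numpy as np

def relaxed_kaczmarz(A, b_eps, x0, alphas, rng):
    """Run iteration (2): move toward the i_k-th hyperplane, scaled by alpha_k.
    Rows are sampled with probability proportional to their squared norms."""
    m, _ = A.shape
    row_norms_sq = np.sum(A**2, axis=1)
    p = row_norms_sq / row_norms_sq.sum()
    x = x0.astype(float).copy()
    for alpha in alphas:
        i = rng.choice(m, p=p)                 # random row index i_k
        residual = b_eps[i] - A[i] @ x         # b^eps_{i_k} - <a_{i_k}, x_k>
        x = x + alpha * residual / row_norms_sq[i] * A[i]
    return x
```

For a consistent system with $\alpha_k = 1$ this reduces to the classical randomized Kaczmarz method.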
The Kaczmarz algorithm dates back to the 1937 paper by Kaczmarz, who considered the iteration (2) for the case $\alpha_k = 1$. The algorithm was subsequently studied by many authors; in particular, in 1967, Whitney and Meany established a convergence result for the relaxed Kaczmarz algorithm: if $Ax = b$ is a consistent linear system, $0 < \alpha < 2$, and $\alpha_k = \alpha$ for fixed $\alpha$, then (2) converges to a solution of the system. In 1970 the Kaczmarz algorithm was rediscovered under the name Algebraic Reconstruction Technique (ART) by Gordon, Bender, and Herman, who were interested in applications to computational tomography (including applications to three-dimensional electron microscopy); such applications typically use the relaxed Kaczmarz algorithm with learning rate $\alpha_k < 1$. Methods and heuristics for setting the learning rate have been considered by several authors, see the 1981 book by Censor; also see [2, 8, 9].
More recently, in 2009, Strohmer and Vershynin established the first proof of a convergence rate for a Kaczmarz algorithm that applies to general matrices; in particular, given a consistent linear system $Ax = b$, they consider the iteration (2) with $\alpha_k = 1$. Under the assumption that the row index $i_k$ at iteration $k$ is chosen randomly with probability proportional to $\|a_{i_k}\|_2^2$, they prove that
$$\mathbb{E}\,\|x_k - x\|_2^2 \le \left(1 - \kappa(A)^{-2}\right)^k \|x_0 - x\|_2^2, \qquad (3)$$
where $x$ is the solution of $Ax = b$; here, $\kappa(A)$ is a condition number for the matrix $A$ defined by $\kappa(A) = \|A\|_F \|A^\dagger\|_2$, where $\|A\|_F$ is the Frobenius norm of $A$, and $\|A^\dagger\|_2$ is the operator norm of the left inverse of $A$. We remark that the convergence rate in (3) is referred to as exponential convergence in that work, while in the field of numerical analysis (where it is typical to think about error on a logarithmic scale) it is referred to as linear convergence.
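The condition number $\kappa(A) = \|A\|_F \|A^\dagger\|_2$ is straightforward to compute numerically; the following sketch (function name is illustrative) uses the fact that $\|A^\dagger\|_2$ is the reciprocal of the smallest singular value of $A$:

```python
import numpy as np

def kaczmarz_condition_number(A):
    """kappa(A) = ||A||_F * ||A^dagger||_2, with ||A^dagger||_2 = 1/sigma_min(A)."""
    fro = np.linalg.norm(A, 'fro')
    smin = np.linalg.svd(A, compute_uv=False).min()  # smallest singular value
    return fro / smin

# The per-iteration contraction factor in (3) is then 1 - kappa**(-2).
```

For example, for the $n \times n$ identity matrix, $\|I\|_F = \sqrt{n}$ and $\sigma_{\min} = 1$, so $\kappa = \sqrt{n}$ and the contraction factor is $1 - 1/n$.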
The result of Strohmer and Vershynin was subsequently extended by Needell, who considered the case of a noisy linear system: instead of having access to the right hand side $b$ of the consistent linear system $Ax = b$, we are given $b + \epsilon$, where the entries of $\epsilon$ satisfy $|\epsilon_i| \le \gamma \|a_i\|_2$ but are otherwise arbitrary. Under these assumptions, Needell proves that the iteration (2) with $\alpha_k = 1$ satisfies
$$\mathbb{E}\,\|x_k - x\|_2^2 \le \left(1 - \kappa(A)^{-2}\right)^k \|x_0 - x\|_2^2 + \kappa(A)^2 \gamma^2; \qquad (4)$$
that is, we converge until we reach some ball of radius $\kappa(A)\gamma$ around the solution and then no more. Let $A^\dagger$ be the left inverse of $A$, and observe that
$$\|A^\dagger \epsilon\|_2 \le \|A^\dagger\|_2 \|\epsilon\|_2 \le \gamma \|A^\dagger\|_2 \|A\|_F = \kappa(A)\gamma. \qquad (5)$$
Moreover, if $\epsilon$ is a scalar multiple of the left singular vector of $A$ associated with the smallest singular value, and $|\epsilon_i| = \gamma \|a_i\|_2$ for all $i$, then (5) holds with equality (such examples are easy to manufacture). Thus, (4) is optimal when $\epsilon$ is arbitrary. In this paper, we consider the case where $b^\epsilon = b + \epsilon$, where $\epsilon$ has independent mean zero random entries: our main result shows that in this case, we break through the convergence horizon of (4) by using an optimized learning rate and many equations with independent noise, see Theorem 1.1 for a precise statement.
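The chain of inequalities in (5) can be checked numerically; the following sketch uses random data and NumPy's pseudoinverse as the left inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
Adag = np.linalg.pinv(A)                       # left inverse of A
eps = rng.standard_normal(20)
# gamma is the smallest constant with |eps_i| <= gamma * ||a_i||_2
gamma = np.max(np.abs(eps) / np.linalg.norm(A, axis=1))
kappa = np.linalg.norm(A, 'fro') / np.linalg.svd(A, compute_uv=False).min()
# the shift of the least-squares solution is bounded by the convergence horizon
assert np.linalg.norm(Adag @ eps) <= kappa * gamma
```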
Modifications and extensions of the randomized Kaczmarz algorithm have been considered by many authors, see the references discussed in §4. In particular, the convergence of a relaxed Kaczmarz algorithm has been considered before, but that analysis focuses on the case of a consistent linear system. The randomized Kaczmarz algorithm can also be viewed as an instance of other machine learning methods: in particular, as an instance of coordinate descent, or as an instance of stochastic gradient descent. Part of our motivation for studying scheduled learning rates for the randomized Kaczmarz algorithm is that it provides a model where we can start to develop a complete theoretical understanding that can be transferred to other problems. The learning rate is of essential importance in machine learning, in particular in deep learning: "The learning rate is perhaps the most important hyperparameter. … the effective capacity of the model is highest when the learning rate is correct for the optimization problem" [5, pp. 429]. In this paper, we derive a scheduled learning rate (depending on two hyperparameters) for a randomized Kaczmarz algorithm, which optimizes a bound on the expected error, and we prove an associated convergence result. In general, a learning rate is said to be scheduled if the rule for changing it is determined a priori as a function of the iteration number and possibly hyperparameters (that is, the rate is non-adaptive). There are many general results about scheduled learning rates, see for example [23, 31]; the contribution of this paper is that we derive a stronger convergence guarantee for the specific case of a randomized Kaczmarz algorithm consisting of the iteration (2). We remark that in practice adaptive learning rates such as Adadelta or ADAM are often used. We discuss the possibility of extending our analysis to adaptive learning rates, and other potential extensions, in §4.
1.3. Main result
Let $A$ be an $m \times n$ matrix, let $Ax = b$ be a consistent linear system of equations, and let $a_i$ denote the $i$-th row of $A$. Suppose that $b^\epsilon$ is defined by
$$b^\epsilon = b + \epsilon, \qquad \epsilon = (\epsilon_1, \dots, \epsilon_m),$$
where $\epsilon_1, \dots, \epsilon_m$ are independent random variables such that $\epsilon_i$ has mean $0$ and variance $\sigma^2 \|a_i\|_2^2$. Given $x_0 \in \mathbb{R}^n$, define
$$x_{k+1} = x_k + \alpha_k \frac{b^\epsilon_{i_k} - \langle a_{i_k}, x_k \rangle}{\|a_{i_k}\|_2^2}\, a_{i_k},$$
where $\alpha_k$ denotes the learning rate parameter; assume that $i_k$ is chosen from $\{1, \dots, m\} \setminus \{i_0, \dots, i_{k-1}\}$ (that is, rows are sampled without replacement) with probability proportional to $\|a_{i_k}\|_2^2$. Let $A_{(k)}$ denote the matrix formed by deleting the rows $a_{i_0}, \dots, a_{i_{k-1}}$ from $A$.
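The sampling scheme (without replacement, with probability proportional to squared row norms) can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def sample_rows_without_replacement(A, rng):
    """Return row indices i_0, i_1, ... drawn without replacement, where at each
    step the remaining rows are chosen with probability proportional to ||a_i||_2^2."""
    weights = np.sum(A**2, axis=1).astype(float)
    remaining = list(range(A.shape[0]))
    order = []
    while remaining:
        w = weights[remaining]
        i = rng.choice(remaining, p=w / w.sum())
        remaining.remove(i)
        order.append(int(i))
    return order
```

The returned list is a permutation of the row indices, so the algorithm makes at most one pass over the equations.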
Remark 1.1 (Interpreting Theorem 1.1).
We summarize a few key observations that aid in interpreting Theorem 1.1:
We emphasize that $i_k$ is chosen without replacement, see §1.3, so the number of iterations is at most the number of rows $m$. In practice, the algorithm can be run with restarts: after a complete pass over the data we use the final iterate to redefine $x_0$, and restart the algorithm (potentially using different hyperparameters to define the learning rate). In the context of machine learning, the statement of Theorem 1.1 applies to one epoch, see §4.
Remark 1.2 (Interpreting the plot of the relative error in Figure 2).
Example 1.1 (Numerical illustration of Theorem 1.1).
Let $A$ be an $m \times n$ matrix with $s$ nonzero entries in each row. Assume these nonzero entries are independent random vectors drawn uniformly at random from the unit sphere $S^{s-1} \subset \mathbb{R}^s$. Let $x$ be an $n$-dimensional vector with independent standard normal entries, and set $b = Ax$. Let $b^\epsilon = b + \epsilon$, where $\epsilon$ is an $m$-dimensional vector with independent mean $0$, variance $\sigma^2$ normal entries. We run the relaxed Kaczmarz algorithm (2) using the learning rate (8) of Theorem 1.1. In particular, we fix values of $m$, $n$, $s$, and $\sigma$. Using the estimates of $\|x_0 - x\|_2^2$ and $\kappa(A)$, we define $\alpha_k$ by (8) (we justify this choice of $\alpha_k$ in Corollary 1.4). We plot the numerical relative error together with the bound on the expected relative error in Figure 2. Furthermore, to provide intuition about how $\alpha_k$ varies with $k$, we plot $\alpha_k$ for various values of $\sigma$ in Figure 2, keeping the other parameters fixed.
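The random sparse matrix of Example 1.1 can be generated as follows (a sketch; `sparse_unit_rows` is an illustrative helper, not code from the paper):

```python
import numpy as np

def sparse_unit_rows(m, n, s, rng):
    """m x n matrix; each row has s nonzero entries at random positions whose
    values form a vector drawn uniformly from the unit sphere in R^s."""
    A = np.zeros((m, n))
    for i in range(m):
        support = rng.choice(n, size=s, replace=False)
        v = rng.standard_normal(s)
        A[i, support] = v / np.linalg.norm(v)  # normalized Gaussian: uniform on S^{s-1}
    return A
```

Note that every row has unit norm, so the noise variance assumption of §1.3 reduces to variance $\sigma^2$ per entry, matching the example.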
Example 1.2 (Continuous version of learning rate $\alpha_k$).
The result of Theorem 1.1 states that the function defined in (10) is an upper bound for the expected relative error. From the proof of Theorem 1.1 it will be clear that this upper bound is a good approximation when the forward Euler step size $\kappa^{-2}$ is small, where $\kappa$ is the condition number parameter, see §2.5. In this case, it is illuminating to consider a continuous version of the optimal scheduled learning rate of Theorem 1.1. In particular, we define
$$\alpha(t) = \frac{y(t)}{1 + y(t)},$$
where $y$ denotes the solution (27) of the differential equation introduced in §2.5.
1.4. Corollaries of the main result
The following corollaries provide more intuition about the error bound function, and identify cases where the error bound of Theorem 1.1 is sharp. First, we consider the case where the variance $\sigma^2$ of the noise is small and the other parameters are fixed; in this case we recover the convergence rate of the standard randomized Kaczmarz algorithm with $\alpha_k = 1$.
Corollary 1.1 (Limit as $\sigma \to 0$).
where the constant in the big-$O$ notation depends on the parameters that are held fixed.
Corollary 1.2 (Limit as $m \to \infty$).
where the constant in the big-$O$ notation depends on the parameters that are held fixed.
which agrees with the intuition that using $m$ independent sources of noise should reduce the error in the $\ell^2$-norm by a factor of $\sqrt{m}$.
Corollary 1.3 (Matrices with i.i.d. rows).
see [28, Lemma 3.2.3, §3.3.1]. The following corollary gives a condition under which the learning rate (8) is optimal. In particular, this corollary implies that the error bound and learning rate are optimal for the example of matrices whose rows are sampled uniformly at random from the unit sphere discussed above.
Corollary 1.4 (Case when error bound is sharp and learning rate is optimal).
and the learning rate defined by (8) is optimal in the sense that it minimizes the expected error over all possible choices of scheduled learning rates (learning rates that depend only on the iteration number and possibly hyperparameters). Moreover, if (13) holds with equality, then it follows that (14) holds with equality and hence the learning rate is optimal.
The proof of Corollary 1.4 is given in §3.4. Informally speaking, the learning rate defined by (8) is optimal whenever the bound in Theorem 1.1 is a good approximation of the expected error , and there is reason to expect this is the case in practice, see the related results for consistent systems [26, §3, Theorem 3] and [25, Theorem 1]. For additional discussion about Theorem 1.1 and its corollaries see §4.
2. Proof of Theorem 1.1
The proof of Theorem 1.1 is divided into five steps. In Step 1 (§2.1) we consider the relaxed Kaczmarz algorithm for the consistent linear system $Ax = b$; this step uses the same proof strategy as the analysis of the standard randomized Kaczmarz algorithm. In Step 2 (§2.2), we consider the effect of additive random noise. In Step 3 (§2.3), we estimate conditional expectations. In Step 4 (§2.4), we optimize the learning rate with respect to an error bound. In Step 5 (§2.5), we show that this optimized learning rate is related to a differential equation, which can be used to establish the upper bound on the expected error.
2.1. Step 1: relaxed randomized Kaczmarz for consistent systems
We start by considering the consistent linear system $Ax = b$. Assume that $x_k$ is given and let
$$x_{k+1} = x_k + \alpha_k \frac{b_{i_k} - \langle a_{i_k}, x_k \rangle}{\|a_{i_k}\|_2^2}\, a_{i_k}$$
denote the iteration of the relaxed randomized Kaczmarz algorithm with learning rate $\alpha_k$ on the consistent linear system $Ax = b$, and let
$$z_{k+1} = x_k + \frac{b_{i_k} - \langle a_{i_k}, x_k \rangle}{\|a_{i_k}\|_2^2}\, a_{i_k}$$
be the projection of $x_k$
on the affine hyperplane defined by the $i_k$-th equation. The points $x_k$, $x_{k+1}$, and $z_{k+1}$ lie on the line through $x_k$ and $z_{k+1}$, which is perpendicular to the affine hyperplane that contains $x$ and $z_{k+1}$, see the illustration in Figure 4.
By the Pythagorean theorem, it follows that
$$\|x_{k+1} - x\|_2^2 = \|x_{k+1} - z_{k+1}\|_2^2 + \|z_{k+1} - x\|_2^2,$$
and by definition of $x_{k+1}$ and $z_{k+1}$ we have
$$\|x_{k+1} - z_{k+1}\|_2^2 = (1 - \alpha_k)^2 \|x_k - z_{k+1}\|_2^2 \quad\text{and}\quad \|z_{k+1} - x\|_2^2 = \|x_k - x\|_2^2 - \|x_k - z_{k+1}\|_2^2,$$
see Figure 4. Thus
$$\|x_{k+1} - x\|_2^2 = \|x_k - x\|_2^2 - (2\alpha_k - \alpha_k^2)\|x_k - z_{k+1}\|_2^2.$$
Factoring out $\|x_k - x\|_2^2$ from the right hand side gives
$$\|x_{k+1} - x\|_2^2 = \left(1 - (2\alpha_k - \alpha_k^2)\frac{\|x_k - z_{k+1}\|_2^2}{\|x_k - x\|_2^2}\right)\|x_k - x\|_2^2.$$
Since $b_{i_k} = \langle a_{i_k}, x \rangle$, it follows that
$$\|x_k - z_{k+1}\|_2^2 = \frac{\langle a_{i_k}, x_k - x \rangle^2}{\|a_{i_k}\|_2^2}.$$
Taking the expectation conditional on $x_k$ gives
$$\mathbb{E}\left[\|x_{k+1} - x\|_2^2 \,\middle|\, x_k\right] = \|x_k - x\|_2^2 - (2\alpha_k - \alpha_k^2)\,\mathbb{E}\left[\frac{\langle a_{i_k}, x_k - x \rangle^2}{\|a_{i_k}\|_2^2} \,\middle|\, x_k\right].$$
We delay the discussion of this conditional expectation until Step 3 (§2.3).
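The one-step contraction identity derived in this step can be verified numerically; the following sketch (with illustrative variable names `xk` for the iterate, `z` for the projection, and `x` for the solution) checks it for several learning rates:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal(5)    # row of A
x = rng.standard_normal(5)    # solution
xk = rng.standard_normal(5)   # current iterate
bi = a @ x                                  # consistent i-th equation
z = xk + (bi - a @ xk) / (a @ a) * a        # projection onto the hyperplane
for alpha in [0.3, 1.0, 1.7]:
    y = xk + alpha * (z - xk)               # relaxed step with learning rate alpha
    lhs = np.sum((y - x)**2)
    rhs = np.sum((xk - x)**2) - (2*alpha - alpha**2) * np.sum((xk - z)**2)
    assert abs(lhs - rhs) < 1e-10
```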
Remark 2.1 (Optimal learning rate for consistent linear systems).
Note that for $\alpha_k \in [0, 2]$ we have
$$2\alpha_k - \alpha_k^2 = 1 - (1 - \alpha_k)^2 \le 1,$$
with equality only when $\alpha_k = 1$. It follows that, for consistent linear systems of equations, the optimal way to define the learning rate is to set $\alpha_k = 1$ for all $k$. In the following, we show that, under our noise model, defining $\alpha_k$ as a specific decreasing function of $k$ is advantageous.
2.2. Step 2: relaxed randomized Kaczmarz for systems with noise
In this section, we redefine $x_{k+1}$ and $z_{k+1}$ for the case where the right hand side of the consistent linear system $Ax = b$ is corrupted by additive random noise: $b^\epsilon = b + \epsilon$. Let
$$x_{k+1} = x_k + \alpha_k \frac{b^\epsilon_{i_k} - \langle a_{i_k}, x_k \rangle}{\|a_{i_k}\|_2^2}\, a_{i_k}$$
be the iteration of the relaxed randomized Kaczmarz algorithm using $b^\epsilon$, and
$$z_{k+1} = x_k + \frac{b_{i_k} - \langle a_{i_k}, x_k \rangle}{\|a_{i_k}\|_2^2}\, a_{i_k}$$
be the projection of $x_k$ on the affine hyperplane defined by the uncorrupted $i_k$-th equation. Note that both $x_{k+1}$ and $z_{k+1}$ differ from the previous section: $x_{k+1}$ is corrupted by the noise term and $z_{k+1}$ is the projection of the previously corrupted iterate onto the hyperplane defined by the uncorrupted equation. However, the following expansion still holds:
$$\|x_{k+1} - x\|_2^2 = \|x_{k+1} - z_{k+1}\|_2^2 + \|z_{k+1} - x\|_2^2.$$
Indeed, $x_{k+1}$ and $z_{k+1}$ are still contained on the line through $x_k$ and $z_{k+1}$, which is perpendicular to the affine hyperplane that contains $x$ and $z_{k+1}$, see Figure 4. By the definition of $x_{k+1}$ and $z_{k+1}$ we have
$$x_{k+1} - z_{k+1} = (\alpha_k - 1)(z_{k+1} - x_k) + \alpha_k \frac{\epsilon_{i_k}}{\|a_{i_k}\|_2^2}\, a_{i_k}.$$
Expanding the right hand side gives
$$\|x_{k+1} - z_{k+1}\|_2^2 = (1 - \alpha_k)^2 \|z_{k+1} - x_k\|_2^2 + 2\alpha_k(\alpha_k - 1)\frac{\epsilon_{i_k}\langle z_{k+1} - x_k, a_{i_k}\rangle}{\|a_{i_k}\|_2^2} + \alpha_k^2 \frac{\epsilon_{i_k}^2}{\|a_{i_k}\|_2^2}.$$
By using the fact that
$$\langle z_{k+1} - x_k, a_{i_k} \rangle = b_{i_k} - \langle a_{i_k}, x_k \rangle = \langle a_{i_k}, x - x_k \rangle,$$
we can rewrite this as
$$\|x_{k+1} - z_{k+1}\|_2^2 = (1 - \alpha_k)^2 \|z_{k+1} - x_k\|_2^2 - 2\alpha_k(1 - \alpha_k)\frac{\epsilon_{i_k}\langle a_{i_k}, x - x_k\rangle}{\|a_{i_k}\|_2^2} + \alpha_k^2 \frac{\epsilon_{i_k}^2}{\|a_{i_k}\|_2^2}.$$
As in the analysis of the consistent linear system in Step 1 (§2.1) above, we factor out $\|x_k - x\|_2^2$, use the fact that $\|z_{k+1} - x\|_2^2 = \|x_k - x\|_2^2 - \|x_k - z_{k+1}\|_2^2$, and take the expectation conditional on $x_k$ to conclude that
$$\mathbb{E}\left[\|x_{k+1} - x\|_2^2 \,\middle|\, x_k\right] = \|x_k - x\|_2^2 - (2\alpha_k - \alpha_k^2)\,\mathbb{E}\left[\frac{\langle a_{i_k}, x_k - x\rangle^2}{\|a_{i_k}\|_2^2}\,\middle|\, x_k\right] - 2\alpha_k(1 - \alpha_k)\,\mathbb{E}\left[\frac{\epsilon_{i_k}\langle a_{i_k}, x - x_k\rangle}{\|a_{i_k}\|_2^2}\,\middle|\, x_k\right] + \alpha_k^2\,\mathbb{E}\left[\frac{\epsilon_{i_k}^2}{\|a_{i_k}\|_2^2}\,\middle|\, x_k\right]. \qquad (18)$$
In the following section, we discuss the conditional expectation terms on the right hand side of (18).
2.3. Step 3: estimating conditional expectations
In this section, we discuss estimating the conditional expectations appearing in (18). First, we discuss the noise contribution, which has two terms (a term that is linear and a term that is quadratic with respect to $\epsilon_{i_k}$). In particular, we claim that
$$\mathbb{E}\left[\frac{\epsilon_{i_k}\langle a_{i_k}, x - x_k\rangle}{\|a_{i_k}\|_2^2}\,\middle|\, x_k\right] = 0 \quad\text{and}\quad \mathbb{E}\left[\frac{\epsilon_{i_k}^2}{\|a_{i_k}\|_2^2}\,\middle|\, x_k\right] = \sigma^2. \qquad (19)$$
The quadratic term follows directly from the fact that $\epsilon_{i_k}$ has variance $\sigma^2\|a_{i_k}\|_2^2$. For the linear term, recall that $\epsilon_1, \dots, \epsilon_m$ are independent random variables such that $\epsilon_i$ has mean zero and variance $\sigma^2\|a_i\|_2^2$. Since we assume that $i_k$ is chosen from the indices not used in previous iterations (that is, rows are drawn without replacement) with probability proportional to $\|a_{i_k}\|_2^2$, see §1.3, it follows that $\epsilon_{i_k}$ is independent from $x_k$, and hence the linear term vanishes. We remark that if $i_k$ were chosen uniformly at random from $\{1, \dots, m\}$ and we had previously selected equation $i_k$, say, during iteration $j$ for some $j < k$, then the error in the $i_k$-th equation may depend on (or even be determined by) $x_k$; thus the assumption that rows are drawn without replacement is necessary for this term to vanish. We use the same estimate for $\mathbb{E}[\langle a_{i_k}, x_k - x\rangle^2 / \|a_{i_k}\|_2^2 \,|\, x_k]$ as in the consistent case. In particular, by [26, eq. 7] we have
$$\mathbb{E}\left[\frac{\langle a_{i_k}, x_k - x\rangle^2}{\|a_{i_k}\|_2^2}\,\middle|\, x_k\right] \ge \frac{\|x_k - x\|_2^2}{\kappa(A_{(k)})^2}, \qquad (20)$$
where $A_{(k)}$ denotes the matrix formed by deleting the previously selected rows from $A$.
Remark 2.2 (Comparison to the case $\alpha_k = 1$).
Note that when $\alpha_k = 1$, the linear term in (18) vanishes
because its coefficient $2\alpha_k(1 - \alpha_k)$ is zero; geometrically, this is the result of an orthogonality relation, which holds regardless of the structure of the noise. Here we consider the case $\alpha_k \neq 1$, and the linear term vanishes due to the assumption that $\epsilon_1, \dots, \epsilon_m$ are independent mean zero random variables.
2.4. Step 4: optimal learning rate with respect to upper bound
In this section we derive the optimal learning rate with respect to an upper bound on the expected error. In Remark 2.1, we already considered the case $\sigma = 0$, and found that the optimal choice is $\alpha_k = 1$ for all $k$ regardless of the other parameters. Thus, in this section we assume that $\sigma > 0$. By (18), (19), and (20) we have
$$\mathbb{E}\left[\|x_{k+1} - x\|_2^2 \,\middle|\, x_k\right] \le \left(1 - \frac{2\alpha_k - \alpha_k^2}{\kappa_k^2}\right)\|x_k - x\|_2^2 + \alpha_k^2\sigma^2, \qquad (21)$$
where $\kappa_k := \kappa(A_{(k)})$. Iterating this estimate and taking a full expectation gives
$$\mathbb{E}\,\|x_k - x\|_2^2 \le F_k, \quad\text{where}\quad F_k := \|x_0 - x\|_2^2\prod_{j=0}^{k-1}\left(1 - \frac{2\alpha_j - \alpha_j^2}{\kappa_j^2}\right) + \sigma^2\sum_{j=0}^{k-1}\alpha_j^2\prod_{l=j+1}^{k-1}\left(1 - \frac{2\alpha_l - \alpha_l^2}{\kappa_l^2}\right),$$
and we use the convention that empty products are equal to $1$. This upper bound satisfies the recurrence relation
$$F_{k+1} = \left(1 - \frac{2\alpha_k - \alpha_k^2}{\kappa_k^2}\right)F_k + \alpha_k^2\sigma^2,$$
where $F_k$ does not depend on $\alpha_k$. Setting the partial derivative of $F_{k+1}$ with respect to $\alpha_k$ equal to zero, and solving for $\alpha_k$, gives
$$\alpha_k = \frac{F_k}{F_k + \sigma^2\kappa_k^2}. \qquad (22)$$
Since
$$\frac{\partial^2 F_{k+1}}{\partial \alpha_k^2} = \frac{2F_k}{\kappa_k^2} + 2\sigma^2 > 0,$$
the value of $\alpha_k$ defined by (22) does indeed minimize $F_{k+1}$ with respect to $\alpha_k$. It is straightforward to verify that this argument can be iterated to conclude that the values of $\alpha_0, \dots, \alpha_{k-1}$ that minimize $F_k$ satisfy the recurrence relation:
$$\alpha_j = \frac{F_j}{F_j + \sigma^2\kappa_j^2}, \qquad F_{j+1} = \left(1 - \frac{2\alpha_j - \alpha_j^2}{\kappa_j^2}\right)F_j + \alpha_j^2\sigma^2, \qquad (23)$$
for $j = 0, \dots, k-1$. Note that we can simplify (23) by observing that substituting the optimal value (22) of $\alpha_j$ into the recurrence gives
$$F_{j+1} = \left(1 - \frac{\alpha_j}{\kappa_j^2}\right)F_j.$$
In summary, we can compute the optimal learning rate with respect to the upper bound on the expected error as follows: if $\sigma = 0$, then $\alpha_k = 1$ for all $k$. Otherwise, we define
$$F_0 = \|x_0 - x\|_2^2, \qquad \alpha_k = \frac{F_k}{F_k + \sigma^2\kappa_k^2}, \qquad F_{k+1} = \left(1 - \frac{\alpha_k}{\kappa_k^2}\right)F_k, \qquad (24)$$
for $k \ge 0$. In the following section we study the connection between this recursive formula and a differential equation.
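Assuming the recurrence (24) as reconstructed above, the optimal schedule can be computed as follows (a sketch; the function name, `F0`, and the list `kappas` of condition numbers are illustrative):

```python
def optimal_schedule(F0, sigma, kappas):
    """Greedy learning rates alpha_k = F_k / (F_k + sigma^2 kappa_k^2) and the
    associated error bounds F_{k+1} = (1 - alpha_k / kappa_k^2) F_k, as in (24)."""
    F = float(F0)
    alphas, Fs = [], [F]
    for kap in kappas:
        alpha = F / (F + sigma**2 * kap**2)
        F = (1 - alpha / kap**2) * F
        alphas.append(alpha)
        Fs.append(F)
    return alphas, Fs
```

For constant $\kappa_k$, the bounds $F_k$ decrease strictly and, since $\alpha_k$ is increasing in $F_k$, the learning rates form a decreasing sequence in $(0, 1)$, matching the discussion after Remark 2.1.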
2.5. Step 5: relation to differential equation
In this section, we derive a closed form upper bound for $F_k$. It follows from (24) that
$$F_{k+1} - F_k = -\frac{F_k^2}{\kappa_k^2\left(F_k + \sigma^2\kappa_k^2\right)}.$$
Making the substitution
$$y_k = \frac{F_k}{\sigma^2\kappa^2},$$
where $\kappa$ denotes the condition number parameter, gives the finite difference equation
$$y_{k+1} - y_k = -\frac{1}{\kappa^2}\cdot\frac{y_k^2}{1 + y_k}, \qquad (26)$$
which can be interpreted as one step of the forward Euler method (with step size
$\kappa^{-2}$) for the ordinary differential equation
$$y'(t) = -\frac{y(t)^2}{1 + y(t)},$$
where $t = k\kappa^{-2}$ and $y(0) = y_0 := F_0/(\sigma^2\kappa^2)$. It is straightforward to verify that the solution of this differential equation is
$$y(t) = \frac{1}{W(e^{t + c})}, \qquad (27)$$
where $W$ is the Lambert-$W$ function (the inverse of the function $w \mapsto we^w$) and $c$ is determined by the initial condition; in particular, if $y(0) = y_0$, then
$$c = \frac{1}{y_0} - \ln y_0.$$
We claim that $y$ is a convex function when $y_0 > 0$. It suffices to check that $y'' \ge 0$. Direct calculation gives
$$y''(t) = \frac{y(t)^3\left(y(t) + 2\right)}{\left(1 + y(t)\right)^3}. \qquad (28)$$
Observe that $y$ cannot change sign because $y'(t) = 0$ when $y(t) = 0$. Thus, (28) is always nonnegative when $y_0 > 0$, as was to be shown. Since the forward Euler method is a lower bound for convex functions, it follows from (26) and (27) that
$$y_k \le y\!\left(k\kappa^{-2}\right).$$
Thus if we set
$$f(k) = \sigma^2\kappa^2\, y\!\left(k\kappa^{-2}\right),$$
it follows that
$$\mathbb{E}\,\|x_k - x\|_2^2 \le F_k = \sigma^2\kappa^2\, y_k \le f(k).$$
This completes the proof of Theorem 1.1.
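As a sanity check, the closed form (27) can be verified numerically against the differential equation (a sketch; SciPy's `lambertw` is assumed for the Lambert-$W$ function):

```python
import numpy as np
from scipy.special import lambertw

def y(t, y0):
    """Solution y(t) = 1 / W(exp(t + c)) of y' = -y^2/(1+y), with y(0) = y0."""
    c = 1.0 / y0 - np.log(y0)
    return 1.0 / lambertw(np.exp(t + c)).real

# check the initial condition and the ODE via a centered finite difference
y0 = 0.5
assert abs(y(0.0, y0) - y0) < 1e-12
h = 1e-6
for t in [0.0, 1.0, 5.0]:
    dy = (y(t + h, y0) - y(t - h, y0)) / (2 * h)
    yt = y(t, y0)
    assert abs(dy + yt**2 / (1 + yt)) < 1e-6
```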