1 Introduction
Recent advancements in imaging technology have led to many new challenging mathematical problems for image reconstruction. One problem that has gained significant interest, especially in the age of big data, is that of image reconstruction when the number of unknown parameters is huge (i.e., images with very high spatial resolution) and the size of the observation dataset is massive and possibly growing [48, 47]. In streaming data problems, not only is it undesirable to wait until all data have been collected to obtain a reconstruction, but also partial reconstructions may be needed to inform the data acquisition process (e.g., for optimal experimental design or model calibration).
We consider linear inverse problems of the form
(1)   b = A x_true + e,
where x_true ∈ R^n is the desired solution, A ∈ R^{m×n} models the forward process, b ∈ R^m contains observed measurements, and e ∈ R^m represents additive noise. We investigate sampled limited memory row-action methods to approximate a solution to the corresponding massive least squares (LS) problem. For A with full column rank, the goal is to approximate the unique solution,
(2)   x_LS = arg min_x (1/2) ||A x − b||_2^2 .
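As a concrete illustration (a minimal example of our own, using numpy), the LS solution can be computed with a stable solver and checked against the normal equations A^T A x = A^T b:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 20
A = rng.standard_normal((m, n))   # full column rank with probability one
b = rng.standard_normal(m)        # generic (inconsistent) right-hand side

# Unique LS solution via a numerically stable solver
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# The LS solution is characterized by the normal equations A^T A x = A^T b
assert np.allclose(A.T @ A @ x_ls, A.T @ b)
```

Of course, for the massive problems considered in this paper, neither forming A^T A nor factorizing A is feasible; this snippet only fixes the target that the row-action methods below aim to approximate.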
The LS problem is ubiquitous and core to computational mathematics and statistics. However, for problems where both the number of unknown parameters and the size of the observational dataset are massive or dynamically growing, standard techniques based on matrix factorizations or on iterative methods that require full matrix-vector multiplications (e.g., Krylov subspace methods [46] or randomized methods [2, 26]) are not feasible. Problems of this nature are plentiful and appear with increasing frequency, for instance in classification [5], data mining [28, 49, 58], 3D molecular structure determination [9, 36], and super-resolution imaging
[15, 52]. Row-action methods such as the Kaczmarz method have emerged as an attractive alternative due to their scalability, simplicity, and fast initial convergence [12, 17, 24, 40, 41, 43, 53, 54]. Furthermore, for ill-posed inverse problems such as those arising in tomographic reconstruction, row-action methods are widely used and exhibit regularizing properties [1, 18]. In essence, row-action methods are iterative methods in which each iteration requires only a sample of rows of A and b, thus circumventing issues with memory and data access. The most widely known row-action methods are the Kaczmarz and block Kaczmarz methods [22], where only one row or one block of rows of A and b is required at each iteration. Various extensions have been proposed that include random sampling and damping parameters (e.g., [1, 19]), and many authors have studied the convergence properties of these methods [40, 41, 53, 54]. In Section 2 we provide a detailed review of previous work in this area, but first we present the problem setup.
To mathematically describe the sampling process, let {W_k} be an independent and identically distributed (i.i.d.) sequence of random matrices uniformly distributed on the set {W(1), ..., W(M)}, where the W(i) are matrices with m rows such that Σ_{i=1}^M W(i) W(i)^T = c I_m for some c > 0. Then, at the k-th iteration we denote A_k = W_k^T A and b_k = W_k^T b. In this paper, we focus on a row-action method called sampled limited memory for LS (slimLS), where given an arbitrary initial guess x_0, the k-th slimLS iterate is defined as

(3)   x_k = x_{k-1} + B_k^{-1} A_k^T (b_k − A_k x_{k-1}),
with

(4)   B_k = M_k^T M_k + α_k I,   where   M_k = [A_{k−r}^T, ..., A_k^T]^T.

Here {B_k} is a sequence of positive definite matrices, {α_k} is a sequence of positive damping parameters, and the integer r ≥ 0 is a "memory parameter", where we define A_j with nonpositive index j as an empty matrix. Hence, M_k increases in size within the first r iterations, and the global curvature of the problem is approximated using previous samples of the data.
Various choices for W_k can be used, e.g., see [14]. Notice that when the realizations of W_k are sparse matrices, only information from a few rows of A is extracted at each iteration, and A_k contains partial information on A. A straightforward choice, which we consider here, is to let each W(i) consist of selected columns of the identity matrix, so that A_k = W_k^T A contains a selected block of rows of A and the blocks together partition the rows of A. With this sampling scheme, the slimLS method is a generalization of the damped block Kaczmarz method, which is obtained when r = 0, so that M_k = A_k. The slimLS method can also be interpreted as a stochastic approximation method [51]. Using the properties of W_k described above, one can show that the LS problem in (2) is equivalent to the following stochastic optimization problem,
(5)   min_x E [ (1/2) ||W^T (A x − b)||_2^2 ],   where W has the same distribution as the W_k.
Stochastic approximation methods for (5) may have the form x_k = x_{k-1} − B_k^† ∇f_k(x_{k-1}), where ∇f_k(x) = A_k^T (A_k x − b_k) is a sampled gradient and {B_k} is some sequence of positive semidefinite matrices. For the particular choice of B_k defined in (4), we see that the slimLS method is a stochastic approximation method. Furthermore, since samples of rows of A and b are used at each iteration, randomized row-action methods can be characterized as stochastic approximation methods applied to (5).
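To make the iteration concrete, the following is a minimal dense-matrix sketch of a slimLS-style update as described above: uniform block sampling, a limited memory buffer holding the last r + 1 sampled blocks, and a damped solve for the step. All names, sizes, and parameter values here are our own illustrative choices, not the authors' implementation:

```python
import numpy as np
from collections import deque

def slimls(A, b, num_iters, r=2, alpha=1.0, block_size=10, seed=0):
    """Sketch of a sampled limited memory LS iteration: sample a row block
    (A_k, b_k), keep the last r+1 blocks in M_k, and take the damped step
    s_k = (M_k^T M_k + alpha*I)^{-1} A_k^T (b_k - A_k x)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    blocks = [np.arange(i, min(i + block_size, m)) for i in range(0, m, block_size)]
    x = np.zeros(n)
    memory = deque(maxlen=r + 1)   # limited memory: only the last r+1 sampled blocks
    for _ in range(num_iters):
        idx = blocks[rng.integers(len(blocks))]
        Ak, bk = A[idx], b[idx]
        memory.append(Ak)
        Mk = np.vstack(list(memory))
        Bk = Mk.T @ Mk + alpha * np.eye(n)   # damped sampled curvature as in (4)
        # only the current block's residual enters the right-hand side
        x = x + np.linalg.solve(Bk, Ak.T @ (bk - Ak @ x))
    return x

# For a consistent system, the iterates approach the unique solution
rng = np.random.default_rng(7)
A = rng.standard_normal((30, 5))
x_true = rng.standard_normal(5)
x = slimls(A, A @ x_true, num_iters=4000, r=2, alpha=0.1)
assert np.linalg.norm(x - x_true) < 1e-6
```

For inconsistent systems one would instead observe the bias and convergence horizon discussed in Section 3.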
Although our proposed slimLS method can be interpreted both as a row-action method and as a stochastic approximation method, the main distinction of slimLS compared to existing methods for approximating (2) is that slimLS exhibits favorable initial and asymptotic convergence properties for constant and vanishing step sizes, respectively. Kaczmarz methods have fast initial convergence, but for inconsistent systems the iterates converge asymptotically to a weighted LS solution rather than to the desired LS solution [43]. On the other hand, stochastic gradient methods (where B_k is a multiple of the identity) are guaranteed to converge asymptotically to the LS solution but can have erratic initial convergence. We show linear convergence of the expectation of slimLS iterates, and we prove linear convergence of the expected squared error norm up to a "convergence horizon" for a constant damping parameter. Furthermore, it can be shown that slimLS iterates with a decaying damping parameter converge asymptotically to the LS solution [8, 14]. The power of the slimLS method is revealed in our numerical examples, where we demonstrate its performance on massive and streaming tomographic reconstruction problems.
An outline of the paper is as follows. In Section 2, we give an overview of previous work on rowaction methods and make connections to and distinctions from existing methods. In Section 3, we provide convergence results for slimLS iterates. Numerical results are presented in Section 4, where we compare the performance of slimLS to existing methods, and conclusions are provided in Section 5.
2 Previous work on row-action methods
Different choices of B_k in (3) yield different well-known row-action methods. The most computationally simple choice is B_k = (1/η_k) I, which gives the standard stochastic gradient method,

x_k = x_{k-1} + η_k A_k^T (b_k − A_k x_{k-1}).
Under mild conditions, the stochastic gradient method converges to the LS solution [8, 14], but convergence can be very slow and depends heavily on the choice of the step size η_k [50, 57]. We remark that η_k has different interpretations depending on the scientific community: it is often referred to as the learning rate in machine learning, the step size in classical optimization, and the relaxation parameter in algebraic reconstruction techniques for tomography. Notice that for slimLS iterates, the damping parameter plays the role of the step size. Stochastic Newton and stochastic quasi-Newton methods have also been proposed to accelerate convergence [5, 10, 14, 25]. For the stochastic Newton method, we can let B_k = (1/η_k) A_k^T A_k in (3), and we get the block Kaczmarz method,
(6)   x_k = x_{k-1} + η_k A_k^† (b_k − A_k x_{k-1}),
where we have used the property that (A_k^T A_k)^† A_k^T = A_k^†. Thus, we see that the block Kaczmarz method is nothing more than a stochastic Newton method. Note that for η_k = 1 we get the standard block Kaczmarz method, for which linear convergence to within a convergence horizon has been shown in [41, 42]. For a decaying η_k, the block Kaczmarz method converges to the solution of a weighted LS problem, rather than to the desired LS solution [11].
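The following is a minimal sketch of the randomized block Kaczmarz iteration (6) with step size one, on a small consistent system; the sizes, blocks, and seed are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 8
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                              # consistent system: LS solution is x_true

x = np.zeros(n)
blocks = np.array_split(np.arange(m), 8)    # 8 blocks of 5 rows (fewer rows than n)
for _ in range(500):
    idx = blocks[rng.integers(len(blocks))]
    Ak, bk = A[idx], b[idx]
    # Block Kaczmarz step (stochastic Newton with step size one):
    # project the current iterate onto the solution set of the sampled block
    x = x + np.linalg.pinv(Ak) @ (bk - Ak @ x)

assert np.linalg.norm(x - x_true) < 1e-6
```

Each step is an orthogonal projection onto the sampled block's solution set, which is why the method converges for consistent systems but only reaches a weighted LS solution for inconsistent ones.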
For the special case where W_k is a uniform random vector drawn from the columns of the identity matrix, each iteration requires only one row of A. More precisely, let a_i^T denote the i-th row of A, and let i_k be a random variable with uniform distribution on the set {1, ..., m}; then A_k = a_{i_k}^T and b_k = b_{i_k}. In this case, the stochastic Newton iterates in (6) are identical to the randomized Kaczmarz method,

(7)   x_k = x_{k-1} + η_k ((b_{i_k} − a_{i_k}^T x_{k-1}) / ||a_{i_k}||_2^2) a_{i_k},
which has been studied extensively, see, e.g., [11, 20, 29, 31, 39, 43, 53, 55, 56, 59]. For consistent systems where A has full column rank, it was first shown in [34] that the Kaczmarz method with cyclic control (i.e., rows visited repeatedly in order) converges to the LS solution. The method has gained widespread popularity in tomography applications, where it is commonly known as the Algebraic Reconstruction Technique [11, 22, 29, 32, 39, 55]. The randomized Kaczmarz method was shown to have an expected linear convergence rate that depends on the condition number of A [53, 54]. For inconsistent systems, the Kaczmarz method does not converge to the LS solution: for a decaying step size, the iterates converge to the weighted LS solution min_x ||D(A x − b)||_2, where D is a diagonal matrix with diagonal elements 1/||a_i||_2, see [11, 14]. For a constant step size, the iterates converge linearly to the weighted LS solution to within what is known as a convergence horizon, which accounts for the variance in the iterates [40, 43]. To address problems that arise when some rows have small norm (i.e., a small denominator in (7)), in 2014 Andersen and Hansen [1] considered a variant of the Kaczmarz method that includes a damping term; they showed a connection to proximal gradient methods and provided convergence properties under cyclic control. When the blocks A_k are ill-conditioned, computing the search direction in (6) can become numerically unstable, and a similar idea can be used: introducing a damping term in the sampled Hessian leads to the damped block Kaczmarz method,
(8)   x_k = x_{k-1} + (A_k^T A_k + α_k I)^{-1} A_k^T (b_k − A_k x_{k-1}).
Notice that including the damping parameter eliminates the need for a step size parameter, although one could still be included.
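The stabilizing effect of the damping term can be seen on a toy ill-conditioned block (a hypothetical 2 × 2 example of ours): the pseudoinverse step of (6) blows up along the nearly degenerate direction, while the damped step of (8) stays bounded.

```python
import numpy as np

# An ill-conditioned block: the second direction is nearly degenerate
Ak = np.diag([1.0, 1e-8])
rk = np.array([1.0, 1.0])               # current block residual b_k - A_k x

# Undamped (pseudoinverse) search direction explodes along the tiny singular value
s_pinv = np.linalg.pinv(Ak) @ rk

# Damped search direction from (8): (A_k^T A_k + alpha I)^{-1} A_k^T r_k
alpha = 1e-4
s_damp = np.linalg.solve(Ak.T @ Ak + alpha * np.eye(2), Ak.T @ rk)

assert np.linalg.norm(s_pinv) > 1e7     # unstable, huge search direction
assert np.linalg.norm(s_damp) < 2.0     # damping keeps the step bounded
```

The damping acts exactly like Tikhonov regularization of the local subproblem: directions with singular values well above sqrt(alpha) are essentially untouched, while nearly null directions are suppressed.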
To speed up convergence, stochastic quasi-Newton methods use the current and previous samples of A to produce a matrix B_k that approximates the global curvature A^T A. For general convex optimization, stochastic quasi-Newton methods that use an L-BFGS type update have been introduced and analyzed [10, 23, 38]. These methods were developed for nonlinear problems; for linear problems, however, better approximations can be obtained by exploiting the fact that the Hessian is constant. One row-action method for linear problems is the randomized recursive LS method, where the k-th iterate, which is given by
(9)   x_k = x_{k-1} + Γ_k^† A_k^T (b_k − A_k x_{k-1}),   with   Γ_k = Σ_{j=1}^k A_j^T A_j   and   x_0 = 0,

is the minimum norm solution of

(10)   min_x ||C_k x − d_k||_2,   where   C_k = [A_1^T, ..., A_k^T]^T   and   d_k = [b_1^T, ..., b_k^T]^T.
This equivalence is shown in Appendix A and implies that after sampling all blocks exactly once, we obtain the LS solution x_LS. The disadvantage is that the randomized recursive LS algorithm is not computationally feasible for very large problems because of the large linear solve required at each iteration and the cost of storing Γ_k, see [13, 35, 52]. Notice that with full memory (r = k − 1) and α_k = 0 in (3), we recover the randomized recursive LS algorithm. Thus, the slimLS iterates (3) can be interpreted as a limited memory variant of the recursive LS algorithm. On the other hand, for r = 0 the slimLS method can be interpreted as a generalization of the damped block Kaczmarz method.
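The one-epoch property is easy to verify numerically; the sketch below (our own construction) runs the recursive LS recursion (9) over a random partition of the rows and recovers the LS solution after a single pass:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)              # inconsistent right-hand side
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

x = np.zeros(n)
Gamma = np.zeros((n, n))                # running Gram matrix sum_j A_j^T A_j
for idx in np.array_split(rng.permutation(m), 5):   # each block sampled once
    Ak, bk = A[idx], b[idx]
    Gamma += Ak.T @ Ak
    # Recursive LS step (9); pinv handles the early, possibly singular Gamma
    x = x + np.linalg.pinv(Gamma) @ Ak.T @ (bk - Ak @ x)

# After one pass over all blocks, the recursive LS iterate is the LS solution
assert np.allclose(x, x_ls)
```

Note that the n × n solve and the storage of Gamma are exactly the costs that make this approach infeasible when n is in the hundreds of millions, which is what motivates the limited memory variant.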
It should be noted that other methods exist for solving very large LS problems, but many have limitations that prohibit their use for massive or streaming data problems. For example, for problems where n is small enough to allow storage of an n × n matrix, the Sherman-Morrison-Woodbury identity can be used to obtain the exact solution [16]; in our problems, however, both m and n are on the order of hundreds of millions. Randomized methods such as Blendenpik [2] and LSRN [37] are effective for strongly overdetermined or strongly underdetermined cases (assuming the matrix has a large gap in the singular values). Nevertheless, these methods require full matrix-vector multiplications and do not work for streaming problems.
Next, for a specific choice of the sampling matrices, we make a connection to the sampled limited memory Tikhonov (slimTik) method described in [52] for approximating a Tikhonov regularized solution,

(11)   x_λ = arg min_x ||A x − b||_2^2 + λ ||L x||_2^2,

where λ > 0 is the regularization parameter and the regularization matrix L is invertible^1 (^1 This assumption is not required [30] but is used here for notational simplicity.). In particular, let W_k be defined as in Section 1, and consider augmented sampling matrices that append the regularization rows to each sampled block; these augmented matrices retain the property that they sum to a multiple of the identity. Then, slimLS applied to (11) gives iterates,
(12) 
which are equivalent to slimTik iterates with a fixed regularization parameter. The significance of this equivalence is that the analysis and results we present in the next section can be extended to the Tikhonov problem. It should be noted that a good choice of λ may not be known in advance and must be estimated. Methods to update the Tikhonov regularization parameter within the slimTik method have been studied in [52], but a theoretical analysis for such cases remains a topic of ongoing research.

3 Convergence properties of slimLS
In this section we analyze the convergence properties of the slimLS method. In particular, we will show that it exhibits favorable initial convergence properties without the memory burden of having to save all previous samples (e.g., as in randomized recursive LS [4, 52]). This is possible because slimLS iterates can utilize previous samples to better approximate the Hessian A^T A. We show that for a fixed damping parameter α, a fixed memory level r, and A with full column rank, slimLS iterates exhibit linear convergence of the expectation of the iterates and linear convergence of the error up to a convergence horizon. This type of analysis is essential for understanding stochastic approximation methods [24, 35, 40, 41, 53], and it may reveal potential trade-offs between solution accuracy and speed of convergence based on the damping parameter. For example, such results have been proved for the Kaczmarz method (for vanishing step size) and for the block Kaczmarz method (for step size one) [24, 40, 41, 53], but to the best of our knowledge results have not been proved for the damped block Kaczmarz method. For clarity of presentation, all proofs have been relegated to Appendix B.
The following definitions will be used throughout the paper. We will use the functions λ_min(·) and λ_max(·), which return the smallest and largest eigenvalues of a symmetric matrix, and write

σ_min^2 = min λ_min(A_W^T A_W)   and   σ_max^2 = max λ_max(A_W^T A_W),

where the minimum is across all of the different realizations A_W = W^T A that lead to a positive minimum eigenvalue, while the maximum is across all of the realizations. For a fixed α > 0, we define the matrices

B_W = M_W^T M_W + α I   and   P_α = E [ B_W^{-1} A_W^T A_W ],

where M_W denotes the memory matrix of stacked sampled blocks associated with a realization W of the sampling process. For simplicity we will often write P instead of P_α. One can show that P is invertible when A has full column rank (see Appendix B.1), in which case we define

(13)   x̂_α = P_α^{-1} E [ B_W^{-1} A_W^T b_W ],

the fixed point of the expected slimLS iteration.
Note that all of the expectations in this paper are understood to be with respect to the joint distribution of the sequence {W_k}, conditional on the noise.

Theorem 3.1.
Let A have full column rank. For an arbitrary initial vector x_0 and damping parameter α > 0, let x̂_α be defined as in (13). Then:

(i) E x_k → x̂_α; more precisely,

||E x_k − x̂_α||_2 ≤ c_α^k ||x_0 − x̂_α||_2,

where c_α < 1.

(ii) The error around x̂_α can be bounded by

(14)   E ||x_k − x̂_α||_2^2 ≤ c_α^k ||x_0 − x̂_α||_2^2 + ((1 − c_α^k)/(1 − c_α)) h_α,

where h_α ≥ 0 is a constant, depending on α, the memory level r, and the variance of the sampled updates, that determines the convergence horizon.
The first part of the theorem shows that x_k is an asymptotically unbiased estimator of x̂_α, with a linear convergence rate. The second part shows linear convergence of the error up to a convergence horizon. As α → ∞, the constant in the first term of (14) approaches one, indicating a slowing linear convergence rate, while the second term in (14) goes to zero, i.e., the convergence horizon gets smaller. This is because the update steps shrink to zero as α → ∞. Having shown that x_k converges in expectation to x̂_α as k → ∞, the next question is how much x̂_α differs from the LS solution x_LS. To answer this question, we let P_A and P_A^⊥ be, respectively, the orthogonal projections onto the column space of A and its orthogonal complement. We then have the following equivalent definition of x̂_α:
(15)  
In particular, x̂_α = x_LS when b belongs to the column space of A. The following result provides a bound for ||x̂_α − x_LS||_2.
Lemma 3.2.
If A has full column rank, then

||x̂_α − x_LS||_2 ≤ C_α ||P_A^⊥ b||_2,

where C_α > 0 is a constant that depends on α and the sampling distribution.
It is important to notice the relationship between the damping parameter α and the upper bound in Lemma 3.2. The bound is smaller for smaller values of α, which makes sense in light of the asymptotic property that x_k → x_LS a.s. for a decaying damping parameter (see [14]). However, there is a trade-off between the convergence rate and the precision of the iterates (i.e., the bias) that depends on α: as α → 0, we get more accurate approximations, i.e., x̂_α → x_LS. On the other hand, for smaller α the convergence is faster at the cost of a larger convergence horizon.
In summary, we have shown that with a fixed damping parameter, the slimLS iterates converge linearly in expected squared error to within a convergence horizon around x̂_α, and the expected value of the iterates converges to x̂_α. A pictorial illustration of this convergence behavior is provided in Figure 1.
This trade-off between the quick initial convergence obtained with a constant damping parameter and the resulting loss of solution accuracy has been observed for related stochastic optimization methods in the literature, see, e.g., [1, 3]. It has also been observed that more accurate solutions can be obtained using a decaying damping parameter, but then convergence can be quite slow (sublinear) [6, 44]. Thus, it is often practical to use a constant damping parameter to obtain quick initial convergence and then switch to a decaying parameter to obtain higher accuracy.
4 Numerical results
In this section we first illustrate the convergence behavior of our proposed slimLS method in a small simulation study. The goal of the first experiment is to illustrate how different memory levels and damping parameters affect the convergence of slimLS. We also compare slimLS to existing rowaction methods and provide a numerical investigation into the sensitivity towards the damping parameter/step size. Then we discuss some of the computational considerations when solving massive problems and investigate the performance of our method on very large tomographic reconstruction problems.
4.1 An Illustrative Example
In the first numerical experiment, we use a smaller test problem to illustrate some of the key features of the slimLS method. We let A have random entries drawn from a standard normal distribution, select a true solution x_true, and generate the simulated observations as in (1), where e is white noise scaled to a prescribed noise level ||e||_2 / ||A x_true||_2. The matrix A is sampled in blocks of contiguous rows, which corresponds to sampling matrices whose columns are the corresponding columns of the identity matrix.

First we illustrate the convergence behavior investigated in Theorem 3.1 for different constant damping parameters α. In Figure 2 we provide relative reconstruction errors, computed as ||x_k − x_true||_2 / ||x_true||_2, for a range of damping factors on a log-log scale. We repeatedly run slimLS with random sampling for several epochs and with a fixed memory level. We observe that smaller values of α lead to favorable initial convergence, but the errors then stabilize at a larger level; larger values of α lead to slower initial convergence, but to a smaller relative error. This illustrates the trade-off between fast initial convergence and solution accuracy discussed in Section 3.
Furthermore, for various α we provide the relative difference between x̂_α and x_LS in Figure 3. We notice that for small α the relative difference is within machine precision, while it slowly increases with α.
Next, we investigate how the initial convergence is affected by the choice of the memory parameter r. Here, we fix the damping parameter α and consider several memory levels. We run the slimLS method and report the median relative reconstruction errors over repeated runs in Figure 4. The errors are computed with respect to the LS solution, i.e., as ||x_k − x_LS||_2 / ||x_LS||_2. We empirically observe that higher memory levels lead to faster initial convergence.
We also compare slimLS to other row-action methods, including the sampled (or batch) stochastic gradient (sg) method and the online limited memory BFGS (olbfgs) method [38] (which stores previously computed sampled gradient vectors for its memory). In particular, we are interested in the sensitivity of the algorithms with respect to the step size. For a range of constant step sizes/damping factors, we computed the reconstruction error norm after one epoch relative to ||x_true||_2. Although reconstruction errors could instead be computed relative to the LS solution, we note that the relative difference between the two reference solutions is negligible (see Figure 3). In Figure 5 we provide the median relative reconstruction error norms, along with percentile bands, over repeated runs of the experiment; the plot is on a log-log scale. We observe that the sg method has only a tiny "window" of step sizes for which the relative reconstruction errors are small. The window for olbfgs is larger and is centered around step size one, which is expected, see [7]. However, compared to sg and olbfgs, the slimLS method provides good reconstructions for a much wider range of damping factors, which is a very attractive property of the slimLS method.
4.2 Computational considerations
Recall that the slimLS method can be interpreted as a row-action method, which by construction alleviates many computational bottlenecks (i.e., data access and memory requirements per iteration are significantly reduced). However, for many realistic problems where the number of unknowns is on the order of millions or billions (e.g., in tomography), the computational cost of each iteration can still be large. We remark that a noteworthy distinction of our numerical investigations compared to previously published work on row-action methods, such as [1, 41, 53], is that we consider very large imaging problems with hundreds of millions of unknowns and focus on the initial convergence (one epoch) rather than the "asymptotic" convergence (hundreds of epochs) behavior. Next we address some considerations with respect to computational cost and comparisons with other methods.
The slimLS iterates can be written as x_k = x_{k-1} + s_k, where

s_k = (M_k^T M_k + α_k I)^{-1} A_k^T (b_k − A_k x_{k-1}).

Thus, each iteration consists of two main steps:

1. accessing the model block A_k and the corresponding data b_k, and

2. computing the update step s_k.
The computational cost of the first step is often overlooked, but since data are usually stored on a hard drive, data access can be time-consuming. Furthermore, depending on the application, constructing the corresponding matrix block can also be computationally expensive. In data-streaming problems, one may not have control over when and which blocks of data become available at any given time.
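One common way to mitigate the data-access cost when the data reside in a single large file is to memory-map the array so that only the rows of the current block are read from disk. The following sketch (file layout and names are our own) illustrates this with numpy:

```python
import numpy as np
import tempfile, os

m, n = 1000, 50
rng = np.random.default_rng(4)
path = os.path.join(tempfile.mkdtemp(), "A.dat")

# Write the model matrix to disk once (a stand-in for data that never fits in RAM)
A = rng.standard_normal((m, n)).astype(np.float64)
A.tofile(path)

# Later, access one block of rows at a time without loading all of A
A_mapped = np.memmap(path, dtype=np.float64, mode="r", shape=(m, n))
idx = np.arange(100, 150)               # rows of the k-th sampled block
Ak = np.asarray(A_mapped[idx])          # only this block is read into memory

assert np.allclose(Ak, A[idx])
```

In true streaming scenarios even this is unavailable, since the blocks arrive in an order dictated by the acquisition process rather than by the solver.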
For the second step, solving for s_k can be done efficiently by first noticing that s_k is the solution of the damped LS problem,

(16)   min_s ||M_k s − c_k||_2^2 + α_k ||s||_2^2,

where c_k is zero except for its last block, which equals the current residual b_k − A_k x_{k-1}. Hence, any efficient LS solver that exploits the structure in (16) can be used to compute s_k. Here, we utilize LSQR for damped LS, see [45], for which a very efficient implementation is available. It is worth mentioning that another LS reformulation can be made to solve for x_k directly, where the right-hand side becomes dense and depends on x_{k-1} and the stored blocks.
Next, we remark further on the choice of the step size. For the sg method, as illustrated in Figure 5, an acceptable constant step size is hard to come by and must typically be chosen quite small. For olbfgs we choose the "natural" constant step size of one, with a possible exception in the early iterations. Since olbfgs is equivalent to sg in the first iteration, olbfgs may suffer from large step sizes while building up its memory. To compensate for this, we can ramp up the value of the step size over the first few iterations, until the memory is full.
In many structured problems, and in particular for tomography problems, where each block matrix corresponds to a single projection image (i.e., one angle), the block A_k is extremely sparse, with only a small fraction of nonzero elements. Note that in some cases the matrix block does not even need to be constructed: function evaluations can be used within iterative methods [21]. Hence, for extremely large-scale problems where memory storage becomes an issue, slimLS can take advantage of any sparsity or structure. In contrast, the olbfgs method requires storing two vectors of length n for each memory level, and these vectors are likely dense, so the storage becomes cumbersome for large n.
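To illustrate the storage savings, the following sketch builds a single sparse sampled block with scipy.sparse (the density is an arbitrary illustrative choice, not a value from the paper) and compares its memory footprint to dense storage:

```python
import numpy as np
from scipy import sparse

n = 100_000                              # number of unknowns (voxels)
rows = 200                               # rows (rays) in one sampled projection block

# A sparse block A_k: each row (one ray) intersects only a few of the n voxels
Ak = sparse.random(rows, n, density=1e-4, format="csr", random_state=0)

dense_bytes = rows * n * 8               # storage if the block were dense float64
sparse_bytes = Ak.data.nbytes + Ak.indices.nbytes + Ak.indptr.nbytes

assert sparse_bytes < dense_bytes / 100  # orders of magnitude less storage
```

Since the slimLS step (16) only requires products with the stored blocks, the CSR format above is directly usable inside an LSQR-based solver.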
4.3 Large-scale Tomographic Reconstruction
Next we present two numerical experiments that demonstrate the performance of slimLS for solving massive tomographic reconstruction problems. Tomography is important in many applications, including medical imaging, seismic imaging, and atmospheric imaging [31, 39]. The goal in computerized tomography is to reconstruct the interior of an object from observed, exterior measurements. However, recent advances in detector technology have led to faster scan speeds and the collection of massive amounts of data. Furthermore, in dynamic or streaming data scenarios (e.g., in micro-CT reconstruction), partial reconstructions are needed to inform the data acquisition process. The slimLS method can address both of these challenges. The first example we consider is a very large, limited-angle 2D tomography reconstruction problem that is underdetermined, and the second example is a 3D streaming reconstruction problem that is ultimately overdetermined.
For ill-posed problems such as tomography, semiconvergence of iterative methods is a concern, whereby early iterates converge quickly to a good approximation of the solution but later iterates become contaminated with errors. Iterative regularization techniques (i.e., early termination of the iterative process) are often used to obtain a reasonable solution [30]. For row-action methods applied to tomography problems, semiconvergence properties have been investigated [18], but due to notoriously slow convergence (after the fast initial phase), the ill effects of semiconvergence tend to appear only after multiple epochs of the data. Thus, we do not include additional regularization in the following results.
Two-Dimensional Limited-Angle Tomography.
First we consider a parallel-beam x-ray tomography example, where the true image represents a cross-section of a walnut. In [27], the authors provide an image reconstruction computed from projection data using filtered back projection. Our "ground truth" image, provided in Figure 6(a), is a cleaned version of this filtered back projection reconstruction. We simulate observations by taking projections at equally spaced angles over a limited angular range, with a fixed number of rays per angle. This can be interpreted as a missing data problem with a missing wedge of angles. The observed sinogram, provided in Figure 6(b), was generated as in (1) with additive white noise.
Here the sampling is random cyclic, with W_k chosen from blocks of fixed size. We apply the slimLS method with a fixed memory level and a ramped-up damping parameter. Relative reconstruction errors, computed as ||x_k − x_true||_2 / ||x_true||_2, are provided in Figure 7. We provide comparisons to the sg method and to the olbfgs method with a ramped-up step size. We found that both of these methods required a small initial step size to prevent the reconstruction errors from becoming very large.
Image reconstructions and subimages, along with the true image, are provided in Figure 8. We observe that after one epoch of the data, the slimLS reconstruction contains sharper details and fewer artifacts than the sg and olbfgs reconstructions. As described in Section 4.2, it is difficult to provide a fair comparison of methods, especially in terms of the memory level and the step length. Careful tuning of the step length for sg and olbfgs can lead to reconstructions that are similar in quality to the slimLS reconstruction, but such tuning is very time-consuming, especially for these massive problems. Also, as observed in Section 4.1, there may be only a small window of good step size values. If good parameters are known in advance, olbfgs can produce good solutions, but if they are not known a priori, we often observed poor results, or even divergence of the relative reconstruction errors.
Three-Dimensional Streaming Tomography.
Next we demonstrate the power of slimLS for three-dimensional tomographic reconstruction. For very large problems where data access is a computational bottleneck, and for problems where data are streamed and partial reconstructions are needed, row-action methods are the only feasible option. Here we show that sampled limited memory methods can be a good alternative to existing row-action methods.
In this problem setup, the true image is a modified 3D Shepp-Logan phantom. We generate projection images taken from random directions, which are sampled from the uniform distribution over the unit sphere. The ray-tracing matrix A is certainly never constructed explicitly; instead, matrix blocks are generated using a modification of the tomobox code [33], which implements parallel-beam tomography. White noise was added to the projection images. Orthogonal slices of the true 3D phantom, along with four of the observed projection images, are provided in Figure 9.
We run the slimLS method with a fixed memory level and damping factor. After one epoch of the data (i.e., the cost of accessing all data once), we compare the slimLS reconstruction to the stochastic gradient and online L-BFGS reconstructions. Relative reconstruction error norms per iteration are provided in Figure 10. Notice that even in early iterations, where only a small fraction of the data has been accessed, the slimLS reconstructions have smaller relative reconstruction errors than those of sg and olbfgs.
We computed absolute error images, which correspond to the absolute values of the difference between the reconstruction and the true image, and we provide three orthogonal slices for sg, olbfgs, and slimLS in Figure 11. The absolute errors are shown in an inverted colormap, so that black corresponds to large errors; all images use the same colormap. We observe that the absolute error images for sg are significantly worse than those for olbfgs and slimLS, with the error images for slimLS containing the fewest artifacts.
5 Conclusions
In this paper, we investigate sampled limited memory methods for solving massive inverse problems, such as those that arise in modern 2D and 3D tomography applications. Limited memory row-action methods are relevant in scenarios where full matrix-vector multiplications with the coefficient matrix A are not possible or are too computationally expensive. This includes problems where A is so large that it does not fit in computer memory and problems where the data are being streamed. The main theoretical contribution is that, contrary to existing row-action and stochastic approximation methods, the slimLS method has both favorable initial and asymptotic convergence properties. Additional benefits of slimLS include faster initial convergence due to the use of information from previous iterates and convergence for a wider range of step length or damping parameters. We provide theoretical convergence results, and numerical examples from massive tomographic reconstruction problems show the potential impact of these methods.
Acknowledgements
We gratefully acknowledge support by the National Science Foundation under grants NSF DMS 1723048 (L. Tenorio), NSF DMS 1723005 (M. Chung, J. Chung) and NSF DMS 1654175 (J. Chung). J. Chung and M. Chung would also like to acknowledge the Alexander von Humboldt Foundation for their generous support.
Appendix A Randomized recursive LS method
In this section, we show that the k-th iterate of the randomized recursive LS method defined in (9) is the minimum norm solution of the LS problem in (10). We prove this by induction. For k = 1 and with x_0 = 0, (9) yields x_1 = Γ_1^† A_1^T b_1 = A_1^† b_1, which is the minimum norm solution for k = 1. Now let us assume that
(17)   x_{k-1} = C_{k-1}^† d_{k-1},   where   C_j = [A_1^T, ..., A_j^T]^T   and   d_j = [b_1^T, ..., b_j^T]^T;
then from (9), we get
(18)   x_k = (I − Γ_k^† A_k^T A_k) x_{k-1} + Γ_k^† A_k^T b_k = Γ_k^† (C_{k-1}^T d_{k-1} + A_k^T b_k) = Γ_k^† C_k^T d_k,
where we use the facts that Γ_k = Γ_{k-1} + A_k^T A_k, that Γ_k^† Γ_k x_{k-1} = x_{k-1} since x_{k-1} lies in range(Γ_{k-1}) ⊆ range(Γ_k), and that Γ_{k-1} Γ_{k-1}^† C_{k-1}^T = C_{k-1}^T. A similar argument, using Γ_k^† C_k^T = C_k^†, shows that we arrive at x_k = C_k^† d_k, the minimum norm solution of (10).
Appendix B Proofs for Section 3
Lemma B.1.
If A has full column rank, then for any fixed α > 0:

with upper bound
where is the number of eigenvalues equal to zero over the different ,

is symmetric positive definite,

, and

.
Proof.
(i) For simplicity we use the notation . By Jensen’s inequality
The last inequality becomes an equality only if a.s., in which case across all realizations of . That is, for all
. Furthermore, an eigenvector
for the largest eigenvalue of the expected matrix would also be an eigenvector, with eigenvalue one, of every realization. This is not possible because the sampling matrices satisfy Σ_i W(i) W(i)^T = c I and A has full column rank. Hence, the strict inequality holds, and the upper bound in (i) follows.

(ii) It is clear that the matrix is symmetric positive semidefinite; it is positive definite by (i).
(iii) The upper bound follows from the identity .
(iv) From (iii),
The lower bound follows from (i):
∎
Proof of Theorem 3.1.
We use F_k to denote the natural filtration induced by the sequence {W_j}: F_k = σ(W_1, ..., W_k).
(i) Using the recursion for , we obtain
and therefore
where the last equality comes from the fact that are i.i.d. Using Lemma B.1(i) we get
Thus, E x_k → x̂_α linearly.
(ii) Next we show that the expected squared error converges linearly up to a convergence horizon. The recursion for the error leads to
We find an upper bound for the last term using Lemma B.1(i):
and by Lemma B.1(iii)
The last two bounds yield