Convergence acceleration of alternating series

A new, simple convergence acceleration method is proposed for a wide class of convergent alternating series. The method shares some features with Smith and Ford's modification of Levin's and Weniger's sequence transformations, but it has lower computational and memory costs. The similarities and differences between all three methods are analyzed and some common theoretical results are given. Numerical examples confirm that all three methods perform similarly.




1 Introduction

This paper concerns the convergence acceleration of a wide class of convergent alternating series. More precisely, a new convergence acceleration method is given and some of its theoretical properties are proved; analogous properties are proved for Smith and Ford's [14] modification of Levin's and Weniger's -transformations (see also [15, Eq. (7.3-9)] or [3, § 2.7]), and the similarities, as well as the differences, between all three methods are analyzed.

It is convenient to write the alternating series in the form


where are all positive or negative. In this paper we use the notation

for the partial sums of the series (1), whose limit, in case of convergence, we denote by .

The mentioned class of alternating series concerns convergent series (1) such that has an asymptotic expansion (as ) of the form


provided that , and .

It should be remarked that the series (1) with satisfying relation (2) is convergent only if we make an additional assumption on the numbers and . Otherwise, we may be dealing with divergent series, whose summation may also be useful. A detailed analysis of the convergence (and its acceleration) of the considered class of series (and of more general ones) can be found in Sidi's book [12, §8, §9]. Namely, the class of sequences with satisfying (2) is a subset of the more general class , given in [12, §6.6].

In the sequel the quantities play a very important role. One can verify that the asymptotic expansion in (2) implies


see, e.g., [12, Thm. 6.6.4, p. 142].

From (3) we conclude that and thus


for any .

In the simplest case of eq. (2) is a natural number and . This happens if ( being polynomials in ). Moreover, the coefficients of the polynomials can depend on , as in the following example:

A much wider class of series with is related to the hypergeometric functions. Indeed, condition (2) holds if the series (1) is identical, up to a constant factor, with the function

whose parameters , and guarantee its alternation; the notation denotes the Pochhammer symbol defined by , , . The relation (3) is then evidently satisfied.

Moreover, condition (2) also holds if the terms involve roots in , such as . Further examples with (and ) include:


Let us note that such and similar terms can be decomposed into a sum of several terms , for which the related quantities satisfy equation (3) with . Indeed, one can decompose the expression in (5) as follows:

and thus the series can be transformed into the sum where both quantities , , satisfy the relation (3) with . However, since all the summation methods considered here can be applied to the series (1) satisfying (3) with any natural number , it is hard to say whether applying these methods to each series separately actually gives better results. One can check that this is not true in the case of (6); see Example 6.

The remainder of this paper is organized as follows. Section 2 deals with certain classic convergence acceleration methods, such as Aitken's method and both the Levin and Weniger transformations; see [2], [5] and [16]. We consider there a certain choice of the remainder estimates, proposed by Smith and Ford in [14] (see also [3, § 2.7]), in the case of Levin's and Weniger's methods, which we denote by the symbols and , respectively.

It should be remarked that the transformation of Levin and Sidi [6] (with ) should also be an effective accelerator for the considered series (see, e.g., [12, §6]), as well as a more general transformation developed by Sidi (see the book [12, pp. 147–148] and the recent report [13]).

A new method of convergence acceleration (denoted here by the symbol ) is presented in Section 3, which is followed in Section 4 by a discussion of common theoretical properties, including a convergence acceleration theorem for all three methods , and . In Section 5 we give some examples examining the efficiency of the new method compared to the methods and . All the examples except the last two consider the convergent series (1) with satisfying the relation (2) with . One can check that the transformation of Levin and Sidi, in the case of these examples, is equivalent to the method , provided the choice of parameters , which is quite reasonable for all of the examples considered in this paper. The last two examples are the case with , and thus, besides comparing the efficiency of the methods , and , we also present the results obtained by the transformation of Sidi (with ).

Finally, in Section 6 we discuss further properties of the method , such as its application to the summation of divergent alternating series. Some remarks on the efficient implementation of the method  are given there, too.

2 Levin and Weniger transformations

The well-known Aitken method transforms a given sequence into a new sequence , defined by the formula


If the elements of the sequence to be transformed are partial sums of alternating series (1), then


Thus, the new sequence element is a weighted average of the elements and , and these weights are positive. Therefore, the numerical realization of Aitken's transformation has good stability properties.

It is important to note that the transformation (7) can be easily iterated. Namely, one can use the sequence as the sequence to be transformed, and obtain a new sequence , and so on; see, e.g., [15, Eq. (5.1-15)]. However, if the elements are the partial sums of series (1), the process of iterating the transformation (8) is more subtle. Indeed, in order to transform the sequence , one should replace in (8) with the terms of the series . Computing these terms is not advisable, since one may face a loss of significance caused by the cancellation of terms. None of the methods studied in this paper has this disadvantage, although they are all, in a sense, derived from Aitken's transformation.
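As a point of reference, the classical form (7) of Aitken's transformation and its naive iteration can be sketched in a few lines of Python; the function names and the test series (the alternating series for ln 2) are our own illustrative choices, not part of the paper:

```python
import math

def aitken(s):
    """One pass of Aitken's transformation over a list of partial sums s_0, s_1, ..."""
    t = []
    for n in range(len(s) - 2):
        d2 = s[n + 2] - 2 * s[n + 1] + s[n]  # second difference
        # Guard against a vanishing second difference (transformation undefined).
        t.append(s[n + 2] - (s[n + 2] - s[n + 1]) ** 2 / d2 if d2 != 0 else s[n + 2])
    return t

def iterated_aitken(s, rounds):
    """Feed each transformed sequence back into the transformation."""
    for _ in range(rounds):
        if len(s) < 3:
            break
        s = aitken(s)
    return s

# Partial sums of sum_{n>=0} (-1)^n / (n+1) = ln 2.
terms = [(-1) ** n / (n + 1) for n in range(12)]
sums = [sum(terms[: n + 1]) for n in range(len(terms))]
best = iterated_aitken(sums, 4)[-1]   # far more accurate than sums[-1]
```

With only 12 terms, the raw partial sum is accurate to about one digit, while the iterated transformation already gives several more; this reflects the stability remark above, since for alternating series each output is a weighted average with positive weights.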

The idea of the Levin transformation [5] of the series is based on the assumption that the remainders of the partial sums have the following Poincaré-type asymptotic expansion:


where the shift parameter and the remainder estimates should be chosen suitably for the considered class of series. Using the same notation as in [15], the Levin transformation can be expressed as follows:

The choice of the remainder estimates has been widely discussed in the literature; see, e.g., [4], [15, § 7.3], [3, § 2.7] or [7, § 5.3]. However, the parameter is usually chosen to be . In a recent paper, Abdalkhani and Levin [1] discussed the optimal value of this parameter for a certain variant of the Levin transformation.

In the case of the considered alternating series , the remainder has the following asymptotic expansion:


see, e.g., [12, Thm. 6.6.6, pp. 145–147]. Thus it is advisable to use , i.e., Smith and Ford's [14] modification of Levin's -transformation; see also [3, § 2.7]. In the sequel we denote this method by the symbol .

Any variant of Levin’s method transforms the sequence into a doubly indexed sequence . By definition, the element is an approximation of the limit resulting from the system of the equations for , where only the terms with are retained. Hence, the element depends on all the values with . For instance, in the case of method , the element satisfies the following system of two equations:

with unknown and auxiliary coefficient . One can easily check that is exactly the value of given by Aitken’s transformation (8).
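For illustration, the solution of such a system can also be written in the classical single-ratio form of the Levin transformation (a finite sum with binomial weights). The Python sketch below uses the Smith–Ford remainder estimates (here `w[j] = a[j+1]`, i.e., the forward difference of the partial sums) and the shift parameter beta = 1; the function name `levin_d` and these concrete choices are ours, not the paper's:

```python
from math import comb, log

def levin_d(a, beta=1.0):
    """Highest-order Levin approximation built from the terms a_0, ..., a_{k+1},
    with Smith-Ford remainder estimates w_n = a_{n+1}."""
    s, acc = [], 0.0
    for t in a:
        acc += t
        s.append(acc)                      # partial sums s_0, s_1, ...
    k = len(a) - 2                         # largest order the data allows
    num = den = 0.0
    for j in range(k + 1):
        # Binomial weights with damping powers ((beta+j)/(beta+k))^(k-1).
        c = (-1) ** j * comb(k, j) * ((beta + j) / (beta + k)) ** (k - 1)
        w = a[j + 1]                       # Smith-Ford estimate w_j = a_{j+1}
        num += c * s[j] / w
        den += c / w
    return num / den

# Smoke test on the alternating series for ln 2.
approx = levin_d([(-1) ** n / (n + 1) for n in range(12)])
```

For this series, 12 terms already yield an approximation far beyond the accuracy of the partial sums themselves.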

The Weniger transformation is based upon an assumption similar to (9), and is given by


The only difference is that the powers are replaced by the Pochhammer symbols ; see [15, § 8.2]. Let us note that transformation (11) was invented independently by Weniger and Sidi [11] and later used by Shelef [8] for the numerical inversion of Laplace transforms. As in the method of Levin, we choose the remainder estimates , and denote this method by the symbol .
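Analogously, a sketch of the Weniger-type transformation obtained by replacing the powers with Pochhammer symbols (again with estimates w_n = a_{n+1} and beta = 1; the function names are our illustrative choices):

```python
from math import comb, log

def poch(x, m):
    """Pochhammer symbol (x)_m = x (x+1) ... (x+m-1), with (x)_0 = 1."""
    p = 1.0
    for i in range(m):
        p *= x + i
    return p

def weniger_delta(a, beta=1.0):
    """Same skeleton as the Levin sketch, but the damping factors are ratios of
    Pochhammer symbols instead of powers."""
    s, acc = [], 0.0
    for t in a:
        acc += t
        s.append(acc)                      # partial sums
    k = len(a) - 2
    num = den = 0.0
    for j in range(k + 1):
        c = (-1) ** j * comb(k, j) * poch(beta + j, k - 1) / poch(beta + k, k - 1)
        w = a[j + 1]                       # Smith-Ford estimate w_j = a_{j+1}
        num += c * s[j] / w
        den += c / w
    return num / den

approx_w = weniger_delta([(-1) ** n / (n + 1) for n in range(12)])
```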

One can check that both methods and produce doubly indexed arrays of elements , for which , i.e., both transformations are equivalent to Aitken's transformation. Further, both methods give the same values of , which are usually different from the ones obtained by the iterated Aitken process.
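The equivalence with Aitken's transformation in the first column is easy to verify numerically: for the lowest order, the damping factors in both transformations collapse to 1, and a short computation shows that the resulting ratio equals Aitken's formula (8). A minimal check, with function names and test series of our own choosing:

```python
def aitken_elem(s, n):
    """Aitken's transformation applied at index n of the partial sums s."""
    d2 = s[n + 2] - 2 * s[n + 1] + s[n]
    return s[n + 2] - (s[n + 2] - s[n + 1]) ** 2 / d2

def levin_type_k1(s, n):
    """Lowest-order element of a Levin/Weniger-type transformation with
    estimates w_j = Delta s_j; all damping factors equal 1 in this case."""
    w0, w1 = s[n + 1] - s[n], s[n + 2] - s[n + 1]
    return (s[n] / w0 - s[n + 1] / w1) / (1.0 / w0 - 1.0 / w1)

terms = [(-1) ** j / (j + 1) for j in range(8)]
sums = [sum(terms[: j + 1]) for j in range(8)]
gap = max(abs(aitken_elem(sums, n) - levin_type_k1(sums, n)) for n in range(5))
```

The two expressions agree up to rounding error, as a little algebra (clearing the denominators w0, w1) confirms.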

The parameter is usually chosen to be for both methods and . We consider the same value for all presented numerical examples.

There are well-known recurrence formulas allowing for the efficient realization of the Levin and Weniger transformations; see, e.g., [15, § 7.2, § 8.3]. Both formulas are quite similar and use certain -term recurrence relations (see [15, Eqs. (7.3-2)–(7.2-6) and (8.3-1)–(8.3-5)]) satisfied by the following numerators and denominators :


Their simplest variants may suffer from overflow, which very often appears during the recursive computation of the numerators and denominators . Hence, it is advisable to use the so-called scaled versions of these recurrence formulas; see [15, Eqs. (7.2-8), (8.3-7)]. For the case of being a sequence of partial sums of the alternating series (1), let us write these -term recurrence relations, for both methods and , in the following way:


Since the initial conditions (13) are the same for both methods (a formulation not common in the literature), the only difference comes from the choice of the function in the -term recurrence relation (15), satisfied by the numerators  and denominators .

For the convenience of later analysis and comparison with the new method , let us observe that the quantities , defined by (12), satisfy the recurrence relationship




It is quite remarkable that the above formulas permit computing the array without actually using the array of the numerators . Such a realization of the Levin and Weniger transformations has, to our knowledge, not yet been considered in the literature.

The following lemma establishes an asymptotic property of the last fraction on the right-hand side of equation (17), which we will use later in the comparison of all three methods , , and .

Lemma 1.

The quantities , defined by eqs. (13)–(15), satisfy the relation


Using induction on  and the relation (3), one can check that the quantities have the following formal power series expansion in variable :

From this, we conclude the result. ∎

It should also be remarked that, in the case of alternating series satisfying relation (3), the functions , defined by eq. (17), satisfy the relationship:


It follows from eq. (4) and Lemma 1.

3 Method

The starting point for the derivation of the aforementioned method  is Aitken's sequence transformation, given by formula (8). However, the main idea is based upon the relationship involving the dependence of the differences on the terms , which allows us to use, and also to iterate, formulas similar to (8). For instance, the simplest variant (which we denote by the symbol ) produces a doubly indexed array of approximations of the limit  of the series (1) by means of the following recursive scheme:


For , the above formula gives () identical with  related to Aitken's transformation (8), and thus also identical with  obtained by both methods  and  (cf. (16)).

According to (19), the element is a weighted average of the elements  and . We would like to note that this formula (for ) can be derived by the following brief analysis, which we discuss in more detail in Section 4. Observing, at least experimentally, that for we obtain differences proportional to (more precisely ), one can try to change the coefficients in the weighted average in order to obtain , and so on; here, and in the sequel, the forward difference operator acts upon the lower index . This is indeed possible if one replaces in formula (19) (with ) by , which is exactly formula (19) with . In general, for , one should replace with , which gives exactly formula (19).

The following facts are evident or easy to check. The recursive scheme defining the method  differs from the ones for the methods  and ; the quantities  can be determined in a straightforward way, i.e., without computing their numerators and denominators (cf. (12)). The element  is a function of the terms , , …, . Unlike the methods  and , the formula (19), which defines the method , is not a consequence of any assumption on the asymptotic behavior of the remainder estimates, such as, e.g., (9). Formula (19) is also not the result of any general expression for the partial sums of the series, nor of a system of equations derived from such an expression; it is not even known whether such an expression or system of equations exists. In general, the quantities , , differ from the ones computed by the method  or  (or by the iterated Aitken process). Indeed, the methods  and  give

which is the same as computed by the method , if the quantities , related to the series (1), satisfy a certain functional equation.

The justification of the efficiency of the method is discussed in Section 4. More precisely, it refers to a more general method, which we denote in the sequel by the symbol , given by the following formula:


(cf. (16)), where the arbitrary functions are such that


and (which usually follows from the former condition), .

The above conditions are satisfied if, for instance,


where are given by eq. (14), related to the methods and . Moreover, if corresponds to the method  (with ), formula (20) is equivalent to (19), and thus the method becomes the method . On the other hand, one can also consider functions  such as or , for which condition (21) is easily checked. Then using eq. (20) is more costly, but it may have some advantages, such as better numerical stability; we believe this deserves further analysis. Let us remark that in both mentioned variants of the functions , holds for all , and thus , as in the methods  and .

4 General theoretical results

It is notable that the similarities between all three methods ,  and  follow from eqs. (16) and (20), which vary depending on the choice of the functions ; cf. (17), (21) and (22). For instance, the difference between the choice involving the function (17) (which gives the methods , ) and (22) (the method ) is well illustrated by relation (18).

Let us note that the statement of Theorem 1 below, in the case of alternating series (1) with satisfying (2) with , is very similar to the classic results for Levin's and Weniger's transformations; see, e.g., Weniger's report [15, §13]. For a detailed analysis of the convergence acceleration of the alternating series (1) with satisfying (2) with , and of the application of Levin's transformations to it, we refer to the papers by Sidi [9], [10].

For our considerations, relation (18) plays the main role in deriving the theoretical properties common to the new method  and both methods  and . For the sake of the analysis of all three methods, let us use the common symbol  to denote the elements of the array computed by them. It is important that in all three cases the quantities  satisfy the 3-term recurrence relation (20), where the functions  depend on the considered method; cf. (16).

In order to study the convergence acceleration performed by the mentioned methods, it is advisable to investigate the differences . Indeed, the quantities  (together with the element ), , are the terms of the series resulting from the corresponding sequence transformation. The efficiency of the method depends on whether these series (for consecutive ) converge to the limit faster and faster. Hence, it is reasonable to compare the differences  to the original terms . For this reason, let us define the following quantities:

Lemma 2.

The quantities satisfy the following relationship:


It follows from eq. (20) that

and thus

Now, by multiplying both sides by , we obtain the result. ∎

As mentioned in the previous section, the quantities can be identical for all three methods if satisfies a certain functional equation. Indeed, for any series (1), the quantities , defined by eq. (20), are the same as the ones for the methods and , if one takes the following function written in terms of :

For , the analysis of such similarities seems to be meaningless. However, it is quite notable that for choices of the functions  such as (17) and (14), equation (20), which defines the method , leads to the method  or . Undoubtedly, much more important is the meaning of condition (21) involving the functions , which, for all three methods, are such that


cf. (21), (22) and (18). Namely, it is summarized in the following theoretical results.

Theorem 1.

Let be the two-dimensional array computed by the method , given in (20), applied to the series (1) with  satisfying (2). Then the differences satisfy the following relation:


The proof follows by induction on . Since , the series in (25) (for ) simplifies to the constant . Now, let be given. Taking into account the relation (3) in the definition of the quantities , we conclude that


and thus


In the same way, one may check that . Moreover, from (4), it immediately follows that . Hence, we get


By the induction hypothesis, it follows that

Therefore, in view of (24), we conclude that

and the proof is complete. ∎

The evident meaning of the above result is as follows: the larger the value of , the smaller the absolute values of the differences (at least for sufficiently large values of ), and thus the faster the convergence of  to . Similar results, but only for the methods  and  (and with  in (2)), can be found in Weniger's report [15, Thms. 13-5, 13-9, pp. 114, 117].

It is also worth considering the influence of the choice of the functions  on the asymptotic behavior of the differences  that appear in Theorem 1. Of course, this dependence is related to the values , which, in general, are unknown. This is illustrated by the following result.

Theorem 2.

Let the quantities be as in the previous theorem. Then the following relation links the quantities with , given in (21) and (23), respectively:


By replacing with in Lemma 2, we have that

Hence, by Theorem 1, the quotient is of order , and thus, replacing with in (28) yields

Now, the result follows from (26). ∎

As mentioned in the previous section, for each of the methods ,  and , we have . Finally, let us remark that  simplifies to


which for many series can be easily expressed just in terms of .

5 Numerical examples

Let us consider the method , defined by eq. (19), and the mentioned variants of Levin and Weniger transformations, defined by formulas (13)–(15) and denoted by symbols and , respectively.

If the terms  of the series (1) to be transformed are sufficiently simple, and if  is rather small, one can try to find an explicit expression for the quantities  and verify the statements of Theorems 1 and 2, given in the previous section. Let us recall that for  all three methods produce the same values of , while, for , this is evidently true only for the methods of Levin and Weniger.

For instance, if , then we have

(the first formula corresponds to the methods  and ; the second to the method ). This is in agreement with Theorem 1, since

A comparison of the leading coefficients of the asymptotic expansions of the values  shows that the methods  and  yield slightly better results than the method . In contrast, the method  is better than the others for . Indeed, one may check that

Further comparison, i.e., for , seems to be pointless.


The expression for  is rather complicated, i.e.,

where . However, one can check that

which is in agreement with Theorem 2.

We compared the performance of the methods ,  and  numerically by applying them to several alternating series (1) of different types. For each example below, we present the following: the form of the series (including the values  in relation (2)) and its limit , and the accuracy of the quantities , , for all three methods (the first row corresponds to the method , the second row to , and the third row to ). The first five examples are the case of , and the last two are not.

Let us remark that the transformation of Sidi (with  and ) is also an effective accelerator for the considered series; see, e.g., [12, §6.6.4, pp. 147–148]. The accuracy of the quantities  for  is given (in the fourth row) in the case of the last two examples, since only then is the  transformation not equivalent to the method . We choose  since, for the considered series, we have

see [12, Thm. 6.6.5] and [12, §6.6.1] for the details on the class .

Here, the accuracy of the approximation  of the sum  is measured by , i.e., by the number of exact significant decimal digits. As mentioned before, the classic methods of Levin and Weniger give the same values of  for .
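For reference, this accuracy measure is simply the negative decimal logarithm of the relative error; a small helper (the name is ours) suffices:

```python
from math import log10, isinf, pi

def exact_digits(approx, s):
    """Number of exact significant decimal digits of `approx` as an
    approximation of the sum s, i.e., -log10 |(approx - s) / s|."""
    if approx == s:
        return float("inf")
    return -log10(abs((approx - s) / s))
```

For instance, exact_digits(3.14159, pi) is slightly above 6, matching the intuitive digit count.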

All the numerical experiments were performed using IEEE 754 double extended precision floating-point arithmetic, which means about 19 decimal digits of precision.

Let us note that, in the examples that follow, the numerical results produced by the method  appear to be similar to those obtained by the classic Levin and Weniger transformations, as well as by Sidi's generalization of them.

Example 1.


Example 2.