1 Introduction
The Volterra integral operator $V_K$ is defined by
(1) $\left(V_K u\right)(x) = \int_0^x K(x, y)\, u(y)\, \mathrm{d}y.$
Integral equations involving Volterra operators occur in diverse applications in various disciplines of engineering unterreiter_volterra_1996; zakes_application_2016, finance and economics apartsyn_classes_2014; nedaiasl_product_2019 as well as the natural sciences micke_application_1990; hethcote_integral_1980; geiser_iterative_2013; pruss_evolutionary_2012; van_den_bosch_pandemics_1999; krimer_nonmarkovian_2014; krimer_sustained_2016; xiang_efficient_2013. Beyond linear Volterra integral equations of first and second kind, which respectively take the forms
$\int_0^x K(x,y)\, u(y)\, \mathrm{d}y = g(x) \qquad \text{and} \qquad u(x) = g(x) + \int_0^x K(x,y)\, u(y)\, \mathrm{d}y,$
there exists a vast range of possible generalizations, the most important of which are linear Volterra integro-differential equations (VIDEs), nonlinear Volterra integral equations and in particular nonlinear VIDEs, the last of which take the form
(2) $u'(x) = g(x) + \int_0^x K(x, y)\, f\big(u(y)\big)\, \mathrm{d}y,$
where $f$ is a given, generally nonlinear, function and $g$ and $K$ are given functions. Linear Volterra equations thus correspond to the special case $f(u) = u$. For certain very well-behaved cases one can use variational iteration, Adomian decomposition or Laplace transform methods wazwaz_linear_2011 to try to obtain analytic solutions for such equations, but in general one must use numerical methods to find approximate solutions. We assume in this paper that the equations we intend to solve with our proposed numerical scheme are solvable with unique solutions and omit discussion of ill-posed problems. A number of results on criteria for the existence of solutions are known, see e.g. meehan_existence_1998; gordji_existence_2011; zhang_existence_2018 and the references therein. Interest in efficient algorithms with good convergence properties is high, resulting in a variety of competitive methods for linear, nonlinear and integro-differential Volterra equations. For decades brunner_numerical_1973 researchers have been proposing various forms of increasingly refined collocation and iteration methods to approach nonlinear VIDEs song_analysis_2019; allaei_collocation_2017; driscoll_automatic_2010; agbolade_solutions_2017; brunner_collocation_2004; shayanfard_collocation_2019. More recently these were joined by a number of wavelet-based methods saeedi_numerical_2011; biazar_chebyshev_2012; zhu_solving_2012; sahu_legendre_2015; lepik_haar_2006; heydari_wavelets_2014. Numerical solvers based on homotopy perturbation methods have also been proposed ghasemi_numerical_2007. A highly efficient spectral solver for the case of convolution kernels for linear VIDEs using low rank approximations was recently described by Hale hale_ultraspherical_2019. Gutleb and Olver described a general kernel sparse spectral method for linear Volterra integral equations in gutleb_sparse_2019, which forms the background of the present paper.
In this paper we present a spectral method which works for linear, nonlinear, integro-differential and many other types of Volterra equations of first, second and third kind while utilizing polynomial spaces in which the Volterra operators have a sparse banded structure. As such this paper is a direct generalization of results in gutleb_sparse_2019, which derived said banded structure of the Volterra operator in Jacobi polynomial bases and, on the basis of olver_sparse_2019, developed an efficient Clenshaw-algorithm-based approach to numerically generate the operator. This method thus combines the unmatched exponential convergence of spectral methods with highly efficient sparse linear algebra, a very promising combination which has been successful in recent years snowball_sparse_2019; hale_fast_2018; townsend_automatic_2015; olver_fast_2013; olver_practical_2014. In addition to efficiency via bandedness and exponential convergence rate, the proposed method is furthermore not limited to convolution kernel cases, i.e. kernels of the form $K(x, y) = K(x - y)$, a common restriction in competitively fast and accurate approaches hale_ultraspherical_2019. Due to the extensive literature and code libraries available on numerical solutions for linear and nonlinear VIDEs, an exhaustive comparison with other methods is far beyond the scope of a single paper. We will however present comparisons with recent collocation methods, as these are the most competitive and ubiquitous solvers available.
The structure of this paper is as follows: In sections 1.1 and 1.2 we introduce the necessary framework of uni- and multivariate orthogonal polynomial approximation of functions. Section 1.3 briefly recounts relevant results from gutleb_sparse_2019, detailing the banded structure of the Volterra operator on appropriately chosen triangle domain Jacobi polynomial bases. Section 2 details the extension of the linear Volterra integral equation method to the general linear integro-differential case by augmenting the system with appropriate evaluation operators. Section 3 details the extension to nonlinear Volterra integral equations using an iterative approach and describes further relevant numerical experiments. Section 4 combines these two approaches, finally extending the method to general kernel nonlinear VIDEs. Section 5 showcases various numerical experiments for linear VIDEs as well as nonlinear VIEs and VIDEs to verify and test applicability, convergence rate and competitiveness, including comparisons to collocation methods in Chebfun. We close with notes on convergence for the methods proposed in this paper in section 6 and discuss applicability and potential further research directions in the conclusion.
1.1 Function approximation with univariate orthogonal polynomials
In what follows we introduce the relevant elements of function approximation in univariate orthogonal polynomial bases, primarily focusing on the Jacobi polynomials, which are required to understand the proposed spectral method for nonlinear VIDEs. The reasons for highlighting these specific properties over others will become clear when we discuss the methods in sections 2 and 3. For a more general and complete introduction to the theory of function approximation with univariate orthogonal polynomials, see e.g. gautschi_orthogonal_2004; beals_special_2016.
The Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ are a complete set of univariate polynomials orthogonal with respect to the weight function $(1-x)^\alpha (1+x)^\beta$ on their natural domain $[-1,1]$, meaning they satisfy
$\int_{-1}^{1} P_n^{(\alpha,\beta)}(x)\, P_m^{(\alpha,\beta)}(x)\, (1-x)^\alpha (1+x)^\beta\, \mathrm{d}x = h_n^{(\alpha,\beta)}\, \delta_{nm},$
where $\alpha, \beta > -1$ and $h_n^{(\alpha,\beta)} > 0$ is a known normalization constant. As such, the Legendre polynomials correspond to the special case $\alpha = \beta = 0$ and the ultraspherical or Gegenbauer polynomials correspond to the special case in which $\alpha = \beta$. Spectral methods can make use of the fact that complete sets of orthogonal polynomials can be used to approximate any sufficiently smooth function $f$ defined on a real interval via the expansion
$f(x) = \sum_{n=0}^{\infty} f_n\, p_n(x),$
where the $f_n$ are the unique constant coefficients of $f$ in the given complete polynomial basis $\{p_n\}$ which is orthogonal on the domain. These coefficients may be computed efficiently for various sets of orthogonal polynomials using methods and C libraries by Slevinsky slevinsky_conquering_2017; slevinsky_fast_2017; slevinsky_fasttransforms_2019, while evaluation of polynomials is efficiently performed using Clenshaw's algorithm, see e.g. gautschi_orthogonal_2004. While the interval $[-1,1]$ is the natural choice for the Jacobi polynomials, one can easily shift them to any other real interval required by an application. The method in this paper exclusively makes use of the Jacobi polynomials shifted to the unit interval $[0,1]$, so we introduce the shorthand notation $\tilde{P}_n^{(\alpha,\beta)}(x) := P_n^{(\alpha,\beta)}(2x-1)$.
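As a concrete illustration of such expansions, the following is a minimal Python sketch (the implementations discussed in this paper are in Julia; the function names here are ours) computing shifted Legendre coefficients by Gauss quadrature and evaluating the expansion with Clenshaw's algorithm:

```python
import numpy as np

def shifted_legendre_coeffs(f, n):
    """Coefficients of f in the shifted Legendre basis P~_k(x) = P_k(2x - 1)
    on [0, 1], computed by Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(n + 1)   # nodes/weights on [-1, 1]
    x = (t + 1) / 2                                 # mapped to [0, 1]
    k = np.arange(n + 1)
    V = np.polynomial.legendre.legvander(t, n)      # P_k(t) at the nodes
    # c_k = <f, P~_k> / ||P~_k||^2, with ||P~_k||^2 = 1 / (2k + 1) on [0, 1]
    return (V.T * w) @ f(x) * (2 * k + 1) / 2

def clenshaw_shifted_legendre(c, x):
    """Evaluate sum_k c_k P~_k(x) by Clenshaw's algorithm, using the
    recurrence (k + 1) P_{k+1}(t) = (2k + 1) t P_k(t) - k P_{k-1}(t)."""
    t = 2 * x - 1
    b1 = b2 = 0.0
    for k in range(len(c) - 1, -1, -1):
        b1, b2 = c[k] + (2 * k + 1) / (k + 1) * t * b1 - (k + 1) / (k + 2) * b2, b1
    return b1
```

With roughly twenty coefficients, `clenshaw_shifted_legendre(shifted_legendre_coeffs(np.exp, 20), x)` reproduces $e^x$ on $[0,1]$ to machine precision, illustrating the exponential convergence for analytic functions.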
Once expanded in the above way, addition and subtraction of functions have an obvious elementwise implementation. Additionally, being orthogonal polynomials, the Jacobi polynomials satisfy a three-term recurrence relationship
$x\, P_n^{(\alpha,\beta)}(x) = a_n P_{n+1}^{(\alpha,\beta)}(x) + b_n P_n^{(\alpha,\beta)}(x) + c_n P_{n-1}^{(\alpha,\beta)}(x),$
which allows for efficient computation of function multiplication in this framework via a tridiagonal so-called Jacobi operator acting on coefficient vectors:
$X_{(\alpha,\beta)} = \begin{pmatrix} b_0 & c_1 & & \\ a_0 & b_1 & c_2 & \\ & a_1 & b_2 & \ddots \\ & & \ddots & \ddots \end{pmatrix}.$
The exact elements of the Jacobi operator depend on the Jacobi parameters of the chosen basis, see e.g. (nist_2018, 18.9(i)) for explicit values of $a_n$, $b_n$ and $c_n$. As the sparsity of the operators in this paper relies heavily on correctly moving between bases with different Jacobi parameters, we will index coefficient vectors with the Jacobi parameters of the basis of expansion, i.e. by writing $\mathbf{f}_{(\alpha,\beta)}$, where it might otherwise be ambiguous. The above properties allow for the development of software packages capable of performing arithmetic on functions using highly efficient sparse linear algebra, where functions are replaced by coefficient vectors. One such package is ApproxFun.jl olver_juliaapproximation2019, written in the Julia programming language beks2017, which is used as the background environment of the implementations presented in this paper. Other available software packages include, among others, the Dedalus project burns_dedalus:_2019 and Chebfun pachon_piecewisesmooth_2010; battles_extension_2004; driscoll_chebop_2008. Beyond arithmetic with functions, there are a number of other useful operators in the sparse spectral method toolbox. We may, for example, freely shift from a basis with Jacobi parameters $(\alpha, \beta)$ to one with higher parameters using a sequence of upper bidiagonal raising operators based on (nist_2018, 18.9.5),
$(2n+\alpha+\beta+1)\, P_n^{(\alpha,\beta)}(x) = (n+\alpha+\beta+1)\, P_n^{(\alpha+1,\beta)}(x) - (n+\beta)\, P_{n-1}^{(\alpha+1,\beta)}(x),$
and an analogous sequence of operators for the second Jacobi parameter. Lowering operators are also available, although this is in general only possible in a sparse (lower bidiagonal) way with added weights (nist_2018, 18.9.6):
$(1-x)\, P_n^{(\alpha+1,\beta)}(x) = \frac{2}{2n+\alpha+\beta+2}\left((n+\alpha+1)\, P_n^{(\alpha,\beta)}(x) - (n+1)\, P_{n+1}^{(\alpha,\beta)}(x)\right).$
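Both the tridiagonal Jacobi operator and the bidiagonal raising operators are easy to check numerically. The following Python sketch (our own illustrative code, using the shifted Legendre case for the multiplication operator) builds both and can be verified pointwise against `scipy.special.eval_jacobi`:

```python
import numpy as np
from scipy.special import eval_jacobi

def mult_by_x(N):
    """Tridiagonal operator for multiplication by x on coefficient vectors in
    the shifted Legendre basis P~_n(x) = P_n(2x - 1), derived from the
    recurrence t P_n = (n+1)/(2n+1) P_{n+1} + n/(2n+1) P_{n-1} via t = 2x - 1."""
    X = np.zeros((N + 1, N + 1))
    m = np.arange(N + 1)
    X[m, m] = 0.5                                              # b_n = 1/2
    X[m[1:], m[1:] - 1] = m[1:] / (2.0 * (2 * m[1:] - 1))      # a_{m-1}
    X[m[:-1], m[:-1] + 1] = (m[:-1] + 1) / (2.0 * (2 * m[:-1] + 3))  # c_{m+1}
    return X

def raise_alpha(c, alpha, beta):
    """Upper bidiagonal raising operator (alpha, beta) -> (alpha + 1, beta),
    from (2n+a+b+1) P_n^(a,b) = (n+a+b+1) P_n^(a+1,b) - (n+b) P_{n-1}^(a+1,b)."""
    d = np.zeros(len(c))
    for m in range(len(c)):
        d[m] += (m + alpha + beta + 1) / (2 * m + alpha + beta + 1) * c[m]
        if m + 1 < len(c):
            d[m] -= (m + 1 + beta) / (2 * m + alpha + beta + 3) * c[m + 1]
    return d
```

Note that the raising operator loses no information on a truncated coefficient vector, since it maps degree-$N$ expansions to degree-$N$ expansions exactly.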
We may also mirror functions on their domain in a given Jacobi polynomial basis by using the very useful symmetry property (nist_2018, Table 18.6.1):
(3) $P_n^{(\alpha,\beta)}(-x) = (-1)^n P_n^{(\beta,\alpha)}(x).$
In particular, on $[0,1]$ we can define a diagonal reflection operator via
$\tilde{P}_n^{(\alpha,\beta)}(1-x) = (-1)^n \tilde{P}_n^{(\beta,\alpha)}(x),$
i.e. reflection maps a coefficient vector in the $(\alpha,\beta)$ basis to one in the $(\beta,\alpha)$ basis with the $n$-th entry scaled by $(-1)^n$.
Differentiation is also a sparse (diagonal) operation if we simultaneously increment the Jacobi parameters (nist_2018, 18.9.15), i.e.:
(4) $\frac{\mathrm{d}}{\mathrm{d}x} P_n^{(\alpha,\beta)}(x) = \frac{n+\alpha+\beta+1}{2}\, P_{n-1}^{(\alpha+1,\beta+1)}(x),$
(5) $\frac{\mathrm{d}}{\mathrm{d}x} \tilde{P}_n^{(\alpha,\beta)}(x) = (n+\alpha+\beta+1)\, \tilde{P}_{n-1}^{(\alpha+1,\beta+1)}(x),$
(6) $\mathcal{D}_{(\alpha,\beta)} = \begin{pmatrix} 0 & \alpha+\beta+2 & & \\ & 0 & \alpha+\beta+3 & \\ & & \ddots & \ddots \end{pmatrix}.$
Importantly, this means that repeated sparse differentiation is not equivalent to repeated application of the same operator $\mathcal{D}_{(\alpha,\beta)}$. As the derivative operator shifts coefficient vectors to a higher-parameter basis, the second derivative operator is actually a combination of two distinct derivative operators acting on different bases, and so on for higher derivatives. We thus denote the $n$-th derivative operator acting on a coefficient vector in basis $(\alpha,\beta)$ by $\mathcal{D}^n_{(\alpha,\beta)}$, where
$\mathcal{D}^n_{(\alpha,\beta)} = \mathcal{D}_{(\alpha+n-1,\beta+n-1)} \cdots \mathcal{D}_{(\alpha+1,\beta+1)} \mathcal{D}_{(\alpha,\beta)},$
instead of the potentially misleading notation $\mathcal{D}^n$, which may evoke false intuitions of commutativity of the operators. The last component of theory we need for our univariate function approximation purposes are endpoint evaluation operators, which will be used to enforce boundary conditions in integro-differential equations. From the viewpoint described above, functions are coefficient vectors, and multiplications, derivatives and basis changes are operators on coefficient vectors (matrices in finite dimensional approximation space). Functionals, e.g. evaluation operators at an endpoint, must act on coefficient vectors to return a scalar value and are thus represented by row vectors. In particular, for the Jacobi polynomials we can make use of the known property (nist_2018, Table 18.6.1):
$P_n^{(\alpha,\beta)}(1) = \frac{(\alpha+1)_n}{n!},$
where $(\cdot)_n$ denotes the Pochhammer symbol or rising factorial (nist_2018, 5.2(iii)). Via the symmetry property in Equation (3) we obtain a similar evaluation operator for the other endpoint of our chosen interval domain.
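These observations translate directly into code. The sketch below (illustrative Python, not the paper's Julia implementation) builds the banded derivative operators on $[-1,1]$ and the endpoint evaluation row, and composes two distinct derivative operators to form a second derivative:

```python
import numpy as np
from math import factorial
from scipy.special import eval_jacobi, poch

def deriv_op(alpha, beta, N):
    """Superdiagonal derivative operator taking coefficients in the
    (alpha, beta) Jacobi basis on [-1, 1] to the (alpha+1, beta+1) basis,
    from d/dx P_n^(a,b) = (n + a + b + 1)/2 * P_{n-1}^(a+1,b+1)."""
    D = np.zeros((N + 1, N + 1))
    for n in range(1, N + 1):
        D[n - 1, n] = (n + alpha + beta + 1) / 2
    return D

def endpoint_eval(alpha, N):
    """Row functional evaluating an (alpha, beta) Jacobi expansion at x = 1,
    using P_n^(a,b)(1) = (a + 1)_n / n!."""
    return np.array([poch(alpha + 1, n) / factorial(n) for n in range(N + 1)])

# The second derivative is a composition of two *different* banded operators:
N = 6
D2 = deriv_op(1, 1, N) @ deriv_op(0, 0, N)   # (0,0) -> (1,1) -> (2,2)
```

Applying `D2` to a vector of Legendre coefficients yields the coefficients of the second derivative in the $(2,2)$ basis, and `endpoint_eval(2, N) @ (D2 @ c)` then evaluates that second derivative at the right endpoint.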
1.2 Function approximation with multivariate orthogonal polynomials
This section introduces the required elements of function approximation in multivariate orthogonal polynomial bases, focusing on the Jacobi polynomials on the triangle domain
$T = \{(x, y) \in \mathbb{R}^2 : 0 \leq x,\; 0 \leq y,\; x + y \leq 1\},$
which was discussed in detail in olver_sparse_2019. Multivariate orthogonal polynomials are not yet widely used in numerical methods despite their vast potential. For a more general and complete introduction to theoretical aspects of multivariate orthogonal polynomials, see dunkl_orthogonal_2014.
Function approximation with multivariate orthogonal polynomials works in direct analogy to the univariate case. For a given multivariate function $f$ defined on some suitable two-dimensional domain and a given complete set of multivariate orthogonal polynomials $\{p_{n,k}\}$ on said domain, we may expand the function via
$f(x, y) = \sum_{n=0}^{\infty} \sum_{k=0}^{n} f_{n,k}\, p_{n,k}(x, y).$
A generalization to higher-dimensional cases is straightforward, cf. dunkl_orthogonal_2014. On the triangle $T$, a sensible choice of polynomial basis is found in the triangle Jacobi polynomials, also known as Proriol polynomials, which are defined via reference to the univariate Jacobi polynomials (dunkl_orthogonal_2014, Proposition 2.4.1):
$P_{n,k}^{(\alpha,\beta,\gamma)}(x, y) = P_{n-k}^{(2k+\beta+\gamma+1,\, \alpha)}(2x-1)\, (1-x)^k\, P_k^{(\gamma,\beta)}\!\left(\frac{2y}{1-x} - 1\right),$
which are orthogonal with respect to the weight $x^\alpha y^\beta (1-x-y)^\gamma$ on $T$. Expansion coefficients for functions in the triangle Jacobi polynomial basis, such as required for the kernel discussed in the next section, may be computed efficiently using C libraries by Slevinsky slevinsky_conquering_2017; slevinsky_fast_2017; slevinsky_fasttransforms_2019. As in the univariate case, we can define a variety of operators acting on coefficient space, such as multiplication operators based on Jacobi operators, derivative operators and basis change operators olver_sparse_2019. The novelty is that for two-dimensional domains such as the triangle we need to distinguish between the $x$ and $y$ variables and thus have two different Jacobi operators, one for each variable, which are now block tridiagonal operators instead of being tridiagonal, see olver_sparse_2019 for details.
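The orthogonality of this family is easy to verify numerically. The following Python snippet (our own check, fixing the parameters $\alpha = \beta = \gamma = 0$ so that the weight reduces to the Lebesgue measure, and using the convention stated above) implements the definition and an inner product over the triangle:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import eval_jacobi

def proriol(n, k, x, y):
    """Proriol polynomial P_{n,k} with parameters (0, 0, 0), orthogonal with
    respect to the Lebesgue measure on {(x, y) : x, y >= 0, x + y <= 1}."""
    return (eval_jacobi(n - k, 2 * k + 1, 0, 2 * x - 1)
            * (1 - x) ** k
            * eval_jacobi(k, 0, 0, 2 * y / (1 - x) - 1))

def inner(f, g):
    """L^2 inner product over the unit triangle."""
    val, _ = dblquad(lambda y, x: f(x, y) * g(x, y),
                     0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
    return val
```

Cross inner products of distinct low-order Proriol polynomials vanish to quadrature accuracy, while the self-products are strictly positive.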
1.3 Banded sparsity of the linear Volterra operator in Jacobi bases
It was shown in gutleb_sparse_2019 that the Volterra integral operator is sparse with banded structure on appropriate Jacobi polynomial spaces. Based on this, a sparse spectral method with exponential convergence for linear Volterra equations with general kernels was motivated and analyzed. The results are based on interpreting the Volterra operator as acting on multivariate Jacobi bases on a triangle domain. The idea behind the linear method follows the schemes in Algorithms 1 and 2. In this section we briefly review these methods to the degree necessary to follow the integro-differential and nonlinear extensions in this paper. For the full discussion of the linear case, we refer to gutleb_sparse_2019.
The move to the triangle domain may initially be motivated by noting that the Volterra integral operator acting on $u$ may be considered for other upper integration bounds than $x$, in particular $1 - x$. The Proriol polynomials, being orthogonal on the triangle domain, behave well with respect to this integration: integrating a Proriol polynomial in $y$ from $0$ to $1 - x$ yields a multiple of a single weighted univariate Jacobi polynomial in $x$.
Labeling the weight factor as a weight term and referring to what remains as an integration operator with respect to $y$ aligns our notation with that in gutleb_sparse_2019. Using reflection operators this can be adapted to the more standard case with upper bound $x$, see gutleb_sparse_2019, which means that considering the kernel $K(x, y)$ amounts to working with its reflected counterpart on this domain. This operator acts on a function expanded in the Proriol polynomials on $T$ and, as seen above, maps into a weighted univariate Jacobi basis.
We may account for the as of now omitted weight term by a direct multiplication with Jacobi operators but, for reasons of efficiency and due to the need to reflect, this is better performed using a bidiagonal lowering operator followed by a diagonal reflection and finally a bidiagonal raising operator. The discussion so far explains the form of the operator for linear Volterra integral equations of second kind in Algorithm 2. Equations of first kind are somewhat more subtle and we thus omit further details, referring instead to the original linear method derivations and proofs in gutleb_sparse_2019. We have assumed above that the function $u$ may be expanded in the Proriol polynomials, but we can make use of additional sparsity structure by instead thinking of $u$ as the extension of univariate function coefficients to the triangle domain via an expansion operator.
Choosing the respectively optimal bases for this purpose results in the extension operator found in gutleb_sparse_2019, which when multiplied with the integration operator above results in a diagonal operator.
Via certain quasi-commutativity properties of the above-discussed operators and the Jacobi operators on the triangle domain, and using diagonal reflection operators appropriately, one may iteratively build the full Volterra integral operator for a general kernel via the efficient operator-valued Clenshaw algorithm for general kernel linear Volterra integral equations introduced in gutleb_sparse_2019. We will refer to this Volterra operator without the weight as $V_K$ in the following sections and assume it is computed using the methods outlined here and detailed in gutleb_sparse_2019. The weight is accounted for by using appropriate basis shifts or multiplication as detailed in the algorithm steps.

Expand $g$ in the appropriate shifted Jacobi polynomial basis.

Generate the Volterra operator $V_K$ recursively via an operator-valued Clenshaw algorithm for the flipped kernel.

Solve the linear system $V_K \mathbf{u} = \mathbf{g}$ for $\mathbf{u}$.

The approximate solution is $u(x) = \sum_n u_n \tilde{P}_n(x)$.

Expand $g$ in the appropriate shifted Jacobi polynomial basis.

Generate the Volterra operator $V_K$ recursively via an operator-valued Clenshaw algorithm for the flipped kernel.

Solve the linear system $(I - V_K)\, \mathbf{u} = \mathbf{g}$ for $\mathbf{u}$.

The approximate solution is $u(x) = \sum_n u_n \tilde{P}_n(x)$.
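To make the second-kind scheme concrete in the simplest possible setting, the following Python sketch (illustrative only; the paper's implementation is in Julia, and a constant kernel $K \equiv 1$ is chosen so that the operator-valued Clenshaw step reduces to a single banded integration operator) solves $u(x) = 1 + \int_0^x u(y)\,\mathrm{d}y$, whose exact solution is $e^x$:

```python
import numpy as np

def integration_operator(N):
    """Volterra operator V_K with K = 1 on shifted Legendre coefficients:
    (V u)(x) = int_0^x u(y) dy.  Banded, from
    int_0^x P~_n(y) dy = (P~_{n+1}(x) - P~_{n-1}(x)) / (2(2n+1)) for n >= 1,
    int_0^x P~_0(y) dy = x = (P~_0(x) + P~_1(x)) / 2."""
    W = np.zeros((N + 1, N + 1))
    W[0, 0] = W[1, 0] = 0.5
    for n in range(1, N + 1):
        if n + 1 <= N:
            W[n + 1, n] = 1 / (2 * (2 * n + 1))
        W[n - 1, n] = -1 / (2 * (2 * n + 1))
    return W

# Second-kind equation u(x) = 1 + int_0^x u(y) dy, exact solution u(x) = e^x:
N = 30
W = integration_operator(N)
g = np.zeros(N + 1); g[0] = 1.0                    # g(x) = 1
u = np.linalg.solve(np.eye(N + 1) - W, g)          # (I - V_K) u = g
approx = np.polynomial.legendre.legval(2 * 0.7 - 1, u)   # evaluate at x = 0.7
```

Since $I - W$ is banded, a banded solver could be used in place of the dense `solve`; the dense call is kept here only for brevity.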
2 Extension of the linear case sparse spectral method to integro-differential equations
Volterra integro-differential equations (VIDEs) are named such because the unknown $u$ appears in the equation under the action of both a Volterra integral and a derivative operator. In this section we will consider linear VIDEs of the following generic form:
(7) $\sum_{m=0}^{M} c_m\, u^{(m)}(x) = g(x) + \lambda \int_0^x K(x, y)\, u(y)\, \mathrm{d}y,$
with constants $c_m, \lambda \in \mathbb{R}$. Within the context of the present spectral method the integral operator is the Volterra operator $V_K$ from section 1.3, which as we saw maps a coefficient vector of a function in a given basis to the coefficient vector of the result in the same basis. Consistent basis considerations, specifically the Jacobi parameters of our chosen basis, are crucial when developing a solution algorithm for integro-differential equations. As noted in section 1.1, there is an abundance of useful structure in the Jacobi polynomials which among other things allows us to take derivatives by shifting the basis parameters as in (4)-(6). Applying a derivative operator is the same as applying a parameter-scaled raising operator and thus incurs a basis change, which needs to be accounted for in the other operators. We choose the integro-differential equation of second order
$u''(x) = g(x) + \int_0^x K(x, y)\, u(y)\, \mathrm{d}y$
as an example to illustrate this. To consistently obtain a solution from an extension of the above linear method it does not suffice to simply replace the second order derivative operator with the appropriate Jacobi polynomial basis derivative operator. Instead, due to the incurred basis shift, an additional conversion or shift operator must be applied to the Volterra operator as well. Starting from the basis $\tilde{P}^{(\alpha,\beta)}$ in which we obtain our solution $\mathbf{u}$, the second derivative operator carries us into the basis $\tilde{P}^{(\alpha+2,\beta+2)}$, meaning that, taking note of the steps in Algorithm 2, the appropriate operator form of the above second order example equation is
$\mathcal{D}^2_{(\alpha,\beta)} \mathbf{u} = S_2 S_1 \mathbf{g} + S_2 S_1 V_K \mathbf{u},$
where $S_1$ and $S_2$ denote the bidiagonal conversion operators each raising both parameters by one. We may collapse the compatible conversion operators down into a single operator $S := S_2 S_1$ to obtain the slightly simpler
(8) $\left(\mathcal{D}^2_{(\alpha,\beta)} - S V_K\right) \mathbf{u} = \mathbf{g}_{(\alpha+2,\beta+2)}.$
Note that $g$ must be expanded in the basis $\tilde{P}^{(\alpha+2,\beta+2)}$ instead of the basis $\tilde{P}^{(\alpha,\beta)}$, or converted into said basis using the above-defined basis shift operators. This is for consistency reasons: as the operators acting on $\mathbf{u}$ shift the basis from $(\alpha,\beta)$ to $(\alpha+2,\beta+2)$, the right-hand side must be expanded in the latter basis as well. This is not an artifact of our choice of basis for the solution: while that basis is particularly well-suited for Volterra integral equations as it results in a far more efficient kernel computation gutleb_sparse_2019, the derivative operator in the VIDE will always shift the basis of our solution, so to optimize efficiency $g$ is always initially expanded in the basis raised $M$ times, where $M$ is the order of the highest appearing derivative operator. Even in the general case with multiple derivative operators of different orders, the basis for the solution always remains the same (for efficiency of kernel computations), while the highest order derivative operator determines the basis in which $g$ must be expanded, along with the shift operators which respectively act on all lower order operators as well as the Volterra integral operator.
Attempting to invert the operator on the left-hand side of Equation (8) as-is will yield nonsensical results. This should be unsurprising, as the differential equation it corresponds to does not have a unique solution unless initial conditions are supplied as well. Given a Volterra integro-differential equation with highest appearing derivative operator of order $M$, we will in general require initial conditions for all lower order derivatives to be given, i.e.:
$u^{(m)}(0) = d_m, \qquad m = 0, 1, \ldots, M-1,$
for given constants $d_m$. In the example case of Equation (8) the values $u(0)$ and $u'(0)$ must be given. In spectral methods such as the one discussed in this paper, boundary or initial conditions are enforced by extending the to-be-inverted operator by appropriate evaluation operators. The relevant Jacobi basis evaluation operators, being functionals, are represented in the coefficient vector and operator language as row vectors, as discussed in section 1.1. For the example in Equation (8) we thus append the two initial condition evaluations at the top of the operator, obtaining the now solvable system
(9) $\begin{pmatrix} \mathcal{E}_0 \\ \mathcal{E}_0 \mathcal{D}_{(\alpha,\beta)} \\ \mathcal{D}^2_{(\alpha,\beta)} - S V_K \end{pmatrix} \mathbf{u} = \begin{pmatrix} u(0) \\ u'(0) \\ \mathbf{g}_{(\alpha+2,\beta+2)} \end{pmatrix},$
where $\mathcal{E}_0$ denotes the relevant endpoint evaluation row vector and the right-hand side is modified consistently. Similar procedures have previously been used to solve differential equations, cf. townsend_automatic_2015; hale_fast_2018. The discussion in this section, in combination with the linear Volterra integral method in Algorithms 1 and 2, thus provides a recipe for the solution of general linear Volterra integro-differential equations satisfying a sufficient set of initial conditions. We state the general case method in Algorithm 3. The resulting operator on the left-hand side has filled-in top rows for each initial condition and thus is no longer fully banded, but still retains a very well-behaved sparsity structure (semi-banded), leading to fast solutions even for high orders of polynomial approximation.

Expand $g$ in the appropriately raised Jacobi basis.

Generate the Volterra operator $V_K$ recursively via an operator-valued Clenshaw algorithm for the flipped kernel.

Assemble the integro-differential operator from the derivative, conversion and Volterra operators as in Equation (8).

Append evaluation operators to the top of the operator and the corresponding initial conditions to the top of $\mathbf{g}$.

Solve the resulting semi-banded linear system for $\mathbf{u}$.

The approximate solution is $u(x) = \sum_n u_n \tilde{P}_n(x)$.
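The bordering procedure of Algorithm 3 can be illustrated end-to-end on a small example. The Python sketch below (again illustrative and our own, with $K \equiv 1$, a shifted Legendre solution basis, and raising identities as in section 1.1) solves $u'(x) = 1 + \int_0^x u(y)\,\mathrm{d}y$ with $u(0) = 1$, whose exact solution is $e^x$:

```python
import numpy as np

N = 30
k = np.arange(N + 1)

# Volterra operator with K = 1 on shifted Legendre coefficients
W = np.zeros((N + 1, N + 1))
W[0, 0] = W[1, 0] = 0.5
for n in range(1, N + 1):
    if n + 1 <= N:
        W[n + 1, n] = 1 / (2 * (2 * n + 1))
    W[n - 1, n] = -1 / (2 * (2 * n + 1))

# Derivative operator into the shifted (1,1) basis: d/dx P~_n = (n+1) P~_{n-1}^(1,1)
D = np.zeros((N + 1, N + 1))
for m in range(N):
    D[m, m + 1] = m + 2

# Conversion (0,0) -> (1,1) as two bidiagonal raising steps
R1 = np.zeros((N + 1, N + 1))    # (0,0) -> (1,0)
R2 = np.zeros((N + 1, N + 1))    # (1,0) -> (1,1)
for m in range(N + 1):
    R1[m, m] = (m + 1) / (2 * m + 1)
    R2[m, m] = (m + 2) / (2 * m + 2)
    if m + 1 <= N:
        R1[m, m + 1] = -(m + 1) / (2 * m + 3)
        R2[m, m + 1] = (m + 2) / (2 * m + 4)
S = R2 @ R1

# u'(x) = 1 + int_0^x u(y) dy with u(0) = 1; exact solution u = e^x.
g = np.zeros(N + 1); g[0] = 1.0
A = D - S @ W                    # integro-differential operator, (1,1) basis
e0 = (-1.0) ** k                 # evaluation functional at x = 0
M = np.vstack([e0, A[:N]])       # border with the initial condition row
rhs = np.concatenate(([1.0], (S @ g)[:N]))
u = np.linalg.solve(M, rhs)
```

The bordered matrix `M` has a single filled-in top row and is otherwise banded, matching the semi-banded structure described above.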
3 Nonlinear Volterra equations via iterative methods
In this section we develop an iterative approach for solving nonlinear Volterra integral equations based on the linear case sparse spectral method. Computing solutions to nonlinear Volterra and Fredholm integral equations with iterative methods is not a novel idea in itself, see e.g. driscoll_automatic_2010, but typically comes with significant drawbacks, cf. remarks in atkinson_survey_1992; ezquerro_solving_2011. The core problem with iterative methods is how rapidly their computational cost scales with the expense of evaluation in each iteration: the slower the rate of convergence and the more expensive the individual evaluation in each step, the less feasible iterative methods become. Conversely, the presented sparse spectral method is very well suited to be used in conjunction with iterative methods, as it not only converges exponentially but also keeps evaluation cost comparatively low by making use of operator bandedness in the chosen bases.
We will primarily use a simple Newton iteration algorithm without line search, on the basis of implementations in NLsolve.jl patrick_kofod_mogensen_nlsolve, for the numerical experiments, but in principle many other iterative approaches may be used, resulting in further speedups in some cases.
The main idea of the extension to the nonlinear case is to notice that given functions $g$, $K$ and $f$, the general nonlinear, second kind Volterra equation
$u(x) = g(x) + \int_0^x K(x, y)\, f\big(u(y)\big)\, \mathrm{d}y$
may be cast into the form of a root-finding problem in function space for the objective function $F$ defined by
$F(u) := u - g - V_K f(u).$
The initial guess required for iterative approaches is thus made at the level of coefficient vectors, meaning that a guessed column vector representing the solution in the chosen basis is supplied to the iterative solver. When no convergence automation is used, the supplied length of the guess determines the maximum polynomial degree and thus the approximation error. The step-by-step method is stated in Algorithm 4.

Expand $g$ in the appropriate shifted Jacobi polynomial basis.

Generate the Volterra operator $V_K$ recursively via an operator-valued Clenshaw algorithm for the flipped kernel.

Generate the operator $V_K$ together with any required basis shift operators.

Apply a simultaneous root search (e.g. Newton's method) to the components of the objective function $F(\mathbf{u}) = \mathbf{u} - \mathbf{g} - V_K f(\mathbf{u})$.

The approximate solution is the obtained root $\mathbf{u}$.
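A minimal, self-contained illustration of Algorithm 4 in Python (our own sketch, not the paper's Julia code; `scipy.optimize.root` stands in for the Newton iteration, and $K \equiv 1$ again reduces the Volterra operator to the banded integration operator) solves $u(x) = x + \int_0^x u(y)^2\,\mathrm{d}y$, whose exact solution is $\tan x$:

```python
import numpy as np
from scipy.optimize import root

N = 24
k = np.arange(N + 1)

# Banded integration operator (Volterra operator with K = 1)
W = np.zeros((N + 1, N + 1))
W[0, 0] = W[1, 0] = 0.5
for n in range(1, N + 1):
    if n + 1 <= N:
        W[n + 1, n] = 1 / (2 * (2 * n + 1))
    W[n - 1, n] = -1 / (2 * (2 * n + 1))

# Quadrature-based transform between values and shifted Legendre coefficients
t, w = np.polynomial.legendre.leggauss(2 * N + 2)
V = np.polynomial.legendre.legvander(t, N)
to_coeffs = lambda vals: (V.T * w) @ vals * (2 * k + 1) / 2

g = np.zeros(N + 1); g[0] = g[1] = 0.5             # g(x) = x = (P~_0 + P~_1)/2

def residual(c):
    """F(u) = u - g - V_K(u^2) in coefficient space; u^2 is formed by
    squaring pointwise at the quadrature nodes and transforming back."""
    usq = to_coeffs((V @ c) ** 2)
    return c - g - W @ usq

sol = root(residual, np.zeros(N + 1), tol=1e-12)   # Newton-type iteration
u = sol.x
```

As in the experiments below, the initial guess is simply the zero vector, and the length of the guess fixes the maximal polynomial degree of the approximation.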
4 Nonlinear integro-differential Volterra equations
We can straightforwardly combine the considerations in sections 2 and 3 to obtain a sparse spectral method suitable for solving Volterra equations featuring both derivative operators and Volterra integral operators with nonlinearities. A very (but not exhaustively) general case of such an equation of second kind is
(10) $\sum_{m=0}^{M} c_m\, u^{(m)}(x) = g(x) + \lambda \int_0^x K(x, y)\, f\big(u(y)\big)\, \mathrm{d}y,$
supplied with initial conditions as in section 2. For brevity we only address equations of the form in Equation (10), but the methodology outlined in this paper is applicable to a much broader class of problems. The full step-by-step method is stated in Algorithm 5.

Expand $g$ in the appropriately raised Jacobi basis.

Generate the Volterra operator $V_K$ recursively via an operator-valued Clenshaw algorithm for the flipped kernel.

Assemble the required derivative, conversion and Volterra operators as in section 2.

Append evaluation operators to the top of the operator and the corresponding initial conditions to the top of $\mathbf{g}$.

Apply a simultaneous root search (e.g. Newton's method) to the components of the objective function $F$, whose first entries enforce the initial conditions and whose remaining entries are the coefficient-space residual of Equation (10).

The approximate solution is the obtained root $\mathbf{u}$.
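Combining the bordered operator of Algorithm 3 with the root search of Algorithm 4 gives Algorithm 5. The Python sketch below (ours and illustrative, with $K \equiv 1$ and a manufactured right-hand side chosen so that the exact solution is $e^x$) solves $u'(x) = g(x) + \int_0^x u(y)^2\,\mathrm{d}y$ with $u(0) = 1$ and $g(x) = e^x - (e^{2x} - 1)/2$:

```python
import numpy as np
from scipy.optimize import root

N = 24
k = np.arange(N + 1)

# Integration operator (K = 1 Volterra operator) on shifted Legendre coefficients
W = np.zeros((N + 1, N + 1))
W[0, 0] = W[1, 0] = 0.5
for n in range(1, N + 1):
    if n + 1 <= N:
        W[n + 1, n] = 1 / (2 * (2 * n + 1))
    W[n - 1, n] = -1 / (2 * (2 * n + 1))

# Derivative operator: shifted (0,0) -> shifted (1,1) coefficients
D = np.zeros((N + 1, N + 1))
for m in range(N):
    D[m, m + 1] = m + 2

# Conversion (0,0) -> (1,1) as two bidiagonal raising steps
R1 = np.zeros((N + 1, N + 1)); R2 = np.zeros((N + 1, N + 1))
for m in range(N + 1):
    R1[m, m] = (m + 1) / (2 * m + 1)
    R2[m, m] = (m + 2) / (2 * m + 2)
    if m + 1 <= N:
        R1[m, m + 1] = -(m + 1) / (2 * m + 3)
        R2[m, m + 1] = (m + 2) / (2 * m + 4)
S = R2 @ R1

# Quadrature transform for computing coefficients and nonlinear terms
t, w = np.polynomial.legendre.leggauss(2 * N + 2)
xq = (t + 1) / 2
V = np.polynomial.legendre.legvander(t, N)
to_coeffs = lambda vals: (V.T * w) @ vals * (2 * k + 1) / 2

gcoef = to_coeffs(np.exp(xq) - (np.exp(2 * xq) - 1) / 2)  # g = e^x - (e^{2x}-1)/2
e0 = (-1.0) ** k                                          # evaluation row at x = 0

def residual(c):
    """First entry enforces u(0) = 1; the rest is the (1,1)-basis residual
    of u' = g + int_0^x u(y)^2 dy, with the last row dropped."""
    sq = to_coeffs((V @ c) ** 2)                          # coefficients of u^2
    r = D @ c - S @ (gcoef + W @ sq)
    return np.concatenate(([e0 @ c - 1.0], r[:N]))

guess = np.zeros(N + 1); guess[0] = 1.0                   # start from u = 1
u = root(residual, guess, tol=1e-12).x
```

The bordered residual mirrors the structure of system (9): one dense initial-condition row on top of an otherwise banded coefficient-space residual.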
5 Numerical experiments
Throughout this section we measure errors between analytic solutions $u$ and computed approximate solutions $\tilde{u}$ in each point of the domain via the infinity norm of the absolute error,
$\|u - \tilde{u}\|_\infty = \sup_{x} |u(x) - \tilde{u}(x)|.$
5.1 Numerical experiments with linear VIDEs
5.1.1 Set 1: Second kind, convolution kernels, one derivative operator
As a proof-of-concept we first test the above method on three simple convolution kernel cases with analytically known solutions:
(11)  
(12)  
(13) 
with initial conditions given by
(14)  
(15)  
(16) 
The following analytic solutions derived respectively via variational iteration, Adomian decomposition and Laplace transform methods are found in wazwaz_linear_2011:
We plot the absolute error of the computed solution compared to the analytic solution in semilogarithmic scale in Figure 1, showing exponential convergence to the exact solutions.
5.2 Set 2: Collocation method for third kind integro-differential equations
A collocation method is used in shayanfard_collocation_2019 to solve certain types of third kind integro-differential Volterra equations.
While we do not explicitly treat third kind equations in this paper, the discussion of first and second kind integro-differential equations in section 2 implies an obvious extension to these cases. Shayanfard, Dastjerdi and Ghaini shayanfard_collocation_2019 discuss two numerical examples and provide a table of error values for differently chosen collocation points. The two numerical experiments with non-convolution kernels are
(17)  
(18) 
with initial conditions respectively given by
(19) 
and known analytic solutions
In Figure 2 we compare absolute errors of the results obtained with our sparse spectral approach to the errors obtained with their collocation method (as given in Tables 1 and 2 in shayanfard_collocation_2019). The bandwidth of the sparse method's operators for these third kind VIDEs is large, as they feature additional multiplications with Jacobi operators containing poorly approximated rational powers, so while our proposed method still performs very well in terms of accuracy, there is not much computation time efficiency to be gained from sparsity for these two problems.
5.3 Set 3: Integro-differential equations in Chebfun
The Chebfun package allows state-of-the-art computations using polynomial approximations and collocation methods in MATLAB pachon_piecewisesmooth_2010; battles_extension_2004; driscoll_chebop_2008. An implementation of an automatic collocation method for integral and integro-differential Volterra and Fredholm equations in Chebfun was presented in driscoll_automatic_2010. In this section we compare the performance of the sparse method with the dense collocation method used in Chebfun for problems requiring low and high polynomial orders.
5.3.1 Low order solutions
The example in this section is given in driscoll_automatic_2010 and is a non-convolution kernel linear VIDE which previously appeared in a discussion of higher order collocation methods for VIDEs by Brunner brunner_highorder_1986. We seek a solution to
(20) 
with initial condition
(21) 
and known analytic solution
In Figure 3(a) we plot the absolute error of the solution obtained via the sparse spectral method for increasing maximal polynomial approximation order. We present a spy plot of the quasi-banded integro-differential operator generated by our sparse method in Figure 3(b). We find that at the low orders where machine precision accuracy is already within reach, the operator for this problem is still dense, and thus the proposed method should realistically only match Chebfun's performance. That a speedup is nevertheless observed, see Table 1, may be explained by language-specific differences between MATLAB and Julia, by the automatic convergence search which Chebfun performs but which was not used for the sparse method, or by a combination of such factors. Sparsity becomes an important factor for efficiency when treating equations where more complicated solutions are to be expected, which require polynomial approximations in the order of hundreds or thousands of coefficients.
5.3.2 High order solutions
For an example which requires a higher polynomial order to solve with good accuracy we consider
(22) 
given initial condition
(23) 
and right-hand side function defined by
For all parameter values the analytic solution to this equation is known.
As this solution approximates a step-like function for increasing parameter values, see Figure 5(a), it is easy to see why polynomial approximations quickly begin to require high orders. Figure 5(b) shows a spy plot of the quasi-banded integro-differential operator generated by our sparse method, while Figure 4 shows the absolute error of some solutions obtained via the sparse spectral method. The solution needs to be resolved in relatively high order polynomial approximations, so the bandedness of the operator results in notable performance improvements compared to Chebfun's dense collocation method, see Table 2.
5.4 Set 4: Bessel kernels with highly oscillatory solutions
Hale hale_ultraspherical_2019 discusses an integro-differential equation with a Bessel function kernel, which appears in scattering applications and potential theory, building on previous work on Bessel kernel Volterra equations by Xiang and Brunner xiang_efficient_2013. We use this as the final numerical experiment in this section as it touches on the interesting case of highly oscillatory solutions without known analytic form. Specifically, we discuss the singularly perturbed version of the equation which appears in hale_ultraspherical_2019 and which intentionally makes the problem significantly more oscillatory:
(24) 
with right-hand side defined by
where the $J_\nu$ are first kind Bessel functions. Equation (24) is further supplied with initial conditions
To allow comparisons with hale_ultraspherical_2019 we will consider the example parameters
Analytic solutions to this equation are not known and convergence comparisons are thus made to high order approximate solutions instead. We plot the highly oscillatory solution in Figure 6(a) and the convergence to the solution in Figure 6(b). Similarly to results in hale_ultraspherical_2019, we observe rapid exponential convergence once the polynomial order becomes sufficient to resolve the frequency of the oscillations. Better convergence up to machine precision is possible when using a more sophisticated balancing of the approximation orders for the kernel and the solution respectively, as opposed to linearly increasing the approximation order of each of them at the same time. This could also be done using an automated convergence algorithm if needed, but this example is primarily presented to show the broad range of applicability even for oscillatory problems. As this particular Bessel kernel is ultimately a convolution kernel, methods which take the additional convolution kernel structure into account, e.g. Hale's method in hale_ultraspherical_2019, will generally outperform the general kernel method presented in this paper in accuracy or performance (in particular if operating in low polynomial approximation orders or if they themselves make use of sparsity structure).
5.5 Numerical experiments with nonlinear equations
5.5.1 Set 1: Power nonlinearity Volterra integral equations
The simplest case of nonlinear Volterra integral equations, and thus the case for which the most analytic solutions are available for direct comparison, is that of power nonlinearities of the form for some positive integer . We thus consider the examples
(25)  
(26) 
whose analytic solutions are derived respectively via a Picard type iteration and Adomian decomposition method in wazwaz_linear_2011:
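The Picard-type iteration just mentioned can be sketched on a toy power nonlinearity with known solution. The test equation u(t) = 1 + ∫_0^t u(s)^2 ds on [0, 1/2], with exact solution u(t) = 1/(1-t), is our own illustrative choice and not Equation (25) or (26); the sketch iterates u_{k+1} = 1 + V u_k^2 on a simple trapezoidal discretization rather than the spectral discretization of this paper.

```python
import numpy as np

n = 101
t = np.linspace(0.0, 0.5, n)
h = t[1] - t[0]

# Lower-triangular cumulative trapezoid matrix: (V @ g)[i] approximates
# the integral of g from 0 to t[i].
V = np.zeros((n, n))
for i in range(1, n):
    V[i, 0] = h / 2
    V[i, i] = h / 2
    V[i, 1:i] = h

u = np.ones(n)            # initial iterate u_0 = 1
for _ in range(60):
    u = 1.0 + V @ u**2    # Picard step for u = 1 + int_0^t u^2

err = np.max(np.abs(u - 1.0 / (1.0 - t)))
```

The iterates converge to the discrete fixed point, whose distance to the exact solution is set by the O(h^2) quadrature error of the trapezoid rule.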
As discussed above, and as is true for any iterative method, there are multiple parameters which may be fine-tuned to the problem at hand in order to achieve faster and more precise convergence, for example the initial guess or the convergence cutoffs. As what can go wrong in a standard application is of greater interest than what may happen in ideal circumstances, we omit such fine-tuning and instead simply supply a vector of all zeros of length for Equation (25) and a vector of all ones of length for Equation (26). We plot the maximal absolute errors between true and computed solutions in Figure 7. We observe exponential convergence as increases using a simple Newton iteration without line search.
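The Newton iteration without line search can be sketched on the same kind of toy problem (again u(t) = 1 + ∫_0^t u(s)^2 ds with exact solution 1/(1-t), a hypothetical stand-in for Equations (25) and (26), on a trapezoidal rather than spectral discretization): the residual is F(u) = u - 1 - V u^2 with Jacobian I - 2 V diag(u), and the uninformed all-zero initial guess from the text is used.

```python
import numpy as np

n = 101
t = np.linspace(0.0, 0.5, n)
h = t[1] - t[0]

# Cumulative trapezoid quadrature matrix (lower triangular).
V = np.zeros((n, n))
for i in range(1, n):
    V[i, 0] = h / 2
    V[i, i] = h / 2
    V[i, 1:i] = h

u = np.zeros(n)  # deliberately uninformed all-zero initial guess
for _ in range(50):
    F = u - 1.0 - V @ u**2           # residual of u = 1 + int_0^t u^2
    J = np.eye(n) - V * (2.0 * u)    # Jacobian: I - 2 V diag(u)
    du = np.linalg.solve(J, F)
    u -= du
    if np.max(np.abs(du)) < 1e-13:   # plain convergence cutoff, no line search
        break

err = np.max(np.abs(u - 1.0 / (1.0 - t)))
```

From the zero vector the first Newton step lands on the all-ones vector, after which the iteration converges quadratically to the discrete solution.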
5.5.2 Set 2: Numerical experiments with nonlinear VIDEs
For nonlinear VIDEs we consider:
(27)  
(28) 
with initial conditions
(29)  
(30) 
Analytic solutions to these equations were derived in wazwaz_linear_2011 using the variational iteration method:
As in section 5.5 we avoid making educated guesses for the initial guess supplied to the algorithm and merely increase the maximal allowed length of the solution coefficient vector, i.e., the maximal polynomial degree of the computed approximation. The initial guess for Equation (27) is a vector of all ones and the initial guess for Equation (28) is a vector of all zeros. We plot the maximal absolute errors between analytic and computed solutions in Figure 8. We again observe exponential convergence as increases using Newton iteration without line search.
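A hedged sketch of the same Newton approach for a toy nonlinear VIDE: the test equation u'(t) = f(t) + ∫_0^t u(s)^2 ds with u(0) = 1 and f manufactured so that u(t) = e^t is our own illustrative choice, not Equation (27) or (28). Integrating once reduces it to the fixed point form u = 1 + V f + V(V u^2), where V is again a cumulative quadrature matrix.

```python
import numpy as np

n = 201
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]

# Cumulative trapezoid quadrature matrix (lower triangular).
V = np.zeros((n, n))
for i in range(1, n):
    V[i, 0] = h / 2
    V[i, i] = h / 2
    V[i, 1:i] = h

# Manufactured forcing so that u(t) = exp(t) solves
# u'(t) = f(t) + int_0^t u(s)^2 ds, u(0) = 1.
f = np.exp(t) - (np.exp(2 * t) - 1.0) / 2.0
W = V @ V  # double cumulative integral from integrating the VIDE once

u = np.ones(n)  # all-ones initial guess, as in the text
for _ in range(50):
    F = u - 1.0 - V @ f - W @ u**2    # residual of u = 1 + Vf + V(V u^2)
    J = np.eye(n) - W * (2.0 * u)     # Jacobian: I - 2 W diag(u)
    du = np.linalg.solve(J, F)
    u -= du
    if np.max(np.abs(du)) < 1e-13:
        break

err = np.max(np.abs(u - np.exp(t)))
```

The accuracy is again limited only by the O(h^2) quadrature error of the toy discretization; the spectral discretization of this paper replaces V by sparse banded operators.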
Method         CPU time   approx. order   abs. error
Sparse method  0.1s       20              1.4e-15
Chebfun        0.2s       18 (autom.)     2.7e-15
k (Sparse)   CPU time   approx. order   abs. error
100          0.3s       300             6.0e-15
1000         0.4s       800             2.9e-14
10000        0.8s       2000            9.3e-14
50000        3.5s       5000            2.8e-13
100000       8.8s       8000            8.1e-12
k (Chebfun)   CPU time   autom. order   abs. error
100           1.4s       196            4.8e-14
1000          2.1s       540            6.6e-12
10000         10.7s      1477           1.2e-09
50000         10.4s      2863           4.1e-08
6 Notes on algorithm convergence
Convergence of the above-discussed method in the case of nonlinear equations arises from the convergence properties of the root search algorithm that is utilized, combined with the proofs for the respective linear variants. Proofs of convergence for second kind linear Volterra integrodifferential equations may be given in a similar fashion to those for linear Volterra integral equations in gutleb_sparse_2019 and differential equations in hale_fast_2018: the basic observation is that the full to-be-inverted operator is diagonally dominant for well-behaved functions and may be written as a compact perturbation of the identity, thus reducing the problem to standard finite section approximation convergence results, cf. bottcher_analysis_2006; olver_fast_2013; slevinsky_singular_2017; lintner_generalized_2015. As seen in the proof for linear first kind VIEs in gutleb_sparse_2019, proofs for first kind equations would require a deeper functional analytic approach, as these problems are somewhat ill-conditioned.
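The finite section idea referenced above can be illustrated with a toy compact perturbation of the identity. The operator below, the identity plus a rank-one kernel with geometrically decaying entries, is a hypothetical stand-in and not the actual Volterra operator of this paper: as the truncation size n grows, the leading entries of the finite section solutions settle down to the solution of the full infinite-dimensional system.

```python
import numpy as np

def finite_section_solve(n):
    # Truncate the operator A = I + v v^T, with v_i = 2^-(i+1) (a compact,
    # rank-one perturbation of the identity), and the right-hand side
    # b_i = 2^-i, then solve the resulting n x n linear system.
    i = np.arange(n)
    v = 2.0 ** (-(i + 1))
    A = np.eye(n) + np.outer(v, v)
    b = 2.0 ** (-i)
    return np.linalg.solve(A, b)

# Compare leading entries of successively larger finite sections.
x10, x20, x40 = (finite_section_solve(n) for n in (10, 20, 40))
d1 = np.max(np.abs(x20[:10] - x10))       # change from n=10 to n=20
d2 = np.max(np.abs(x40[:10] - x20[:10]))  # change from n=20 to n=40
```

Because the perturbation is compact with rapidly decaying entries, the truncation error decays geometrically with n, which is the mechanism the cited finite section results formalize.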
7 Discussion
We have presented a competitively fast general kernel sparse spectral method for nonlinear Volterra integrodifferential and integral equations which extends the linear results in gutleb_sparse_2019. The method is notably not reliant on the structure of convolution kernels and applies to general kernels. Furthermore, as it does not rely on low rank approximations, it is applicable in more general cases where such approximations fail. It thus combines very broad applicability with high performance and accuracy.
One noteworthy drawback of this method is that, although the method may yield sensible results for some types of singular kernels, as discussed in the numerical experiments section of gutleb_sparse_2019, there are as of now no known guarantees for such cases. That said, the presented method was shown to be convergent and well-behaved for problems that may be well approximated in the specified polynomial bases, which allow for a very general range of kernels.
The numerical experiments in this paper serve an illustrative purpose: in a practical application setting one would choose a more sophisticated and efficient root search algorithm than simple Newton iteration without line search, and make an educated initial guess for the root search based on background knowledge about the structure of the problem instead of supplying simple zero or one filled coefficient vectors. These options were deliberately ignored in this paper to illustrate that such sophistication is not required to achieve competitive performance and accuracy.