Isogeometric Analysis of the Gray-Scott Reaction-Diffusion Equations for Pattern Formation on Evolving Surfaces and Applications to Human Gyrification

10/28/2019
by   Jochen Hinz, et al.

We propose a numerical scheme based on the principles of Isogeometric Analysis (IgA) for the geometrical, pattern-formation-induced evolution of manifolds. The development is modelled by the use of the Gray-Scott equations for pattern formation in combination with an equation for the displacement of the manifold. The method forms an alternative to the classical finite-element method. Our method is based on partitioning the initially spherical geometry into six patches, which are mapped onto the six faces of a cube. Major advantages of the new formalism are the reconstruction of the manifold based on bicubic spline functions, used both for the computation of the concentrations and for the evolution of the mapping operator. These features give a smooth representation of the manifold which, in turn, may lead to more realistic results. The method successfully reproduces the smooth but complicated geometrical patterns found on the surface of human brains.


1 Introduction

In this section, we first present the problem statement and motivation, followed by the introduction of the notation, a recap of differential operators on surfaces, and a short introduction to Isogeometric Analysis.

1.1 Problem statement

The study of pattern formation is an active topic of research that deals with qualitative and quantitative models for reproducing the many patterns found in the natural world, for instance in hair follicles [7], leaves [8], butterfly wings and mammalian coat markings [9]. Since Turing’s paper [29], a variety of reaction-diffusion (RD) systems have served as a standard model for how these patterns may arise naturally. So-called Turing patterns, formed in the concentrations of the substrates subject to these RD-type systems, bear a strong resemblance to the stripes and spots of the many labyrinth-like patterns found in nature.
Pattern formation, resulting from RD-models, has been verified both experimentally [6] and computationally [5] and the field has broadened, greatly facilitated by computational simulation. A theoretical framework for the extension of RD-models to evolving surfaces, i.e., systems in which the two-dimensional geometry is a function of time, is presented in [10]. The evolution of the surface, in particular, may hereby be governed by the substrates on the surface.
Unfortunately, the underlying equations of such evolving RD-systems are often too complex to allow for an analytical treatment. This suggests the use of a numerical approach, typically one based on the principles of finite element analysis (FEA). The purpose of the present paper is to give an example of the discretization of the Gray-Scott RD-model on an evolving geometry, initially given by a spherical shell, using the principles of Isogeometric Analysis [3], a variant of classical FEA. To this end, we adopt the Gray-Scott RD-based model for human brain development proposed by Lefèvre et al. in [2] and present an IgA-based numerical scheme to tackle the underlying nonlinear equations. In the following, we present a short recap of models for human brain development and a motivation for the use of IgA as a numerical technique.
We note that the current model provides a description of human gyrification that is too simplistic from a biophysical point of view. As such, the presented work should be considered as a feasibility study for simulating pattern formation induced geometrical development of manifolds by the use of isogeometric analysis.

1.2 Models for Human Brain Development

Neural development has become a topic of growing interest in the past decades. On the one hand, healthy adult individuals exhibit qualitatively similar neural structures; on the other hand, neural development exhibits a substantial degree of randomness, which is largely confirmed by the observation that even monozygotic twins exhibit significant anatomical differences [1]. Among other factors, this neural ‘fingerprint’ manifests itself mainly through the patterns formed in the neural folding and buckling process occurring naturally after the twentieth week of fetal development (see Figure 1).

Figure 1: Typical patterns formed at the surface of human brains.

This suggests that environmental factors can have a profound influence on the course of neural development, which in turn suggests that the underlying biological process, mathematically, exhibits a high degree of sensitivity toward perturbations in the initial condition. On the other hand, a proficient model for human brain development should be capable of producing qualitatively similar outcomes for similar setups and explain neural pathologies like lissencephaly [28] and polymicrogyria [27] (see Figure 2) by quantitatively different starting conditions.

(a)
(b)
Figure 2: Polymicrogyria (a) and Lissencephaly (b).

The derivation of proficient models for human brain development is greatly hindered by the fact that experimentation on human fetuses is unethical. For this reason, existing models postulate various driving forces behind pattern formation and assess their validity by comparing the results of simulations to existing brains. Furthermore, quality is assessed through the criteria above. Existing models for brain pattern formation can be split into two broad categories: intrinsic [22, 25, 26, 23] and extrinsic [20, 21, 24]. As the name suggests, intrinsic models postulate an intrinsic process as the driving force behind neural buckling and folding, such as a chemical process, whereas extrinsic models postulate an external force, such as stresses exerted by the skull. Researchers in the field aim to identify the most likely driving force by comparing various models in order to reduce the amount of experimentation needed to a minimum.
The intrinsic model proposed by Lefèvre et al. in [2] is the subject of this manuscript. It adopts a modified version of the Gray-Scott reaction-diffusion equations as a basic model for pattern formation. The concentration of one of the two chemical species considered in the model is postulated as the growth activator, leading to deformations in the geometry that resemble typical folding patterns found in human brains. The implicit assumption of the model is that neural growth can, to a good approximation, be regarded as a process taking place only at the surface. The results from the numerical implementation presented in the article exhibit a high degree of qualitative similarity for similar setups, as well as quantitative differences resulting from perturbations in the initial condition. The outcomes also show qualitative differences for different reaction rates, and the authors were able to reproduce certain characteristics of various brain anomalies by changing the numerical values of the parameters. In this manuscript, we demonstrate that the various cases studied in [2] can also be reproduced by the use of isogeometric analysis, whereby we obtain a higher degree of smoothness in the manifold.

1.3 Motivation

The model proposed by [2] results in a complex system of equations that cannot be solved analytically. The main challenge is the fact that the chemical species affect the local parametric properties of the geometry which in turn affect the local expressions of the differential operators acting on the concentrations of the chemical species, leading to a highly nonlinear system. Complexity is further increased by the existence of nonlinear reaction terms. The authors of [2] present a numerical scheme that utilizes a finite-difference discretization in the temporal component, treating some terms implicitly and others explicitly, as well as a classical finite-element approach in the spatial components. The initial geometry is given by a triangularly tessellated spherical shell.

Growth is incorporated by extracting the magnitude of the velocity vector from the concentration of one of the chemical species at a triangle vertex, followed by a shift of its position in the direction of the local normal vector. On a tessellated surface, due to the non-smooth transition between adjacent triangles, the normal vector formally does not exist at the triangle vertices. The article does not explicitly state how the normal vector is computed, but it is apparent that it is approximated by a weighted average of the normal vectors of the surrounding (planar) triangles. Within an implementation that is mainly focused on minimizing computational costs (in order to allow for a large number of simulations), such an approach is reasonable.


Being able to construct smooth geometries would obviously constitute an improvement over the above shortcoming, since the normal vector is defined at every point of the geometry. To this end, the evident choice is to replace the classical finite-element approach with one based on isogeometric analysis (IgA). IgA aims to bridge the gap between classical FEA and the geometric modelling techniques from computer-aided design (CAD). As such, the proposed approach, apart from some restrictions, allows for the reconstruction of the evolving surface by smooth B-spline basis functions of arbitrary polynomial order by using a spline-based mapping operator. The geometry hereby inherits the regularity properties of the basis (locally, up to -regularity is possible) and the mapping operator becomes another unknown in the problem formulation. Numerically, we therefore treat it in the same way as the unknown concentrations on and, by that, add more mathematical rigour to the scheme.
On the one hand, smoothness in the geometry will most likely result in more appealing outcomes, since non-smooth geometries are not realistic from a biological standpoint (we present more quantitative statements about accuracy in Section 3.4). Since the plausibility of a model can only be assessed by the quality of the results, we hereby regard the spline-based reconstruction of the evolving surface, made possible by an IgA-based numerical approach, as particularly important. We introduce an IgA-based numerical scheme in the hope of adding more credibility to the model. Furthermore, smoothness allows for a (non-discrete) measure of curvature, which can subsequently serve as a local refinement criterion.
On the other hand, modelling an evolving geometry that is initially given by a spherical shell (not approximated by a triangular tessellation) is challenging and requires a computational domain that is comprised of several quadrilaterals. With a polynomial order of , the computational costs per degree of freedom (DOF) of the proposed IgA-scheme are expected to be higher than in the FEA-approach proposed in [2] (unless , in which case the costs are the same). Hence, this approach should be considered quality-oriented and may serve as inspiration for IgA-based numerical schemes of similar models for evolving surfaces.

1.4 Notation

We represent vectors and vector-valued functions utilizing bold-faced letters. The -th entry of vector(-valued function) is denoted by or . Matrices are presented in square brackets. The -th entry in the -th column of matrix is denoted by .
Let and . We define the vector-by-vector derivative (Jacobian matrix) of with respect to as follows:

(1.1)
Figure 3: Depiction of the relation between local and global functions.

In this manuscript, we will frequently work with finite-element bases. Usually the basis is denoted by . When referring to elements of , we allow for elements with weights from with . The value of will always be clear from the context. We make frequent use of parameterizations. By , with and , , we denote the mapping that parameterizes the geometry with points from the parametric domain . ‘Local’ functions living on are generally represented utilizing lower-case letters, for example . Functions living in parameter space can be made ‘global’ by utilizing the inverse of and are generally represented by the corresponding upper-case letter, here: . Their relationship is given by

(1.2)

see Figure 3.
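The local/global relationship (1.2) amounts to a simple composition with the mapping. A minimal Python sketch, using an assumed example mapping onto a sphere patch and an assumed 'global' height function (both purely illustrative):

```python
import numpy as np

# Assumed example mapping: x parameterizes a patch of the unit sphere
# over the parametric domain [0, 2*pi) x (0, pi/2].
def x(theta, phi):
    """Mapping from parameter space to the manifold (a sphere patch)."""
    return np.array([np.cos(theta) * np.sin(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(phi)])

def G(p):
    """A 'global' function living on the manifold: the height coordinate."""
    return p[2]

def g(theta, phi):
    """Its 'local' counterpart, obtained by composition with the mapping (eq. (1.2))."""
    return G(x(theta, phi))

# The local representation of the height function is simply cos(phi).
assert np.isclose(g(0.3, 0.7), np.cos(0.7))
```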

1.5 Differential Operators on Geometric Objects

Let

J = \frac{\partial \mathbf{x}}{\partial \boldsymbol{\xi}}    (1.3)

denote the Jacobian matrix of the mapping . By

G = J^\top J    (1.4)

we denote the metric tensor associated with

. For convenience, we introduce the shorthand notation

\sqrt{g} = \sqrt{\det G}    (1.5)

for the canonical geometric factor induced by the metric.
Let and , where denotes the tangent plane of spanned by the column vectors of . Similarly to scalar functions, the local counterparts of functions receive the corresponding lower case letter. Their relationship is hence given by:

(1.6)

Hence, induces a canonical metric for vector-valued functions in local coordinates, since

(1.7)

Analogous to the standard divergence, the geometric divergence is defined as the negative adjoint of the surface gradient. This translates to local coordinates as follows:

(1.8)

for all functions that vanish on (compact support). The function that satisfies (1.8) is, in local coordinates, given by [15, p. 18]

(1.9)

In the remainder, we shall replace for convenience.
Similarly, the geometric gradient, in local coordinates, is given by [18, p. 62]

(1.10)

where denotes the nabla operator in local coordinates.
The Laplace-Beltrami operator, being the counterpart of the ordinary Laplace-operator, consequently satisfies . After some rearrangement, we find [4]

(1.11)

where the are the entries of .
The above expressions can be regarded as generalizations of the commonly-encountered differential operators on manifolds. As such, most of the operations involving partial integration that are commonly applied in FEA are still applicable. In particular

(1.12)

where denotes the unit outward normal vector from the tangent plane of .
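The local-coordinate expressions of this subsection can be verified symbolically. The following Python/SymPy sketch is illustrative only and not part of the scheme of this paper: it computes the metric and the Laplace-Beltrami operator for an assumed example, the unit sphere in spherical coordinates, and checks the result against a known eigenfunction.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

# Assumed example: the unit sphere, parameterized by spherical coordinates
# (theta, phi), with 0 < theta < pi (polar) and 0 <= phi < 2*pi (azimuthal).
X = sp.Matrix([sp.sin(theta) * sp.cos(phi),
               sp.sin(theta) * sp.sin(phi),
               sp.cos(theta)])

# Jacobian of the mapping (cf. (1.3)) and induced metric G = J^T J (cf. (1.4)).
J = X.jacobian([theta, phi])
G = sp.simplify(J.T * J)
assert G == sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])

sqrt_g = sp.sin(theta)          # sqrt(det G) on 0 < theta < pi (cf. (1.5))
Ginv = G.inv()

def laplace_beltrami(f):
    """Laplace-Beltrami in local coordinates:
    (1/sqrt(g)) * d_i ( sqrt(g) * g^{ij} * d_j f )."""
    coords = [theta, phi]
    out = 0
    for i in range(2):
        for j in range(2):
            out += sp.diff(sqrt_g * Ginv[i, j] * sp.diff(f, coords[j]), coords[i])
    return sp.simplify(out / sqrt_g)

# Sanity check against a known eigenfunction: cos(theta) (the spherical
# harmonic Y_10) satisfies  Delta_S cos(theta) = -2 cos(theta).
assert sp.simplify(laplace_beltrami(sp.cos(theta)) + 2 * sp.cos(theta)) == 0
```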

1.6 Isogeometric Analysis

As stated in Subsection 1.3, Isogeometric Analysis (IgA), first introduced by Hughes et al. in [3], is a numerical technique aimed at bridging the gap between the principles of CAD and FEA. As CAD-geometries (typically) come in the form of B-spline- or NURBS-based [30] mapping operators, the isoparametric principle, which states that the unknowns on the geometry should be treated in the same way as the geometry itself, lies at the heart of any IgA-based numerical simulation. To this end, the same NURBS basis used for the geometry is employed in the finite element analysis and the NURBS-based mapping operator is left unaltered, avoiding the need for tessellation. In the following, we present a brief recap of B-splines, as well as spline-based surface mappings.
B-splines are piecewise-polynomial functions that can be constructed so as to satisfy various continuity properties at the places where the polynomial segments connect. Their properties are determined by the entries of the so-called knot vector

\Xi = \{ \xi_1, \xi_2, \ldots, \xi_{n+p+1} \}    (1.13)

The knot vector is a non-decreasing sequence of parametric values that determine the boundaries of the segments on which the spline-basis is polynomial. Selecting some polynomial order , the -th order spline-functions are constructed recursively, utilizing the relation (with )

N_{i,p}(\xi) = \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i} N_{i,p-1}(\xi) + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}} N_{i+1,p-1}(\xi)    (1.14)

starting from

N_{i,0}(\xi) = \begin{cases} 1 & \text{if } \xi_i \le \xi < \xi_{i+1} \\ 0 & \text{otherwise} \end{cases}    (1.15)

and iterating until . The support of basis function is given by the interval and the number of continuous derivatives across knot is given by , where is the multiplicity of in . In practice, is repeated times as well as such that and . As a result, the basis forms a non-negative partition of unity on the entire parametric domain , that is:

\sum_i N_{i,p}(\xi) = 1    (1.16)

with

N_{i,p}(\xi) \ge 0    (1.17)

for all spline functions [3]. Figure 4 shows the B-spline basis resulting from the knot vector

(1.18)
Figure 4: The univariate B-spline basis resulting from the knot vector .
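The Cox-de Boor recursion described above is straightforward to implement. The following Python sketch (with an assumed open uniform knot vector, analogous to the one of Figure 4) evaluates the basis and verifies the non-negative partition-of-unity property (1.16)-(1.17):

```python
import numpy as np

def bspline_basis(i, p, xi, knots):
    """Evaluate the i-th B-spline basis function of order p at xi
    via the Cox-de Boor recursion (eqs. (1.14)-(1.15))."""
    if p == 0:
        # Half-open intervals; close the last nonempty one so the basis
        # also covers the right endpoint of the parametric domain.
        if knots[i] <= xi < knots[i + 1]:
            return 1.0
        if xi == knots[-1] and knots[i + 1] == knots[-1] and knots[i] < knots[i + 1]:
            return 1.0
        return 0.0
    left, right = 0.0, 0.0
    if knots[i + p] > knots[i]:            # guard against 0/0 for repeated knots
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, xi, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, xi, knots)
    return left + right

# Assumed open uniform knot vector for p = 3 on [0, 1].
p = 3
knots = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]
n = len(knots) - p - 1                     # number of basis functions

# Non-negative partition of unity (eqs. (1.16)-(1.17)).
for xi in np.linspace(0.0, 1.0, 11):
    vals = [bspline_basis(i, p, xi, knots) for i in range(n)]
    assert all(v >= 0.0 for v in vals)
    assert np.isclose(sum(vals), 1.0)
```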

The extension to bivariate spline bases is now straightforward: given two univariate bases and , we build a bivariate basis , living on , by means of a tensor-product, where

(1.19)

The values contained in and (without knot-repetitions) hereby become the boundaries of the polynomial segments, which can be regarded as the counterparts of classical elements.
We construct the mapping of a B-spline surface as follows:

(1.20)

where the are referred to as the control points. If the dimensionality of the control points is given by , we are thus parameterizing a two-dimensional manifold . Replacing the tensor-product index by a single global index , we may therefore say that is contained in , with . The isoparametric principle now states that the same basis should be used for the unknowns on parameterized by . As such, integrals of the form

(1.21)

as they commonly arise in FEA, are computed by an appropriate pull-back into . We have:

(1.22)

where is the geometric factor induced by (see Subsection 1.5), while and are related through (1.2).
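The pull-back (1.22) can be illustrated numerically: the sketch below approximates the area of the unit sphere as the integral of the geometric factor over the parametric domain, using an assumed spherical-coordinate parameterization and a simple midpoint quadrature (illustrative only; the scheme of this paper uses the spline parameterization instead):

```python
import numpy as np

def sqrt_det_metric(theta, phi):
    """sqrt(det G) for an assumed sphere parameterization x(theta, phi),
    with the Jacobian approximated by central finite differences."""
    h = 1e-6
    def x(t, p):
        return np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
    J = np.column_stack([(x(theta + h, phi) - x(theta - h, phi)) / (2 * h),
                         (x(theta, phi + h) - x(theta, phi - h)) / (2 * h)])
    G = J.T @ J
    return np.sqrt(np.linalg.det(G))

# Midpoint quadrature on a uniform grid over [0, pi] x [0, 2*pi].
m = 100
th = (np.arange(m) + 0.5) * np.pi / m
ph = (np.arange(m) + 0.5) * 2 * np.pi / m
area = sum(sqrt_det_metric(t, p) for t in th for p in ph) * (np.pi / m) * (2 * np.pi / m)

assert abs(area - 4 * np.pi) < 1e-2   # surface area of the unit sphere is 4*pi
```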

2 The Gray-Scott Reaction-Diffusion Model for Human Brain Development

We use the Gray-Scott reaction-diffusion based model that was proposed by Lefèvre et al. [2]. Assuming that the reaction takes place on a planar surface, the Gray-Scott two-component model for human brain growth reads [2]:

\frac{\partial u}{\partial t} = d_u \Delta u - u v^2 + f (1 - u), \qquad \frac{\partial v}{\partial t} = d_v \Delta v + u v^2 - (f + k) v    (2.1)

subject to initial and boundary conditions.
Here and are the diffusion constants of and on the surface and is the ordinary two-dimensional Laplace operator. Further, and , respectively, denote the feeding/drainage rate and the -terms are reaction terms. The function constitutes a linearly stable steady-state solution of (2.1) [5].
The (non-equilibrium) solutions of (2.1) are characterized by a large number of patterns for different values of and (alternating between low and high concentration).
The reaction-diffusion process need not take place on a planar geometry. The geometry can, for instance, be a curved surface in parameterized by the mapping . As reaction-diffusion type equations commonly follow from mass-conservation considerations, the local curvature of the surface must have an effect on the system of PDEs. According to [10], in the presence of curvature, one has to replace in (2.1) (see Section 1.5). Thus, in the global sense, the reaction-diffusion equation reads:

(2.2)

subject to initial and boundary conditions.
The above system of PDEs is easily translated to local coordinates by replacing and .
The most distinguishing feature of the model is the inclusion of surface deformation. It is assumed that species acts as a growth activator and as inhibitor. To be specific, it is suggested that the species affect the mapping operator in the following way:

(2.3)

where is some growth function and is the unit outward normal vector to the surface. The most straightforward choice for is

(2.4)

for some , as proposed in [2]. With this choice, the geometry will deform at places where is nonzero, which is why acts as a growth activator.
As , in the presence of growth, becomes a time-dependent function, the geometry will receive a time-subscript , where is parameterized by . Similarly, the surface metric receives a time-indicator as well as the Riemannian volume form .
Obviously, the inclusion of geometry deformation has an influence on mass-conservation considerations over control surfaces. According to [10], in equation (2.2), the time-evolutions of the concentrations and have to be modified so as to contain additional terms. The terms are, in local coordinates, given by and , respectively. With that in mind, the modified equations, in local coordinates, read:

(2.5)

where .
These additional terms ensure that the average concentrations decrease upon surface expansion and increase upon surface contraction. Since whenever the surface locally expands and whenever it (locally) contracts, the modification makes intuitive sense.
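The effect of the dilution terms can be illustrated with a one-dimensional analogue: on a uniformly expanding 'surface' of length L(t), the modification dc/dt = -c L'/L keeps the total content c*L constant. A sketch (the growth law is assumed purely for illustration):

```python
# 1D analogue of the dilution term: mass conservation of c * L requires
# dc/dt = -c * L'(t)/L(t), mirroring the -(d/dt sqrt(g))/sqrt(g) * c terms.
def L(t):
    return 1.0 + 0.5 * t     # assumed growth law

def dLdt(t):
    return 0.5

# Forward-Euler integration of dc/dt = -c L'/L up to t = 1.
c, t, dt = 2.0, 0.0, 1e-4
mass0 = c * L(0.0)
for _ in range(10000):
    c += dt * (-c * dLdt(t) / L(t))
    t += dt

assert abs(c * L(t) - mass0) < 1e-3   # total content c * L stays constant
```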
The basic idea behind the model is that patterns formed on the surface will manifest themselves in surface deformations through the extension of the model via (2.3) and (2.4). With the right choice for and , the patterns formed resemble the typical patterns found on the surface of human brains.

3 Isogeometric Implementation

In this section, we present a general numerical scheme with which the differential equation (2.5) from Section 2 is tackled.
In the remainder of this section, we will assume that the Riemannian volume form satisfies the following condition: for all , there exist strictly positive constants , with , such that (i.e., is ‘well-behaved’). First, we summarise very briefly the IgA procedure combined with the parameterization of the spherical surface. Subsequently, we describe issues like time integration and refinement criteria.
Our initial geometry is given by a spherical shell. At first sight, it is tempting to use the classical parameterization of a sphere on the basis of spherical coordinates. This strategy, however, leads to problems, since the segments contract towards the poles. This contraction leads to singularities in the metric, which cause numerical problems such as large quadrature errors, as well as more fundamental problems such as the fact that cannot be guaranteed due to the metric not being well-behaved. Therefore, we use an alternative approach in which the sphere is partitioned into six segments, which are mapped onto the faces of a (hollow) cube, where each face represents a section of the sphere. These segments are referred to as patches. Each face can be mapped onto the reference domain through an additional (trivial) pull-back. Given a hollow cube with faces at , we map points from the cube onto a spherical shell via the operator

\mathbf{x} \mapsto r \, \frac{\mathbf{x}}{\lVert \mathbf{x} \rVert_2}    (3.1)

where denotes the radius of the sphere.
The mapping from equation (3.1) constitutes the initial parameterization and is part of the initial condition. The discretized initial condition, including the initial concentrations of and , is expressed in terms of elements from through least-squares projections, where the choice of the basis is discussed in Section 3.3.
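The projection onto the sphere can be sketched in a few lines; here we assume the standard radial normalization x → r x/‖x‖₂, which maps each cube face onto a section of the spherical shell:

```python
import numpy as np

# Points on the surface of a hollow cube with faces at +-1 are projected
# radially onto a sphere of radius r (assumed standard form of (3.1)).
def cube_to_sphere(x, r=1.0):
    x = np.asarray(x, dtype=float)
    return r * x / np.linalg.norm(x)

# Sample points on the face z = 1 of the cube all land on the unit sphere.
pts = [cube_to_sphere([u, v, 1.0]) for u in (-1.0, 0.0, 0.5) for v in (-1.0, 0.2, 1.0)]
assert all(np.isclose(np.linalg.norm(p), 1.0) for p in pts)
```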

3.1 Temporal Discretization

We can formulate the system comprised of substrates and and geometry parameterized by as a system of equations in local coordinates

(3.2)

with

(3.3)

Defining

(3.4)

and

(3.5)

the system of equations can be written as:

(3.6)

The individual components of shall be referred to as and . Note that is itself a vector-valued function.
We integrate both sides of (3.6):

(3.7)

Before proceeding to the discretization, we define as the approximation of , resulting from the discretized scheme.
The right-hand-side integral of (3.7) is approximated by a mixed implicit/explicit (IMEX) quadrature. Defining , we utilize:

(3.8)

Here,

(3.9)

and

(3.10)

with , and .
The expression represents the time-discretization of :

(3.11)

We utilize a backward-difference scheme to avoid having to include which is unknown at time-instance .
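The structure of such an IMEX step, diffusion treated implicitly and the nonlinear reaction explicitly, can be sketched on a simplified stand-in problem: a one-dimensional periodic Gray-Scott system with assumed parameter values, not the surface problem of this paper.

```python
import numpy as np

n, h, dt = 64, 1.0 / 64, 0.5
Du, Dv, f, k = 2e-5, 1e-5, 0.024, 0.06   # assumed parameter values

# Periodic 1D Laplacian (second-order finite differences).
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) +
       np.diag(np.ones(n - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0
lap /= h**2

def imex_step(u, v):
    """One IMEX step: explicit reaction, implicit diffusion, i.e.
    (I - dt * D * lap) w_new = w + dt * reaction(w)."""
    ru = -u * v**2 + f * (1.0 - u)
    rv = u * v**2 - (f + k) * v
    u_new = np.linalg.solve(np.eye(n) - dt * Du * lap, u + dt * ru)
    v_new = np.linalg.solve(np.eye(n) - dt * Dv * lap, v + dt * rv)
    return u_new, v_new

rng = np.random.default_rng(0)
u, v = np.ones(n), 0.1 * rng.random(n)   # perturbed initial condition
for _ in range(100):
    u, v = imex_step(u, v)

assert np.all(np.isfinite(u)) and np.all(np.isfinite(v))
```

The implicit treatment of the stiff diffusion term keeps the step stable for time-steps far larger than an explicit scheme would allow, while the explicit reaction term avoids a nonlinear solve per step.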
We cast the discretized system into the form

(3.12)

where

(3.13)

As before, the individual components of and are referred to utilizing , and subscripts. Note that is linear in , and that each component of only depends on the corresponding component in .

3.2 Spatial Discretization

We first multiply equation (3.12) by a test-function and integrate over . For the sake of brevity, we utilize . Note that integrals over are locally reformulated in terms of integrals over the hollow cube . The additional pull-back from the faces of into the reference domain is omitted for the sake of simplicity. We have:

(3.14)

where we have made use of integration by parts, see equation (1.12), in conjunction with the assumption that is a manifold without a boundary.
Similarly, we obtain:

(3.15)

As a next step, we introduce a local basis . In equations (3.14) and (3.15), we successively replace by and we approximate and by elements from ,

(3.16)

Introducing , , , and with

(3.17)

we can construct a system of equations for and . They satisfy:

(3.18)

and

(3.19)

Note that, strictly speaking, the matrices and vectors from equation (3.17) should receive a time-superscript, which has been omitted for the sake of brevity.
After systems (3.18) and (3.19) have been solved at time-instance , we are in the position to update the geometry. Equations (3.12) and (3.13) suggest that we should update according to the relation

(3.20)

We solve (3.20) in the weak sense, leading to

(3.21)

finalizing the iteration.

3.3 Essential Boundary Conditions and Choice of Basis

Assuming that parameterizes a boundaryless geometry for all , there are no essential spatial boundary conditions for functions . There do, however, exist requirements for functions and some initial condition

(3.22)

The projections of the components of onto the basis become the first iterates , and that jointly form the discretized initial condition.
The requirements we have to impose on functions follow from the requirement that any must be a function on and hence single-valued on . In order to be compatible with the finite-element discretization, we furthermore have to require that functions satisfy . Clearly, thanks to the well-behavedness of the metric induced by the mapping (see Section 1.5), the boundedness of first-order derivatives induced by is conditional on the boundedness of first-order derivatives in local coordinates. As such, a sufficient condition for this requirement is straightforwardly translated to local functions living in :

(3.23)

As we shall see, this is a critical requirement especially on shared edges / vertices of the various patches under the mapping . We enforce requirement (3.23) directly on the local basis functions such that the global counterpart of any satisfies by default.
Let refer to the various faces of . The connectivities of the various patches on follow trivially from the connectivity of the on . Hence there are pairs of faces for which . For functions , we have to impose:

(3.24)
Figure 5: Two functions on either side of the edge connecting and are combined into one function. In the absence of coupling, the individual parts living on both faces would not comply with (3.24), as they are not single-valued on the connecting edge. Integrals over the cube are computed by a patchwise pull-back of function restrictions into the reference domain .

With condition (3.24) in mind, it becomes apparent that we can construct valid bases from the patchwise discontinuous bases living on the with an appropriate coupling of degrees of freedom. We will assume that, upon pull-back into the reference domain, the constitute spline-bases with identical open and uniform knot vectors in both coordinate directions. As such, the bases are compatible at shared edges / vertices in the sense that oppositely-faced boundary functions can be combined into one function that is compatible with (3.23) (see Figure 5). Besides the pairs of functions that need to be coupled on either side in the interior of shared edges, it is worth noting that the cube has eight extraordinary vertices at which three functions need to be coupled (see Figure 6).

Figure 6: At the cube vertices three basis functions have to be coupled (one from each patch).
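The coupling pattern can be verified by a counting argument: merging the geometrically coincident boundary DOFs of six n × n nodal grids (2-way coupling in the interior of edges, 3-way coupling at the eight vertices) must leave 6n² − 12n + 8 independent DOFs. A Python sketch of this count (nodal grids are an assumed simplification of the spline DOF layout):

```python
import itertools

def unique_dofs(n):
    """Number of independent DOFs for six n x n nodal grids on the faces
    of a cube, after merging geometrically coincident boundary nodes."""
    points = set()
    grid = [i / (n - 1) for i in range(n)]   # nodal coordinates in [0, 1]
    for axis in range(3):                     # direction of the face normal
        for side in (0.0, 1.0):               # the two opposite faces
            for a, b in itertools.product(grid, grid):
                p = [a, b]
                p.insert(axis, side)          # embed the face into 3D
                points.add(tuple(round(c, 12) for c in p))
    return len(points)

# Closed form: 6(n-2)^2 interior + 12(n-2) edge + 8 vertex DOFs = 6n^2 - 12n + 8.
for n in (2, 3, 5, 8):
    assert unique_dofs(n) == 6 * n**2 - 12 * n + 8
```

For n = 2 this correctly reduces to the 8 cube vertices, each shared by three faces.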

3.4 Properties of the Numerical Scheme

In equation (3.8), we introduced the temporal discretization of the right-hand side term with

(3.25)

The scheme can be regarded as a mixed implicit / explicit (IMEX [11]) scheme with the additional feature that is present in . The local truncation errors for the individual components of the temporal discretization are given by:

(3.26)

The global error is of the same order [12].
In the following, we investigate the properties of the scheme introduced in Section 3.2, in particular symmetry and coercivity. The weak form leads to a left hand side operator of the form:

(3.27)

As each component of only depends on the corresponding component in , the analysis can be carried out component-wise. In Section 3.2, we derived:

(3.28)

Clearly, is symmetric. Furthermore, we have:

(3.29)

As such, a sufficient condition for coercivity is:

(3.30)

It is easy to verify that, for a given growth rate , we have:

(3.31)

As the growth rate typically satisfies , in conjunction with the fact that the velocity vector points in the direction of the normal vector corresponding to (leading to surface expansion, i.e., the magnitude of is expected to increase over time), it is reasonable to assume that (3.30) is not violated. As such, is coercive with coercivity constant

(3.32)

Similarly, a sufficient condition for coercivity of is:

(3.33)

which is likely not violated either. As such, we may conclude that is symmetric and coercive, and that the fully discretized scheme is symmetric positive definite and hence invertible. This suggests the use of a CG-type algorithm for the inversion of the linear systems resulting from relations (3.18) and (3.19).
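A plain conjugate gradient iteration suffices for such systems. The sketch below applies a hand-rolled CG to an assumed stand-in matrix, an identity plus a scaled 1D Laplacian, which is symmetric positive definite like the operators above (not the actual system matrices of this paper):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Plain CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Assumed stand-in for the discretized system: identity (mass-like) plus a
# scaled 1D Laplacian (stiffness-like) -- symmetric positive definite.
n, dt = 100, 0.1
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K = np.eye(n) + dt * A
b = np.ones(n)

x = conjugate_gradient(K, b)
assert np.linalg.norm(K @ x - b) < 1e-8
```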
In the following, we analyse the impact of the higher-order smoothness, made possible by the use of B-Spline basis functions, on the reconstruction of the surface that is initially given by the projection of (3.1) onto . For the analysis, we use a basis resulting from employing the uniform open knot vector

(3.34)

patchwise in both directions. For fixed , the use of the above knot vector leads to a basis of equal cardinality, regardless of .

(a)
(b)
(c)
Figure 7: Approximation of using spline bases resulting from (a), (b) and (c). For , we clearly see the planar elements resulting from the tessellation, while the approximation becomes virtually indistinguishable from a spherical shell parameterized by for .

Let denote the mismatch between (see (3.1)) and its approximation resulting from .

Table 1: Value of for polynomial orders 1, 2 and 3.

Table 1 clearly demonstrates the superiority of the higher-order spline functions in reconstructing the spherical shell. The table shows that, for equal , projections with bases of order are consistently one order of magnitude more accurate than those of order . To achieve an accuracy similar to that of (amounting to DOFs), for instance, a basis with is sufficient (amounting to only DOFs). The improved accuracy is also clearly visible in Figure 7.
For a more in-depth analysis of the approximation properties of spline basis functions, we refer to [19, Chapter 4].

3.5 Time-Step Selection

So far we have not discussed the choice of the time-step . Since the system-matrices have to be rebuilt after each iteration, the time-step selection should be based on the following principles:

  • the time-step should be as large as possible with respect to numerical stability;

  • the time-step should be small enough with respect to the characteristic time-scale of the equation to warrant numerical accuracy;

  • the time-step selection should be a cheap operation.
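A controller meeting these criteria is a PID-type step-size controller. A minimal sketch follows; the functional form and gain values below are those commonly reported for PID step-size control in the literature and are assumptions, not necessarily the values used in this work:

```python
# PID step-size controller (assumed standard form): e_n is the control
# parameter at step n, e.g. the relative change of the norm of the solution
# components; tol is the target value. The gains kP, kI, kD are assumed
# typical literature values.
def pid_timestep(dt, e_n, e_nm1, e_nm2, tol, kP=0.075, kI=0.175, kD=0.01):
    factor = ((e_nm1 / e_n) ** kP *
              (tol / e_n) ** kI *
              (e_nm1 ** 2 / (e_n * e_nm2)) ** kD)
    return factor * dt

# If the control parameter sits exactly at the tolerance, the step is kept:
assert abs(pid_timestep(0.1, 1e-3, 1e-3, 1e-3, 1e-3) - 0.1) < 1e-12
```

The update is a cheap closed-form expression, satisfying the third criterion, while the integral term drives the control parameter toward the tolerance and the proportional/derivative terms damp oscillations in the step size.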

With the above principles in mind, time-step selection based on the proportional-integral-derivative (PID) controller proposed by Valli et al. [13] constitutes a suitable choice. Here, we use the relative change of the -norm of the components of as control parameters at time instance . This is a cheap operation since, for instance,