1 Introduction
Partitioned solution schemes for fluid-structure interaction (FSI) are widely used in modern computational engineering science, as they offer great flexibility and modularity concerning the two solvers employed for the fluid and the structure. Treated as black boxes, the solvers are coupled only via the exchange of interface data. The major downside of partitioned schemes, however, is an inherent instability caused by the so-called added-mass effect FoersterPhD; Wall_AddedMass. Depending on the application, its influence might be severe enough to impede a numerical solution if no countermeasures are taken.
The simplest way to retain stability is a relaxation of the exchanged coupling data. The relaxation factor might either be constant or updated dynamically via Aitken’s method FSI_DynamicRelax; irons1969version. Unfortunately, the high price to be paid in terms of the coupling’s convergence speed often renders this approach infeasible.
As opposed to this, interface quasi-Newton (IQN) methods have proven capable of both stabilizing and accelerating partitioned solution schemes GatzhammerPhD; degroote2010performance; degroote2010partitioned. Identifying the converged solution as a fixed point, their basic idea is to speed up the coupling iteration using Newton's method. Since the required (inverse) Jacobian is typically not accessible, interface quasi-Newton approaches settle for approximating it instead.
Early work in the field has been done by Gerbeau et al. gerbeau2003quasi as well as van Brummelen et al. van2005interface. A breakthrough, however, was the interface quasi-Newton inverse least-squares (IQN-ILS) method by Degroote et al. PerformancePartitionedMonolithic: On the one hand, it directly solves for the inverse Jacobian required in the Newton linearization; on the other hand, the IQN-ILS variant introduces the least-squares approximation based on input-output data pairs that is still common today.
Research of the last decade has shown that reusing data pairs from previous time steps is extremely advantageous. Unfortunately, an explicit incorporation in the IQN-ILS method suffers from numerical difficulties such as rank deficiency; moreover, good choices for the number of incorporated time steps are in general problem-dependent scheufele2017robust; ComparisonQuasiNewton. However, the works by Degroote and Vierendeels Degroote_MultiSolver as well as Haelterman et al. haelterman2016improving have shown that filtering out linearly dependent data pairs is an effective way of alleviating these issues.
As an alternative, Bogaers et al. Bogaers_IQNMVJ and Lindner et al. ComparisonQuasiNewton formulated a very beneficial implicit reuse of past information, yielding the interface quasi-Newton inverse multi-vector Jacobian (IQN-IMVJ) method. Its only real drawback is the required explicit representation of the approximated inverse Jacobian: Since the related cost increases quadratically with the number of degrees of freedom at the FSI boundary, this explicit form seriously slows down the numerical simulation at larger problem scales.
In this work, we present an enhancement of the IQN-IMVJ concept tackling exactly this shortcoming: Retaining the effective implicit reuse of past data, the new interface quasi-Newton implicit multi-vector least-squares (IQN-IMVLS) method completely avoids any explicit Jacobian or quadratic dependency on the interface resolution. While the advantages of the multi-vector approach are preserved, the resulting linear complexity renders the computational cost negligible even at large problem scales.
This paper is structured as follows: Section 2 presents the governing equations of the considered FSI problems, before Section 3 briefly covers the numerical methods applied to solve them. In Section 4, the IQN-ILS and the IQN-IMVJ approach are discussed in detail, before the IQN-IMVLS method is derived. The efficiency of the new approach is validated in Section 5 based on numerical test cases.
2 Governing Equations
In general, fluid-structure interaction considers a fluid domain $\Omega^f_t \subset \mathbb{R}^d$ and a structure $\Omega^s_t \subset \mathbb{R}^d$, which are connected via a shared boundary $\Gamma^{FSI}_t$, the FSI interface. The subscript $t$ refers to the time level, while $d$ denotes the number of spatial dimensions. This section introduces the models and equations employed for the two subproblems along with the boundary conditions interlinking their solutions at the shared interface.
2.1 Fluid Flow
The velocity $\boldsymbol{u}$ and the pressure $p$ of the fluid are governed by the unsteady Navier-Stokes equations for an incompressible fluid, reading

(1a) $\rho_f \left( \dfrac{\partial \boldsymbol{u}}{\partial t} + \boldsymbol{u} \cdot \nabla \boldsymbol{u} \right) - \nabla \cdot \boldsymbol{\sigma}_f = \rho_f \boldsymbol{f} \quad \text{in } \Omega^f_t,$
(1b) $\nabla \cdot \boldsymbol{u} = 0 \quad \text{in } \Omega^f_t.$
Therein, $\rho_f$ is the constant fluid density, while $\boldsymbol{f}$ denotes the resultant of all external body forces per unit mass of fluid. For a Newtonian fluid with the dynamic viscosity $\mu_f$, the Cauchy stress tensor $\boldsymbol{\sigma}_f$ is modeled by Stokes' law as

(2) $\boldsymbol{\sigma}_f = -p \boldsymbol{I} + 2 \mu_f \, \boldsymbol{\varepsilon}(\boldsymbol{u}), \quad \boldsymbol{\varepsilon}(\boldsymbol{u}) = \dfrac{1}{2} \left( \nabla \boldsymbol{u} + \left( \nabla \boldsymbol{u} \right)^T \right).$
The problem is closed by defining not only a divergence-free initial velocity field $\boldsymbol{u}_0$, but also a prescribed velocity $\hat{\boldsymbol{u}}$ on the Dirichlet boundary $\Gamma^f_D$ and prescribed tractions $\hat{\boldsymbol{h}}$ on the Neumann boundary $\Gamma^f_N$ with its outer normal $\boldsymbol{n}_f$:

(3a) $\boldsymbol{u}(\boldsymbol{x}, 0) = \boldsymbol{u}_0(\boldsymbol{x}) \quad \text{in } \Omega^f_0,$
(3b) $\boldsymbol{u} = \hat{\boldsymbol{u}} \quad \text{on } \Gamma^f_D,$
(3c) $\boldsymbol{\sigma}_f \cdot \boldsymbol{n}_f = \hat{\boldsymbol{h}} \quad \text{on } \Gamma^f_N.$
2.2 Structural Deformation
The response of the structure to external loads is expressed via the displacement field $\boldsymbol{d}$, which is governed by the equation of motion, stating a dynamic balance of inner and outer stresses:

(4) $\rho_s \, \dfrac{\mathrm{d}^2 \boldsymbol{d}}{\mathrm{d} t^2} = \nabla \cdot \boldsymbol{\sigma}_s + \boldsymbol{b}.$
In this relation, $\rho_s$ denotes the material density and $\boldsymbol{b}$ the resultant of all body forces per unit volume, whereas $\boldsymbol{\sigma}_s$ represents the Cauchy stress tensor.
As constitutive relation, the St. Venant-Kirchhoff material model is used: It relates the 2nd Piola-Kirchhoff stresses $\boldsymbol{S}$ to the Green-Lagrange strains $\boldsymbol{E} = \frac{1}{2} \left( \boldsymbol{F}^T \boldsymbol{F} - \boldsymbol{I} \right)$ via a linear stress-strain law, reading

(5) $\boldsymbol{S} = \lambda_s \, \mathrm{tr}(\boldsymbol{E}) \, \boldsymbol{I} + 2 \mu_s \boldsymbol{E}.$
Therein, $\boldsymbol{F} = \boldsymbol{I} + \nabla_0 \boldsymbol{d}$ denotes the deformation gradient, while $\lambda_s$ and $\mu_s$ are the Lamé constants. As the Green-Lagrange strain definition forms a nonlinear kinematic relation, the structural model is geometrically nonlinear. Hence, it is capable of representing large displacements and rotations, but only small strains ogden2001nonlinear; BatheFEM.
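As a concrete illustration of this constitutive law, the following sketch (Python/NumPy; the function names are ours, not part of any solver discussed here) evaluates Equation (5) for a given deformation gradient:

```python
import numpy as np

def green_lagrange_strain(F):
    """Green-Lagrange strain E = 1/2 (F^T F - I), a nonlinear function of F."""
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))

def pk2_stress(F, lam, mu):
    """2nd Piola-Kirchhoff stress of the St. Venant-Kirchhoff model:
    S = lam * tr(E) * I + 2 * mu * E."""
    E = green_lagrange_strain(F)
    return lam * np.trace(E) * np.eye(F.shape[0]) + 2.0 * mu * E
```

For an undeformed state ($\boldsymbol{F} = \boldsymbol{I}$) the strain and hence the stress vanish; a pure rotation $\boldsymbol{F} = \boldsymbol{Q}$ likewise yields $\boldsymbol{S} = \boldsymbol{0}$, reflecting the geometric nonlinearity mentioned above.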
Collecting all this information, the equation of motion can be expressed in the (undeformed) reference configuration $\Omega^s_0$ in a total Lagrangian fashion:

(6) $\rho_s \, \dfrac{\mathrm{d}^2 \boldsymbol{d}}{\mathrm{d} t^2} = \nabla_0 \cdot \left( \boldsymbol{F} \boldsymbol{S} \right) + \boldsymbol{b}_0 \quad \text{in } \Omega^s_0.$
Again, the problem is closed by defining an initial displacement $\boldsymbol{d}_0$, which is typically zero, and a set of boundary conditions on two complementary subsets of $\partial \Omega^s_0$: Prescribing the displacement $\hat{\boldsymbol{d}}$ on the Dirichlet part $\Gamma^s_D$ and the tractions $\hat{\boldsymbol{h}}_s$ on the Neumann part $\Gamma^s_N$, with the outer normal $\boldsymbol{n}_s$ in the reference state, the conditions read

(7a) $\boldsymbol{d}(\boldsymbol{X}, 0) = \boldsymbol{d}_0(\boldsymbol{X}) \quad \text{in } \Omega^s_0,$
(7b) $\boldsymbol{d} = \hat{\boldsymbol{d}} \quad \text{on } \Gamma^s_D,$
(7c) $\left( \boldsymbol{F} \boldsymbol{S} \right) \cdot \boldsymbol{n}_s = \hat{\boldsymbol{h}}_s \quad \text{on } \Gamma^s_N.$
2.3 Coupling Conditions at the Interface
The essence of fluid-structure interaction is that the two subproblems cannot be solved independently, since their solution fields are interlinked via the so-called coupling conditions arising at the shared interface $\Gamma^{FSI}_t$:

The kinematic coupling condition requires the continuity of displacements $\boldsymbol{d}_f$ and $\boldsymbol{d}_s$, velocities $\boldsymbol{u}_f$ and $\boldsymbol{u}_s$, and accelerations $\boldsymbol{a}_f$ and $\boldsymbol{a}_s$ across the FSI boundary NorbertPhD:

(8a) $\boldsymbol{d}_f = \boldsymbol{d}_s \quad \text{on } \Gamma^{FSI}_t,$
(8b) $\boldsymbol{u}_f = \boldsymbol{u}_s \quad \text{on } \Gamma^{FSI}_t,$
(8c) $\boldsymbol{a}_f = \boldsymbol{a}_s \quad \text{on } \Gamma^{FSI}_t.$
In agreement with Newton's third law, the dynamic coupling condition,

(9) $\boldsymbol{\sigma}_f \cdot \boldsymbol{n}_f = - \boldsymbol{\sigma}_s \cdot \boldsymbol{n}_s \quad \text{on } \Gamma^{FSI}_t,$

enforces the equality of stresses at the interface. Therein, $\boldsymbol{n}_f$ and $\boldsymbol{n}_s$ denote the associated normal vectors CarstenBraun_2007.
In the continuous problem formulation, satisfying both these coupling conditions for every time $t$ ensures the conservation of mass, momentum, and energy over the FSI boundary Kuttler_Partitioned.
3 Numerical Methods
3.1 Flow Solution
The Navier-Stokes equations formulated in Section 2.1 are discretized in this work using P1P1 finite elements, i.e., linear interpolations for both velocity and pressure. Numerical instabilities, caused by P1P1 elements violating the LBB condition, are overcome using a Galerkin/Least-Squares (GLS) stabilization LutzPhD; FEM_Flows. Contrary to the usual practice, both space and time are discretized by finite elements. More precisely, we employ the deforming-spatial-domain/stabilized space-time (DSD/SST) approach introduced by Tezduyar and Behr ArticleDSDSST_1; ArticleDSDSST_2. By formulating the variational form over the space-time domain, this approach inherently accounts for an evolving spatial domain. While the finite-element interpolation functions are continuous in space, the linear discontinuous Galerkin ansatz in time makes it possible to solve one space-time slab after another.
The mesh is adjusted to moving boundaries, e.g., the FSI interface, by means of interface tracking NorbertPhD; Steffi2015. The resulting mesh distortion is compensated by the elastic mesh update method (EMUM) Behr_Abraham.
3.2 Structural Solution
The structural subproblem is discretized in space by isogeometric analysis IGA_Hughes, while the time integration is performed using the generalized-$\alpha$ scheme Bossak_Chung; Bossak_Kuhl.
3.2.1 Isogeometric Analysis (IGA)
Introduced by Hughes et al. IGA_2004 in 2005, isogeometric analysis is a finite-element variant aimed at achieving geometrical accuracy and a closer linkage between CAD and numerical analysis. The essential idea is to express the solution space via the same basis functions as used for the geometry description. Since CAD systems are commonly based on non-uniform rational B-splines (NURBS), we choose a NURBS ansatz for the unknown displacement field. Mathematically, a NURBS is a linear combination of its control points $\boldsymbol{P}_i$ and the rational basis functions $R_i$, defined in the parameter space $(\xi^1, \ldots, \xi^p)$, with $p$ the spline dimension thenurbsbook. A NURBS surface ($p = 2$) hence has the form

(10) $\boldsymbol{S}(\xi^1, \xi^2) = \sum_{i} R_i(\xi^1, \xi^2) \, \boldsymbol{P}_i.$
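To make the rational basis concrete, the following sketch (Python; written for this text, not taken from any CAD kernel) evaluates a NURBS curve, i.e., the one-dimensional analogue of Equation (10), via the Cox-de Boor recursion:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        # closed right end: attribute u = last knot to the last non-empty span
        if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    left = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_curve_point(u, degree, knots, ctrl_pts, weights):
    """Evaluate C(u) = sum_i R_i(u) P_i with rational basis
    R_i = w_i N_i / sum_j w_j N_j."""
    N = np.array([bspline_basis(i, degree, u, knots)
                  for i in range(len(ctrl_pts))])
    wN = weights * N
    return (wN @ ctrl_pts) / wN.sum()
```

A classic use of the rational weighting is the exact quarter circle: a quadratic curve with control points $(1,0)$, $(1,1)$, $(0,1)$ and weights $(1, \sqrt{2}/2, 1)$ lies on the unit circle for every parameter value, which no polynomial B-spline can achieve.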
3.2.2 Shell Theory
Shell structures are thin-walled structures capable of providing lightweight, cost-efficient and yet stable constructions for numerous engineering applications Shells_SensitiveRelation. Mathematical shell models exploit the small thickness $h$ by reducing the structure from a volumetric description to the midsurface plus an interpolation over the thickness. Combining isogeometric analysis with Reissner-Mindlin shell theory, a NURBS midsurface $\boldsymbol{X}_m$ and a linear interpolation are chosen; any position $\boldsymbol{X}$ in the structural domain is hence expressed as

(11) $\boldsymbol{X}(\xi^1, \xi^2, \xi^3) = \boldsymbol{X}_m(\xi^1, \xi^2) + \xi^3 \, \dfrac{h}{2} \, \boldsymbol{D}(\xi^1, \xi^2).$
The parametric coordinate $\xi^3 \in [-1, 1]$ changes along the thickness direction, defined by the director vector $\boldsymbol{D}$. Consequently, the unknown displacement field is a combination of the midsurface displacement $\boldsymbol{d}_m$ and a second term accounting for changes of the director vector:

(12) $\boldsymbol{d}(\xi^1, \xi^2, \xi^3) = \boldsymbol{d}_m(\xi^1, \xi^2) + \xi^3 \, \dfrac{h}{2} \left( \boldsymbol{a}_3(\xi^1, \xi^2) - \boldsymbol{D}(\xi^1, \xi^2) \right),$

with $\boldsymbol{a}_3$ denoting the director in the deformed configuration.
For detailed information on nonlinear isogeometric ReissnerMindlin shell elements, the works by Dornisch and Klinkel Dornisch_PhD; Dornisch_A; Dornisch_D are recommended.
3.3 Coupling Approach
This work pursues a partitioned FSI coupling approach: Two distinct solvers are employed for the fluid and the structure; they are connected via a coupling module, which handles the exchange of interface data in accordance with the coupling conditions. While the strengths of this approach are its great flexibility and modularity regarding the single-field solvers, these advantages come at the price of additional considerations to be made in terms of data exchange:

In general, the meshes, or even the discretization techniques, do not match at the FSI boundary. Hence, a conservative projection is required to transfer relevant data between the two solvers. In this work, we employ a spline-based variant of the finite interpolation elements (FIE) method; a detailed description can be found in Hosters et al. Hosters2017. For the sake of clarity, however, any effects of this spatial coupling are neglected in the following, since they do not interfere with the presented methods.

Due to the (potentially strong) interdependency between the two subproblems, a consistent FSI solution in general requires an iterative procedure, which is referred to as strong (temporal) coupling NorbertPhD; Degroote_MultiSolver.
3.3.1 Dirichlet-Neumann Coupling Scheme
Nowadays, the de facto standard among strong coupling approaches for FSI problems is the Dirichlet-Neumann coupling scheme illustrated in Figure 1:
The first coupling iteration ($k = 0$) of a new time step starts with the fluid solver: Based on the interface deformation $\boldsymbol{d}^0$ of the previous time level (or an extrapolated one), it computes a new flow field. In compliance with the dynamic coupling condition, the resulting fluid stresses acting on $\Gamma^{FSI}$ are passed as a Neumann condition to the structural solver, which determines the corresponding interface deformation $\tilde{\boldsymbol{d}}^{k+1}$. If it matches (within a certain accuracy) the previous deformation $\boldsymbol{d}^k$, i.e., $\tilde{\boldsymbol{d}}^{k+1} \approx \boldsymbol{d}^k$, the coupled system has converged and we can go on to the next time step. Otherwise, the interface deformation is passed back to the flow solver as a Dirichlet condition, following kinematic continuity, and the next coupling iteration is started.
From a mathematical point of view, the Dirichlet-Neumann coupling scheme can be interpreted as a fixed-point iteration of the deformation at the interface FSI_DynamicRelax; ComparisonQuasiNewton; the fixed-point operator $\mathcal{H} := \mathcal{S} \circ \mathcal{F}$ corresponds to the two subsequent solver calls, i.e., to running one coupling iteration $k$:

(13) $\tilde{\boldsymbol{d}}^{k+1} = \mathcal{S} \circ \mathcal{F} \left( \boldsymbol{d}^k \right) = \mathcal{H} \left( \boldsymbol{d}^k \right).$
Finding the converged solution of the next time level is hence equivalent to finding a fixed point $\boldsymbol{d} = \mathcal{H}(\boldsymbol{d})$ of the interface deformation. Defining the fixed-point residual operator $R$ and its inverse counterpart $\tilde{R}$, i.e., the residual expressed in terms of the solver output, as

(14a) $R \left( \boldsymbol{d}^k \right) := \mathcal{H} \left( \boldsymbol{d}^k \right) - \boldsymbol{d}^k = \tilde{\boldsymbol{d}}^{k+1} - \boldsymbol{d}^k =: \boldsymbol{r}^k,$
(14b) $\tilde{R} \left( \tilde{\boldsymbol{d}} \right) := \tilde{\boldsymbol{d}} - \mathcal{H}^{-1} \left( \tilde{\boldsymbol{d}} \right),$

with $\boldsymbol{r}^k$ denoting the residual of coupling iteration $k$, the convergence criterion can analogously be expressed as a root of the residual, i.e., $R \left( \boldsymbol{d}^k \right) = \boldsymbol{0}$.
Note that although it is much less common, the fixed-point ansatz and the convergence criterion could analogously be formulated based on the fluid loads at the FSI interface.
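The fixed-point view translates directly into code. The sketch below (Python; `fluid_solve` and `structure_solve` are generic stand-ins for the two black-box solvers, not actual solver interfaces) runs the plain iteration of Equation (13) until the residual of Equation (14a) drops below a tolerance:

```python
import numpy as np

def coupled_fixed_point(fluid_solve, structure_solve, d0, tol=1e-10, max_iter=200):
    """Dirichlet-Neumann iteration: d_tilde = S(F(d)); converged once
    the fixed-point residual r = d_tilde - d is (almost) zero."""
    d = d0.copy()
    for k in range(max_iter):
        loads = fluid_solve(d)            # fluid solve with Dirichlet data d
        d_tilde = structure_solve(loads)  # structure solve with Neumann data
        r = d_tilde - d                   # fixed-point residual, Eq. (14a)
        if np.linalg.norm(r) < tol:
            return d_tilde, k + 1
        d = d_tilde                       # plain (unstabilized) update
    raise RuntimeError("coupling iteration did not converge")
```

For a contractive toy operator this loop converges; the added-mass discussion that follows explains why, in realistic FSI settings, the plain update frequently diverges, motivating the stabilization techniques below.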
3.3.2 Added-Mass Instability
Unfortunately, partitioned algorithms for fluid-structure interaction involving incompressible fluids exhibit an inherent instability, which may be severe, depending on the simulated problem. This so-called (artificial) added-mass effect originates from the CFD forces inevitably being determined based on a defective structural deformation during the iterative procedure. As a consequence, they differ from the correct loads of the next time level. This deviation potentially acts like an additional fluid mass on the structural degrees of freedom. More precisely, due to the kinematic continuity an overestimated structural deformation entails an exaggerated fluid acceleration, and hence excessive inertia stress terms. In many cases the effect is amplified throughout the coupling iterations, causing a divergence of the coupled problem. A major influencing factor are high density ratios $\rho_f / \rho_s$ between the fluid and the structure, but dependencies on the viscous terms and the temporal discretization are observed, too. We recommend the works by Förster FoersterPhD, Förster et al. Wall_AddedMass, and Causin et al. CausinAddedMass for more details on the added-mass effect. Moreover, van Brummelen et al. BrummelenAddedMass investigate similar difficulties for compressible flows.
3.3.3 Stabilizing and Accelerating the Coupling Scheme
One way to deal with the added-mass instabilities is to adjust the computed interface deformation by an update step before passing it back to the flow solver. Depending on the chosen update technique, both the stability and efficiency of the coupling scheme can be improved.
The simplest way to increase stability is a constant underrelaxation of the interface deformation:

(15) $\boldsymbol{d}^{k+1} = \boldsymbol{d}^k + \omega \, \boldsymbol{r}^k = \omega \, \tilde{\boldsymbol{d}}^{k+1} + \left( 1 - \omega \right) \boldsymbol{d}^k,$

with $0 < \omega \leq 1$. Effectively, it yields an interpolation between the current and the previous interface displacements $\tilde{\boldsymbol{d}}^{k+1}$ and $\boldsymbol{d}^k$. As the factor $\omega$ must be chosen small enough to avoid the coupling's divergence for any time step, this approach unfortunately often comes at a very high price in terms of efficiency degroote2010performance. Aitken's dynamic relaxation FSI_DynamicRelax; irons1969version tackles this issue by dynamically adapting the relaxation factor in Equation (15) for each coupling iteration via

(16) $\omega^k = - \omega^{k-1} \, \dfrac{ \left( \boldsymbol{r}^{k-1} \right)^T \left( \boldsymbol{r}^k - \boldsymbol{r}^{k-1} \right) }{ \left\| \boldsymbol{r}^k - \boldsymbol{r}^{k-1} \right\|^2 },$

i.e., based on the two most recent fixed-point residuals $\boldsymbol{r}^k$ and $\boldsymbol{r}^{k-1}$.
Despite its rather simple implementation, Aitken's relaxation provides significant speedups in many cases without compromising the stability of the coupling. Still, the performance of the coupling scheme can be pushed further by more sophisticated approaches like interface quasi-Newton (IQN) methods.
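A compact sketch of Aitken relaxation (Python; illustrative only, with `H` standing in for the composed solver call) combines the update (15) with the dynamic factor (16):

```python
import numpy as np

def aitken_omega(omega_prev, r_prev, r_curr):
    """Dynamic relaxation factor, Eq. (16)."""
    dr = r_curr - r_prev
    return -omega_prev * (r_prev @ dr) / (dr @ dr)

def aitken_coupling(H, d0, omega0=0.1, tol=1e-12, max_iter=100):
    """Relaxed fixed-point iteration d^{k+1} = d^k + omega_k r^k."""
    d = d0.copy()
    r_prev, omega = None, omega0
    for k in range(max_iter):
        r = H(d) - d                      # fixed-point residual
        if np.linalg.norm(r) < tol:
            return d, k
        if r_prev is not None:            # first step: constant under-relaxation
            omega = aitken_omega(omega, r_prev, r)
        d = d + omega * r
        r_prev = r
    raise RuntimeError("no convergence")
```

For an affine operator $\mathcal{H}(\boldsymbol{d}) = a \boldsymbol{d} + \boldsymbol{b}$, Aitken's formula recovers the optimal factor $-1/(a-1)$ after a single probing step, so the iteration converges even for $a = -3$, where the unrelaxed loop diverges.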
Remark 1: In the following, we will express the interface deformation as the structural solution at the FSI interface. While its representation via the boundary displacement of the fluid mesh is likewise possible, the typically finer interface resolution would result in higher numerical cost for the discussed update techniques.
4 Interface Quasi-Newton Methods
4.1 Interface Quasi-Newton Inverse Least-Squares (IQN-ILS) Method
In a partitioned solution procedure for fluid-structure interaction, the single-field solvers are typically the most expensive part, rendering all further costs, e.g., for data exchange, negligible in comparison Bogaers_IQNMVJ; uekermann2016partitioned.
Since the converged time step solution has been identified as a root of the fixed-point residual or its inverse form defined in Equation (14), increasing the efficiency essentially comes down to reducing the number of coupling iterations required to find such a root. Consequently, employing Newton's method as an update step of the interface deformation would be a promising approach to speed up convergence, yielding PerformancePartitionedMonolithic; ComparisonQuasiNewton

(17) $\boldsymbol{d}^{k+1} = \boldsymbol{d}^k - \left( \dfrac{\partial R}{\partial \boldsymbol{d}} \bigg|_{\boldsymbol{d}^k} \right)^{-1} \boldsymbol{r}^k.$
Unfortunately, the required inverse Jacobian is in general not accessible, as its evaluation would involve the derivatives of both the fluid and the structural solver. The idea of interface quasi-Newton methods is to circumvent this issue by approximating the (inverse) Jacobian rather than evaluating it exactly. Using a Taylor expansion of the inverse residual operator $\tilde{R}$, we can approximate its inverse Jacobian $\hat{J} \approx \left( \partial \tilde{R} / \partial \tilde{\boldsymbol{d}} \right)^{-1}$ via

(18) $\hat{J} \, \Delta \boldsymbol{r} \approx \Delta \tilde{\boldsymbol{d}}$

based on an input-output data pair. Such a pair is formed by some change $\Delta \tilde{\boldsymbol{d}} \in \mathbb{R}^{\hat{N}}$ in the interface deformation and the corresponding change $\Delta \boldsymbol{r} \in \mathbb{R}^{\hat{N}}$ of the fixed-point residual. Therein, $\hat{N}$ denotes the number of structural degrees of freedom at the FSI interface. Essentially, this idea is an $\hat{N}$-dimensional version of approximating a derivative with the slope of a secant. To avoid additional solver calls, the required data pairs are formed from the intermediate results $\tilde{\boldsymbol{d}}^i$ and $\boldsymbol{r}^i$ of the coupling iterations already performed for the current time step. More precisely, they are stored in the input matrix $V^k$ and the output matrix $W^k$ scheufele2017robust:
(19a) $V^k = \left[ \Delta \boldsymbol{r}^1, \ldots, \Delta \boldsymbol{r}^k \right] \in \mathbb{R}^{\hat{N} \times k}, \quad \Delta \boldsymbol{r}^i = \boldsymbol{r}^i - \boldsymbol{r}^{i-1},$
(19b) $W^k = \left[ \Delta \tilde{\boldsymbol{d}}^1, \ldots, \Delta \tilde{\boldsymbol{d}}^k \right] \in \mathbb{R}^{\hat{N} \times k}, \quad \Delta \tilde{\boldsymbol{d}}^i = \tilde{\boldsymbol{d}}^{i+1} - \tilde{\boldsymbol{d}}^i.$
With the collected data, an approximation $\hat{J}$ of the inverse Jacobian can be formulated via

(20) $\hat{J} \, V^k = W^k.$

Since the number of data pairs $k$ stored in $V^k$ and $W^k$ typically is much smaller than the number of structural degrees of freedom at the FSI interface, i.e., $k \ll \hat{N}$, the linear system of equations (20) is underdetermined. The existence of a unique solution is ensured by demanding the minimization of the Frobenius norm ComparisonQuasiNewton:

(21) $\left\| \hat{J} \right\|_F \rightarrow \min.$

Together, equations (20) and (21) form a constrained optimization problem, which leads to an explicit form of the inverse Jacobian approximation scheufele2017robust; ComparisonQuasiNewton:

(22) $\hat{J} = W^k \left[ \left( V^k \right)^T V^k \right]^{-1} \left( V^k \right)^T.$
Inserting this inverse Jacobian approximation into the Newton update formula in Equation (17) and exploiting $\tilde{\boldsymbol{d}}^{k+1} = \boldsymbol{d}^k + \boldsymbol{r}^k$ yields a quasi-Newton update scheme for the interface deformation,

(23) $\boldsymbol{d}^{k+1} = \tilde{\boldsymbol{d}}^{k+1} - \hat{J} \, \boldsymbol{r}^k = \tilde{\boldsymbol{d}}^{k+1} - W^k \boldsymbol{\alpha}^k.$

Defining the vector $\boldsymbol{\alpha}^k := \left[ \left( V^k \right)^T V^k \right]^{-1} \left( V^k \right)^T \boldsymbol{r}^k$ exploits that the inverse Jacobian is not needed explicitly here, but only its product with the residual vector. An efficient way of computing $\boldsymbol{\alpha}^k$ is to solve the least-squares problem PerformancePartitionedMonolithic; scheufele2017robust

(24) $\boldsymbol{\alpha}^k = \underset{\boldsymbol{\alpha} \in \mathbb{R}^k}{\arg\min} \left\| V^k \boldsymbol{\alpha} - \boldsymbol{r}^k \right\|_2,$

e.g., using a QR decomposition via Householder reflections. In the first coupling iteration, a relaxation step is used, as no data pairs are available yet.
The approach presented so far is referred to as the interface quasi-Newton inverse least-squares (IQN-ILS) method PerformancePartitionedMonolithic; it can be interpreted as the basis of the other IQN variants discussed in this work.
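The complete loop can be sketched in a few lines (Python; `H` stands in for the composed solver call $\mathcal{S} \circ \mathcal{F}$, and the least-squares problem is solved with a dense routine rather than an updated QR decomposition, for brevity):

```python
import numpy as np

def iqn_ils(H, d0, omega0=0.1, tol=1e-12, max_iter=100):
    """Interface quasi-Newton inverse least-squares iteration.

    V collects residual differences (inputs), W the corresponding
    differences of the solver output d_tilde (outputs)."""
    d = d0.copy()
    d_tilde = H(d)
    r = d_tilde - d
    d = d + omega0 * r                    # first iteration: under-relaxation
    r_old, dt_old = r, d_tilde
    V_cols, W_cols = [], []
    for k in range(max_iter):
        d_tilde = H(d)
        r = d_tilde - d
        if np.linalg.norm(r) < tol:
            return d_tilde, k + 2
        V_cols.append(r - r_old)          # new input column
        W_cols.append(d_tilde - dt_old)   # new output column
        r_old, dt_old = r, d_tilde
        V, W = np.column_stack(V_cols), np.column_stack(W_cols)
        alpha, *_ = np.linalg.lstsq(V, r, rcond=None)  # least-squares solve
        d = d_tilde - W @ alpha           # quasi-Newton step
    raise RuntimeError("no convergence")
```

For an affine operator the scheme is exact once the input columns span the interface space, so it terminates in a handful of iterations regardless of the contraction properties of the plain fixed-point loop.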
4.2 Benefit from Previous Time Steps
So far, the inverse Jacobian of the residual operator is approximated only based on information gathered in the coupling iterations of the current time step. In principle, however, the efficiency of IQN methods can be significantly increased by incorporating data from previous time steps as well. The most straightforward way to do so is to explicitly include the data pairs of past time steps in the input and output matrices $V^k$ and $W^k$ PerformancePartitionedMonolithic.
Unfortunately, apart from increasing costs for handling large data sets, a growing number of data pairs stored in these matrices entails two major problems Bogaers_IQNMVJ; ComparisonQuasiNewton; IQN_POD: (1) The matrix $V^k$ might be very close to rank-deficient due to (almost) linearly dependent columns; related to that, the condition number of the least-squares problem quickly increases; (2) information gathered in different time steps might be contradictory. Together, these drawbacks carry the risk of preventing the system from being numerically solvable at all.
An obvious remedy is to incorporate only the data of the most recent time steps. While this approach is in fact capable of yielding superior convergence speed, its major shortcomings are the risk of rank deficiency and the fact that a good choice for the number of reused time steps is problem-dependent Bogaers_IQNMVJ; scheufele2017robust. One way to mitigate these drawbacks is to employ a filtering technique Degroote_MultiSolver; haelterman2016improving.
4.3 Interface Quasi-Newton Inverse Multi-Vector Jacobian (IQN-IMVJ) Method
An alternative way of reusing data from previous time steps was introduced by Bogaers et al. Bogaers_IQNMVJ and developed further by Lindner et al. ComparisonQuasiNewton: The interface quasi-Newton inverse multi-vector Jacobian (IQN-IMVJ) method combines the IQN approach with the idea of Broyden's method: In a Newton iteration, one might expect the Jacobians evaluated in subsequent iterations to be similar in some sense. Therefore, Broyden's method limits the typically costly Jacobian evaluation to the first iteration only; after that, the new Jacobian is instead approximated by a rank-one update of the previous one. For the interface quasi-Newton framework, this concept is adopted by formulating the current inverse Jacobian approximation $\hat{J}^n$ as an update of the one determined in the previous time step $n - 1$:

(25) $\hat{J}^n = \hat{J}^{n-1} + \Delta \hat{J}^n,$
where $\Delta \hat{J}^n$ denotes the update increment. Introducing this concept changes the constrained optimization problem discussed in Section 4.1 to

(26) $\hat{J}^n \, V^k = W^k, \quad \left\| \hat{J}^n - \hat{J}^{n-1} \right\|_F \rightarrow \min.$
This system allows for a quite descriptive interpretation: Fitting Broyden’s idea, the difference between the inverse Jacobians of the current and the previous time step is minimized, rather than the norm of the approximation itself.
Again, the result is an explicit approximation of the current inverse Jacobian ComparisonQuasiNewton:

(27) $\hat{J}^n = \hat{J}^{n-1} + \left( W^k - \hat{J}^{n-1} V^k \right) \left[ \left( V^k \right)^T V^k \right]^{-1} \left( V^k \right)^T.$
In contrast to the IQN-ILS approach, the IQN-IMVJ method explicitly updates the Jacobian in every coupling iteration via Equation (27). Therein, the matrix $Z := \left[ \left( V^k \right)^T V^k \right]^{-1} \left( V^k \right)^T \in \mathbb{R}^{k \times \hat{N}}$ is determined by solving the least-squares problem

(28) $\boldsymbol{z}_i = \underset{\boldsymbol{z} \in \mathbb{R}^k}{\arg\min} \left\| V^k \boldsymbol{z} - \boldsymbol{e}_i \right\|_2, \quad i = 1, \ldots, \hat{N},$

for each column $\boldsymbol{z}_i$ with the $i$th unit vector $\boldsymbol{e}_i \in \mathbb{R}^{\hat{N}}$, again using a Householder QR decomposition. After $\hat{J}^n$ has been updated, the quasi-Newton step is performed:

(29) $\boldsymbol{d}^{k+1} = \tilde{\boldsymbol{d}}^{k+1} - \hat{J}^n \, \boldsymbol{r}^k.$
Since the inverse Jacobian is initialized to zero, i.e., $\hat{J}^0 = 0$, the first time step is equivalent to the IQN-ILS method.
Note that the matrices $V^k$ and $W^k$ only contain data collected in the current time step: rather than explicitly including data pairs collected in previous time steps, the IQN-IMVJ approach incorporates them implicitly in the form of the previous Jacobian approximation $\hat{J}^{n-1}$. This implicit reuse of past information entails some significant advantages Bogaers_IQNMVJ; ComparisonQuasiNewton: (1) Since the matrices $V^k$ and $W^k$ are not affected, neither is the condition number of the least-squares problem; (2) it renders any tuning of (strongly) problem-dependent parameters obsolete; (3) since past information is matched in a minimum-norm sense only, it is automatically emphasized less than that of the current time step. This avoids the risk of relying on outdated or contradicting data.
These benefits come at the price of an explicit representation of the inverse Jacobian approximation, which is often very expensive to store and handle, as the entailed complexity grows quadratically with the problem scale.
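For moderate interface sizes, the explicit variant is nevertheless straightforward; the sketch below (Python, illustrative only) maintains a dense inverse Jacobian approximation across time steps and applies the rank-$k$ update together with the quasi-Newton step:

```python
import numpy as np

def imvj_time_step(H, d0, J_prev, omega0=0.1, tol=1e-12, max_iter=100):
    """One time step of IQN-IMVJ with an explicit inverse Jacobian.

    J_prev is the dense approximation carried over from the previous
    time step (zero matrix in the very first step); returns the
    converged deformation and the updated Jacobian."""
    d = d0.copy()
    d_tilde = H(d)
    r = d_tilde - d
    if np.any(J_prev):                    # reuse past information if available
        d = d_tilde - J_prev @ r
    else:                                 # very first step: under-relaxation
        d = d + omega0 * r
    r_old, dt_old = r, d_tilde
    V_cols, W_cols = [], []
    J = J_prev.copy()
    for k in range(max_iter):
        d_tilde = H(d)
        r = d_tilde - d
        if np.linalg.norm(r) < tol:
            return d_tilde, J
        V_cols.append(r - r_old)
        W_cols.append(d_tilde - dt_old)
        r_old, dt_old = r, d_tilde
        V, W = np.column_stack(V_cols), np.column_stack(W_cols)
        Z = np.linalg.pinv(V)             # (V^T V)^{-1} V^T for full-rank V
        J = J_prev + (W - J_prev @ V) @ Z # multi-vector Jacobian update
        d = d_tilde - J @ r               # quasi-Newton step
    raise RuntimeError("no convergence")
```

Here `np.linalg.pinv` stands in for the column-wise least-squares solves; a production implementation would use QR updates instead. Reusing the returned `J` in the next call is what carries information across time steps.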
4.4 Interface Quasi-Newton Implicit Multi-Vector Least-Squares (IQN-IMVLS) Method
In this paper, we introduce an enhancement of the IQN-IMVJ approach tackling its main issue, i.e., the high cost related to the explicit Jacobian approximation. By successively replacing the expensive parts, this section derives the new interface quasi-Newton implicit multi-vector least-squares (IQN-IMVLS) method.
As a first step, the explicit use of the current inverse Jacobian approximation $\hat{J}^n$ is eliminated: By inserting the Jacobian update (27) into the quasi-Newton step, the vector $\boldsymbol{\alpha}^k$ can be identified in analogy to the IQN-ILS approach, changing the update formula to

(30) $\boldsymbol{d}^{k+1} = \tilde{\boldsymbol{d}}^{k+1} - \hat{J}^{n-1} \boldsymbol{r}^k - \left( W^k - \hat{J}^{n-1} V^k \right) \boldsymbol{\alpha}^k.$

By reducing the least-squares problem back to Equation (24), this drastically reduces the computational cost. The past inverse Jacobian $\hat{J}^{n-1}$, however, is still explicitly required: Once a time step has converged, it is updated to the next time level by Equation (27); the associated cost is quadratic in $\hat{N}$. Beyond that, $\hat{J}^{n-1}$ is involved in the quasi-Newton formula in Equation (30). In particular, it is needed for the potentially very costly matrix product $\hat{J}^{n-1} V^k$, which has a complexity of $\mathcal{O}(k \hat{N}^2)$. As mentioned before, however, the matrices $V^k$ and $W^k$ are successively built up only by appending new columns. Taking this into account, we can reformulate
(31) $\hat{J}^{n-1} V^k = \left[ \hat{J}^{n-1} V^{k-1}, \; \hat{J}^{n-1} \Delta \boldsymbol{r}^k \right].$
As a consequence, restoring the terms $\hat{J}^{n-1} V^{k-1}$ and $\hat{J}^{n-1} \boldsymbol{r}^{k-1}$ from the previous iteration in fact allows us to reduce the number of matrix-vector products with the inverse Jacobian to one per coupling iteration, that is $\hat{J}^{n-1} \boldsymbol{r}^k$.
This Jacobian-vector product remains the only operation in Equation (30) involving an $\mathcal{O}(\hat{N}^2)$ complexity. For an increasing number of structural degrees of freedom at the interface, this term's cost therefore may become dominant and slow down the overall procedure (see Remark 4.4).
In order to tackle this issue, we introduce an alternative, purely implicit formula for evaluating $\hat{J}^{n-1} \boldsymbol{r}^k$ without any explicit previous Jacobian $\hat{J}^{n-1}$. Note that the explicit Jacobian update will become obsolete as a direct consequence. Unraveling the recursion in Equation (27), the update of the inverse Jacobian can be reformulated to

(32) $\hat{J}^n = \hat{J}^0 \prod_{j=1}^{n} \left( I - V^j Z^j \right) + \sum_{j=1}^{n} W^j Z^j \prod_{i=j+1}^{n} \left( I - V^i Z^i \right),$

see Appendix A for the proof. Therein, the upper index $j$ of the matrices $V^j$, $W^j$, and $Z^j := \left[ \left( V^j \right)^T V^j \right]^{-1} \left( V^j \right)^T$ refers to the time step they were determined in. Considering the initial Jacobian approximation, $\hat{J}^0 = 0$, this expression simplifies to

(33) $\hat{J}^n = \sum_{j=1}^{n} W^j Z^j \prod_{i=j+1}^{n} \left( I - V^i Z^i \right).$
While being mathematically equivalent, this formulation entails two main advantages: First, the matrix-vector product can be determined in an implicit manner via

(34) $\hat{J}^{n-1} \boldsymbol{r}^k = \sum_{j=1}^{n-1} W^j Z^j \left[ \prod_{i=j+1}^{n-1} \left( I - V^i Z^i \right) \boldsymbol{r}^k \right],$

i.e., by successive matrix-vector products evaluated from right to left, without an $\mathcal{O}(\hat{N}^2)$ complexity, and hence potentially cheaper. However, the cost for evaluating the expression in Equation (34) obviously increases with the number of processed time steps. This issue is addressed using the second advantage: While data from past time steps is still incorporated in an implicit manner, with all the associated advantages discussed in Section 4.3, the new formulation explicitly identifies the contribution of each time step, in the form of the matrices $V^j$, $W^j$, and $Z^j$. As a consequence, this update technique allows us to incorporate only the $q$ most recent time steps for the previous Jacobian approximation as a way of limiting the cost of the matrix-vector product, which then reads

(35) $\hat{J}^{n-1} \boldsymbol{r}^k \approx \sum_{j=n-q}^{n-1} W^j Z^j \left[ \prod_{i=j+1}^{n-1} \left( I - V^i Z^i \right) \boldsymbol{r}^k \right].$
In fact, by taking advantage of the repetition of terms in the product (see Appendix B), this expression can be evaluated at a cost that is linear in $\hat{N}$ and otherwise depends only on $q$ and the average number of coupling iterations per time step. The truncation can be justified by the fact that a certain time step's contribution to the previous Jacobian approximation gradually becomes less and less important the further the simulation progresses.
At first glance, one might argue that the parameter $q$ reintroduces exactly the problems the multi-vector approach was designed to avoid in the first place, i.e., the dependency of the IQN-ILS method on the number of reused time steps. However, this issue arose from the explicit incorporation of past data, e.g., due to high condition numbers and linearly dependent columns in $V^k$. Since the IQN-IMVLS approach reuses data in an implicit manner, it does not suffer from these drawbacks. Instead, the method's quality in general benefits from increasing the number of past time steps taken into account, as the limit of reusing all steps is analytically equivalent to the explicit Jacobian multiplication.
For this variant, the matrix $Z^n$ still has to be determined and stored once after each time step. Since the complexity of computing it via the least-squares problems (28) is $\mathcal{O}(k \hat{N}^2)$ for the Householder QR approach, i.e., growing quadratically with $\hat{N}$, we instead use a matrix inversion of $\left( V^n \right)^T V^n$ via an LU decomposition using partial pivoting with row interchanges golub1996matrix. While being slightly less robust to bad conditioning, the big advantage is that it requires only $\mathcal{O}(k^2 \hat{N} + k^3)$ operations and therefore scales linearly with the problem size.
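The heart of the method is evaluating $\hat{J}^{n-1} \boldsymbol{r}$ from the stored per-time-step matrices without ever forming the Jacobian. A sketch (Python, illustrative; `history` holds the tuples $(V^j, W^j, Z^j)$ of the retained time steps, oldest first):

```python
import numpy as np

def make_Z(V):
    """Z = (V^T V)^{-1} V^T via a solve of the small k x k system
    (NumPy's solve uses an LU decomposition with partial pivoting)."""
    return np.linalg.solve(V.T @ V, V.T)

def implicit_jacobian_times(history, r):
    """Evaluate J r = sum_j W^j Z^j prod_{i>j} (I - V^i Z^i) r using only
    matrix-vector products, i.e., with cost linear in the interface size."""
    result = np.zeros_like(r)
    v = r.copy()                  # running product, applied newest to oldest
    for V, W, Z in reversed(history):
        zv = Z @ v
        result += W @ zv          # contribution of this time step
        v = v - V @ zv            # apply (I - V Z) for the older terms
    return result
```

The sum and product structure mirrors the unrolled recursion, so the implicit evaluation agrees with multiplying by the explicitly accumulated Jacobian while never touching a dense $\hat{N} \times \hat{N}$ matrix.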
Combining all this, the modified multi-vector update completely avoids any terms that might slow down the IQN-IMVLS approach for large systems, where $k \ll \hat{N}$ holds, see Table 1. A pseudocode realization of the purely implicit IQN-IMVLS method is outlined in Algorithm 1.
Table 1: Computational complexities of the explicit and the purely implicit multi-vector update; only the dependence on the interface resolution $\hat{N}$ is given.

Increment step:
Operation | Expression | Explicit | Implicit
Least-squares problem via Householder QR | $\boldsymbol{\alpha}^k$ | $\mathcal{O}(\hat{N})$ | $\mathcal{O}(\hat{N})$
Matrix-vector product | $\hat{J}^{n-1} \boldsymbol{r}^k$ | $\mathcal{O}(\hat{N}^2)$ | $\mathcal{O}(\hat{N})$
Compute new column | $\hat{J}^{n-1} \Delta \boldsymbol{r}^k$ | $\mathcal{O}(\hat{N}^2)$ | $\mathcal{O}(\hat{N})$

Jacobian update:
Operation | Expression | Explicit | Implicit
Least-squares problems via Householder QR | $Z^n$ | $\mathcal{O}(\hat{N}^2)$ | –
Matrix-matrix product | $\left( W^k - \hat{J}^{n-1} V^k \right) Z^n$ | $\mathcal{O}(\hat{N}^2)$ | –

Determine and store $Z^n$:
Operation | Expression | Explicit | Implicit
Matrix inversion via LU decomposition | $\left[ \left( V^n \right)^T V^n \right]^{-1}$ | – | $\mathcal{O}(1)$
Matrix-matrix product | $\left[ \left( V^n \right)^T V^n \right]^{-1} \left( V^n \right)^T$ | – | $\mathcal{O}(\hat{N})$
In direct analogy to the computational complexity, the memory requirements, too, no longer scale quadratically but linearly with the problem size: Although the matrices $V^j$ and $W^j$ as well as $Z^j$ have to be stored for the $q$ most recent time steps, the required amount of storage is much smaller than for the explicit Jacobian as long as the number of stored columns remains much smaller than $\hat{N}$.
Of course, the effectiveness of the purely implicit IQN-IMVLS update strongly depends on this ratio of stored columns and $\hat{N}$, so that the explicit Jacobian approximation might still be the better option for small systems. However, the assumption $k \ll \hat{N}$ is not very restrictive for common application scales.
Algorithm 1: One time step $n$ of the purely implicit IQN-IMVLS method.

Initialization: $V = [\;]$, $W = [\;]$, $k = 0$
First iteration:
  Perform coupling iteration: $\tilde{\boldsymbol{d}}^1 = \mathcal{S} \circ \mathcal{F} \left( \boldsymbol{d}^0 \right)$
  Form residual: $\boldsymbol{r}^0 = \tilde{\boldsymbol{d}}^1 - \boldsymbol{d}^0$
  if $n = 1$: Underrelaxation step: $\boldsymbol{d}^1 = \boldsymbol{d}^0 + \omega \, \boldsymbol{r}^0$
  else:
    Previous inverse Jacobian times residual (implicitly): $\hat{J}^{n-1} \boldsymbol{r}^0$ via Equation (35)
    IQN update with Jacobian product: $\boldsymbol{d}^1 = \tilde{\boldsymbol{d}}^1 - \hat{J}^{n-1} \boldsymbol{r}^0$
while not converged: $k \leftarrow k + 1$
  Perform coupling iteration: $\tilde{\boldsymbol{d}}^{k+1} = \mathcal{S} \circ \mathcal{F} \left( \boldsymbol{d}^k \right)$
  Form residual: $\boldsymbol{r}^k = \tilde{\boldsymbol{d}}^{k+1} - \boldsymbol{d}^k$
  Append new column to input matrix: $V^k = \left[ V^{k-1}, \Delta \boldsymbol{r}^k \right]$
  Append new column to output matrix: $W^k = \left[ W^{k-1}, \Delta \tilde{\boldsymbol{d}}^k \right]$
  Previous inverse Jacobian times residual (implicitly): $\hat{J}^{n-1} \boldsymbol{r}^k$ via Equation (35)
  Build restoring terms from previous iteration: $\hat{J}^{n-1} V^k = \left[ \hat{J}^{n-1} V^{k-1}, \hat{J}^{n-1} \boldsymbol{r}^k - \hat{J}^{n-1} \boldsymbol{r}^{k-1} \right]$
  Solve least-squares problem: $\boldsymbol{\alpha}^k = \arg\min_{\boldsymbol{\alpha}} \left\| V^k \boldsymbol{\alpha} - \boldsymbol{r}^k \right\|_2$
  Implicit update step: $\boldsymbol{d}^{k+1} = \tilde{\boldsymbol{d}}^{k+1} - \hat{J}^{n-1} \boldsymbol{r}^k - \left( W^k - \hat{J}^{n-1} V^k \right) \boldsymbol{\alpha}^k$
After convergence:
  Determine $Z^n$ via LU decomposition: $Z^n = \left[ \left( V^k \right)^T V^k \right]^{-1} \left( V^k \right)^T$
  Store for future time steps: $V^n$, $W^n$, $Z^n$
Remark 2: Aside from suggesting improvements similar to using an explicit past Jacobian within the IQN-IMVLS method, Scheufele and Mehl scheufele2017robust also derived a multi-vector variant with linear complexity. Therein, the past inverse Jacobian is represented based on the matrices $\tilde{W}^j := W^j - \hat{J}^{j-1} V^j$ and $Z^j$ from past time steps, i.e.,

(36) $\hat{J}^{n-1} = \sum_{j=1}^{n-1} \tilde{W}^j Z^j.$

While the idea to avoid the costly explicit Jacobian via a sum formulation is similar to the IQN-IMVLS method, there is one essential difference: Due to the recursive definition of the multi-vector Jacobian approximation, each matrix $\tilde{W}^j$ contains information from the first $j$ time steps. By implication, this means that the contribution of a time step is contained in all subsequent matrices $\tilde{W}^i$, $i > j$,