A Multi-Vector Interface Quasi-Newton Method with Linear Complexity for Partitioned Fluid-Structure Interaction

01/22/2020 · Thomas Spenke et al. · RWTH Aachen University

In recent years, interface quasi-Newton methods have gained growing attention in the fluid-structure interaction community by significantly improving partitioned solution schemes: They not only help to control the inherent added-mass instability, but also prove to substantially speed up the coupling's convergence. In this work, we present a novel variant: The key idea is to build on the multi-vector Jacobian update scheme first presented by Bogaers et al. (2014) and avoid any explicit representation of the (inverse) Jacobian approximation, since it slows down the solution for large systems. Instead, all terms involving a quadratic complexity have been systematically eliminated. The result is a new multi-vector interface quasi-Newton variant whose computational cost scales linearly with the problem size.




1 Introduction

Partitioned solution schemes for fluid-structure interaction (FSI) are widely used in modern computational engineering science, as they offer great flexibility and modularity concerning the two solvers employed for the fluid and the structure. Treated as black boxes, the solvers are coupled only via the exchange of interface data. The major downside of partitioned schemes, however, is an inherent instability caused by the so-called added-mass effect FoersterPhD; Wall_AddedMass. Depending on the application, its influence might be severe enough to impede a numerical solution – if no counter-measures are taken.

The simplest way to retain stability is a relaxation of the exchanged coupling data. The relaxation factor might either be constant or updated dynamically via Aitken’s method FSI_DynamicRelax; irons1969version. Unfortunately, the high price to be paid in terms of the coupling’s convergence speed often renders this approach infeasible.

As opposed to this, interface quasi-Newton (IQN) methods have proven capable of both stabilizing and accelerating partitioned solution schemes GatzhammerPhD; degroote2010performance; degroote2010partitioned. Identifying the converged solution as a fixed point, their basic idea is to speed up the coupling iteration using Newton’s method. Since the required (inverse) Jacobian is typically not accessible, interface quasi-Newton approaches settle for approximating it instead.

Early work in the field has been done by Gerbeau et al. gerbeau2003quasi as well as van Brummelen et al. van2005interface. A breakthrough, however, was the interface quasi-Newton inverse least-squares (IQN-ILS) method by Degroote et al. PerformancePartitionedMonolithic: On the one hand, it directly solves for the inverse Jacobian required in the Newton linearization; on the other hand, the IQN-ILS variant introduces the least-squares approximation based on input-output data pairs that is still common today.

Research of the last decade has shown that a reutilization of data pairs from previous time steps is extremely advantageous. Unfortunately, an explicit incorporation in the IQN-ILS method suffers from numerical difficulties such as rank deficiency; moreover, good choices for the number of incorporated time steps are in general problem-dependent scheufele2017robust; ComparisonQuasiNewton. However, the works by Degroote and Vierendeels Degroote_MultiSolver as well as Haelterman et al. haelterman2016improving have shown that filtering out linearly dependent data pairs is an effective way of alleviating these issues.

As an alternative, Bogaers et al. Bogaers_IQNMVJ and Lindner et al. ComparisonQuasiNewton formulated a very beneficial implicit reutilization of past information, yielding the interface quasi-Newton inverse multi-vector Jacobian (IQN-IMVJ) method. Its only real drawback is the required explicit representation of the approximated inverse Jacobian: Since the related cost increases quadratically with the number of degrees of freedom at the FSI boundary, this explicit form seriously slows down the numerical simulation at larger problem scales.

In this work, we present an enhancement of the IQN-IMVJ concept tackling exactly this shortcoming: Retaining the effective implicit reutilization of past data, the new interface quasi-Newton implicit multi-vector least-squares (IQN-IMVLS) method completely avoids any explicit Jacobian or quadratic dependency on the interface resolution. While the advantages of the multi-vector approach are preserved, the resulting linear complexity ensures the negligibility of computational cost even for large problem scales.

This paper is structured as follows: Section 2 presents the governing equations of the considered FSI problems, before Section 3 briefly covers the numerical methods applied to solve them. In Section 4, the IQN-ILS and the IQN-IMVJ approach are discussed in detail, before the IQN-IMVLS method is derived. The efficiency of the new approach is validated in Section 5 based on numerical test cases.

2 Governing Equations

In general, fluid-structure interaction considers a fluid domain $\Omega^f_t$ and a structure $\Omega^s_t \subset \mathbb{R}^{n_{sd}}$, which are connected via a shared boundary $\Gamma_t$, the FSI interface. The subscript $t$ refers to the time level, while $n_{sd}$ denotes the number of spatial dimensions. This section introduces the models and equations employed for the two subproblems along with the boundary conditions interlinking their solutions at the shared interface.

2.1 Fluid Flow

The velocity $u$ and the pressure $p$ of the fluid are governed by the unsteady Navier-Stokes equations for an incompressible fluid, reading

$$\rho^f \left( \frac{\partial u}{\partial t} + u \cdot \nabla u \right) - \nabla \cdot \sigma^f = \rho^f f \qquad \text{and} \qquad \nabla \cdot u = 0 \qquad \text{in } \Omega^f_t . \qquad (1)$$
Therein, $\rho^f$ is the constant fluid density, while $f$ denotes the resultant of all external body forces per unit mass of fluid. For a Newtonian fluid with the dynamic viscosity $\mu^f$, the Cauchy stress tensor $\sigma^f$ is modeled by Stokes' law as

$$\sigma^f = -p \, I + \mu^f \left( \nabla u + \left( \nabla u \right)^T \right) . \qquad (2)$$
The problem is closed by defining not only a divergence-free initial velocity field $u_0$, but also a prescribed velocity $\hat{u}$ on the Dirichlet boundary $\Gamma^f_D$ and prescribed tractions $\hat{h}$ on the Neumann boundary $\Gamma^f_N$ with its outer normal $n$:

$$u = \hat{u} \ \text{ on } \Gamma^f_D \qquad \text{and} \qquad \sigma^f \cdot n = \hat{h} \ \text{ on } \Gamma^f_N . \qquad (3)$$
2.2 Structural Deformation

The response of the structure to external loads is expressed via the displacement field $d$, which is governed by the equation of motion, stating a dynamic balance of inner and outer stresses:

$$\rho^s \, \frac{\partial^2 d}{\partial t^2} = \nabla \cdot \sigma^s + b \qquad \text{in } \Omega^s_t . \qquad (4)$$
In this relation, $\rho^s$ denotes the material density and $b$ the resultant of all body forces per unit volume, whereas $\sigma^s$ represents the Cauchy stress tensor.

As constitutive relation, the St. Venant-Kirchhoff material model is used: It relates the 2nd Piola-Kirchhoff stresses $S$ to the Green-Lagrange strains $E$ via a linear stress-strain law, reading

$$S = \lambda \, \mathrm{tr}(E) \, I + 2 \mu \, E \qquad \text{with} \qquad E = \frac{1}{2} \left( F^T F - I \right) . \qquad (5)$$
Therein, $F$ denotes the deformation gradient, while $\lambda$ and $\mu$ are the Lamé constants. As the Green-Lagrange strain definition forms a nonlinear kinematic relation, the structural model is geometrically nonlinear. Hence, it is capable of representing large displacements and rotations, but only small strains ogden2001nonlinear; BatheFEM.

Collecting all this information, the equation of motion can be expressed in the (undeformed) reference configuration $\Omega^s_0$ in a total Lagrangian fashion:

$$\rho^s_0 \, \frac{\partial^2 d}{\partial t^2} = \nabla_0 \cdot \left( F \, S \right) + b_0 \qquad \text{in } \Omega^s_0 . \qquad (6)$$
Again, the problem is closed by defining an initial displacement $d_0$, which is typically zero, and a set of boundary conditions on two complementary subsets of the boundary $\partial \Omega^s_0$: Prescribing the displacement $\hat{d}$ on the Dirichlet part $\Gamma^s_D$ and the tractions $\hat{t}$ on the Neumann part $\Gamma^s_N$, with the outer normal $n_0$ in the reference state, the conditions read

$$d = \hat{d} \ \text{ on } \Gamma^s_D \qquad \text{and} \qquad \left( F \, S \right) \cdot n_0 = \hat{t} \ \text{ on } \Gamma^s_N . \qquad (7)$$
2.3 Coupling Conditions at the Interface

The essence of fluid-structure interaction is that the two subproblems cannot be solved independently, since their solution fields are interlinked via the so-called coupling conditions arising at the shared interface $\Gamma_t$:

  • The kinematic coupling condition requires the continuity of displacements $d^f$ and $d^s$, velocities $\dot{d}^f$ and $\dot{d}^s$, and accelerations $\ddot{d}^f$ and $\ddot{d}^s$ across the FSI boundary NorbertPhD:

$$d^f = d^s , \qquad \dot{d}^f = \dot{d}^s , \qquad \ddot{d}^f = \ddot{d}^s \qquad \text{on } \Gamma_t . \qquad (8)$$
  • In agreement with Newton’s third law, the dynamic coupling condition,

$$\sigma^f \cdot n^f = - \, \sigma^s \cdot n^s \qquad \text{on } \Gamma_t , \qquad (9)$$

    enforces the equality of stresses at the interface. Therein, $n^f$ and $n^s$ denote the associated outer normal vectors CarstenBraun_2007.

In the continuous problem formulation, satisfying both these coupling conditions for every time $t$ ensures the conservation of mass, momentum, and energy over the FSI boundary Kuttler_Partitioned.

3 Numerical Methods

3.1 Flow Solution

The Navier-Stokes equations formulated in Section 2.1 are discretized in this work using P1P1 finite elements, i.e., linear interpolations for both velocity and pressure. Numerical instabilities, caused by P1P1 elements violating the LBB condition, are overcome using a Galerkin/Least-Squares (GLS) stabilization LutzPhD; FEM_Flows.

Contrary to the usual practice, both space and time are discretized by finite elements. More precisely, we employ the deforming-spatial-domain/stabilized-space-time (DSD/SST) approach introduced by Tezduyar and Behr ArticleDSDSST_1; ArticleDSDSST_2. By formulating the variational form over the space-time domain, this approach inherently accounts for an evolving spatial domain. While the finite-element interpolation functions are continuous in space, the linear discontinuous Galerkin ansatz in time makes it possible to solve one space-time slab after another.

The mesh is adjusted to moving boundaries, e.g., the FSI interface, by means of interface tracking NorbertPhD; Steffi2015. The resulting mesh distortion is compensated by the elastic mesh update method (EMUM) Behr_Abraham.

3.2 Structural Solution

The structural subproblem is discretized in space by isogeometric analysis IGA_Hughes, while the time integration is performed using the generalized-$\alpha$ scheme Bossak_Chung; Bossak_Kuhl.

3.2.1 Isogeometric Analysis (IGA)

Introduced by Hughes et al. IGA_2004 in 2005, isogeometric analysis is a finite-element variant aimed at achieving geometrical accuracy and a closer linkage between CAD and numerical analysis. The essential idea is to express the solution space via the same basis functions as used for the geometry description. Since CAD systems are commonly based on non-uniform rational B-splines (NURBS), we choose a NURBS ansatz for the unknown displacement field. Mathematically, a NURBS is a linear combination of its control points $P_{i,j}$ and the rational basis functions $R_{i,j}$, defined in the parameter space $[0,1]^{d_p}$, with the spline dimension $d_p$ thenurbsbook. A NURBS surface hence has the form

$$S(\xi, \eta) = \sum_{i=1}^{n} \sum_{j=1}^{m} R_{i,j}(\xi, \eta) \, P_{i,j} . \qquad (10)$$
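As an illustration of this rational combination, the following sketch evaluates a point on a quadratic NURBS curve via the Cox-de Boor recursion (a minimal, hypothetical example: the function names and the test geometry are ours, not the paper's; the surface case in Equation (10) would use the tensor product of two such bases):

```python
import numpy as np

def bspline_basis(i, p, knots, u):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    value = 0.0
    if knots[i + p] != knots[i]:  # skip terms with zero knot span (0/0 convention)
        value += (u - knots[i]) / (knots[i + p] - knots[i]) \
                 * bspline_basis(i, p - 1, knots, u)
    if knots[i + p + 1] != knots[i + 1]:
        value += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                 * bspline_basis(i + 1, p - 1, knots, u)
    return value

def nurbs_point(ctrl, weights, p, knots, u):
    """A NURBS point as the weighted rational combination of its control
    points (curve case; a surface uses the tensor product of two bases)."""
    N = np.array([bspline_basis(i, p, knots, u) for i in range(len(ctrl))])
    wN = weights * N
    return (wN @ ctrl) / wN.sum()

# Quadratic curve with an open knot vector; uniform weights reduce to a B-spline.
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
ctrl = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
point = nurbs_point(ctrl, np.ones(3), 2, knots, 0.5)
```

With non-uniform weights the same routine yields truly rational curves, which is what allows NURBS to represent conic sections exactly.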
3.2.2 Shell Theory

Shell structures are thin-walled structures capable of providing lightweight, cost-efficient and yet stable constructions for numerous engineering applications Shells_SensitiveRelation. Mathematical shell models exploit the small thickness $h$ by reducing the structure from a volumetric description to the midsurface plus an interpolation over the thickness. Combining isogeometric analysis with Reissner-Mindlin shell theory, a NURBS midsurface and a linear interpolation are chosen; any position $x$ in the structural domain is hence expressed as

$$x(\xi, \eta, \zeta) = x_m(\xi, \eta) + \zeta \, a_3(\xi, \eta) . \qquad (11)$$

The parametric coordinate $\zeta \in [-h/2, h/2]$ changes along the thickness direction, defined by the director vector $a_3$. Consequently, the unknown displacement field is a combination of the midsurface displacement $d_m$ and a second term accounting for changes of the director vector:

$$d(\xi, \eta, \zeta) = d_m(\xi, \eta) + \zeta \left( \bar{a}_3(\xi, \eta) - a_3(\xi, \eta) \right) , \qquad (12)$$

where $\bar{a}_3$ denotes the director vector of the deformed configuration.
For detailed information on nonlinear isogeometric Reissner-Mindlin shell elements, the works by Dornisch and Klinkel Dornisch_PhD; Dornisch_A; Dornisch_D are recommended.

3.3 Coupling Approach

This work pursues a partitioned FSI coupling approach: Two distinct solvers are employed for the fluid and the structure; they are connected via a coupling module, which handles the exchange of interface data in accordance with the coupling conditions. While the strengths of this approach are its great flexibility and modularity regarding the single-field solvers, these advantages come at the price of additional considerations to be made in terms of data exchange:

  • In general, the meshes – or even the discretization techniques – do not match at the FSI boundary. Hence, a conservative projection is required to transfer relevant data between the two solvers. In this work, we employ a spline-based variant of the finite interpolation elements (FIE) method; a detailed description can be found in Hosters et al. Hosters2017. For the sake of readability, however, any effects of this spatial coupling are neglected in the following, since they do not interfere with the presented methods.

  • Due to the (potentially strong) interdependency between the two subproblems, a consistent FSI solution in general requires an iterative procedure, which is referred to as strong (temporal) coupling NorbertPhD; Degroote_MultiSolver.

3.3.1 Dirichlet-Neumann Coupling Scheme

Nowadays, the de facto standard among strong coupling approaches for FSI problems is the Dirichlet-Neumann coupling scheme illustrated in Figure 1:

Figure 1: Sketch of the Dirichlet-Neumann coupling scheme.

The first coupling iteration ($k = 0$) of a new time step starts with the fluid solver: Based on the interface deformation $d_n$ of time level $n$ (or an extrapolated one), it computes a new flow field. In compliance with the dynamic coupling condition, the resulting fluid stresses acting on $\Gamma$ are passed as a Neumann condition to the structural solver, which determines the corresponding interface deformation $\tilde{d}^{k+1}$. If it matches (within a certain accuracy) the previous deformation $d^k$, i.e., $\tilde{d}^{k+1} \approx d^k$, the coupled system has converged and we can go on to the next time step. Otherwise, the interface deformation is passed back to the flow solver as a Dirichlet condition, following kinematic continuity, and the next coupling iteration is started.

From a mathematical point of view, the Dirichlet-Neumann coupling scheme can be interpreted as a fixed-point iteration of the deformation $d$ at the interface FSI_DynamicRelax; ComparisonQuasiNewton; the fixed-point operator $S \circ F$ corresponds to the two subsequent solver calls, i.e., to running one coupling iteration $k$:

$$\tilde{d}^{k+1} = \left( S \circ F \right) \left( d^k \right) . \qquad (13)$$

Finding the converged solution $d_{n+1}$ of the next time level is hence equivalent to finding a fixed point of the interface deformation. Defining the fixed-point residual and its inverse as

$$R \left( d^k \right) = \tilde{d}^{k+1} - d^k \qquad \text{and} \qquad R^{-1} \left( R \left( d^k \right) \right) = d^k , \qquad (14)$$

with $R^k := R(d^k)$, the convergence criterion can analogously be expressed as a root of the residual, i.e., $R(d_{n+1}) = 0$.

Note that although it is much less common, the fixed-point ansatz and the convergence criterion could analogously be formulated based on the fluid loads at the FSI interface.
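The fixed-point view translates directly into code; below is a minimal sketch in which a contractive affine map stands in for the solver composition $S \circ F$ (a hypothetical toy operator, not an actual fluid or structure solver):

```python
import numpy as np

# Toy stand-in for the solver composition S∘F: a contractive affine map.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
b = np.array([1.0, 2.0])

def H(d):
    """One coupling iteration: fluid solve followed by structural solve."""
    return A @ d + b

def residual(d):
    """Fixed-point residual R(d) = H(d) - d."""
    return H(d) - d

d = np.zeros(2)
for k in range(100):
    R = residual(d)
    if np.linalg.norm(R) < 1e-12:
        break            # converged: d is (numerically) a root of the residual
    d = d + R            # plain fixed-point step d^{k+1} = H(d^k)
```

At convergence, $H(d) = d$ holds, i.e., the two toy "solvers" agree on the interface state.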

3.3.2 Added-Mass Instability

Unfortunately, partitioned algorithms for fluid-structure interaction involving incompressible fluids exhibit an inherent instability, which may be severe, depending on the simulated problem. This so-called (artificial) added-mass effect originates from the CFD forces inevitably being determined based on a defective structural deformation during the iterative procedure. As a consequence, they differ from the correct loads of the next time level. This deviation potentially acts like an additional fluid mass on the structural degrees of freedom. More precisely, due to the kinematic continuity, an overestimated structural deformation entails an exaggerated fluid acceleration, and hence excessive inertia stress terms. In many cases the effect is amplified throughout the coupling iterations, causing a divergence of the coupled problem. A major influencing factor is a high density ratio between the fluid and the structure, $\rho^f / \rho^s$, but dependencies on the viscous terms and the temporal discretization are observed, too. We recommend the works by Förster FoersterPhD, Förster et al. Wall_AddedMass, and Causin et al. CausinAddedMass for more details on the added-mass effect. Moreover, van Brummelen et al. BrummelenAddedMass investigate similar difficulties for compressible flows.

3.3.3 Stabilizing and Accelerating the Coupling Scheme

One way to deal with the added-mass instabilities is to adjust the computed interface deformation by an update step before passing it back to the flow solver. Depending on the chosen update technique, both the stability and efficiency of the coupling scheme can be improved.

The simplest way to increase stability is a constant under-relaxation of the interface deformation:

$$d^{k+1} = d^k + \omega R^k = \omega \, \tilde{d}^{k+1} + (1 - \omega) \, d^k \qquad (15)$$

with $\omega \in (0, 1]$. Effectively, it yields an interpolation between the current and the previous interface displacements $\tilde{d}^{k+1}$ and $d^k$. As the factor $\omega$ must be chosen small enough to avoid the coupling’s divergence for any time step, this approach unfortunately often comes at a very high price in terms of efficiency degroote2010performance. Aitken’s dynamic relaxation FSI_DynamicRelax; irons1969version tackles this issue by dynamically adapting the relaxation factor in Equation (15) for each coupling iteration by

$$\omega^k = - \, \omega^{k-1} \, \frac{ \left( R^{k-1} \right)^T \left( R^k - R^{k-1} \right) }{ \left\| R^k - R^{k-1} \right\|_2^2 } , \qquad (16)$$

i.e., based on the two most recent fixed-point residuals $R^k$ and $R^{k-1}$. Despite its rather simple implementation, in many cases Aitken’s relaxation provides significant speed-ups without perturbing the stability of the iteration. Still, the performance of the coupling scheme can be pushed further by more sophisticated approaches like interface quasi-Newton (IQN) methods.
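As a sketch of how such an update step looks in practice, the following toy example applies Aitken's dynamic relaxation to a linear stand-in for the coupling operator (all names and operator values are illustrative assumptions, not from the paper):

```python
import numpy as np

def aitken_omega(omega_prev, R_prev, R_curr):
    """Aitken relaxation factor from the two most recent residuals."""
    dR = R_curr - R_prev
    return -omega_prev * (R_prev @ dR) / (dR @ dR)

# Hypothetical linear coupling operator H = S∘F.
A = np.array([[0.7, 0.2], [0.1, 0.6]])
b = np.array([1.0, -1.0])
H = lambda d: A @ d + b

omega = 0.5                       # initial (constant) relaxation factor
d_prev = np.zeros(2)
R_prev = H(d_prev) - d_prev
d = d_prev + omega * R_prev       # first step: constant under-relaxation
for _ in range(60):
    R = H(d) - d
    if np.linalg.norm(R) < 1e-12:
        break
    omega = aitken_omega(omega, R_prev, R)   # adapt the factor dynamically
    d, R_prev = d + omega * R, R
```

For this toy problem the adapted factor drives the residual to machine precision in a handful of iterations, whereas a small constant factor would need many more.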

Remark 1: In the following, we will express the interface deformation $d$ as the structural solution at the FSI interface. While its representation via the boundary displacement of the fluid mesh is likewise possible, the typically finer interface resolution would result in higher numerical cost for the discussed update techniques.

4 Interface Quasi-Newton Methods

4.1 Interface Quasi-Newton Inverse Least-Squares (IQN-ILS) Method

In a partitioned solution procedure for fluid-structure interaction, the single-field solvers are typically the most expensive part, rendering all further costs, e.g., for data exchange, negligible in comparison Bogaers_IQNMVJ; uekermann2016partitioned.

Since the converged time step solution has been identified as a root of the fixed-point residual or its inverse form defined in Equation (14), increasing the efficiency essentially comes down to reducing the number of coupling iterations required to find such a root. Consequently, employing Newton’s method as an update step of the interface deformation would be a promising approach to speed up convergence, yielding PerformancePartitionedMonolithic; ComparisonQuasiNewton

$$d^{k+1} = d^k - \left( \frac{\partial R}{\partial d} \right)^{-1} R^k . \qquad (17)$$
Unfortunately, the required inverse Jacobian is in general not accessible, as its evaluation would involve the derivatives of both the fluid and the structural solver. The idea of interface quasi-Newton methods is to circumvent this issue by approximating the (inverse) Jacobian rather than evaluating it exactly. Using a Taylor expansion of $R$, we can approximate

$$R \left( d + \Delta d \right) \approx R(d) + \frac{\partial R}{\partial d} \, \Delta d , \qquad \text{i.e.,} \qquad \left( \frac{\partial R}{\partial d} \right)^{-1} \Delta R \approx \Delta d , \qquad (18)$$

based on an input-output data pair. Such a pair is formed by some change in the interface deformation $\Delta d \in \mathbb{R}^N$ and the corresponding change of the fixed-point residual $\Delta R \in \mathbb{R}^N$. Therein, $N$ denotes the number of structural degrees of freedom at the FSI interface. Essentially, this idea is an $N$-dimensional version of approximating a derivative with the slope of a secant. To avoid additional solver calls, the required data pairs are formed from the intermediate results $\tilde{d}$ and $R$ of the coupling iterations already performed for the current time step. More precisely, they are stored in the input matrix $V^k$ and the output matrix $W^k$ scheufele2017robust:

$$V^k = \left[ \Delta R^0 , \ldots , \Delta R^{k-1} \right] \in \mathbb{R}^{N \times k} \qquad \text{and} \qquad W^k = \left[ \Delta \tilde{d}^0 , \ldots , \Delta \tilde{d}^{k-1} \right] \in \mathbb{R}^{N \times k} , \qquad (19)$$

where $\Delta R^i$ and $\Delta \tilde{d}^i$ denote the differences between two subsequent coupling iterations.
With the collected data, an approximation $\hat{J}^{-1}$ of the inverse Jacobian can be formulated via

$$\hat{J}^{-1} \, V^k = W^k . \qquad (20)$$
Since the number $k$ of data pairs stored in $V^k$ and $W^k$ typically is much smaller than the number $N$ of structural degrees of freedom at the FSI interface, i.e., $k \ll N$, the linear system of equations (20) is underdetermined. The existence of a unique solution is ensured by demanding the minimization of the Frobenius norm ComparisonQuasiNewton:

$$\left\| \hat{J}^{-1} \right\|_F \rightarrow \min . \qquad (21)$$
Together, equations (20) and (21) form a constrained optimization problem, which leads to an explicit form of the inverse Jacobian approximation scheufele2017robust; ComparisonQuasiNewton:

$$\hat{J}^{-1} = W^k \left[ \left( V^k \right)^T V^k \right]^{-1} \left( V^k \right)^T . \qquad (22)$$
Inserting this inverse Jacobian approximation into the Newton update formula in Equation (17) yields a quasi-Newton update scheme for the interface deformation (note that, since $\tilde{d} = d + R$, the approximation built from the pairs $(\Delta R, \Delta \tilde{d})$ relates to the residual Jacobian via $(\partial R / \partial d)^{-1} \approx \hat{J}^{-1} - I$):

$$d^{k+1} = \tilde{d}^{k+1} - \hat{J}^{-1} R^k = \tilde{d}^{k+1} + W^k \alpha^k . \qquad (23)$$
Defining the vector $\alpha^k$ exploits that the inverse Jacobian is not needed explicitly here, but only its product with the residual vector. An efficient way of computing $\alpha^k$ is to solve the least-squares problem PerformancePartitionedMonolithic; scheufele2017robust

$$\alpha^k = \underset{\alpha \in \mathbb{R}^k}{\arg \min} \left\| V^k \alpha + R^k \right\|_2 , \qquad (24)$$

e.g., using a QR decomposition via Householder reflections. In the first coupling iteration, a relaxation step is used, as no data pairs are available yet.

The approach presented so far is referred to as interface quasi-Newton inverse least-squares (IQN-ILS) method PerformancePartitionedMonolithic; it can be interpreted as the basis of the other IQN variants discussed in this work.
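A minimal sketch of the IQN-ILS iteration on a toy linear coupling operator may help fix the ideas; `np.linalg.lstsq` plays the role of the Householder QR solve, and the affine operator is an illustrative assumption, not an actual solver pair:

```python
import numpy as np

# Hypothetical linear coupling operator H = S∘F.
A = np.array([[0.6, 0.3], [0.2, 0.5]])
b = np.array([1.0, 2.0])
H = lambda d: A @ d + b

d = np.zeros(2)
d_tilde_prev, R_prev = None, None
V_cols, W_cols = [], []           # residual and solver-output differences
for k in range(30):
    d_tilde = H(d)                # intermediate solver result
    Rk = d_tilde - d              # fixed-point residual
    if np.linalg.norm(Rk) < 1e-12:
        break
    if R_prev is not None:        # append the newest data pair
        V_cols.insert(0, Rk - R_prev)
        W_cols.insert(0, d_tilde - d_tilde_prev)
    d_tilde_prev, R_prev = d_tilde, Rk
    if not V_cols:
        d = d + 0.5 * Rk          # first iteration: plain under-relaxation
    else:                         # quasi-Newton update d = d~ + W @ alpha
        alpha, *_ = np.linalg.lstsq(np.column_stack(V_cols), -Rk, rcond=None)
        d = d_tilde + np.column_stack(W_cols) @ alpha
```

For a linear operator the secant model becomes exact once the data pairs span the relevant subspace, so the iteration terminates after very few coupling steps.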

4.2 Benefit from Previous Time Steps

So far, the inverse Jacobian of the residual operator is approximated only based on information gathered in the coupling iterations of the current time step. In principle, however, the efficiency of IQN methods can be significantly increased by incorporating data from previous time steps as well. The most straightforward way to do so is to explicitly include the data pairs of past time steps in the input and output matrices $V$ and $W$ PerformancePartitionedMonolithic.

Unfortunately, apart from increasing costs for handling big data sets, a growing number of data pairs stored in these matrices entails two major problems Bogaers_IQNMVJ; ComparisonQuasiNewton; IQN_POD: (1) The matrix $V$ might be very close to rank-deficient due to (almost) linearly dependent columns; related to that, the condition number of the least-squares problem quickly increases; (2) information gathered in different time steps might be contradictory. Together, these drawbacks carry the risk of preventing the system from being numerically solvable at all.

An obvious remedy is to incorporate only the data of the $q$ most recent time steps. While this approach is in fact capable of yielding a superior convergence speed, its major shortcomings are the risk of rank deficiency and the fact that a good choice for the parameter $q$ is in general problem-dependent Bogaers_IQNMVJ; scheufele2017robust. One way to mitigate these drawbacks is by employing a filtering technique Degroote_MultiSolver; haelterman2016improving.

4.3 Interface Quasi-Newton Inverse Multi-Vector Jacobian Method

An alternative way of reusing data from previous time steps was introduced by Bogaers et al. Bogaers_IQNMVJ and developed further by Lindner et al. ComparisonQuasiNewton: The interface quasi-Newton inverse multi-vector Jacobian (IQN-IMVJ) method combines the IQN approach with the idea of Broyden’s method: In a Newton iteration, one might expect the Jacobians evaluated in subsequent iterations to be similar in some sense. Therefore, Broyden’s method limits the typically costly Jacobian evaluation to the first iteration only; after that, the new Jacobian is instead approximated by a rank-one update of the previous one. For the interface quasi-Newton framework, this concept is adopted by formulating the current inverse Jacobian approximation $\hat{J}^{-1}_n$ as an update of the one determined in the previous time step, $\hat{J}^{-1}_{n-1}$:

$$\hat{J}^{-1}_n = \hat{J}^{-1}_{n-1} + \Delta \hat{J}^{-1}_n , \qquad (25)$$

where $\Delta \hat{J}^{-1}_n$ denotes the update increment. Introducing this concept changes the constrained optimization problem discussed in Section 4.1 to

$$\hat{J}^{-1}_n V^k = W^k \qquad \text{with} \qquad \left\| \hat{J}^{-1}_n - \hat{J}^{-1}_{n-1} \right\|_F \rightarrow \min . \qquad (26)$$
This system allows for a quite descriptive interpretation: Fitting Broyden’s idea, the difference between the inverse Jacobians of the current and the previous time step is minimized, rather than the norm of the approximation itself.

Again, the result is an explicit approximation of the current inverse Jacobian ComparisonQuasiNewton:

$$\hat{J}^{-1}_n = \hat{J}^{-1}_{n-1} + \left( W^k - \hat{J}^{-1}_{n-1} V^k \right) \underbrace{ \left[ \left( V^k \right)^T V^k \right]^{-1} \left( V^k \right)^T }_{=: \, Z^k} . \qquad (27)$$
In contrast to the IQN-ILS approach, the IQN-IMVJ method explicitly updates the Jacobian approximation in every coupling iteration via Equation (27). Therein, the matrix $Z^k \in \mathbb{R}^{k \times N}$ is determined by solving the least-squares problem

$$z_j = \underset{z \in \mathbb{R}^k}{\arg \min} \left\| V^k z - e_j \right\|_2 \qquad (28)$$

for each column $z_j$ of $Z^k$ with the $j$-th unit vector $e_j \in \mathbb{R}^N$, again using a Householder QR decomposition. After $\hat{J}^{-1}_n$ has been updated, the quasi-Newton step is performed:

$$d^{k+1} = \tilde{d}^{k+1} - \hat{J}^{-1}_n R^k . \qquad (29)$$
Since the inverse Jacobian is initialized to zero, i.e., $\hat{J}^{-1}_0 = 0$, the first time step is equivalent to the IQN-ILS method.

Note that the matrices $V^k$ and $W^k$ only contain data collected in the current time step: Rather than explicitly including data pairs collected in previous time steps, the IQN-IMVJ approach incorporates them implicitly in the form of the previous Jacobian approximation $\hat{J}^{-1}_{n-1}$. This implicit reutilization of past information entails some significant advantages Bogaers_IQNMVJ; ComparisonQuasiNewton: (1) Since the matrices $V^k$ and $W^k$ are not affected, neither is the condition number of the least-squares problem; (2) it renders any tuning of (strongly) problem-dependent parameters obsolete; (3) since past information is matched in a minimum-norm sense only, it is automatically less emphasized than that of the current time step. This avoids the risk of relying on outdated or contradicting data.

These benefits come at the price of an explicit representation of the inverse Jacobian approximation, which is often very expensive to store and handle, as the entailed complexity grows quadratically with the problem scale.
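The following sketch shows the explicit multi-vector update as a standalone routine and checks its defining secant property on random data (sizes and names are illustrative; on a real FSI interface the stored $N \times N$ matrix is exactly the quadratic cost discussed above):

```python
import numpy as np

def imvj_update(J_prev, V, W):
    """Multi-vector (Broyden-like) update: the matrix closest to J_prev in
    the Frobenius norm that satisfies the secant conditions J_new @ V = W."""
    Z = np.linalg.solve(V.T @ V, V.T)      # Z = [V^T V]^{-1} V^T
    return J_prev + (W - J_prev @ V) @ Z

# Property check on random data: the update reproduces the new secant
# information exactly while leaving J_prev untouched outside span(V).
rng = np.random.default_rng(0)
N, k = 6, 3
J_prev = rng.standard_normal((N, N))
V = rng.standard_normal((N, k))
W = rng.standard_normal((N, k))
J_new = imvj_update(J_prev, V, W)
```

Storing and updating the dense $N \times N$ matrix `J_new` is what makes this variant quadratic in both memory and work, which is the cost the IQN-IMVLS reformulation below removes.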

4.4 Interface Quasi-Newton Implicit Multi-Vector Least-Squares Method

In this paper, we introduce an enhancement of the IQN-IMVJ approach tackling its main issue, i.e., the high cost related to the explicit Jacobian approximation. By successively replacing the expensive parts, this section derives the new interface quasi-Newton implicit multi-vector least-squares (IQN-IMVLS) method.

As a first step, the explicit use of the current inverse Jacobian approximation $\hat{J}^{-1}_n$ is eliminated: By inserting the Jacobian update (27) into the quasi-Newton step, the vector $\alpha^k$ can be identified in analogy to the IQN-ILS approach, changing the update formula to

$$d^{k+1} = \tilde{d}^{k+1} + \left( W^k - \hat{J}^{-1}_{n-1} V^k \right) \alpha^k - \hat{J}^{-1}_{n-1} R^k . \qquad (30)$$
By reducing the least-squares problem back to Equation (24), this drastically reduces the computational cost. The past inverse Jacobian $\hat{J}^{-1}_{n-1}$, however, is still explicitly required: Once a time step has converged, it is updated to the next time level by Equation (27); the associated cost is quadratic in the number of interface unknowns $N$. Beyond that, $\hat{J}^{-1}_{n-1}$ is involved in the quasi-Newton formula in Equation (30). In particular, it is needed for the potentially very costly matrix product $\hat{J}^{-1}_{n-1} V^k$, which has a complexity of $\mathcal{O}(k N^2)$. As mentioned before, however, the matrices $V^k$ and $W^k$ are successively built up only by appending new columns. Taking this into account, we can reformulate

$$\hat{J}^{-1}_{n-1} V^k = \left[ \hat{J}^{-1}_{n-1} \Delta R^{k-1} , \ \hat{J}^{-1}_{n-1} V^{k-1} \right] \qquad \text{and} \qquad \hat{J}^{-1}_{n-1} R^k = \hat{J}^{-1}_{n-1} R^{k-1} + \hat{J}^{-1}_{n-1} \Delta R^{k-1} . \qquad (31)$$

As a consequence, restoring the terms $\hat{J}^{-1}_{n-1} V^{k-1}$ and $\hat{J}^{-1}_{n-1} R^{k-1}$ from the previous iteration in fact allows reducing the number of matrix-vector products with the inverse Jacobian to one per coupling iteration, namely $\hat{J}^{-1}_{n-1} \Delta R^{k-1}$.

This Jacobian-vector product remains the only operation in Equation (30) involving an $\mathcal{O}(N^2)$ complexity. For an increasing number of structural degrees of freedom at the interface, this term’s cost therefore may become dominant and slow down the overall procedure (see Remark 4.4).

In order to tackle this issue, we introduce an alternative purely implicit formula for evaluating $\hat{J}^{-1}_{n-1} \Delta R^{k-1}$ without any explicit previous Jacobian $\hat{J}^{-1}_{n-1}$. Note that the explicit Jacobian update will become obsolete as a direct consequence. Unraveling the recursion in Equation (27), the update of the inverse Jacobian can be reformulated to

$$\hat{J}^{-1}_n = \hat{J}^{-1}_0 \, \prod_{i=1}^{n} \left( I - V^i Z^i \right) + \sum_{i=1}^{n} W^i Z^i \prod_{j=i+1}^{n} \left( I - V^j Z^j \right) , \qquad (32)$$

see Appendix A for the proof. Therein, the upper index of the matrices $V^i$, $W^i$, and $Z^i$ refers to the time step they were determined in. Considering the initial Jacobian approximation, $\hat{J}^{-1}_0 = 0$, this expression simplifies to

$$\hat{J}^{-1}_n = \sum_{i=1}^{n} W^i Z^i \prod_{j=i+1}^{n} \left( I - V^j Z^j \right) . \qquad (33)$$
While being mathematically equivalent, this formulation entails two main advantages: First, the matrix-vector product can be determined in an implicit manner via

$$\hat{J}^{-1}_{n-1} R = \sum_{i=1}^{n-1} W^i Z^i \prod_{j=i+1}^{n-1} \left( I - V^j Z^j \right) R \qquad (34)$$

without an $\mathcal{O}(N^2)$ complexity, and hence potentially cheaper. However, the cost for evaluating the expression in Equation (34) obviously increases with the number of processed time steps. This issue is addressed using the second advantage: While data from past time steps is still incorporated in an implicit manner, with all the associated advantages discussed in Section 4.3, the new formulation explicitly identifies the contribution of each time step, in the form of the matrices $V^i$, $W^i$, and $Z^i$. As a consequence, this update technique allows incorporating only the $q$ most recent time steps for the previous Jacobian approximation as a way of limiting the cost of the matrix-vector product, which then reads

$$\hat{J}^{-1}_{n-1} R \approx \sum_{i=n-q}^{n-1} W^i Z^i \prod_{j=i+1}^{n-1} \left( I - V^j Z^j \right) R . \qquad (35)$$

In fact, by taking advantage of the repetition of terms in the product (see Appendix B), this expression can be evaluated in $\mathcal{O}(q \bar{k} N)$ operations, where $\bar{k}$ denotes the average number of coupling iterations per time step. The truncation can be justified by the fact that a certain time step’s contribution to the previous Jacobian approximation is gradually becoming less and less important the further the simulation progresses.
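A sketch of this purely implicit matrix-vector product, assuming the per-time-step matrices have been stored as plain arrays (function and variable names are ours, not the paper's):

```python
import numpy as np

def implicit_jacobian_vector_product(Vs, Ws, Zs, r):
    """Evaluate the product of the multi-vector inverse Jacobian with a
    vector r from the stored per-time-step matrices V^i, W^i, and
    Z^i = [V^i^T V^i]^{-1} V^i^T (lists ordered oldest to newest), without
    ever forming the N x N Jacobian. Sweeping from the newest contribution
    to the oldest reuses the partial products (I - V^j Z^j) ... r, so the
    cost stays linear in the interface size N."""
    v = r.copy()                 # v_i = prod_{j>i} (I - V^j Z^j) r
    out = np.zeros_like(r)
    for V, W, Z in zip(reversed(Vs), reversed(Ws), reversed(Zs)):
        coeff = Z @ v            # small k-vector
        out += W @ coeff         # term W^i Z^i v_i of the sum
        v -= V @ coeff           # v_{i-1} = (I - V^i Z^i) v_i
    return out
```

Only products with the tall-skinny matrices $V^i$, $W^i$ and the small matrices $Z^i$ occur, matching the stated linear complexity in $N$.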

At first glance, one might argue that the parameter $q$ reintroduces exactly the problems the multi-vector approach was designed to avoid in the first place, i.e., the dependency of the IQN-ILS method on the number of reused time steps. However, this issue arose from the explicit incorporation of past data, e.g., due to high condition numbers and linearly dependent columns in $V$. Since the IQN-IMVLS approach reuses data in an implicit manner, it does not suffer from these drawbacks. Instead, the method’s quality in general benefits from increasing the number of past time steps taken into account, as the limit of reusing all steps is analytically equivalent to the explicit Jacobian multiplication.

For this variant, the matrix $Z^k = [ (V^k)^T V^k ]^{-1} (V^k)^T$ still has to be determined and stored once after each time step. Since the complexity of computing it via the least-squares problems (28) is $\mathcal{O}(N^2 k)$ for the Householder QR approach, i.e., growing quadratically with $N$, we use a matrix inversion of $(V^k)^T V^k$ via an LU decomposition using partial pivoting with row interchanges instead golub1996matrix. While being slightly less robust to bad conditioning, the big advantage is that it requires $\mathcal{O}(k^3 + k^2 N)$ operations and therefore scales linearly with the problem size.
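A sketch of this trade-off, assuming NumPy: `np.linalg.solve` factorizes the small $k \times k$ normal matrix by LU with partial pivoting, so the overall cost stays linear in $N$, at the price of squaring the condition number of $V$ compared to a QR-based least-squares solve:

```python
import numpy as np

def compute_Z(V):
    """Z = [V^T V]^{-1} V^T via the small k x k normal matrix.
    np.linalg.solve uses an LU decomposition with partial pivoting, so the
    cost is O(k^3) for the factorization plus O(k^2 N) for the right-hand
    sides, i.e., linear in the interface size N."""
    return np.linalg.solve(V.T @ V, V.T)   # k x N result

# Illustrative sizes: a tall-skinny snapshot matrix, k << N.
rng = np.random.default_rng(2)
N, k = 1000, 4
V = rng.standard_normal((N, k))
Z = compute_Z(V)
```

For a full-column-rank $V$, the result coincides with the Moore-Penrose pseudo-inverse, which is what the column-wise least-squares problems (28) compute as well.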

Combining all this, the modified multi-vector update completely avoids any $\mathcal{O}(N^2)$ terms that might slow down the IQN-IMVLS approach for large systems, where $q \bar{k} \ll N$ holds, see Table 1. A pseudo-code realization of the purely implicit IQN-IMVLS method is outlined in Algorithm 1.

Increment step:
Operation | Expression | Explicit | Implicit
Least-squares problem (24) via Householder QR | $\alpha^k$ | $\mathcal{O}(N k^2)$ | $\mathcal{O}(N k^2)$
Matrix-vector product | $\hat{J}^{-1}_{n-1} \Delta R^{k-1}$ | $\mathcal{O}(N^2)$ | n/a
Compute new column via Equation (35) | $\hat{J}^{-1}_{n-1} \Delta R^{k-1}$ | n/a | $\mathcal{O}(q \bar{k} N)$
Matrix-vector product | $( W^k - \hat{J}^{-1}_{n-1} V^k ) \, \alpha^k$ | $\mathcal{O}(N k)$ | $\mathcal{O}(N k)$

Jacobian update:
Operation | Expression | Explicit | Implicit
Least-squares problems (28) via Householder QR | $Z^k$ | $\mathcal{O}(N^2 k)$ | n/a
Matrix-matrix product | $( W^k - \hat{J}^{-1}_{n-1} V^k ) \, Z^k$ | $\mathcal{O}(N^2 k)$ | n/a

Determine and store $Z^k$:
Operation | Expression | Explicit | Implicit
Matrix inversion via LU decomposition | $[ (V^k)^T V^k ]^{-1}$ | n/a | $\mathcal{O}(k^3)$
Matrix-matrix product | $[ (V^k)^T V^k ]^{-1} (V^k)^T$ | n/a | $\mathcal{O}(k^2 N)$

Table 1: Complexities of the suboperations required for the IQN-IMVLS update method with an explicit previous Jacobian compared to the purely implicit version.

In direct analogy to the computational complexity, the memory requirements, too, no longer scale quadratically but linearly with the problem size: Although the matrices $V^i$ and $W^i$ as well as $Z^i$ have to be stored for the $q$ most recent time steps, the required amount of storage, $\mathcal{O}(q \bar{k} N)$, is much smaller than for the explicit Jacobian, $\mathcal{O}(N^2)$, as long as $q \bar{k} \ll N$ holds.

Of course, the effectiveness of the purely implicit IQN-IMVLS update strongly depends on this ratio of $q \bar{k}$ and $N$, so that the explicit Jacobian approximation might still be the better option for small systems. However, the assumption $q \bar{k} \ll N$ is not very restrictive for common application scales.

Time Step Loop: for $n = 0, 1, 2, \ldots$ do
    Initialization: $d^0 = d_n$, $V = [\,]$, $W = [\,]$
    First iteration: $\tilde{d}^1 = (S \circ F)(d^0)$
    Form residual: $R^0 = \tilde{d}^1 - d^0$
    if $n = 0$ then
        Under-relaxation step: $d^1 = d^0 + \omega R^0$
    else
        Previous inverse Jacobian times residual (implicitly): $\hat{J}^{-1}_{n-1} R^0$ via Equation (35)
        IQN-update with Jacobian product: $d^1 = \tilde{d}^1 - \hat{J}^{-1}_{n-1} R^0$
    end if
    Coupling Loop: for $k = 1, 2, \ldots$ until convergence do
        Perform coupling iteration: $\tilde{d}^{k+1} = (S \circ F)(d^k)$
        Form residual: $R^k = \tilde{d}^{k+1} - d^k$
        Append new column to input matrix: $V \leftarrow [\Delta R^{k-1}, V]$
        Append new column to output matrix: $W \leftarrow [\Delta \tilde{d}^{k-1}, W]$
        Previous inverse Jacobian times residual (implicitly): $\hat{J}^{-1}_{n-1} \Delta R^{k-1}$ via Equation (35)
        Build restoring terms from previous iteration: $\hat{J}^{-1}_{n-1} V$ and $\hat{J}^{-1}_{n-1} R^k$ via Equation (31)
        Solve least-squares problem: $\alpha^k = \arg\min_\alpha \| V \alpha + R^k \|_2$
        Implicit update step: $d^{k+1} = \tilde{d}^{k+1} + ( W - \hat{J}^{-1}_{n-1} V ) \alpha^k - \hat{J}^{-1}_{n-1} R^k$
    end for
    Determine $Z$ via LU decomposition: $Z = [ V^T V ]^{-1} V^T$
    Store for future time steps: $V^n = V$, $W^n = W$, $Z^n = Z$
end for
Algorithm 1: Pseudo-code of the interface quasi-Newton implicit multi-vector least-squares (IQN-IMVLS) method.

Remark 2: Aside from suggesting improvements similar to using an explicit past Jacobian within the IQN-IMVLS method, Scheufele and Mehl scheufele2017robust also derived a multi-vector variant with linear complexity. Therein, the past inverse Jacobian is represented based on the matrices $W^i$ and modified matrices $\hat{Z}^i$ from past time steps, i.e.,

$$\hat{J}^{-1}_{n-1} = \sum_{i=1}^{n-1} W^i \hat{Z}^i . \qquad (36)$$

While the idea to avoid the costly explicit Jacobian via a sum formulation is similar to the IQN-IMVLS method, there is one essential difference: Due to the recursive definition of the multi-vector Jacobian approximation, each matrix $\hat{Z}^i$ contains information from the first $i$ time steps. By implication, this means that the contribution of a time step is contained in all subsequent matrices $\hat{Z}^j$,