Constructive interpolation points selection in the Loewner framework

08/30/2021 · by Pierre Vuillemin, et al. · ONERA

This note describes a constructive heuristic to select frequencies of interest within the context of reduced-order modelling by interpolation. The approach is described here through the Loewner framework. Numerical illustrations highlight the benefit it can bring in decreasing the required number of interpolation points, which is key when the data come from numerically expensive solvers.







1 Introduction

The Loewner framework (LF) [4] is a data-driven method aimed at building a descriptor realisation such that the associated transfer function interpolates given frequency-domain data, i.e. it matches the provided samples at the given interpolation points. When the matrix-valued data are interpolated along given left and right directions, the interpolation is tangential. Provided the number of data is large enough, the LF encodes both the minimal McMillan degree and the minimal realisation order required to describe the data.
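To fix ideas, the sketch below builds a scalar Loewner interpolant from right data (lam, w) and left data (mu, v) through the standard Loewner and shifted-Loewner matrices; all names are ours, and the example assumes a SISO transfer function with as many left/right data pairs as the McMillan degree, so that the Loewner pencil is regular:

```python
import numpy as np

def loewner_interpolant(lam, w, mu, v):
    """Scalar Loewner interpolant: given right data (lam, w) and left
    data (mu, v), return s -> w (Ls - s L)^{-1} v, which matches all
    the provided samples when the Loewner pencil (Ls, L) is regular."""
    lam, w, mu, v = map(np.asarray, (lam, w, mu, v))
    D = mu[:, None] - lam[None, :]                  # pairwise mu_j - lam_i
    L = (v[:, None] - w[None, :]) / D               # Loewner matrix
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / D
    return lambda s: w @ np.linalg.solve(Ls - s * L, v)

# Example: samples of H(s) = 1/(s^2 + s + 1) at four imaginary points.
H = lambda s: 1.0 / (s**2 + s + 1)
lam = np.array([0.5j, 1.5j])
mu = np.array([1j, 2j])
Hi = loewner_interpolant(lam, H(lam), mu, H(mu))
```

Since the data here come from a rational function of McMillan degree two and two left/right pairs are used, the interpolant coincides with the original function over the whole complex plane, not only at the interpolation points.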

In practical engineering applications, the interpolation points are often located on the imaginary axis, and the associated data then represent the frequency response of the underlying system, assuming the latter is linear. These data may be obtained from experiments or from a dedicated numerical solver.

From the authors’ experience in (industrial) aeronautic applications (see e.g. [7]), the LF has proven particularly useful to address the challenges associated with the modelling of complex systems. (Complexity refers here both to the underlying system and the process used to generate the associated data, e.g. through computational fluid dynamics, experiments, etc., and also to the large dimension required to capture all the dynamics they embed.) Indeed, it is simple and numerically efficient, and the resulting finite-dimensional linear state-space representation of the interpolant model is particularly suited to control-engineering applications, especially as its dimension can be decreased even further by a suitable projection in exchange for a loss of accuracy in the interpolation.

Recent work [6] addresses the issue of memory management within the LF when the number of data is large. Yet, when considering costly data generated by dedicated high-fidelity solvers, the problem is reversed, as the number of available samples is likely to be very limited. In that case, the main question lies in the adequate choice of interpolation points to obtain a satisfactory compromise between the accuracy of the model and the numerical cost to build it.

The LF has been exploited previously in [1] within a fixed-point algorithm where the interpolation points are updated iteratively to fulfil the first-order optimality conditions. However, the a priori fixed dimension and the potentially large number of evaluations of the underlying transfer function (in the whole complex plane) required before convergence do not suit the considered problem. The AAA algorithm [5] chooses the next interpolation points based on the worst mismatch with the available data, which presupposes that a large amount of data is available beforehand.

In this context, we propose in section 2 a constructive heuristic for the selection of adequate interpolation points. At each step, it exploits the current interpolant model to infer the next frequencies of interest. As highlighted in section 3, it drastically reduces the required number of function evaluations while reaching an accurate representation of the underlying model.

While writing this note, the work [2] came to our attention. The authors develop a similar iterative approach, which differs mainly in the heuristic used for selecting the next interpolation points and constitutes an interesting alternative.

2 Constructive Loewner

In the sequel, interpolation points are assumed to be chosen solely on the imaginary axis, thus allowing one to reason only in terms of the frequencies. Let us also assume that an interval of frequencies of interest is available.

The principle of the constructive approach is summarised in algorithm 1. Starting from the boundary points of the interval of interest, the set of interpolation points is completed at each iteration with the frequency where the current interpolant model has the strongest dynamic. As detailed thereafter, this vague concept can be translated in various ways. Independently of the chosen translation, the underlying idea consists in ensuring that any strong dynamic exhibited by the interpolant model is actually representative of the underlying model. The algorithm stops when either the maximum allowed number of interpolation points has been reached or the interpolant model does not change by more than a prescribed threshold from one iteration to the next. A drop in the singular values of the Loewner pencil may also be monitored, as it suggests that some data are redundant.

0:  Initial model, frequency interval of interest, max. number of interpolation points, tolerance.
1:  Initialise the set of interpolation frequencies with the boundary points of the interval
2:  Evaluate the initial model at these frequencies
3:  while the max. number of interpolation points is not reached do
4:     Build the model interpolating the current data
5:     if the interpolant model has changed by less than the tolerance then
6:        break
7:     end if
8:     Find the frequencies where the interpolant model has the strongest dynamic
9:     Evaluate the initial model at these frequencies and add them to the interpolation set
10:  end while
11:  Return the interpolant model
Algorithm 1 Constructive interpolation
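A compact, self-contained Python sketch of this loop on a scalar example is given below; all names are ours. The Loewner interpolant is built with an SVD truncation of the pencil so that redundant data do not make it singular, and the strong-dynamics search is replaced here by a crude pick of the maximum-gain frequency not yet interpolated (the refined two-phase search is discussed below):

```python
import numpy as np

def loewner_model(freqs, H, tol_rank=1e-10):
    """Truncated scalar Loewner interpolant from samples H(i*w), w in freqs.
    The frequencies are split alternately into right/left subsets and the
    pencil is projected onto its numerical rank to handle redundancy."""
    s = 1j * np.asarray(freqs, dtype=float)
    h = np.array([H(si) for si in s])
    lam, w = s[0::2], h[0::2]                       # right data
    mu, v = s[1::2], h[1::2]                        # left data
    D = mu[:, None] - lam[None, :]
    L = (v[:, None] - w[None, :]) / D               # Loewner matrix
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / D
    Y, sv, _ = np.linalg.svd(np.concatenate([L, Ls], axis=1))
    r = int(np.sum(sv > tol_rank * sv[0]))          # numerical rank
    Y = Y[:, :r]
    X = np.linalg.svd(np.concatenate([L, Ls], axis=0))[2][:r, :].conj().T
    E, A = -Y.conj().T @ L @ X, -Y.conj().T @ Ls @ X
    B, C = Y.conj().T @ v, w @ X
    return lambda sp: C @ np.linalg.solve(sp * E - A, B)

def cloe(H, interval, n_max=8, tol=1e-8, n_grid=500):
    """Constructive selection of interpolation frequencies (algorithm 1)."""
    grid = np.linspace(interval[0], interval[1], n_grid)
    W = [interval[0], interval[1]]                  # boundary points first
    prev_gain = None
    while len(W) < n_max:
        Hk = loewner_model(W, H)
        gain = np.abs(np.array([Hk(1j * w) for w in grid]))
        if prev_gain is not None and \
           np.max(np.abs(gain - prev_gain)) / np.max(prev_gain) < tol:
            break                                   # model no longer evolves
        prev_gain = gain
        # crude "strongest dynamic": largest gain not yet interpolated
        for wc in grid[np.argsort(gain)[::-1]]:
            if wc not in W:
                W.append(wc)
                break
    return Hk, W
```

On the toy model H(s) = 1/(s² + 0.2s + 1) over the interval [0.1, 3], the loop locates the resonance and recovers the model with a handful of points.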

Notion of strong dynamics.

From a reduction perspective, a low approximation error between the complex model and its reduction is, to some extent, related to a resemblance between the singular value plots of both models.

As the error is unavailable here, the objective is to make the singular value plot of the interpolant model similar to that of the underlying model. The latter being also unknown, the reasoning is reversed: for each frequency not yet interpolated where the interpolant model exhibits a strong dynamic, one must ensure that the underlying model actually has a corresponding strong dynamic.

What a strong dynamic means may largely depend on the application. Flexible structures such as those encountered in aeronautics are generally characterised by a resonant frequency response. In that case, the resemblance is obtained by matching both its peaks and valleys. Large static gains and derivative or integral actions are also very common and should be considered as well. Therefore, in the sequel, the search step of algorithm 1 will consist in looking first at the largest and lowest gains where the slope of the singular value plot vanishes, then at the largest and lowest slopes.

Many refinements and variations can be imagined, but this interpretation has proven very effective, as highlighted in section 3. Before that, the implementation details are discussed below.

Practical considerations.

Both the determination of the strong dynamics of the interpolant model and the relative error used in the stopping criterion can be computed exactly by working with the (small) realisation of the interpolant model. However, we believe that a fully data-driven approach is accurate enough and much simpler to implement.

More specifically, let a fine discretisation of the interval of interest be given. Then, the relative error of the stopping criterion can be replaced by its sampled counterpart evaluated over this discretisation.
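With two successive interpolant models in hand, the sampled criterion may be computed as follows (a minimal sketch; function and variable names are of our own choosing):

```python
import numpy as np

def sampled_relative_change(H_new, H_old, omega_grid):
    """Relative change between two successive interpolant models,
    evaluated over a fine discretisation of the frequency interval."""
    new = np.array([H_new(1j * w) for w in omega_grid])
    old = np.array([H_old(1j * w) for w in omega_grid])
    return np.max(np.abs(new - old)) / np.max(np.abs(old))
```

Working directly with frequency-response samples keeps the criterion fully data-driven, at the price of being only as fine as the chosen grid.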
Similarly, finding the strong dynamics simplifies, as the continuous domain of search is reduced to the discrete set of sampled frequencies. The latter can also be refined to avoid clustering of the additional frequencies near points already interpolated. The actual search for strong dynamics is done in two consecutive phases, operating on the sampled gain of the interpolant model:

  • First, to detect the main peaks and valleys: the derivative of the gain with respect to the frequency is approximated by finite differences and the frequencies associated with a change of sign are identified. Among those frequencies, the ones with the largest or lowest gain values which are not already interpolated are retained.

  • Should no peak or valley be detected, the frequencies where the slope of the gain is maximal or minimal are retained.
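The two-phase search above can be sketched on a sampled gain curve as follows (function and variable names are ours; peaks and valleys are detected through a sign change of the finite-difference slope, with the steepest slopes as a fallback):

```python
import numpy as np

def strong_dynamics(omega, gain, selected, n_new=2):
    """Return up to n_new frequencies where the sampled gain exhibits
    the strongest dynamics: extreme peaks/valleys first, and the
    steepest slopes when no peak or valley is found."""
    slope = np.diff(gain) / np.diff(omega)          # finite differences
    flips = np.where(np.sign(slope[:-1]) != np.sign(slope[1:]))[0] + 1
    cand = [i for i in flips if omega[i] not in selected]
    if cand:
        cand.sort(key=lambda i: gain[i])            # rank by gain value
        picks = [cand[-1], cand[0]]                 # highest peak, lowest valley
    else:
        order = np.argsort(slope)
        picks = [order[-1] + 1, order[0] + 1]       # steepest up/down slopes
    out = []
    for i in picks:
        if omega[i] not in selected and omega[i] not in out:
            out.append(float(omega[i]))
    return out[:n_new]
```

For instance, on a gain curve with a single resonance, the first phase returns the resonant frequency, which is then added to the interpolation set.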

In practice, it is interesting to enrich the interpolation set with two points at each iteration. This avoids premature convergence of the approach by stimulating the appearance of new strong dynamics in the interpolant model.

3 Numerical illustration

To highlight the interest of the proposed Constructive Loewner approach (CLOE), it is applied to several models from COMPleib [3] considering minimum knowledge, i.e. only a common frequency interval of interest. The resulting relative error,


is compared to the one obtained with the Loewner model (referred to as coarse in the sequel) built with the same number of points but spread logarithmically over the interval of interest. Within CLOE, the search grid is discretised logarithmically with a varying number of samples, and several values of the tolerance are considered. The results are reported in figure 1.

Figure 1: Ratio of the CLOE and coarse relative errors for several models, whose names are reported in abscissa together with their dimension and the range of interpolation points found.

One can notice that the proposed heuristic is an improvement over a logarithmic selection of the interpolation points, as the resulting error is lower in all the considered cases. (Note that several points overlap, as the number of interpolation points found by CLOE evolves by steps.) This is partly luck, as such performances are not to be expected for the largest values of the tolerance (see the iterations on the LAH model thereafter). No clear trend appears when the dimension or the tolerance varies. However, when looking at the relative error of the reduced-order model in figure 2, one can see that a lower approximation error is reached when the tolerance is chosen sufficiently small. As illustrated thereafter, it is indeed important to stop the algorithm only when the interpolant model does not evolve anymore throughout the iterations.

Figure 2: Relative error of the interpolant model against the tolerance, for various discretisation sizes and all the considered test models.
Figure 3: Frequency responses of the LAH model (black) together with the interpolant model obtained with CLOE at various iterations (dashed blue). The first two iterations are displayed at the top and the last two at the bottom. Selected interpolation points are in red and candidate points in magenta.
Figure 4: Relative errors between the LAH model and the models obtained, respectively, with CLOE at each iteration and with the coarse grid with the same number of points. The stopping criterion (1) is also reported.

To illustrate the evolution of the model within CLOE, the interpolant model is shown together with the initial model in figure 3 at various iterations. The evolution of the relative approximation error throughout the iterations is reported in figure 4 together with the error of the corresponding coarse model and the value of the stopping criterion (1).

In figure 3, top left, the initial model is interpolated at the boundary of the frequency interval, and the candidate interpolation points are near the only resonance of the current interpolant model: one candidate point is the detected peak and the other one the most negative slope. During the next iteration, a peak and a valley are detected. The two last iterations show that the main peaks are reproduced by the interpolant model and that smaller dynamics are progressively being matched.

One can see in figure 4 that the approximation errors do not decrease monotonically as the number of interpolation points increases. Still, the stopping criterion appears to be relevant, as it follows the trend of the real approximation error reached by CLOE. The importance of the tolerance is highlighted here by the fact that, throughout the iterations, the error can be larger than with the coarse grid, especially when the tolerance is large.

4 Conclusion

This note details a heuristic approach for constructive Loewner interpolation. The number of interpolation points is gradually increased in order to improve the quality of the interpolant model while limiting the number of required evaluations of the underlying system. The choice of new interpolation points is the key of the approach. It relies solely on the interpolant model and aims at ensuring that any strong dynamic it contains is actually representative of the underlying system. These dynamics are identified in the frequency domain by looking for the peaks, valleys and large-slope areas of the singular value plot.

In spite of its simplicity, this heuristic has proven effective in determining an accurate model while limiting the number of data required from the underlying system. It opens interesting perspectives for modelling from parsimonious frequency-domain data, provided the latter can be selected.