
Stabilization and Variations to the Adaptive Local Iterative Filtering Algorithm: the Fast Resampled Iterative Filtering Method

Non-stationary signals are ubiquitous in real life. Many techniques have been proposed in the last decades which allow decomposing multi-component signals into simple oscillatory mono-components, like the groundbreaking Empirical Mode Decomposition technique and the Iterative Filtering method. When a signal contains mono-components with rapidly varying instantaneous frequencies, think, for instance, of chirps or whistles, it becomes particularly hard for most techniques to properly factor out these components. The Adaptive Local Iterative Filtering technique has recently gained interest in many applied fields of research for being able to deal with non-stationary signals presenting both amplitude and frequency modulation. In this work, we address the open question of how to guarantee a priori the convergence of this technique, and propose two new algorithms. The first method, called Stable Adaptive Local Iterative Filtering, is a stabilized version of the Adaptive Local Iterative Filtering that we prove to be always convergent. The stability, however, comes at the cost of higher computational complexity. The second technique, called Resampled Iterative Filtering, is a new generalization of the Iterative Filtering method. We prove that Resampled Iterative Filtering is guaranteed to converge a priori for any kind of signal. Furthermore, in the discrete setting, by leveraging the mathematical properties of the matrices involved, we show that its calculations can be drastically accelerated. Finally, we present some artificial and real-life examples to show the effectiveness and performance of the proposed methods.




1 Introduction

The analysis and decomposition of non-stationary signals is an active research direction in both Mathematics and Signal Processing. In the last decades many new techniques have been proposed. Among them, the Iterative Filtering (IF) algorithm [20] was proposed a decade ago as an alternative to the celebrated Empirical Mode Decomposition (EMD) technique [17] and its variants [28, 30, 25, 31]. The EMD and its variants, in fact, lacked a rigorous mathematical analysis, due to their use of a number of heuristic and ad hoc elements. Some results have been presented in the literature [15, 26, 16], but a complete and rigorous mathematical analysis is still missing nowadays.

The EMD-like methods are based on the iterative computation of the signal moving average via envelopes connecting its extrema. Computing the moving average allows one to split the signal into a small number of simple oscillatory components, called Intrinsic Mode Functions (IMFs), which are separated in frequency and almost uncorrelated [11]. The IF method follows the same structure as EMD, but with a key difference: the moving average is now obtained through an iterated convolutional filtering of the signal, with the aim of singling out all its non-stationary components, starting from the highest-frequency one.

The structure of the IF algorithm made a complete mathematical analysis of the method possible [8, 12, 14]. On the other hand, this method is “rigid”, in the sense that it can only extract IMFs which are amplitude modulated but almost stationary in frequency. This is a clear limitation if the signal contains chirps or whistles, that is, components with quickly changing instantaneous frequencies. For this reason, in [12] the authors proposed a generalization of IF called Adaptive Local IF (ALIF). ALIF no longer suffers from the rigidity of IF, and can extract IMFs containing rapidly varying instantaneous frequencies. However, this new technique loses most of the mathematical background of IF. Even though the algorithm has gained visibility since its introduction five years ago, see for instance [1, 2, 3, 4, 5, 18, 19, 21, 22, 23, 29], an initial mathematical analysis has only recently been developed [10, 13], and much more work on extensions, variations and stabilization methods is currently ongoing, see, for instance, [6].

Due to the missing theoretical background of the ALIF method, in this paper we introduce two new algorithms for which such an analysis is possible. The first, called Stable ALIF (SALIF), is always convergent, even in the presence of noise, but it comes with an increased computational cost with respect to ALIF. The second, called Resampled IF (RIF), is in fact a modification of the IF algorithm that preserves the convergence properties of IF while offering the same flexibility as ALIF. Furthermore, in the discrete case, RIF can be made highly computationally efficient via the FFT computation of the convolutions, in what we call the Fast Resampled IF (FRIF) method.

The rest of this paper is structured as follows. Section 2 reviews the IF and ALIF methods and introduces the new SALIF method. Here we compare their features, stressing their strengths and weaknesses. Section 3 is dedicated to the RIF algorithm, its analysis and properties, and its acceleration via FFT, in what we call the FRIF technique. In this section we show how RIF combines the convergence and stability of IF with the flexibility of ALIF, and how it can be made computationally efficient. In Section 4 we compare these algorithms on artificial and real data, reporting the efficiency and accuracy of each method. Eventually, in Section 5, we draw conclusions and suggest future lines of research.

2 Iterative Filtering based Methods

Throughout this document, a signal is intended to be a real function $s:\mathbb{R}\to\mathbb{R}$, and we study its behaviour in the reference interval $[0,1]$. Outside this interval the signal is usually not known, so we have to impose some boundary conditions, discussed for example in [9] and [24]. In particular, in [24], the authors show how any signal can be pre-extended and made periodical at the boundaries. Therefore, from now on, for simplicity and without loss of generality, we will assume that the signals to be decomposed are always periodical at the boundaries.

The Iterative Filtering (IF) method mimics the EMD algorithm in the application of a moving average $\mathcal{L}(s)$ that captures the main trend of the signal $s$, and allows us to decompose it into simple IMF components. Both the EMD and IF algorithms extract the first IMF through the iteration

$$s_{n+1} = s_n - \mathcal{L}(s_n), \qquad \mathrm{IMF}_1 = \lim_{n\to\infty} s_n, \qquad s_0 = s. \tag{1}$$

Repeating iteratively the same procedure on the remainder $s - \mathrm{IMF}_1$, we can extract all the IMFs until the remainder becomes a trend signal, meaning that it possesses at most two extrema.

The difference between these two algorithms is that, while for EMD the moving average operator changes at each iteration and depends completely on the shape of the given signal, in IF the moving average $\mathcal{L}(s)$ can be rewritten as the convolution of $s$ with what is called a filter $w$. Here a filter is an even, nonnegative, bounded and measurable real function with compact support and unit mass, meaning $\int_{\mathbb{R}} w(t)\,dt = 1$.
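To make the convolutional moving average concrete, here is a minimal sketch in Python; the names `triangular_filter` and `moving_average` are ours, not from the paper, and the triangular window is just one admissible choice of filter.

```python
import numpy as np

def triangular_filter(half_len):
    # Even, nonnegative weights with unit mass: a discrete triangular window.
    t = np.arange(-half_len, half_len + 1)
    w = (half_len + 1.0) - np.abs(t)
    return w / w.sum()

def moving_average(s, w):
    # Periodic (circular) convolution of the signal s with the filter w,
    # i.e. a discrete version of the IF moving average L(s).
    n = len(s)
    half = len(w) // 2
    offsets = np.arange(-half, half + 1)
    out = np.empty(n)
    for k in range(n):
        out[k] = s[(k + offsets) % n] @ w   # periodic boundary conditions
    return out

# One step of the iteration: s - L(s) captures the fluctuation part.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
s = np.sin(2 * np.pi * 20 * x) + 3.0        # fast oscillation plus a trend
fluctuation = s - moving_average(s, triangular_filter(25))
```

Since the filter has unit mass and the convolution is periodic, the mean of the signal is preserved by the moving average, so the fluctuation part has zero mean.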

A generalization of the IF method is called Adaptive Local Iterative Filtering (ALIF), and it utilizes as moving average the convolution with a family of filters $w_x$, whose support is contained in $[-\ell(x), \ell(x)]$, i.e. it varies with $x$. Therefore, the moving average operator can be written as

$$\mathcal{L}(s)(x) = \int_{-\ell(x)}^{\ell(x)} s(x+t)\, w_x(t)\, dt. \tag{2}$$

Following [12], we can rewrite the same expression as

$$\mathcal{L}(s)(x) = \int_{-1}^{1} s\big(x + \ell(x)\, z\big)\, w(z)\, dz, \tag{3}$$

where $w$ is a filter with constant support in $[-1, 1]$ and $\ell(x)$ is a measurable function. In the following subsections we report the most common choice for the filters, and a description of the resulting method.

2.1 Linear ALIF

When we talk about the ALIF method, we usually refer to Linear ALIF. After having fixed a filter $w$ with support $[-1, 1]$, and a positive “length” function $\ell(x)$, the linear ALIF method prescribes the iteration

$$s_{n+1}(x) = s_n(x) - \int_{-\ell(x)}^{\ell(x)} s_n(x+t)\, \frac{1}{\ell(x)}\, w\!\left(\frac{t}{\ell(x)}\right) dt. \tag{4}$$

Notice that $w_x(t) := \frac{1}{\ell(x)}\, w\!\left(\frac{t}{\ell(x)}\right)$ is a filter with support $[-\ell(x), \ell(x)]$ for every $x$.

Given now a signal $s$, we can compute a length function $\ell(x)$, which usually depends on the relative positions of the local extrema of $s$ if the signal does not contain noise, and apply the iteration in (1) with the appropriate filter. Repeating iteratively the same procedure on the remainder, we obtain a decomposition of the signal into IMFs. Notice that $\ell(x)$ changes after we identify each different IMF. Here we report the resulting algorithm.

  IMFs = ∅
  initialize the remaining signal r = s
  while the number of extrema of r is > 2 do
     for each x compute the length function ℓ(x), depending on r
     set s_0 = r
     while the stopping criterion is not satisfied do
        s_{n+1} = s_n − L(s_n)
     end while
     IMFs = IMFs ∪ {s_n},  r = r − s_n
  end while
  IMFs = IMFs ∪ {r}
Algorithm 1 (ALIF Algorithm)

The operation $s \mapsto s - \mathcal{L}(s)$ is designed to capture the fluctuation part of the signal, which usually has high frequency. The operation is iterated until a stopping criterion is satisfied, usually based on the norm of the difference $s_{n+1} - s_n$, or on the number of iterations themselves. For more details on the stopping criterion, we refer the interested reader to [12, 20]. IMFs are thus extracted from the signal until it becomes just a trend signal with two or fewer extrema. Since the sum of all the IMFs and the trend signal returns the original signal, the procedure can effectively be called a decomposition.
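A minimal illustrative sketch of the inner loop with a norm-based stopping criterion (our own implementation, not the paper's reference code; the moving average is computed as a periodic convolution in Fourier space):

```python
import numpy as np

def padded_filter(n, half_len):
    # Zero-padded, periodically wrapped triangular filter of length n.
    # The triangular window is a box convolved with itself, so its DFT is
    # real and nonnegative, which guarantees convergence of the iteration.
    t = np.arange(-half_len, half_len + 1)
    vals = (half_len + 1.0) - np.abs(t)
    vals = vals / vals.sum()
    w = np.zeros(n)
    w[t % n] = vals
    return w

def extract_imf(s, half_len, delta=1e-12, max_iter=1000):
    # Iterate s <- s - L(s); stop when the update s_{n+1} - s_n = -L(s_n)
    # is small relative to s_n (a common norm-based stopping criterion).
    w_hat = np.fft.fft(padded_filter(len(s), half_len)).real
    for _ in range(max_iter):
        ls = np.fft.ifft(w_hat * np.fft.fft(s)).real   # moving average L(s)
        if np.linalg.norm(ls) <= delta * np.linalg.norm(s):
            break
        s = s - ls
    return s
```

On a signal made of a constant trend plus a pure oscillation, the first iteration removes the trend (the filter has unit mass, so its Fourier symbol is 1 at frequency zero), and the oscillation survives as the extracted component.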

Regarding the identification of the length function in signals containing noise, we observe that it is always possible to first run a time-frequency representation (TFR) algorithm, see [27] for a comprehensive review of modern TFR techniques, and then use the acquired information to design the optimal $\ell(x)$. This procedure is really important for the ALIF algorithm, but it is also a research topic per se. This is why, from now on, we assume that the length function can be computed accurately, and we postpone the analysis of how to actually compute it to a future work.

Conceptually, the ALIF method separates the non-stationary components of the signal, even with varying amplitudes, starting from the highest frequencies. For example, on real data, the method first extracts high-frequency noise IMFs, and then starts to produce clean components. The main feature of the produced IMFs is that their instantaneous frequencies are pointwise sorted in decreasing order. In formulae, if $f_i(x)$ is the instantaneous frequency of the $i$-th IMF at the point $x$, we have that

$$f_1(x) \ge f_2(x) \ge \dots \qquad \forall x.$$
The method, albeit very powerful and already utilized in a variety of applications, still lacks a theoretical analysis proving the convergence of (4), except in a few notable cases [6, 10, 12]. In the next sections, we report some of the available convergence results for the discrete version of the algorithm.

2.2 Discrete ALIF and Stabilization

Usually, in a discrete setting, a signal is given as a vector of sampled values

$$s = (s_0, s_1, \dots, s_{n-1})^T,$$

where $s_k = s(x_k)$ and $x_k = k/n$ for $k = 0, \dots, n-1$. As a consequence, one can discretize the relation (2) with a simple quadrature formula. In turn, this lets us write the sampling vector of $\mathcal{L}(s)$ as a matrix-vector multiplication $Ws$, where, if we assume all the indexes start from zero, the rows of the matrix $W$ contain the quadrature weights of the rescaled filters $w_{x_k}$.
In the Linear ALIF paradigm, we fix a filter $w$ and choose a length function $\ell(x)$ depending on the signal, to produce our family of filters $w_x$. The resulting algorithm is reported here.

  initialize the remaining signal r = s
  while the number of extrema of r is > 2 do
     compute ℓ(x) and the matrix W
     set s_0 = r
     while the stopping criterion is not satisfied do
        s_{n+1} = s_n − W s_n
     end while
     IMFs = IMFs ∪ {s_n},  r = r − s_n
  end while
Algorithm 2 (Discrete ALIF Algorithm)
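As an illustrative sketch of how the matrix $W$ can be assembled (our own construction, using a triangular filter; row $k$ holds the filter rescaled to the local length and renormalized to unit mass):

```python
import numpy as np

def alif_matrix(n, length):
    # Row k of W holds a triangular filter of half-width length[k] samples,
    # normalized to unit mass and wrapped periodically, so that (W s)[k]
    # is the local moving average of s around the point k.
    W = np.zeros((n, n))
    for k in range(n):
        h = int(length[k])
        t = np.arange(-h, h + 1)
        row = (h + 1.0) - np.abs(t)       # triangular filter, support [-h, h]
        row = row / row.sum()             # unit mass
        W[k, (k + t) % n] = row
    return W
```

A varying length function produces a row-stochastic but non-symmetric $W$, which is exactly the situation where the spectral analysis of the following paragraphs is needed.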

From the algorithm, it is evident that the convergence of the internal loop depends only on the spectral properties of the matrix $W$. In fact, since $s_{n+1} = (I - W)^{n+1} s_0$, we find that a necessary condition for convergence is

$$|1 - \lambda| \le 1 \qquad \text{for every eigenvalue } \lambda \text{ of } W. \tag{7}$$

If the zero eigenvalue of $W$ has equal geometric and algebraic multiplicities, and it is the only eigenvalue for which $|1 - \lambda| = 1$, then the condition is also sufficient. From the same analysis, one can notice that the algorithm actually produces a projection of the signal on the approximated null space of $W$.

Notice that if $D$ is the diagonal matrix of the row sums of $W$ and it is invertible, then $W$ and $D^{-1}W$ have the same null space, meaning that substituting $W$ with $D^{-1}W$ in the algorithm does not significantly change the output. As a consequence, we can always suppose that $W$ is a stochastic matrix, so that we do not have to worry about eigenvalues of large modulus. Nonetheless, we still cannot rule out negative or complex eigenvalues which do not fulfil the relation (7). Recent studies [10, 6] show that, for $n$ large and continuous functions $w$, $\ell$, almost all the eigenvalues of the matrix $W$ are real and nonnegative, but this is still not enough to establish the convergence of the method. Moreover, it has been ascertained experimentally that such cases may arise, especially with a fast-changing function $\ell(x)$.

A simple way to stabilize the method is to choose $\widehat{W} = WW^T$, so that $\widehat{W}$ is a nonnegative matrix that is also positive semidefinite, with all the eigenvalues bounded by $\|W\|_2^2$. Notice that we can also use $\widehat{W}/c$ instead of $\widehat{W}$, where $c = \|W\|_2^2$, or in general any constant $c$ satisfying $c \ge \rho(\widehat{W})$, so that all the eigenvalues lie in $[0, 1]$. We call the resulting method Stable ALIF (SALIF).

  initialize the remaining signal r = s
  while the number of extrema of r is > 2 do
     compute ℓ(x) and the matrix Ŵ = W Wᵀ
     set s_0 = r
     while the stopping criterion is not satisfied do
        s_{n+1} = s_n − Ŵ s_n
     end while
     IMFs = IMFs ∪ {s_n},  r = r − s_n
  end while
Algorithm 3 (Stable Discrete ALIF Algorithm)

The method is called stable since a perturbation of the matrix does not prevent the convergence of the inner loop. Moreover, we will show in the experiments of Section 4 that SALIF is able to produce more accurate solutions than the other methods.
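The key point behind the stabilization is elementary: the product of any real matrix with its own transpose is symmetric and positive semidefinite, hence all its eigenvalues are real and nonnegative regardless of the spectrum of the original matrix. A quick numerical illustration of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random row-stochastic, non-symmetric matrix, mimicking an ALIF operator:
# its eigenvalues may be negative or complex.
W = rng.random((40, 40))
W = W / W.sum(axis=1, keepdims=True)

# The stabilized operator: symmetric by construction, and positive
# semidefinite since x' (W W') x = |W' x|^2 >= 0 for every x.
S = W @ W.T
eigs = np.linalg.eigvalsh(S)   # S is symmetric, so eigenvalues are real
```

For such an $S$, the condition $|1-\lambda|\le 1$ reduces to bounding the largest eigenvalue, which a constant rescaling always achieves.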

The algorithm, though, comes with an increased computational cost with respect to ALIF, mainly due to two factors.

  • The iterative step in the SALIF algorithm takes at least double the time of the corresponding step in the ALIF algorithm. Since the number of iterations is usually much smaller than $n$, even computing the stabilized matrix beforehand does not improve the speed.

  • The order of the smallest eigenvalues of the stabilized matrix is approximately the square of that of the smallest eigenvalues of $W$. The algorithm thus requires more iterations to attain the same accuracy as ALIF, since it must separate eigenspaces that are now closer.

A different way to stabilize the method is to take $\ell(x)$ constant, producing a much faster algorithm, namely the IF algorithm, whose spectrum of applications is however more limited.

2.3 IF and Discrete IF

When we talk about the IF method, we refer to the linear ALIF method with a constant length function $\ell(x) \equiv L$ in (4), or equivalently, to (3) with $\ell$ constant. The IF method only separates IMF components of the signal which are amplitude modulated but quasi-stationary in frequency, starting from the highest frequencies. Nevertheless, it has been proved [15] that in this case the iterations (4) always converge whenever $w$ is a filter with nonnegative Fourier transform. The condition is satisfied, for example, by $w = v * v$, where $v$ is a generic filter and $*$ is the convolution operator.

In the discrete setting, the IF algorithm has the advantage of a fast implementation based on the FFT, in what is called Fast Iterative Filtering (FIF), and of an advanced theoretical analysis [8, 14]. Recall that we only know the signal on the interval $[0,1]$, so we can always suppose that the original signal is $1$-periodic (for example, by reflecting the signal on both sides and making it decay [24]). We can thus rewrite the moving average (2) as

$$\mathcal{L}(s)(x) = \int_{-L}^{L} s(x+t)\, \frac{1}{L}\, w\!\left(\frac{t}{L}\right) dt, \tag{8}$$

and discretize it on a regular grid of $[0,1]$. Here, the integral is always well-defined, since the filter has compact support. Moreover, $L$ is inversely proportional to the target frequency of the extracted IMF, and $L \ge 1$ usually indicates that we already have a trend signal, so we always suppose $L < 1$.

Following the same steps as in the ALIF algorithm, we find that the discretized moving average can be expressed through a Hermitian circulant matrix $W$ whose first row contains the sampled and normalized filter. The sampling vector of $\mathcal{L}(s)$ on the points $x_k$ can thus be rewritten as the matrix-vector multiplication

$$\mathcal{L}s = Ws. \tag{9}$$

The resulting algorithm is thus the same as Algorithm 2, but where $W$ is Hermitian and circulant. The IF method is consequently much faster than the ALIF algorithm, since the multiplication $Ws$ can be performed very efficiently through an FFT. Actually, in [14] we can find an even faster implementation, the so-called FIF algorithm, and the proof that $W$ is also positive semidefinite.
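Since a circulant matrix is diagonalized by the DFT, the product $Ws$ can be evaluated in $O(n \log n)$ without ever forming the matrix. A small self-contained check of this identity (names are ours):

```python
import numpy as np

def circulant_matvec(first_row, v):
    # Multiply the circulant matrix C with C[i, j] = first_row[(j - i) % n]
    # by the vector v using the FFT: C v is the circular convolution of v
    # with the first *column* of C, which the DFT diagonalizes.
    first_col = np.roll(first_row[::-1], 1)
    return np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(v)).real
```

The same identity is what turns each inner-loop step of the discrete IF algorithm into two FFTs and one element-wise product.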

Keeping in mind that, as in ALIF, we can always multiply $W$ by a diagonal matrix and make it stochastic, we have the following result.

Lemma 1 ([14, Theorem 1, Corollary 3])

Given a double-convoluted filter $w = v * v$, then for the IF operator $\mathcal{L}$ in (8) the limit

$$\lim_{n\to\infty} (I - \mathcal{L})^n s$$

converges for any function $s$. Moreover, if the IF matrix $W$ in (9) is generated by a double-convoluted filter, then the limit

$$\lim_{n\to\infty} (I - W)^n s$$

converges for any vector $s$.

To summarize, we have

  • the IF and FIF algorithms always converge and are very fast, but cannot capture non-stationary components with quickly varying frequencies,

  • the ALIF algorithm is flexible enough to extract fully non-stationary components, but its convergence is not guaranteed,

  • the SALIF algorithm is always convergent, and its output is more accurate than that of the ALIF algorithm, but it is very slow.

In the next section, we show how to design an alternative method, that is flexible enough to perform non-stationary analysis on the signals, but at the same time fast and provably convergent.

3 Resampled Iterative Filtering

The linear ALIF method makes use of a length function $\ell(x)$ to locally stretch a fixed filter so that the convolution with the signal smooths out the highly oscillatory behaviour. The idea behind the Resampled Iterative Filtering (RIF) algorithm is to set a fixed length for the filter and instead modify the signal through a global resampling function. In a sense, we want to locally stretch the signal, making the component of highest frequency approximately stationary, so that we are able to identify it through the fast IF algorithm.

Given a resampling function $\phi$ that is increasing and regular enough, the moving average for the RIF method coincides with the IF one applied on $s \circ \phi$, as

$$\mathcal{L}(s)(x) = \int_{-L}^{L} s\big(\phi(\phi^{-1}(x) + t)\big)\, \frac{1}{L}\, w\!\left(\frac{t}{L}\right) dt,$$

where we assume that the resampled signal $s \circ \phi$ is periodic. If we consider the first-order expansion of $\phi$, namely $\phi(\phi^{-1}(x) + t) \approx x + t\,\phi'(\phi^{-1}(x))$, and after the change of variable $u = t\,\phi'(\phi^{-1}(x))$, we have

$$\mathcal{L}(s)(x) \approx \int_{-\ell(x)}^{\ell(x)} s(x+u)\, \frac{1}{\ell(x)}\, w\!\left(\frac{u}{\ell(x)}\right) du,$$

that is analogous to the linear ALIF moving average in (4), where, equivalently,

$$\ell(x) = L\, \phi'\big(\phi^{-1}(x)\big). \tag{12}$$

With (12), we now have a way to derive the resampling function $\phi$ from the length $\ell(x)$. The full RIF algorithm is reported as Algorithm 4.

  IMFs = ∅
  initialize the remaining signal r = s
  while the number of extrema of r is > 2 do
     compute ℓ(x) and derive the resampling φ and the resampled signal r ∘ φ
     set s_0 = r ∘ φ
     while the stopping criterion is not satisfied do
        s_{n+1} = s_n − L(s_n)
     end while
     IMFs = IMFs ∪ {s_n ∘ φ⁻¹},  r = r − s_n ∘ φ⁻¹
  end while
Algorithm 4 (Resampled IF Algorithm)

From the algorithm it is evident that, after the resampling, the steps are the same as in the IF algorithm. In fact, we always extract almost stationary IMFs from the resampled signal, and then apply the inverse resampling to obtain the corresponding IMFs for the original signal. Moreover, we point out that $\phi$ depends on $\ell(x)$, and hence on the signal, so it must be recomputed every time we want to extract a new component. This observation is also enough to show that the internal loop always converges to some IMF. In the next section we see how these properties carry over to the discrete case.
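As an illustration of the resampling idea, here is a toy construction of ours with an analytically known instantaneous frequency (not the paper's procedure for estimating it): a linear chirp becomes a quasi-stationary tone after pulling it back along its own phase.

```python
import numpy as np

n = 4000
x = np.linspace(0.0, 1.0, n, endpoint=False)
# Chirp with instantaneous frequency f(x) = 10 + 40 x; its phase counts
# cycles(x) = 10 x + 20 x^2 oscillations up to time x.
s = np.sin(2 * np.pi * (10 * x + 20 * x**2))

# Resampling function phi = cycles^{-1}: a uniform grid y in the new
# variable is mapped back to the original time axis, so that s(phi(y))
# performs exactly one oscillation per unit of y.
y = np.linspace(0.0, 30.0, n, endpoint=False)          # cycles(1) = 30
phi = (-10.0 + np.sqrt(100.0 + 80.0 * y)) / 40.0       # inverse of cycles
resampled = np.interp(phi, x, s)                       # quasi-stationary tone
```

The resampled signal is, up to interpolation error, the stationary tone $\sin(2\pi y)$, which IF can extract; mapping back through $\phi^{-1}$ then recovers the chirp.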

Notice that RIF is actually a particular ALIF method, since its moving average can be written in the form (3), with the length function $\ell(x)$ given by (12). From this relation, we can say that Linear ALIF is a first-order approximation of RIF, and since RIF is a convergent method, we may ask whether it produces the same output as Linear ALIF. The answer is provided in the following theorem.

Theorem 1

The RIF method produces the same output as the Linear ALIF algorithm only when $\ell(x)$ is a constant function, i.e. when the linear ALIF algorithm reduces to IF.

The proof follows from the observation that the derivation of equation (3) holds true exactly only if $\phi'$ is constant, meaning that $\ell(x)$ is constant for every $x$.

3.1 Fast Resampled Iterative Filtering

First of all, we review a possible discrete implementation of RIF. One way is to discretize the IF moving average on the resampled signal $s \circ \phi$. Notice that $s \circ \phi$ has as domain the resampled interval, so we need to discretize it on a regular grid of that interval. Recall that, in the IF algorithm, a constant length $L \ge 1$ indicates that we already have a trend signal. This shows that we can safely assume $L < 1$.

We now extend the signal cyclically on the real line, meaning that $s(x + k) = s(x)$ for every $x$ and every integer $k$. The quadrature rule on the discretization points then yields a formula that coincides with the IF moving average with length $L$, and can be expressed through a Hermitian circulant matrix $W$ whose first row contains the sampled filter. The moving average thus becomes

$$\mathcal{L}s = Ws,$$

where $W$ is still a Hermitian and circulant matrix, so that the matrix-vector multiplication can be performed efficiently through an FFT. In particular,

$$Ws = \mathrm{iDFT}\big(\mathrm{DFT}(w) \circ \mathrm{DFT}(s)\big),$$

where $\circ$ stands for the Hadamard (or element-wise) product between vectors, $w$ denotes the first column of $W$, and DFT, iDFT stand for the Discrete Fourier Transform and its inverse, respectively. Moreover, since

$$(I - W)^m s = \mathrm{iDFT}\big((\mathbf{1} - \mathrm{DFT}(w))^{\circ m} \circ \mathrm{DFT}(s)\big),$$

and since the stopping criterion can be checked directly on the transformed vectors, we can further accelerate the method by computing the DFTs of $w$ and $s$ and the iDFT outside the loop, thus avoiding iterated computations of Fourier transforms.
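The whole inner loop thus collapses into element-wise operations in the Fourier domain. A sketch of ours, under the assumption that the filter's DFT is real (as happens for a symmetric filter):

```python
import numpy as np

def frif_inner(s, w_hat, m):
    # m steps of s <- s - W s for a circulant W with real symbol w_hat:
    # (I - W)^m s = iDFT( (1 - w_hat)^m ∘ DFT(s) ).
    # Only one DFT and one iDFT are needed, independently of m.
    return np.fft.ifft((1.0 - w_hat) ** m * np.fft.fft(s)).real
```

The closed form agrees with the step-by-step iteration, while replacing $m$ matrix-vector products by a single Hadamard power.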

The resulting method is reported in Algorithm 5.

  IMFs = ∅
  initialize the remaining signal r = s
  while the number of extrema of r is > 2 do
     compute ℓ(x), the resampling function φ, the constant L and the matrix W
     compute the resampled vector through interpolation of r on the points φ(y_k), where y_k is a regular grid
     set s_0 equal to the resampled vector and compute DFT(s_0)
     while the stopping criterion is not satisfied do
        DFT(s_{n+1}) = (1 − DFT(w)) ∘ DFT(s_n)
     end while
     compute the IMF through interpolation of iDFT(DFT(s_n)) on the points φ⁻¹(x_k), where x_k is the original grid
     IMFs = IMFs ∪ {IMF},  r = r − IMF
  end while
Algorithm 5 (Fast Resampled Iterative Filtering)

Notice that, while the internal loop only consists of Hadamard multiplications among vectors, and its convergence properties can be analysed with the same tools used for the IF algorithm [8, 14], in the outer loop we perform interpolations that may lead to a loss in accuracy of the method. We can thus adopt a spline interpolation to mitigate the accuracy loss, and even in this case the computational cost of the outer loop is still $O(n \log n)$ operations due to the Fourier transforms.

As for the previous algorithms, the matrix $W$, and thus the vector $\mathrm{DFT}(w)$, can be multiplied by a constant to upper bound its eigenvalues, and from Lemma 1 one can state an analogous convergence result.

Corollary 1

Given a double-convoluted filter $w = v * v$, the inner loop of the RIF Algorithm 4 converges for any initial function. Moreover, if the matrix $W$ is generated by a double-convoluted filter, then in the FRIF Algorithm 5 the limit

$$\lim_{n\to\infty} (I - W)^n s$$

converges for any vector $s$.

We have seen that the FRIF algorithm is provably convergent, and that its computational time is comparable with that of the FIF method. In the numerical examples we will also show that, empirically, it produces sensible decompositions; but first, let us address another property of the method.

3.2 Anti-Aliasing Property

In the discrete setting, the resampling of the signal may in theory come with an undersampling of the highest frequencies, leading to aliasing effects. Here we show that in the FRIF algorithm, this is actually not a problem.

Suppose that the signal can be split into components $s = s_1 + \dots + s_m$, where $s_1$ has the highest instantaneous frequency among all the components. In the FRIF algorithm we choose the resampling $\phi$ derived, through (12), from the length function associated with $s_1$. The resampled signal has thus a different domain, but in the discrete setting we treat it as a signal over $[0,1]$, so we are actually working with the components $\tilde s_i = s_i \circ \phi$. The signal presents now a new decomposition in components $\tilde s_1, \dots, \tilde s_m$, where, if $f_i(x)$ was the instantaneous frequency of $s_i$, then the respective frequency of $\tilde s_i$ is $f_i(\phi(y))\,\phi'(y)$. Notice that the function $\phi$ is chosen so that $\tilde s_1$ is now approximately a stationary signal, so

$$f_1(\phi(y))\,\phi'(y) \approx c$$

for some constant $c$ and for every $y$. As a consequence, the instantaneous frequency of $\tilde s_i$ is approximately

$$f_i(\phi(y))\,\phi'(y) \approx c\,\frac{f_i(\phi(y))}{f_1(\phi(y))},$$

that is surely not larger than $c$. Moreover, since $\phi$ is increasing and $f_1(x) \ge f_i(x)$ for every $x$, then

$$f_1(\phi(y))\,\phi'(y) \ge f_i(\phi(y))\,\phi'(y),$$

meaning that $\tilde s_1$ still has the biggest instantaneous frequency among the $\tilde s_i$. This proves that the resampling does not create artificial high-frequency components, so the FRIF algorithm does not suffer from aliasing problems.

3.3 Avoiding Interpolation

As pointed out before, the interpolations may introduce a loss in accuracy on the output of Algorithm 5. One can though formulate a different, but equivalent, version of the continuous algorithm that does not require a resampling of the signal: starting again from (3), one can apply the change of variable induced by $\phi$ directly inside the integral, so that the moving average acts on the original, non-resampled signal. As a consequence, we can discretize the resulting relation by applying a quadrature rule on the points $\phi(y_k)$, which coincides with multiplying the discretized signal by a matrix of the form $T = KD$, where $K$ collects the sampled filter values and $D$ is the diagonal matrix of the quadrature weights $\phi'(y_k)$. Notice that $D$ is positive definite, since $\phi' > 0$ from (12), and $K$ is symmetric since the filter is an even function. If we call $\widehat{K}$ the matrix $K$ in the case of a constant length function, then by Corollary 3 of [14], $\widehat{K}$ is positive semidefinite. Since, for a big enough $n$, the matrix $K$ is approximated up to an arbitrarily small error by a principal submatrix of the matrix $\widehat{K}$, we can conclude that $K$ is also positive semidefinite. As a consequence, $T = KD$ is similar to the positive semidefinite matrix $D^{1/2} K D^{1/2}$, and all its eigenvalues are real and nonnegative. Eventually, as in the preceding algorithms, the matrix can be multiplied by a constant so that its eigenvalues are upper bounded, for example, by 1, and the method becomes provably convergent.

The resulting algorithm is thus equivalent, in its continuous version, to Algorithm 4, and in its discrete version it avoids the need to interpolate the signal twice per IMF. Moreover, its internal loop is provably convergent, and it presents the same flexibility properties as ALIF.

At the same time, though, the matrix is not circulant, so we lose the fast implementation that was possible in Algorithm 5. For this reason, we do not test this version of the RIF algorithm in the following numerical experiments.

4 Numerical Experiments

In this section we show and compare the performances of all the reviewed techniques. In order to study the signals and their decompositions in the time-frequency domain, we rely on the so-called IMFogram, a recently developed algorithm [7] which represents the frequency content of all the IMFs. The IMFogram proves to be a robust, fast and reliable way to obtain the time-frequency representation of a signal, and it has been shown to converge, in the limit, to the well-known spectrogram based on the FFT [11].

The following tests have been conducted using MATLAB R2021a installed on a 64–bit Windows 10 Pro computer equipped with a 11th Gen Intel Core i7-1165G7 at 2.80GHz processor and 8GB RAM. All tested examples and algorithms are freely available at

4.1 Example 1

We consider an artificial signal, plotted in the left panel, bottom row, of Figure 1, which contains two nonstationary components with exponentially changing instantaneous frequencies, plus a trend. The two components, the trend, and the signal are plotted in the left panel of Figure 1, whereas the instantaneous frequencies of the two components are shown in the central panel.

Figure 1: Example 1. Left panel: the two components (first and second row), the trend (third row), and the signal (bottom row). Central panel: exponential instantaneous frequencies of the two components. Right panel: relative error in 2-norm between the ground truth and the IMFs produced by the ALIF, SALIF, and FRIF algorithms.

In Table 1 we report the computational time required by ALIF, SALIF and FRIF with a fixed stopping criterion. In the same table we summarize the performance of the three techniques in terms of the inner-loop iterations required to produce the two IMFs, and of the relative error, measured as the ratio between the 2-norm of the difference between the computed IMF and the corresponding ground truth, and the 2-norm of the ground truth itself.

Table 1: Performance of the various techniques when applied to Example 1, measured as relative errors in 2-norm and number of iterations.

From the results in Table 1 it is clear that FRIF converges quickly to a really accurate solution. In fact, it takes less than a second to produce a decomposition whose relative error is orders of magnitude smaller than the ones produced by the ALIF and SALIF methods. Furthermore, the ALIF and SALIF decompositions require more than 16 and 26 seconds, respectively, to converge. This is confirmed by the results shown in the right panel of Figure 1, where we compare the 2-norm relative errors obtained using the ALIF, SALIF, and FRIF algorithms at subsequent steps of the inner loops when we remove the stopping condition. ALIF initially tends toward the right solution: at 35 steps the relative error reaches its minimum, but after that the instabilities of the method show up and drive the solution far away from the right one. SALIF, instead, is clearly convergent; in fact, the solution moves steadily toward the exact one. However, the SALIF convergence rate is small, as shown by the slowly decaying relative error, which is still relatively large after 500 inner-loop steps. Finally, FRIF quickly converges to a really good approximation of the right solution, with a minimal error at 73 steps. After this step, the relative error starts growing again due to the chosen stopping criterion. It is important to remember, in fact, that in general the ground truth is not known. This is the reason why the stopping criterion adopted in these techniques does not rely on knowledge of the ground truth. Hence, as a consequence, FRIF, ALIF and SALIF do not necessarily stop when the actual best approximation of the ground truth is achieved. For example, one can see that the ALIF algorithm does not stop in the computation of the second IMF of the signal. Studying what would be an ideal stopping criterion and how to tune it properly is outside the scope of this work.

4.2 Example 2

In this second example, we start from an artificial signal which contains two nonstationary components and a trend, sampled over 8000 points. The two components, the trend, and the signal are plotted in the left column of Figure 2, whereas the instantaneous frequencies of the two components are shown in the right panel.

Figure 2: Example 2. Left panel: the two components (first and second row) and the signal (bottom row). Right panel: exponential instantaneous frequencies of the two components.
Figure 3: Example 2. Difference between the ground truth and the derived decomposition via ALIF (left), SALIF (central), FRIF (right).

In Table 2 we report the performance of the ALIF, SALIF and FRIF techniques. In Figure 3 we show the differences between the IMFs produced by the different methods and the known ground truth. It is evident, both from the table and the figure, that the proposed FRIF method outperforms the other approaches from both the computational and the accuracy point of view.

Table 2: Example 2, performance of ALIF, SALIF and FRIF, measured as relative errors in 2-norm and number of iterations.

4.3 Example 3

In this example we show the robustness of the proposed FRIF approach to noise. To do so, we consider the signal studied in Example 2 and perturb it with additive Gaussian noise. In the left panel of Figure 4 we plot the perturbed signal when the signal-to-noise ratio (SNR) is 8.6 dB. In the right panel we report the decomposition produced by FRIF. It is evident that the method properly separates the random perturbation, in the first row, from the deterministic components in the following three rows.

Figure 4: Example 3. Left panel, the noisy signal compared with the noiseless signal defined in Example 2. The SNR is around 8.6 dB. Right panel, the IMF decomposition derived by FRIF.

This result is confirmed even if we decrease the SNR to 1.3 dB, left panel of Figure 5. It is evident from this figure that this level of noise is quite high. Nevertheless, the FRIF method still proves able to separate the deterministic signal from the additive Gaussian contribution, as shown in the right panel of Figure 5.

Figure 5: Example 3. Left panel, the noisy signal with SNR around 1.3 dB compared with the noiseless signal of Example 2. Right panel, the corresponding FRIF decomposition compared with the ground truth.
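For reproducibility of this kind of experiment, additive Gaussian noise at a prescribed SNR can be generated as in the following sketch; the helper name and its parameters are illustrative, not part of the FRIF code.

```python
import numpy as np

def add_noise_snr(signal, snr_db, seed=None):
    """Perturb `signal` with white Gaussian noise whose power is scaled so
    that the resulting SNR, in dB, matches `snr_db`. Illustrative helper."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal**2)
    p_noise = p_signal / 10.0**(snr_db / 10.0)   # SNR = 10*log10(Ps / Pn)
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise
```

For instance, calling it with `snr_db=8.6` on the Example 2 signal reproduces the noise level of the left panel of Figure 4, while `snr_db=1.3` reproduces that of Figure 5.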

4.4 Example 4

We conclude the numerical section with an example based on a real-life signal. We consider the recording of the sound emitted by a bat, shown in the left panel of Figure 6. In the central panel, we show the associated time-frequency plot obtained using the IMFogram [7]. From this plot we observe that the signal appears to contain three main simple oscillatory components which present rapid changes in frequency. These are classical examples of so-called chirps. By using a curve extraction method, it is possible to derive from the IMFogram the instantaneous frequency curves plotted in the right panel of Figure 6. As briefly mentioned earlier, the identification of these instantaneous frequency curves is of fundamental importance for the proper functioning of FRIF, but it is also a research topic per se. In this work, we assume that they can be computed accurately, and we postpone the analysis of how to compute them in a robust and accurate way to future works.

Figure 6: Example 4. Left panel, sound produced by a bat. Central panel, the corresponding IMFogram time-frequency plot. Right panel, instantaneous frequency curves inferred from the IMFogram plot.
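A very naive form of the curve extraction step, whose robust version is left to future work, can be sketched as picking, at each time instant, the frequency bin with maximum energy in the time-frequency plane. The function below is a hypothetical placeholder, not the method actually applied to the IMFogram.

```python
import numpy as np

def extract_ridge(tf_plane, freqs):
    """Naive instantaneous-frequency ridge extraction: for each time
    instant (column of `tf_plane`), return the frequency, from `freqs`,
    of the bin with maximum energy. Rows index frequency bins."""
    idx = np.argmax(np.abs(tf_plane), axis=0)
    return freqs[idx]
```

In practice one would also enforce continuity of the ridge across time and separate multiple overlapping curves, which this one-liner does not attempt.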

By leveraging the extracted curves, we run the FRIF algorithm and derive the decomposition shown in the leftmost panel of Figure 7. The first three IMFs produced correspond to the three main chirps observed in the IMFogram, which is depicted in the central panel of Figure 6. This is confirmed by running the IMFogram separately on the first three IMFs produced by FRIF. The results are shown in the three rightmost panels of Figure 7. From these plots it becomes clear that the algorithm is able to cleanly separate the three chirps contained in the signal.

Figure 7: Example 4. Leftmost panel, the IMF decomposition produced by FRIF. From the second to the rightmost panel, the IMFogram time-frequency plots associated with the first, second, and third row of the FRIF decomposition, respectively.

5 Conclusions

Following the success of the Empirical Mode Decomposition (EMD) method for the decomposition of non-stationary signals, and given that its mathematical understanding is still very limited, in recent years the Iterative Filtering (IF) method first, and then the Adaptive Local Iterative Filtering (ALIF) method, have been proposed. They inherit the structure of EMD, but rely on convolution for the computation of the signal moving average. On the one hand, the mathematical understanding of IF is now fairly advanced; this includes its acceleration, in what is called Fast Iterative Filtering (FIF), and its complete convergence analysis. On the other hand, IF proved to be limited in separating, in a physically meaningful way, components which exhibit quick changes in their frequencies, like chirps or whistles. For this reason ALIF was proposed as a generalization of IF which overcomes these limitations. However, even though some advances have been obtained in recent years, the theoretical understanding of ALIF is far from complete. In particular, it is not yet clear under which assumptions it is possible to guarantee a priori its convergence.

For this reason, in this work we introduced the Resampled Iterative Filtering (RIF) method and, in the discrete setting, the Stable Adaptive Local Iterative Filtering (SALIF) and the Fast Resampled Iterative Filtering (FRIF) methods, which are capable of decomposing non-stationary signals into simple oscillatory components, even in the presence of fast changes in their instantaneous frequencies, as in chirps. We have analyzed them from a theoretical standpoint, showing, among other things, that it is possible to guarantee a priori their convergence. Furthermore, we have tested them on several artificial and real-life examples.

More remains to be said on this subject. In particular, all these methods depend on the computation of a length function which is, de facto, the reciprocal of the instantaneous frequency curve associated with each component contained in the signal. This function is required to guide the aforementioned methods, including ALIF itself, in the extraction of physically meaningful IMFs. The identification of the instantaneous frequency curves associated with each component contained in a given signal is a research topic per se, and it is outside the scope of the present work. We plan to study this problem in future works.
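As an illustration of the role of the length function, the following minimal sketch converts an instantaneous frequency curve into the corresponding length function expressed in samples; the helper name and the positivity assumption on the frequency curve are ours.

```python
import numpy as np

def length_function(inst_freq_hz, fs):
    """Sketch of the length function guiding ALIF/RIF-type methods: de facto
    the reciprocal of the instantaneous frequency, here expressed in samples,
    assuming `inst_freq_hz` is strictly positive and `fs` is the sampling
    rate in Hz."""
    return fs / np.asarray(inst_freq_hz, dtype=float)
```

For example, a component whose instantaneous frequency is 50 Hz in a signal sampled at 1 kHz has a local period, and hence a length-function value, of 20 samples.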

Another open problem regards the selection of an optimal stopping criterion, and its tuning, for this kind of method. The stopping criterion implemented can significantly influence the performance of these techniques. We plan to work in this direction in the future.

Finally, we plan to work on the extension of the proposed techniques to handle multidimensional and multivariate signals.


The authors are members of the Italian “Gruppo Nazionale di Calcolo Scientifico” (GNCS) of the Istituto Nazionale di Alta Matematica “Francesco Severi” (INdAM). AC thanks the Italian Space Agency for the financial support under the contract ASI - LIMADOU scienza+n 2020-31-HH.0, and the ISSI-BJ project “the electromagnetic data validation and scientific application research based on CSES satellite”.


  • [1] X. An. Local rub-impact fault diagnosis of a rotor system based on adaptive local iterative filtering. Transactions of the Institute of Measurement and Control, 39(5):748–753, 2017.
  • [2] X. An, C. Li, and F. Zhang. Application of adaptive local iterative filtering and approximate entropy to vibration signal denoising of hydropower unit. Journal of Vibroengineering, 18(7):4299–4311, 2016.
  • [3] X. An and L. Pan. Wind turbine bearing fault diagnosis based on adaptive local iterative filtering and approximate entropy. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 231(17):3228–3237, 2017.
  • [4] X. An, W. Yang, and X. An. Vibration signal analysis of a hydropower unit based on adaptive local iterative filtering. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 231(7):1339–1353, 2017.
  • [5] X. An, H. Zeng, and C. Li. Demodulation analysis based on adaptive local iterative filtering for bearing fault diagnosis. Measurement, 94:554–560, 2016.
  • [6] G. Barbarino and A. Cicone. Conjectures on spectral properties of the ALIF algorithm, 2021. arXiv:2009.00582.
  • [7] P. Barbe, A. Cicone, W. Suet Li, and H. Zhou. Time-frequency representation of nonstationary signals: the IMFogram. Pure and Applied Functional Analysis, 2021.
  • [8] A. Cicone. Iterative filtering as a direct method for the decomposition of nonstationary signals. Numerical Algorithms, pages 1–17, 2020.
  • [9] A. Cicone and P. Dell’Acqua. Study of boundary conditions in the iterative filtering method for the decomposition of nonstationary signals. Journal of Computational and Applied Mathematics, 373:112248, 2020.
  • [10] A. Cicone, C. Garoni, and S. Serra-Capizzano. Spectral and convergence analysis of the discrete ALIF method. Linear Algebra and its Applications, 580:62–95, 2019.
  • [11] A. Cicone, W. S. Li, and H. Zhou. New theoretical insights in the decomposition and time-frequency representation of nonstationary signals: the IMFogram algorithm. Preprint, 2021.
  • [12] A. Cicone, J. Liu, and H. Zhou. Adaptive local iterative filtering for signal decomposition and instantaneous frequency analysis. Applied and Computational Harmonic Analysis, 41(2):384–411, 2016.
  • [13] A. Cicone and H.-T. Wu. Convergence analysis of adaptive locally iterative filtering and SIFT method. Submitted, 2021.
  • [14] A. Cicone and H. Zhou. Numerical analysis for iterative filtering with new efficient implementations based on FFT. Numerische Mathematik, 147(1):1–28, 2021.
  • [15] C. Huang, L. Yang, and Y. Wang. Convergence of a convolution-filtering-based algorithm for empirical mode decomposition. Advances in Adaptive Data Analysis, 1(04):561–571, 2009.
  • [16] N. E. Huang. Introduction to the Hilbert–Huang transform and its related mathematical problems. Hilbert–Huang Transform and its Applications, pages 1–26, 2014.
  • [17] N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N.-C. Yen, C. C. Tung, and H. H. Liu. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454(1971):903–995, 1998.
  • [18] S. J. Kim and H. Zhou. A multiscale computation for highly oscillatory dynamical systems using empirical mode decomposition (EMD)-type methods. Multiscale Modeling & Simulation, 14(1):534–557, 2016.
  • [19] Y. Li, X. Wang, Z. Liu, X. Liang, and S. Si. The entropy algorithm and its variants in the fault diagnosis of rotating machinery: A review. IEEE Access, 6:66723–66741, 2018.
  • [20] L. Lin, Y. Wang, and H. Zhou. Iterative filtering as an alternative algorithm for empirical mode decomposition. Advances in Adaptive Data Analysis, 1(04):543–560, 2009.
  • [21] I. Mitiche, G. Morison, A. Nesbitt, M. Hughes-Narborough, B. G. Stewart, and P. Boreham. Classification of partial discharge signals by combining adaptive local iterative filtering and entropy features. Sensors, 18(2):406, 2018.
  • [22] M. Piersanti, M. Materassi, A. Cicone, L. Spogli, H. Zhou, and R. G. Ezquer. Adaptive local iterative filtering: A promising technique for the analysis of nonstationary signals. Journal of Geophysical Research: Space Physics, 123(1):1031–1046, 2018.
  • [23] R. Sharma, R. B. Pachori, and A. Upadhyay. Automatic sleep stages classification based on iterative filtering of electroencephalogram signals. Neural Computing and Applications, 28(10):2959–2978, 2017.
  • [24] A. Stallone, A. Cicone, and M. Materassi. New insights and best practices for the successful use of empirical mode decomposition, iterative filtering and derived algorithms. Scientific Reports, 10:15161, 2020.
  • [25] M. E. Torres, M. A. Colominas, G. Schlotthauer, and P. Flandrin. A complete ensemble empirical mode decomposition with adaptive noise. In 2011 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 4144–4147. IEEE, 2011.
  • [26] N. Ur Rehman and D. P. Mandic. Filter bank property of multivariate empirical mode decomposition. IEEE Transactions on Signal Processing, 59(5):2421–2426, 2011.
  • [27] H.-T. Wu. Current state of nonlinear-type time-frequency analysis and applications to high-frequency biomedical signals. Current Opinion in Systems Biology, 23:8–21, 2020.
  • [28] Z. Wu and N. E. Huang. Ensemble empirical mode decomposition: a noise-assisted data analysis method. Advances in Adaptive Data Analysis, 1(01):1–41, 2009.
  • [29] D. Yang, B. Wang, G. Cai, and J. Wen. Oscillation mode analysis for power grids using adaptive local iterative filter decomposition. International Journal of Electrical Power & Energy Systems, 92:25–33, 2017.
  • [30] J.-R. Yeh, J.-S. Shieh, and N. E. Huang. Complementary ensemble empirical mode decomposition: A novel noise enhanced data analysis method. Advances in Adaptive Data Analysis, 2(02):135–156, 2010.
  • [31] J. Zheng, J. Cheng, and Y. Yang. Partly ensemble empirical mode decomposition: An improved noise-assisted method for eliminating mode mixing. Signal Processing, 96:362–374, 2014.