ZeLiC and ZeChipC: Time Series Interpolation Methods for Lebesgue or Event-based Sampling

06/06/2019 · by Matthieu Bellucci, et al.

Lebesgue sampling is based on collecting information depending on the values of the signal. Although interpolation methods for periodic sampling have been a topic of research for a long time, there is a lack of methods capable of taking advantage of the Lebesgue sampling characteristics to reconstruct time series more accurately. Indeed, Lebesgue sampling contains additional information about the shape of the signal in-between two sampled points. Using this information allows us to generate an interpolated signal closer to the original one; that is to say, the average distance between the interpolated signal and the original signal is smaller than for a signal interpolated with other interpolation methods. In this paper, we propose two novel time series interpolation methods specifically designed for Lebesgue sampling, called ZeLiC and ZeChipC. ZeLiC is an algorithm that combines Zero-order hold interpolation and Linear interpolation to reconstruct time series. ZeChipC follows the same idea, combining Zero-order hold and PCHIP interpolation. Zero-order hold interpolation is favourable for interpolating abrupt changes, while Linear and PCHIP interpolation are more suitable for smooth transitions. In order to decide which method to apply, we have introduced a new concept called the tolerated region. ZeLiC and ZeChipC also include a new functionality to adapt the reconstructed signal to concave/convex regions. The proposed methods have been compared with the state-of-the-art interpolation methods using Lebesgue sampling and have offered higher average performance. Additionally, we have compared the performance of the methods using both Riemann and Lebesgue sampling with approximately the same number of sampled points. The performance of the combination "Lebesgue sampling with the ZeChipC interpolation method" is clearly much better than any other combination.


1 Introduction

Nowadays, a lot of time series data is produced, representing the state of the environment over a period of time [15]. These data points are generally captured by a piece of equipment called a sensor. The sensor can detect different events or changes in the environment and quantify them in the form of temperature, pressure, noise, or light intensity, among others. A limitation of collecting data points is the frequency at which the sensor records the changes or events. The more frequently a sensor records a reading, the more expensive the running cost. Likewise, the less frequently the sensor records readings, the more difficult it is to capture and reconstruct the original behaviour of the event.

In practice, all signals have to be sampled because the number of points in a continuous environment is infinite. Sampling is the mechanism that collects the information by setting the frequency of the collected points over a time period. Capturing readings more often is economically more expensive due to the amount of data being stored, transmitted, and processed. The challenge when sampling is to preserve the vital information in as few data points as possible so that the objective of recording changes is met.

The periodic or Riemann sampling [15] is the conventional approach to sampling time series data. In this approach, the data is captured periodically, i.e. at equidistant time intervals (such as every second or every microsecond). Even though the approach is simple to implement, its shortcoming is that, when the sampled data fails to capture changes that happen within the interval (also known as frequency aliasing), the sampling needs to be readjusted to a higher frequency, resulting in more data being collected. Firstly, making such an adjustment requires manual assessment and, in addition, it bears the additional cost of more data being generated. Due to this pitfall, many research findings advocate the use of Lebesgue sampling instead of Riemann sampling [24]. Furthermore, some authors [2] have demonstrated that Lebesgue sampling is a more efficient strategy than Riemann sampling.

Lebesgue sampling [4], also known as event-based sampling, is an alternative to the more popular Riemann sampling strategy. In Lebesgue sampling, the time-series data is sampled whenever a significant change takes place or when the measurement passes a certain limit [17]. A few motivating examples of this sampling strategy would be: whenever the sensor reading crosses a specific limit, when a data packet arrives at a node on a computer network, or when the system output has changed by a specified amount.

The overall intuition of Lebesgue sampling is to avoid storing, processing, or transmitting unnecessary data that represents either no change or a trivial change compared to the previous data point. The event-based nature of Lebesgue sampling is very appealing and natural in domains where the systems remain constant for extended periods, such as wireless communications [19] or systems with an on-off mechanism like those in satellite control [2].

Increasing the battery life of the sensors and reducing their use [25], reducing network traffic by decreasing the amount of information transferred [32], or using fewer computer resources while maintaining the same performance [31] are some of the advantages of event-based control over time-based control. By contrast, the management of the systems that implement Lebesgue sampling becomes more complicated [2].

When a time series signal is sampled, the subsequent step is generally to reconstruct the original signal as accurately as possible [14]. Interpolation is one of the best-known techniques to reconstruct the signal by filling in the missing values within the range of the discrete set of data points. Despite the significance of interpolation methods, the challenge of reconstructing the signal remains an important area of research. Moreover, common interpolation methods do not perform well on Lebesgue sampling, as demonstrated in this contribution.

In this contribution, we propose interpolation methods to reconstruct time-series data sampled using Lebesgue sampling. To the best of our knowledge, these are the first interpolation methods designed exclusively for Lebesgue sampling. The proposed methods achieve higher performance because they exploit the particular properties of this kind of sampling.

Two novel interpolation methods are proposed: ZeLiC and ZeChipC. ZeLiC uses Zero-order hold and Linear interpolation with a specific shape approximation based on Concavity/Convexity (note that the name ZeLiC comes from this combination of Zero-order hold, Linear interpolation and the new Concavity/Convexity functionality; the same criterion applies to ZeChipC). On the other hand, ZeChipC uses Zero-order hold and PCHIP interpolation with the same Concavity/Convexity improvement as ZeLiC.

The rest of the paper is organised as follows. In section 2, we review the state-of-the-art interpolation methods and the background on the Lebesgue sampling technique. In section 3, we present the proposed interpolation methods, ZeLiC and ZeChipC, along with their simpler versions, ZeLi and ZeChip, which do not apply the shape approximation based on Concavity/Convexity. In section 4, two experiments using 67 different data sets are carried out. The objective of the first experiment is to compare the performance of the proposed methods against that of the state-of-the-art interpolation methods for Lebesgue sampling. The second experiment compares the performance of Lebesgue and Riemann sampling with approximately the same number of samples and using the same methods. This is very useful in order to decide which is the best combination of sampling and interpolation method when time series need to be sampled. Finally, in section 5, the conclusions of the research are presented along with some possible future directions. Additionally, due to the length of the experimental results, all the tables are presented in the Appendix.

2 State of the art

In this section, two well-known topics in the domain of time series are described: first, a summary of the state of the art of the event-based or Lebesgue sampling technique; and second, an overview of some popular time series interpolation methods.

2.1 Lebesgue sampling

Lebesgue sampling is an alternative to the traditional approach of sampling time series at a constant frequency. Instead of periodically taking samples from a system as in Riemann sampling, the event-based method takes samples only when a predefined event happens, as shown in Figure 1. Some examples of typical events are a sudden change in the signal, the signal reaching a preset limit, the arrival of a data package, or a change in the state of a system [2]. Even though Lebesgue sampling is more accurate than Riemann sampling, it is less widespread because such systems are more difficult to implement [2].

Figure 1: Riemann sampling (left) takes points at equidistant time intervals, while Lebesgue sampling (right) does so based on the output value of the signal; in this particular case, when the absolute difference is higher than 0.2.

In recent years, great interest has arisen in applications implementing event-based sampling. For example, the "Send on Delta" algorithm takes advantage of Lebesgue sampling to reduce the information transmitted over wireless networks in order to increase the lifetime of the sensors' batteries. Under this scheme, sampling is performed only when there is a deviation in the signal higher than the delta value. Results show that using this approach it is possible to increase the lifetime of the sensors without any loss of quality in the resolution of the signals [25].

We can find another positive example in the domain of Networked Control Systems (NCSs), where the advantages of Lebesgue sampling become clear. In this type of system, increasing the sampling frequency can be counterproductive, since the information load increases and the network traffic can collapse the functioning of the whole system. In the last decade, many NCSs have successfully implemented event-triggered control, reducing the required resources and the bandwidth of the network [32].

It is also worth pointing out the convenience of using event-based sampling in Fault Diagnosis and Prognosis (FDP). In recent years, it has become increasingly difficult to manage microcontrollers and embedded systems due to the volume of the information collected by sensors and the complexity of the programs they run. Increasing computational resources is not a good solution in the long term since it increases economic costs. Yan et al. [31] found an efficient solution to this problem by applying the philosophy of "execution only when necessary" based on Lebesgue sampling, which reduces computational costs substantially without diminishing the performance of the system.

Some research has been done to find the optimal balance between the number of samples and the performance of the system. For example, Andrén et al. [1] studied this balance for a linear-quadratic-Gaussian (LQG) control problem setting with output feedback. However, sampling based on changes works well with signals that remain constant for some time and present sudden variations. This is because this kind of sampling captures a higher number of points when the signal has abrupt changes, while it does not take points when the signal remains constant. For example, when a natural phenomenon like an earthquake takes place, the sensor values can go from zero to a high positive value in an instant. With sampling based on events, more points of the critical moments would be captured, which gives important information about the behaviour of the phenomenon, while if nothing occurs no information is captured.

Lebesgue sampling can minimise energy consumption, storage space, computation time and the amount of data to be transmitted, as has been claimed in many investigations [30]. It is therefore very interesting for companies in charge of monitoring complex systems, as it significantly reduces the expense without a negative impact on the precision of the measurements.

In summary, the traditional approach to sampling and digital control has worked well in many applications for many years. However, there are new domains where Riemann sampling has major problems that can be easily solved by implementing Lebesgue sampling. That is why this new approach has attracted general interest in recent years [3].

2.2 Time series interpolation methods

The downsampled time series data can be reconstructed using different interpolation techniques. The interpolation function estimates the missing data points within the range of the discrete set of known data points, with the objective of preserving the shape of the original signal before the application of downsampling [22].

Let $(x_i, y_i)$, $i = 1, \dots, n$, be pairs of real values, where $f$ is the interpolation function, $x_i$ are the indexes of the downsampled data points, and $y_i$ are the values of those points. The objective of the optimal interpolation technique is to satisfy the condition where $f$ verifies

$$f(x_i) = y_i, \quad i = 1, \dots, n. \qquad (1)$$

There are a number of interpolation techniques, ranging from simpler ones such as Zero-order hold [11] and Linear [11] to more complex ones such as Multiquadric [18], which is based on radial basis functions; Shannon [23], which is based on the Nyquist–Shannon sampling theorem; Lasso [29], which is based on regression; and Natural Neighbour [8], which is a spatial method and provides a smoother approximation compared to simple interpolation techniques. Cubic Hermite spline [21] and Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) [20] are interpolation techniques based on splines and cubic functions respectively. They are often a preferred choice in polynomial interpolation.

Relation (2) describes the objective of an interpolation method. Let $(x_i, y_i)$, $i = 1, \dots, n$, be pairs of real values. We want to find a function $f$ (easy to calculate) that verifies

$$f(x_i) = y_i, \quad i = 1, \dots, n. \qquad (2)$$

2.2.1 Zero-order hold interpolation

The zero-order hold (ZOH) interpolation is one of the simplest signal reconstruction techniques [11]. In this technique, the missing values between two sampled points are interpolated with a constant value, namely the value of the preceding known point. This technique has several applications in electrical communication, and its main advantage is its low computational complexity. However, this interpolation strategy fails to reconstruct continuity or trends in time series with non-zero values of the first derivative.

In ZOH, the interpolating polynomial is of degree 0. Therefore, $P_i(x) = c_i$, where $c_i$ is constant and, because of (2), $c_i = y_i$. If a sudden change occurs inside $[x_i, x_{i+1}]$, we cannot represent it, because $P_i$ is constant on the interval. Therefore, this interpolation can be used neither to represent continuous functions nor to produce natural curves.
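As an illustration, a zero-order hold reconstruction on a regular integer grid can be sketched in a few lines of Python (the function name and the grid-based setup are ours, not part of the original work):

```python
import numpy as np

def zoh_interpolate(xs, ys, grid):
    """Hold the value of the preceding sampled point for every grid position."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    # index of the last sampled point at or before each grid position
    idx = np.searchsorted(xs, grid, side="right") - 1
    idx = np.clip(idx, 0, len(xs) - 1)
    return ys[idx]

# Example: samples at t = 0, 4, 6 reconstructed on t = 0..7
print(zoh_interpolate([0, 4, 6], [1.0, 3.0, 0.5], np.arange(8)))
# -> [1.  1.  1.  1.  3.  3.  0.5 0.5]
```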

2.2.2 First order or Linear interpolation

Linear interpolation is also another popular choice for the reconstruction of a signal due to its simplicity and low computational complexity [11]. In this method, missing values are reconstructed by fitting a straight line between successive known points. The shortcoming is that the reconstruction of a signal fails to capture any non-linear trend even though the overall known values follow a non-linear trend.

The polynomial is of 1st degree, which means $P_i(x) = a_i x + b_i$, and, because of (2), $P_i(x_i) = y_i$ and $P_i(x_{i+1}) = y_{i+1}$, so each pair of consecutive sampled points is connected by a straight line. The following formulas can be applied to calculate $a_i$ and $b_i$:

$$a_i = \frac{y_{i+1} - y_i}{x_{i+1} - x_i}, \qquad b_i = y_i - a_i\,x_i. \qquad (3)$$

This interpolation gives a continuous but, in general, non-differentiable function $f$. Let $a_i$ and $a_{i+1}$ be the slopes of two consecutive segments; the left and right derivatives of $f$ at the node $x_{i+1}$ are $a_i$ and $a_{i+1}$ respectively. Therefore, the function is differentiable at $x_{i+1}$ only if $a_i = a_{i+1}$. We can conclude that, for non-linear signals, $f$ is not differentiable with Linear interpolation.

Linear interpolation is fast to compute and very intuitive. However, its drawback is the non-differentiability of the interpolant at each node, which produces sharp changes in the reconstructed signal.
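For illustration, a linear reconstruction on a regular grid is essentially a one-liner with NumPy (the sample values below are made up):

```python
import numpy as np

xs, ys = [0, 4, 6], [1.0, 3.0, 0.5]
grid = np.arange(7)
print(np.interp(grid, xs, ys))  # straight lines between successive samples
# -> [1.   1.5  2.   2.5  3.   1.75 0.5 ]
```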

2.2.3 Spline interpolation methods

In the spline interpolation methods, the interpolation function is a particular case of piecewise polynomial. The advantage of this type of method over the first order interpolation is that it reconstructs the signal using non-linear functions (which makes the transitions smoother) and also avoids the Runge phenomenon [5, 28]. The Runge phenomenon refers to the oscillation at the edges of a given interval while interpolating missing values.

We can define $f$ as $f(x) = P_i(x)$ for $x \in [x_i, x_{i+1}]$, where each $P_i$ is a polynomial of small degree and $i = 1, \dots, n-1$. The most common degrees for interpolation are the first and the third, which correspond to Linear and Cubic interpolation.

2.2.4 Third order or Cubic interpolation

The cubic function is one of the most commonly used spline interpolation methods [21]. It uses a third-degree polynomial in Hermite form, $P_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i$, for interpolating missing values. This interpolation strategy inherits the conditions of Linear interpolation and adds the conditions that the first and second derivatives of consecutive polynomials must match at the nodes.

The strength of this interpolation strategy is that it produces smooth curves in the region of missing values, which makes the signal look natural. The drawback is that it can lead to significant errors in the reconstructed region when there is an abrupt change at the end of an interval: since the derivatives at the nodes (sampled points) must be equal, this change propagates into the beginning of the next interpolated interval.

2.2.5 Piecewise Cubic Hermite Interpolating Polynomial (PCHIP)

The PCHIP (Piecewise Cubic Hermite Interpolating Polynomial) interpolation method is based on the same principle as spline interpolation, but between each pair of points it fits a cubic polynomial in Hermite form [20]. The sampled points are known as "knots"; PCHIP connects those knots piecewise, giving good performance in both time and accuracy. The interpolant has a continuous first derivative at the interpolated points, although the second derivative is not guaranteed to be continuous.
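A brief usage sketch with SciPy's implementation of PCHIP (the sample values are made up) could look like this:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

xs, ys = [0, 2, 5, 6], [0.0, 1.0, 1.0, 0.2]
pchip = PchipInterpolator(xs, ys)      # one cubic Hermite piece per interval
grid = np.linspace(0, 6, 13)
reconstructed = pchip(grid)            # shape-preserving, no overshoot between knots
slope = pchip.derivative()(grid)       # the first derivative is continuous
```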

3 ZeLiC and ZeChipC Lebesgue sampling interpolation methods

In this section, we first describe what information can be extracted from sampling based on events in order to develop methods that either need fewer sampled points to achieve an accuracy similar to that of the state-of-the-art methods or, given the same number of points, achieve higher accuracy. Then, we describe in detail the development of ZeLi, the simplest interpolation method proposed for Lebesgue sampling. Next, we describe ZeLiC, which is an improved version of ZeLi with a new functionality to adapt it to convex and concave regions (the implementation code for the proposed algorithms can be found at https://github.com/shamrodia74/ZeLiC). Additionally, mathematical demonstrations to support the assumptions of the developed methods have been included. Finally, we propose a method called ZeChipC, which is basically an adaptation of ZeLiC that uses PCHIP instead of Linear interpolation so that it can represent signals with curved regions.

3.1 Nomenclature

The purpose of the following list of mathematical nomenclature is to help readers understand the theory behind the proposed algorithms.

  • $(x_i, y_i)$: It represents a sampled point, where $x_i$ is the index and $y_i$ its value.

  • $[x_i, x_{i+1}]$: It is the interval that contains all the points between the ith sampled point and the next sampled point. This nomenclature is used when continuous interpolation is applied.

  • $x_{i,j}$: It represents the jth point to be interpolated in the interval $[x_i, x_{i+1}]$. This notation is used to interpolate time series in a discrete manner.

  • $t$: The threshold used to decide if a given point is captured, based on its difference with respect to the last sampled point.

  • $g$: It is the function that represents the sampled signal, such that $g(x_i) = y_i$.

  • $f$: It is the function used to interpolate the signal.

  • $T_i$: It is the tolerated region, a concept described in 3.2.

  • $P(x \in E)$: The probability of the element $x$ being in the ensemble $E$.

  • Tolerated region $T_i = [y_i - t,\ y_i + t]$: The tolerated region is defined by the last sampled point. It is the area covered between two values, namely the value of the last sampled point plus and minus the threshold. If the values of the next points of the signal are inside the tolerated region, then those values are not captured.

  • Increased tolerated region $= [y_i - r\,t,\ y_i + r\,t]$, where $r$ is the tolerance ratio: It is used to determine whether the transition between two points has been smooth or abrupt. For example, ZeLi applies Linear or ZOH interpolation based on this region.

3.2 Tolerated region

There are many possible implementations of Lebesgue sampling for time series, such as sampling a point when it crosses a preset limit or when the percentage variation is higher than a given threshold. Our particular implementation of Lebesgue sampling is based on the variation of the signal. In other words, when the sensor detects that a point differs from the previous sampled point by more than a given threshold, the point is captured. We can express this same idea in mathematical terms in the following way. Let $(x_0, y_0)$ be the first sample of a signal and $t$ the threshold, where $t > 0$. Then, the sensor captures the next point, called $(x_1, y_1)$, if and only if:

$$|y_1 - y_0| \geq t. \qquad (4)$$
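A minimal sketch of this sampling rule (function and variable names are ours) helps make the idea concrete:

```python
import numpy as np

def lebesgue_sample(signal, threshold):
    """Keep a point only when it deviates from the last kept value by at least `threshold`."""
    kept_idx, last_value = [0], signal[0]
    for i in range(1, len(signal)):
        if abs(signal[i] - last_value) >= threshold:
            kept_idx.append(i)
            last_value = signal[i]
    return np.array(kept_idx), np.asarray(signal)[kept_idx]

# Example: a ramp sampled with threshold 0.25 keeps a point every 0.25 of change
idx, vals = lebesgue_sample(np.linspace(0.0, 1.0, 21), 0.25)
print(idx)   # -> [ 0  5 10 15 20]
```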

Lebesgue sampling indirectly gives information about the behaviour of the signal between two consecutive samples. It can be deduced that all the points between a pair of consecutive sampled points (not necessarily consecutive in time) lie inside an interval delimited by the threshold, as shown in Figure 2. We know this because, if the value of a point of the interval had been outside this interval, that point would have been captured. Based on that simple deduction, we introduce a new concept called the tolerated region, because points are not sampled while the signal stays inside this allowed region. The tolerated region can be defined as $T_i = [y_i - t,\ y_i + t]$, where $t$ (the threshold) is the maximum allowed value of change.

As shown in Figure 2, if the difference between two consecutive sampled points a and b (left) of a time series is very large in a given interval, we know that an abrupt change has happened (otherwise the point would have been collected earlier). On the other hand, if the difference between the sampled points is very small, like in the case of c and d (right), it is quite probable that a smooth change has taken place. This simple principle is the basis on which the interpolation algorithms for Lebesgue sampling have been developed.

Figure 2: The point b is very far away from the tolerated region of the point a (left), so we can deduce that an abrupt change has taken place. In contrast, the point d is very near to the tolerated region of the point c (right), thus it can be inferred that a smooth transition has occurred.

We can express this same idea in mathematical terms in the following way. Let $x_i$ and $x_{i+1}$ be the positions of two consecutive sampled points, let $y_i$ and $y_{i+1}$ be the values of the signal at those points, let $x$ be a point between the two sampled points, that is to say $x_i < x < x_{i+1}$, and let $T_i = [y_i - t,\ y_i + t]$ be the tolerated region in which the signal can be. Then, with periodic sampling, the probability of all the points being in the tolerated region is less than one, $P(g(x) \in T_i) < 1$, whereas with Lebesgue sampling all the points are inside that region, $P(g(x) \in T_i) = 1$. In other words, any $g(x)$ in the interval between the two points is inside the tolerated region. We can therefore significantly reduce the region of possible values of $g(x)$ when performing the interpolation, and thus reduce the error when comparing the original signal with the reconstructed one.

3.3 ZeLi interpolation algorithm

From the information that can be extracted from the tolerated region, we develop a set of methods to interpolate time series sampled with the Lebesgue approach. The simplest method, and the first to be explained, is called ZeLi. The rest of the methods are improvements over this first method. Along with the explanation of the methods, we present mathematical demonstrations to make the proposed methods more rigorous.

3.3.1 Combination of Zero-order hold and Linear interpolation

ZeLi interpolation combines Zero-order hold interpolation and Linear interpolation, which explains the origin of its name, to reconstruct the original signal from the sampled signal, as shown in Figure 3. To decide whether to apply ZOH or Linear interpolation, the tolerance ratio parameter is used. The tolerance ratio $r$ is a constant value higher than 1 that multiplies the interval of the tolerated region, that is, $[y_i - r\,t,\ y_i + r\,t]$, creating a new interval called the increased tolerated region.

Therefore, the ZeLi algorithm contemplates two possible cases:

  1. If the examined point is outside of the increased tolerated region (the tolerated region multiplied by the tolerance ratio), then ZOH interpolation is used.

  2. If the examined point is inside of the increased tolerated region, then Linear interpolation is used.

The justification of the algorithm is as follows. We know that all the points between two captured points must be in the tolerated region. If the difference between the values of the points is small, it is quite possible that between the two sampled points the signal follows a linear trend with small variations around it; therefore, Linear interpolation is used. On the other hand, to minimise the error when the difference between the two sampled points is large (an abrupt change has occurred), we interpolate all the points using ZOH. Although ZOH interpolation does not represent continuous signals in a smooth way, it is a very effective mechanism, since it guarantees that all the interpolated values on the interval between a given point and the next point [a,b) are in the tolerated region. ZOH interpolation minimises the error because it keeps all the interpolated values at the centre of the tolerated region $T_i$, dividing the maximal possible error by two.

A visual intuition of this method can be seen in Figure 3. When the signal crosses the threshold in a continuous way (left), Linear interpolation is used. This is because it can be assumed that the previous points close to b have similar values (obviously, the further those points are from b, the less probable this assumption is). By contrast, if an abrupt change occurs (right), then the last-but-one point and all the previous points between c and d were somewhere in the tolerated region, but we cannot deduce a trend, because the change is so abrupt. Because of this lack of information, we choose to minimise the maximal error by using ZOH interpolation, which uses a constant line (the same y-value for all the points of the region) followed by a straight jump to the newly sampled point.

Figure 3: ZeLi combines ZOH with Linear interpolation. Between a and b, ZeLi applies Linear interpolation (left), while between the points c and d it applies ZOH interpolation (right).

3.3.2 Mathematical definition for ZeLi interpolation method

We can express this same idea in the following way. Let $g$ be a function representing a time series. Let $(x_i, y_i)$ be a sampled point, where $y_i = g(x_i)$, sampled with threshold sampling. We want to interpolate the time series using the information extracted from the sampling. Let $P_i$ be the polynomial that interpolates the signal on the interval $[x_i, x_{i+1}]$, such that $P_i(x_i) = y_i$ and $P_i(x_{i+1}) = y_{i+1}$. Let $T_i = [y_i - t,\ y_i + t]$ be the tolerated region.

If $P_i$ is of order 1 or higher, it cannot be guaranteed that $P_i(x) \in T_i$ for all $x \in [x_i, x_{i+1})$. Thus, ZOH interpolation is the only spline interpolation which guarantees that the interpolated signal remains in $T_i$.

From the study of the variation between $y_i$ and $y_{i+1}$, we can distinguish two cases:

  1. The difference between two consecutive sampled points is only slightly higher than the threshold: $t \leq |y_{i+1} - y_i| \leq r\,t$.

  2. The difference between two consecutive sampled points is much larger than the threshold: $|y_{i+1} - y_i| > r\,t$.

In the first case, we know that if Linear interpolation is applied on the interval of the signal, that is to say on $[x_i, x_{i+1}]$, then most of the points of the interval will be included in the tolerated region $T_i$.

In the second case, we cannot guarantee that, if we use Linear interpolation, the interpolated points will be contained in the tolerated region; that is to say, we cannot guarantee that $f(x) \in T_i$ for all $x \in [x_i, x_{i+1})$. Moreover, if the difference is much greater than $t$, we know that all the previous points of the signal being sampled were in the permitted interval: the abrupt change happened at the point $x_{i+1}$ and not before. We also know that the signal is not continuous at this point, because of the sudden change at $x_{i+1}$. Therefore, it is required to use an interpolation method able to represent a non-continuous function, which is why we use ZOH.

Let us formalise the condition to apply ZOH or Linear interpolation. Although the condition is defined for our Lebesgue implementation, it can be easily extended to other implementations of event-based sampling.

As explained in 3.3, we will use ZOH when the difference is greater than the increased threshold, that is, when $|y_{i+1} - y_i| > r\,t$.

Algorithm 1 for ZeLi interpolation on $[x_i, x_{i+1}]$ is rather simple. Let us suppose that we are analysing signals in the discrete-time domain, and let $x_{i,j}$, for $j = 1, \dots, m$, be the points in $[x_i, x_{i+1})$. Assuming, without loss of generality, that $y_{i+1} > y_i$, the Linear interpolation is increasing on the interval, so each interpolated point is followed by a point with a higher value; the interior point with the largest deviation from $y_i$ is therefore the last one, at $x_{i+1} - 1$. That is why it is only required to verify that

$$f(x_{i+1} - 1) - y_i > t \qquad (5)$$

to know that at least one interpolated point would fall outside the tolerated region.
Proof

In the following lines, we demonstrate that we can detect whether at least one point of the interpolation is outside the tolerated region by checking an inequality on a single point. This way, we avoid checking all the interpolated points, which makes the algorithm faster. We assume that $y_{i+1} - y_i > 0$ for convenience; if it were negative, the result would be equivalent. The condition for an interpolated value $f(x)$ to be in the tolerated region is $y_i - t \leq f(x) \leq y_i + t$, which is equivalent to $|f(x) - y_i| \leq t$.

Because we work in a discrete space, every interior point satisfies $x \leq x_{i+1} - 1$ and, since $f$ is increasing on the interval, $f(x) \leq f(x_{i+1} - 1)$. Thus, to check whether all points are in the tolerated region, we simply have to check the inequality at the single point $x_{i+1} - 1$,

which concludes the proof. ∎

Input: sampled points S, threshold, tolerance_ratio
tolerance = threshold * tolerance_ratio
interpolated = List()
for each pair of consecutive points (a, b) in S do
     if |b.value − a.value| > tolerance then
          segment = ZOH interpolation between a and b
     else
          segment = Linear interpolation between a and b
     end if
     interpolated = interpolated + segment
end for
return interpolated
Algorithm 1 ZeLi algorithm
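For reference, a minimal Python sketch of Algorithm 1 on an integer time grid could look as follows; the names are ours and the released implementation at the GitHub link above may differ in details:

```python
import numpy as np

def zeli_interpolate(xs, ys, threshold, tolerance_ratio=1.15):
    """Reconstruct a Lebesgue-sampled signal on an integer time grid with ZeLi.

    xs, ys: indexes (ints) and values of the sampled points.
    Returns the full grid of time steps and the interpolated values.
    """
    tolerance = threshold * tolerance_ratio
    out_t, out_y = [], []
    for (xa, ya), (xb, yb) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        t_seg = np.arange(xa, xb)                         # interior points [xa, xb)
        if abs(yb - ya) > tolerance:
            y_seg = np.full(len(t_seg), ya)               # abrupt change: zero-order hold
        else:
            y_seg = np.interp(t_seg, [xa, xb], [ya, yb])  # smooth change: linear
        out_t.extend(t_seg)
        out_y.extend(y_seg)
    out_t.append(xs[-1]); out_y.append(ys[-1])            # keep the last sampled point
    return np.array(out_t), np.array(out_y)
```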

3.4 ZeLiC interpolation method

One of the disadvantages of using simple methods such as Linear interpolation and ZOH is that they are not able to adapt properly to the signal when there is a change in the sign of the slope. Therefore, we have developed a new functionality with the aim of improving the performance of ZeLi when convex/concave regions are present. Next, the mathematical development and the implementation of this improvement are presented.

3.4.1 Convex/Concave regions on time series

The shape of the signal when a slope change occurs is generally a convex or concave region. When Lebesgue sampling is used, there is a lack of information about the shape of the signal when the signal changes the sign of its slope between two sampled points $x_i$ and $x_{i+1}$, and that change takes place inside the tolerated region (i.e. before the signal hits the threshold). Additionally, neither Linear nor ZOH interpolation is able to adapt to convex or concave regions of a signal, since those two methods connect each pair of points individually.

In order to improve the adaptation of the proposed methods, we have studied the case of convexity and concavity, i.e. the behaviour around inflexion points of a function. The same conclusions apply to concavity, but the formulas are slightly adapted (with opposing signs). Expressed in mathematical terms, this can be stated as follows.

A function is convex if the line segment between any two points on the graph of the function lies above or on the graph. We can express this same idea in mathematical terms in the following way. A signal $g$ is convex on $[x_i, x_{i+1}]$ if and only if

$$g\bigl(\lambda x_i + (1-\lambda)x_{i+1}\bigr) \leq \lambda\, g(x_i) + (1-\lambda)\, g(x_{i+1}), \quad \forall \lambda \in [0,1] \qquad (6)$$

or

$$g(x) \leq L(x), \quad \forall x \in [x_i, x_{i+1}], \qquad (7)$$

where $L$ is the Linear interpolation between $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$.

Let $y_{i-1} > y_i$ and $y_i < y_{i+1}$; then, the global slope of $g$ on $[x_{i-1}, x_i]$ is negative, and the global slope on $[x_i, x_{i+1}]$ is positive. From that information we can deduce that the derivative of $g$ is globally increasing on $[x_{i-1}, x_{i+1}]$.

If we assume that the function follows the trend of the sampled points, that is to say, if the values of the signal for a particular region are decreasing and then, at some point, they start increasing, we can assume that $g$ is convex on the interval $[x_{i-1}, x_{i+1}]$. At this point, it is important to remember that this is based on assumptions; we do not have enough information to support them, except the values and positions of the sampled points.

Let us see what would make the assumption wrong. If the signal is convex, we have (7), which means that the true values of the signal are all under the Linear interpolation, that is to say, $g(x) \leq L(x)$ for all $x \in [x_i, x_{i+1}]$, where $L$ is the Linear interpolation on $[x_i, x_{i+1}]$.

Proof

We want to demonstrate that if a function is convex, then all the true values of the signal, represented by the function $g$, lie under the Linear interpolation, represented by $L$, that is to say $g(x) \leq L(x)$ for all $x \in [x_i, x_{i+1}]$.

Let $x \in [x_i, x_{i+1}]$ such that $x = \lambda x_i + (1-\lambda)x_{i+1}$ with $\lambda \in [0,1]$. Let $y_i = g(x_i)$ and $y_{i+1} = g(x_{i+1})$. We have $L(x) = \lambda y_i + (1-\lambda) y_{i+1}$.

Using (6), we conclude that if $g$ is convex, then $g(x) \leq L(x)$. Therefore, if the signal is not convex, that means that there exists a point $x$ for which $g(x) > L(x)$.

Now, let $t$ be the threshold chosen for the sampling; we then know that $y_i - t \leq g(x) \leq y_i + t$ for all $x \in (x_i, x_{i+1})$.

If a signal is not convex, that means that

$$\exists\, x \in (x_i, x_{i+1}) \ \text{such that} \ L(x) < g(x) \leq y_i + t. \qquad (8)$$

Let us calculate the length of the interval $[L(x),\ y_i + t]$ for $x \in [x_i, x_{i+1}]$. This will give us an idea of the probability of making a false assumption, that is to say, assuming the signal is convex when it actually is not.

This length, $d(x) = (y_i + t) - L(x)$, is a polynomial of 1st degree with negative slope, because $y_{i+1} > y_i$ and $L$ is therefore increasing. Hence $d$ is decreasing.

Let $A$ be the area to which the points may belong; $A$ is a rectangle of length $x_{i+1} - x_i$ whose height is the height of the tolerated region, $2t$. We have that $g(x) \in A$ for all $x \in (x_i, x_{i+1})$, and we want to know the probability that $g$ is not convex, that is to say, the probability that $g(x) \in B$, where $B$ is the region defined by $L(x) < g(x) \leq y_i + t$.

The area of $A$ is $2t\,(x_{i+1} - x_i)$, and the area of $B$ is approximately $\tfrac{1}{2}\,t\,(x_{i+1} - x_i)$. Therefore $\frac{\text{area}(B)}{\text{area}(A)} \approx \frac{1}{4}$, which means that we have a 25% chance of making a false convexity assumption and a 75% chance of making a right one.

The shape of the area where the assumption can be wrong is a right triangle, with its hypotenuse being the Linear interpolation. Because the function $d$, which represents the length of the interval where we can make a wrong assumption, is decreasing, we conclude that the assumption is more likely to be wrong at the beginning of $[x_i, x_{i+1}]$ than at the end.

3.4.2 Adding a convexity/concavity assumption to ZeLi

Let us assume that the signal is convex on the interval $[x_{i-1}, x_{i+1}]$, with $y_{i-1} > y_i$ and $y_i < y_{i+1}$. This greatly reduces the possible positions of the points on the interval $[x_i, x_{i+1}]$. With this hypothesis, we deduce that there exists a point where the derivative of $g$ is zero. Indeed, its derivative is increasing thanks to the convexity assumption, and it is negative at the beginning of the interval (because $y_{i-1} > y_i$) and positive at the end (because $y_i < y_{i+1}$). Therefore, we can interpolate with one line going from the first point to this point where the derivative is zero, and then another line from this point to the last.

In order to minimise the maximal error when interpolating convex regions, we choose this new point to be in the middle of $[x_i, x_{i+1}]$ and calculate it with the following formula:

$$x_m = \frac{x_i + x_{i+1}}{2} \qquad (9)$$

and we choose its value on the y-axis to be the middle of the interval where the points can be, that is to say, in-between the Linear interpolation and the lower bound of $T_i$, namely $y_i - t$. Therefore, we have

$$y_m = \frac{L(x_m) + (y_i - t)}{2}. \qquad (10)$$

Then, we apply Linear interpolation to interpolate the signal in the convex region. Let $f$ be the interpolated signal on $[x_i, x_{i+1}]$; we set $f$ to be the Linear interpolation from $(x_i, y_i)$ to $(x_m, y_m)$, followed by the Linear interpolation from $(x_m, y_m)$ to $(x_{i+1}, y_{i+1})$.

This solution has a limitation when ZOH needs to be applied (see case 2 described in 3.3.1). In fact, when ZOH was applied, that meant that a peak was detected. But with the algorithm we use under the convexity assumption, this peak is not necessarily represented; therefore, we need to add another step to ensure we preserve this peak. When an abrupt change takes place, it is not advisable to use Linear interpolation because it would interpolate the signal outside of the tolerated region $T_i$.

This limitation can be solved by adding another interpolation point just before the last point $x_{i+1}$. If it were not for the convex assumption, we would have used ZOH interpolation; therefore, we choose its value to be the value it would have had without the convex assumption.

We call $L_1$ the Linear interpolation between the points $(x_i, y_i)$ and $(x_m, y_m)$, and $L_2$ the Linear interpolation between the points $(x_m, y_m)$ and $(x_{i+1}, y_{i+1})$. We can write the interpolation function as the piecewise combination of $L_1$ and $L_2$, with the additional point inserted just before $x_{i+1}$ when an abrupt change is detected.

The same methodology can be seen graphically, which is more intuitive, in Figure 4. In the first case (left), the first point used for the interpolation is A = ($x_A$, $y_A$), where $x_A$ is its value on the x-axis and $y_A$ its value on the y-axis. Let us call the point on the right B = ($x_B$, $y_B$).

To calculate the new point C, we have to calculate its value on the x-axis and its value on the y-axis:

  • $x_C$: Its value is the middle between $x_A$ and $x_B$, that is to say $(x_A + x_B)/2$.

  • $y_C$: It is the middle between the lower bound of the tolerated region at $x_C$ (because here the signal is convex) and the value of the Linear interpolation (the line that connects the two points) at $x_C$.

In the second case (right), the first new point G is calculated in the same way as C was calculated in the other example. The second new point H is calculated as follows. Its x-value, $x_H$, is the point just before the last sampled point in the discrete domain, that is, one time step before it. And its value on the y-axis is the same as that of the point E, that is, $y_H = y_E$.

Figure 4: When there is a soft transition (left) only one new point is calculated. When an abrupt change occurs (right) we calculate another point for a better adaptation.
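A small sketch of how the extra point could be computed from two consecutive samples, under the assumptions above (the helper name and the exact handling of the concave case are ours):

```python
def convexity_point(xa, ya, xb, yb, threshold):
    """Extra point used when a convex region is assumed between (xa, ya) and (xb, yb).

    Its x-value is the middle of the interval; its y-value is halfway between the
    linear interpolation at that x and the lower bound of the tolerated region of
    the first point (for a concave region the upper bound ya + threshold is used).
    """
    xc = (xa + xb) / 2.0
    linear_at_xc = ya + (yb - ya) * (xc - xa) / (xb - xa)
    yc = (linear_at_xc + (ya - threshold)) / 2.0
    return xc, yc

# Example: samples at (0, 1.0) and (10, 1.05) with threshold 0.1
print(convexity_point(0, 1.0, 10, 1.05, 0.1))  # -> (5.0, 0.9625)
```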

This solution adapts the interpolation to the convexity of the function and, at the same time, to the abrupt change happening at $x_{i+1}$. The same rules of the convexity assumption can be applied to the concavity assumption, with the exception that the sign of the shape is just the opposite. Therefore, adapting the formulas is very easy:

(7) becomes

$$g(x) \geq L(x), \quad \forall x \in [x_i, x_{i+1}]. \qquad (11)$$

We have the same properties and the same implementation as in the convex regions. The only difference is the value of $y_m$, the value of the point we added to follow the convexity condition; we now have to calculate it so that the signal is concave. Let $L$ be the Linear interpolation on $[x_i, x_{i+1}]$; the new value for $y_m$ is:

$$y_m = \frac{L(x_m) + (y_i + t)}{2}. \qquad (12)$$
Figure 5: ZeLiC is able to follow the shape of the signal better than ZeLi thanks to the convexity/concavity condition (threshold of 0.10).

3.4.3 Parameters of the convexity/concavity functionality

To reduce the probability of making a false assumption, we added conditions that have to be met to make the assumption of convexity or concavity. The goal of adding new conditions is to exclude those cases where the assumption is less likely to be true. The downside of doing this is that we will have more false negatives: cases where the convexity/concavity assumption is not applied when it actually should be.

To handle the new restrictions for making a convexity assumption, we have defined parameters regarding the distances between the points $x_{i-1}$, $x_i$ and $x_{i+1}$. The parameters included in Algorithm 2 are the following.

  • previous_distance: It refers to the minimum allowed distance between $x_{i-1}$ and $x_i$; we require the actual distance not to be smaller than this chosen value. If $x_i - x_{i-1}$ is very small, that means that the sensor got triggered twice within a small region. This happens in signals with very frequent variations. In those cases, it is better not to assume convexity because the signal is highly unpredictable.

  • subsequent_min_distance: It refers to the minimum allowed distance between $x_i$ and $x_{i+1}$. It has the same purpose as the previous_distance condition. If the distance is too small, then the interpolation would not benefit from the convexity assumption; it might even cause a bigger error than if we used Linear interpolation or ZOH.

The best value for a parameter depends on the shape of the time series for each data set. Therefore, each dataset needs to be optimised individually. As a general idea, both parameters previous_distance and subsequent_min_distance should have similar values to be consistent.

That is to say, if the distance to the previous point should not be smaller than the chosen value previous_distance, then this value is the minimum distance at which we consider the signal predictable enough to assume convexity.

For example, if we set subsequent_min_distance to a value far larger or smaller than previous_distance, this would mean that we have significantly changed the limit at which we consider the signal predictable: we would assume convexity on $[x_{i-1}, x_i]$ but then, on $[x_i, x_{i+1}]$, consider the same distance too small to assume convexity. In short, these parameters have to be set by the user, depending on the shape of the sampled signals.

Input: sampled points S, threshold, tolerance_ratio, previous_distance, subsequent_min_distance
tolerance = threshold * tolerance_ratio
interpolated = List()
for each pair of consecutive points (a, b) in S do
     assume_shape = (the slope changes sign at a) AND
                    (distance from the previous sampled point to a ≥ previous_distance) AND
                    (distance from a to b ≥ subsequent_min_distance)
     if NOT assume_shape then
          if |b.value − a.value| > tolerance then
               segment = ZOH interpolation between a and b
          else
               segment = Linear interpolation between a and b
          end if
     else
          x_m = middle of [a.index, b.index]   (Eq. (9))
          y_m = middle between the Linear interpolation at x_m and the bound of the tolerated region   (Eq. (10) for convexity, Eq. (12) for concavity)
          m = (x_m, y_m)
          if |b.value − a.value| > tolerance then
               add an extra point just before b with the value it would have had under ZOH
               segment = Linear interpolation through a, m, the extra point and b
          else
               segment = Linear interpolation through a, m and b
          end if
     end if
     interpolated = interpolated + segment
end for
return interpolated
Algorithm 2 ZeLiC algorithm

3.5 ZeChip interpolation algorithm

ZeLi is based on a combination of ZOH and Linear interpolation and, in consequence, it shares some limitations of these types of interpolation. ZeLi is able to approximate time series with high precision when they are composed of straight lines, that is to say, signals whose first and second derivatives are fairly constant. However, when the signal has curved regions, it is not possible to represent it well using ZeLi. In Figure 6 (left) we can see that ZeLiC is not able to follow the curvature of the line.

As can also be observed in Figure 6 (right), we can apply the same idea of combining two interpolation methods as in ZeLi, but replacing Linear interpolation with PCHIP interpolation. This new method is called ZeChip and is able to adapt much better to signals that present curved regions. In addition, the new method inherits the advantages of PCHIP: a fast and powerful interpolation method that allows representing regions using curved lines with great precision. One shortcoming of ZeChip with respect to ZeLi is that, because it uses PCHIP interpolation instead of Linear interpolation, it has a higher computational cost; therefore, for the same given points, ZeChip will take more time to generate the interpolated signal than ZeLi.

Figure 6: (Left) ZeLiC is not able to represent curved lines since it is based on Linear interpolation; nevertheless, ZeChip (right) is able to represent curved lines.

4 Experiments

So far, we have discussed our proposed methods from a theoretical perspective, but to have evidence that our contribution can have a meaningful impact, we need to demonstrate that the proposed methods have a better general performance than the state-of-the-art. To this end, we have decided to perform the experiments using a large number of databases. We want to test the performance of our models against other interpolation methods under Lebesgue sampling. Besides that, we want to compare the performance of our methods under Lebesgue sampling with the performance of other interpolation methods under Riemann sampling with a similar number of samples, so we can recommend the best approach when time series need to be sampled.

4.1 Preparation of the experiments

To perform the experiments we followed the methodology explained in Figure 7. First, we downsampled the original time series (using Lebesgue or Riemann sampling), then we reconstructed the original signals from the downsampled data (using Linear, PCHIP, ZOH… interpolation methods) and finally, we compared the original signal with the reconstructed one (using RMSE) to evaluate the performance of the different interpolation methods.

Figure 7: The methodology to calculate the best interpolation method is based on the RMSE between the original signal and the reconstructed one.

There are many metrics in the state-of-the-art to calculate the difference between the original signal and the reconstructed signal. In this research, we applied a very popular metric called root-mean-square error (RMSE). The RMSE has been used in many research works to calculate the efficiency of interpolation techniques [33, 26]. To make the errors comparable, all the signals of all the datasets have been individually normalised between 0 and 1.
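For reference, with $g$ the original (normalised) signal, $f$ the reconstructed one and $n$ the number of points of the signal, the metric takes its usual form:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{k=1}^{n}\bigl(g(x_k) - f(x_k)\bigr)^2}.$$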

In order to conduct the experiments, we applied some of the most popular time series interpolation methods in the state of the art, such as ZOH [11], Linear interpolation [11], PCHIP [20], Shannon [23], Lasso [29], Natural Neighbour [8], Cubic [21], Multiquadric [18], Inverse Multiquadric [10], Gaussian [16], Quintic [12] and Thin-Plate [9]. These functions have been implemented using the Radial Basis Function (RBF) approximation/interpolation in Python, based on the books [13] and [27].

We have applied two different strategies to downsample the signals:

  • Lebesgue sampling: Our implementation of Lebesgue sampling is based on the absolute difference between the sampled values. In other words, the current value of the signal is captured when its difference from the last sampled point exceeds a preset limit.

  • Riemann sampling: The Riemann sampling is performed by using the same (or a slightly higher) average number of points as in Lebesgue sampling, but with a fixed time interval. Riemann sampling always takes the same number of points or more than Lebesgue sampling, because we adapted the threshold of Lebesgue sampling so as not to take a higher percentage of samples than the established limit; normally the percentage is slightly lower.

To perform the experiments, all the signals of all the datasets have been normalised between 0 and 1. The values of the parameters for the developed methods (ZeLi, ZeLiC, ZeChip, and ZeChipC) were: tolerance ratio = 1.15, min distance = 3, and previous distance = 3. As shown in Table 3 of the appendix, depending on the dataset more or fewer samples were selected.

The objective of the first experiment is to evaluate the performance of our proposed methods for interpolating time series from Lebesgue sampling. To this end, we compared our methods with those of the state-of-the-art. In this experiment, we applied Lebesgue sampling based on the difference between the values, with a threshold of 0.05 (note that all the signals had been scaled between 0 and 1; from this perspective, 0.05 means a 5% difference relative to the range of possible values).

The goal of the second experiment is a bit more ambitious than that of the first. We want to demonstrate that the best technique to sample and reconstruct a signal is to use Lebesgue sampling together with our best proposed method, ZeChipC. To this end, we conduct a similar experiment to the first one, but this time with the same number of samples for both Lebesgue sampling and Riemann sampling. We select 15% of the total samples of the signal for both Riemann and Lebesgue sampling. In the case of Lebesgue, we tune the value of the threshold until we select the same number of samples as in Riemann sampling, or slightly fewer (never more).
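A simple way to perform this tuning, sketched below under our own assumptions (the grid of candidate thresholds and the helper names are ours), is to scan candidate thresholds from small to large and keep the first one that respects the budget:

```python
import numpy as np

def count_kept(signal, threshold):
    """Number of points a Lebesgue (send-on-delta) sampler would keep."""
    kept, last = 1, signal[0]
    for value in signal[1:]:
        if abs(value - last) >= threshold:
            kept, last = kept + 1, value
    return kept

def tune_threshold(signal, target_fraction, candidates=np.linspace(0.01, 0.5, 50)):
    """Smallest threshold that keeps at most target_fraction of the points (never more)."""
    n_target = int(target_fraction * len(signal))
    for thr in sorted(candidates):
        if count_kept(signal, thr) <= n_target:
            return thr
    return float(max(candidates))
```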

To carry out the experiments, we used Python 3.6.1 with Anaconda custom (x86_64), on a MacBook running macOS High Sierra with the following features: 2.3 GHz Intel Core i5, 8 GB 2133 MHz RAM, L2 cache: 256 KB, L3 cache: 4 MB.

4.2 Data sets

As described in Table 3 of the appendix, the experiments have been conducted over the 67 databases of a repository called "The UEA and UCR time series classification repository" [6]. Some of the datasets provide separate training and testing sets for classification tasks. Since we are not doing classification, we have simply joined the training and testing sets into a single dataset for each dataset of the repository.

4.3 Experiment I

The results of this experiment for all 16 methods can be found in Table 4 of the appendix. Figure 8 shows an illustrative summary of the performance of the best eight combinations of interpolation methods and sampling strategies. Table 1 shows the ranking position and the average RMSE over the 67 datasets.

Method   | L ZeChipC | L ZeLiC | L ZeChip | L ZeLi | L Zero | L PCHIP | L Linear | L Nearest | L Shannon | L Thin-Plate
Position | 1         | 2       | 3        | 4      | 5      | 6       | 7        | 8         | 9         | 10
Avg RMSE | 0.0029    | 0.0030  | 0.0031   | 0.0034 | 0.0040 | 0.0049  | 0.0055   | 0.0067    | 0.0406    | 0.0554

Table 1: Average RMSE of the top 10 methods in Lebesgue sampling
Figure 8: Boxplot of the RMSE of the top-8 methods for all the 67 datasets, ordered by the median value.

It could happen that a method is strongly penalised in some dataset and that this undermines its average severely. To avoid this, we also calculated the average position, as displayed in Figure 9. We can see that the order based on the median RMSE is the same as the order in terms of the average ranking position.

Figure 9: Boxplot of the rank position of the top-8 methods for all the 67 datasets.

Figure 8 shows the box-plot of the RMSE score of the top-8 interpolation methods. In this plot, the interpolation methods are sorted by the 50th percentile, and from the figure it can be seen that ZeChipC performs best for Lebesgue sampling and produces the smallest errors while reconstructing the signal. Furthermore, it can be observed that its interquartile range is smaller in magnitude compared to the rest of the methods and its whisker is at a lower RMSE value, which establishes that ZeChipC performs very well over the overall spread of the reconstructions from the sampled signal.

Likewise, Figure 9 leads to a similar conclusion, where ZeChipC is the winner in terms of the rank position of the reconstructed signal. The 50th percentile shows that ZeChipC is the clear winner half of the time compared to the rest of the interpolation methods. Similarly, the interquartile range and the whisker establish that overall ZeChipC outperforms the rest of the interpolation methods.

4.4 Experiment II

In this experiment, results are shown in the same way as in the first one. First, Figure 10 shows the performance based on the average RMSE of each combination (sampling and interpolation method) using 15% of the samples for each of the 67 datasets. As shown in Table 3, we adapted the threshold in Lebesgue sampling so as not to collect more than 15% of the samples for each dataset, as can be seen in Table 5 of the appendix. Table 2 shows the ranking position and the average RMSE over the 67 datasets.

Method   | L ZeChipC | L ZeLiC | L ZeChip | L ZeLi | L Zero | L PCHIP | L Linear | R PCHIP | R Quintic | R Cubic | L Nearest | R Thin-Plate
Position | 1         | 2       | 3        | 4      | 5      | 6       | 7        | 8       | 9         | 10      | 11        | 12
Avg RMSE | 0.0053    | 0.0054  | 0.0057   | 0.0061 | 0.0063 | 0.0071  | 0.0077   | 0.0085  | 0.0087    | 0.0091  | 0.0094    | 0.0095

Table 2: Average RMSE of the top 12 methods using Lebesgue and Riemann sampling
Figure 10: RMSE of the top-8 methods for all the 67 datasets.

As in the first experiment, Figure 11 shows the average position of the best 12 methods. We can see that the order of the average ranking positions is similar to that based on the median RMSE value; for example, the order of the first eight combinations is the same.

We can also see that ZeChipC with Lebesgue sampling is the winner in terms of the rank position of the reconstructed signal. The 50th percentile shows that ZeChipC is the clear winner half of the time compared to the rest of the interpolation methods. Similarly, the interquartile range and the whisker establish that overall ZeChipC outperforms the rest of the interpolation methods.

Figure 11: Boxplot of the rank position of the top-10 methods for all the 67 datasets.

4.5 Discussion of the experiments

The interpolation method that offers the best performance in both experiments is ZeChipC. This method implements three ideas that have been presented throughout the paper.

First, it uses ZOH interpolation, which allows ZeChipC to adapt to abrupt changes. This improvement is shared by the other three developed methods (ZeLi, ZeLiC and ZeChip), and it can be clearly appreciated when, in experiment I, we compare the performance of ZeLi against Linear interpolation or ZeChip against PCHIP interpolation. ZOH is the only interpolation technique that guarantees that all the points are in the tolerated region, which allows representing the shape of the signal more accurately. We can see a clear example of this in Figure 12 (left), where PCHIP interpolation goes out of the tolerated region while ZeChip respects it.

Figure 12: (Left) PCHIP interpolates outside the tolerated region and so its performance is low. (Right) This can happen several times for the same signal.

Second, ZeChipC includes a new functionality to adapt to concave and convex regions (see Figure 5). It is interesting to see that this improvement means an increment of 8.06% of ZeChipC with respect to ZeChip (which does not implement the convexity/concavity functionality) in the first experiment and of 7.36% in the second. In the same way, there is an improvement of ZeLiC over ZeLi of 10.88% and 10.89% in the first and second experiments respectively.

The third and last idea consists of implementing PCHIP interpolation instead of Linear interpolation. The increase in performance of this approach can be appreciated when ZeChip is compared against ZeLi, and when ZeChipC is compared against ZeLiC. In the first experiment, ZeChip has an improvement of 8.82% over ZeLi, while in the second experiment it has an improvement of 5.94%. In the same way, ZeChipC has an improvement over ZeLiC of 5.94% in the first and of 2.22% in the second. One question we could ask ourselves is whether the differences in performance are statistically significant. To this end, we performed statistical tests to compare the average performance in both experiments: we compared ZeChip with ZeLi and ZeChipC with ZeLiC.

In addition, it is worth stressing that ZOH interpolation is better than Linear interpolation and even better than PCHIP interpolation; in fact, it is better than any other state-of-the-art interpolation technique when Lebesgue sampling is used. PCHIP is the second best and Linear the third best. This confirms that using ZOH as one of the components of the algorithms ZeChipC and ZeLiC (as well as ZeChip and ZeLi) is a good idea. Regarding the rest of the methods, it seems that PCHIP, Nearest, Linear and ZOH under Lebesgue sampling always remain ahead of the rest of the methods. Our results strengthen the claim that Lebesgue sampling is more accurate than Riemann sampling with the same number of samples.

On the other hand, one could think that ZeChipC is not really better and that it simply obtained a better average position thanks to a few datasets. In favour of ZeChipC we have two arguments: in the position ranking, it has won 53 times out of 67 in the first experiment and 37 in the second; and looking at the average position, it has been the first method in both experiments, with 2.18 in the first and 2.63 in the second.

Figure 13: Average RMSE of the 15 smoothest datasets against the 15 most abrupt ones.

Lastly, time series smoothness is a concept that has been studied in detail in several investigations [7]. One of the most frequent ways of measuring it, and the one applied in our research, is by calculating the standard deviation of the differences between consecutive points (the 1st derivative). The lower the SD, the smoother the time series.
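In code, this measure reduces to a single expression (a sketch; the function name is ours):

```python
import numpy as np

def smoothness(signal):
    """Standard deviation of the first differences; lower values mean a smoother series."""
    return np.std(np.diff(signal))
```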

The differences between the proposed methods and the rest of the methods are enlarged when the databases have a large number of changes. As shown in Figure 13, when the signals of a dataset have abrupt changes, the state-of-the-art methods do not "understand" that the change has occurred between the last sampled point and the previous instant and, as a result, the signal is drawn outside of the tolerated region, as shown in Figure 12.

5 Conclusion

The main reason why the developed methods (ZeLi, ZeLiC, ZeChip and ZeChipC) have better results than the other interpolation methods is that the interpolation is performed taking into account the Lebesgue sampling characteristics. That is to say, when there is an abrupt change ZOH interpolation is applied, otherwise (when there is a smooth change) Linear and PCHIP interpolation are applied. The proposed methods detect that there has been an abrupt change because the newly captured sample is far away from the tolerated region. Additionally, this decision can be optimised depending on the dataset using the tolerance ratio parameter.

On the other hand, the convexity/concavity functionality has performed very well. We can infer that a concave/convex region has occurred when the slope of the signal changes sign. Optimising the three parameters (previous, minimum forward, and maximum forward) that decide whether or not there is a convex/concave region for each dataset, as well as establishing a methodology for doing so, could be a very interesting research path to follow. Additionally, accurately locating the exact point where the slope of the signal changes and approximating its shape is a complex and broad issue, although the implementation applied here performed very well and boosted the performance of both methods, ZeLiC and ZeChipC.
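
The sign-change intuition can be illustrated with the minimal sketch below; the previous, minimum forward and maximum forward parameters of the actual methods are not reproduced here.

    import numpy as np

    def slope_sign_changes(y_samples):
        """Indices of sampled points where the slope changes sign (candidate concave/convex regions)."""
        signs = np.sign(np.diff(y_samples))
        return np.where(signs[:-1] * signs[1:] < 0)[0] + 1

    print(slope_sign_changes(np.array([0.0, 0.5, 0.9, 0.7, 0.2])))  # -> [2]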

The developed methods have been implemented based on the absolute difference with respect to the last sampled point. However, they are easily adaptable to other kinds of events that trigger sensors. For example, the sensor could be triggered when the output signal crosses a certain limit or when the percentage variation exceeds a preset limit. The same approach, Linear or PCHIP interpolation for smooth transitions and ZOH for abrupt changes, will still be effective, as illustrated in the sketch below.
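
The following sketch shows what such alternative triggers might look like; the function names and thresholds are illustrative and not part of the proposed methods.

    def send_on_delta(last_sampled, current, delta):
        """Trigger when the absolute change since the last sampled point exceeds delta."""
        return abs(current - last_sampled) > delta

    def limit_crossing(previous, current, limit):
        """Trigger when the signal crosses a preset limit between two instants."""
        return (previous - limit) * (current - limit) < 0

    def percentage_variation(last_sampled, current, pct):
        """Trigger when the relative change since the last sampled point exceeds pct."""
        return last_sampled != 0 and abs(current - last_sampled) / abs(last_sampled) > pct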

From the assumptions and contributions of this research, new and more effective interpolation methods could be designed. Lebesgue sampling is well known in academia, but reliable, well-adapted tools are needed to encourage industry to make the transition to Lebesgue sampling.

Acknowledgements. This publication has emanated from research conducted with the support of Enterprise Ireland (EI), under Grant Number IP20160496 and TC20130013.

References

  • Andrén et al. [2017] Andrén, M. T., Bernhardsson, B., Cervin, A., and Soltesz, K. (2017). On event-based sampling for lqg-optimal control. In Decision and Control (CDC), 2017 IEEE 56th Annual Conference on, pages 5438–5444. IEEE.
  • Åström and Bernhardsson [1999] Åström, K. J. and Bernhardsson, B. (1999). Comparison of periodic and event based sampling for first-order stochastic systems. IFAC Proceedings Volumes, 32(2):5006–5011.
  • Åström and Bernhardsson [2003] Åström, K. J. and Bernhardsson, B. (2003). Systems with lebesgue sampling. In Directions in mathematical systems theory and optimization, pages 1–13. Springer.
  • Astrom and Bernhardsson [2002] Astrom, K. J. and Bernhardsson, B. M. (2002). Comparison of riemann and lebesgue sampling for first order stochastic systems. In Decision and Control, 2002, Proceedings of the 41st IEEE Conference on, volume 2, pages 2011–2016. IEEE.
  • Atkinson and Han [2005] Atkinson, K. and Han, W. (2005). Theoretical numerical analysis, volume 39. Springer.
  • Bagnall et al. [2018] Bagnall, A., Lines, J., Vickers, W., and Keogh, E. (2018). The uea and ucr time series classification repository. http://timeseriesclassification.com.
  • Barnes [2003] Barnes, R. (2003). Variogram tutorial. Golden, CO: Golden Software. Available online at http://www.goldensoftware.com/variogramTutorial.pdf.
  • Boissonnat and Cazals [2002] Boissonnat, J.-D. and Cazals, F. (2002). Smooth surface reconstruction via natural neighbour interpolation of distance functions. Computational Geometry, 22(1-3):185–203.
  • Bookstein [1989] Bookstein, F. L. (1989). Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on pattern analysis and machine intelligence, 11(6):567–585.
  • Buhmann and Micchelli [1992] Buhmann, M. D. and Micchelli, C. A. (1992). Multiquadric interpolation improved. Computers & Mathematics with Applications, 24(12):21–25.
  • De Boor [1978] De Boor, C. (1978). A practical guide to splines, volume 27. Springer-Verlag New York.
  • Erkorkmaz and Altintas [2005] Erkorkmaz, K. and Altintas, Y. (2005). Quintic spline interpolation with minimal feed fluctuation. Journal of Manufacturing Science and Engineering, 127(2):339–349.
  • Fasshauer [2007] Fasshauer, G. E. (2007). Meshfree approximation methods with MATLAB, volume 6. World Scientific.
  • Fu [2011] Fu, T.-c. (2011). A review on time series data mining. Engineering Applications of Artificial Intelligence, 24(1):164–181.
  • Hamilton [1994] Hamilton, J. D. (1994). Time series analysis, volume 2. Princeton university press Princeton, NJ.
  • Harville [1974] Harville, D. A. (1974). Bayesian inference for variance components using only error contrasts. Biometrika, 61(2):383–385.
  • Heemels et al. [2012] Heemels, W., Johansson, K. H., and Tabuada, P. (2012). An introduction to event-triggered and self-triggered control. In Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, pages 3270–3285. IEEE.
  • Hon and Mao [1997] Hon, Y. and Mao, X. (1997). A multiquadric interpolation method for solving initial value problems. Journal of Scientific Computing, 12(1):51–55.
  • Imer and Basar [2005] Imer, O. C. and Basar, T. (2005). Optimal estimation with limited measurements. In Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC’05. 44th IEEE Conference on, pages 1029–1034. IEEE.
  • Kahaner et al. [1989] Kahaner, D., Moler, C., and Nash, S. (1989). Numerical methods and software. Englewood Cliffs: Prentice Hall.
  • Keys [1981] Keys, R. (1981). Cubic convolution interpolation for digital image processing. IEEE transactions on acoustics, speech, and signal processing, 29(6):1153–1160.
  • Lepot et al. [2017] Lepot, M., Aubin, J.-B., and Clemens, F. H. (2017). Interpolation in time series: An introductive overview of existing methods, their performance criteria and uncertainty assessment. Water, 9(10):796.
  • Marks [2012] Marks, R. J. I. (2012). Introduction to Shannon sampling and interpolation theory. Springer Science & Business Media.
  • Meng and Chen [2012] Meng, X. and Chen, T. (2012). Optimal sampling and performance comparison of periodic and event based impulse control. IEEE Transactions on Automatic Control, 57(12):3252–3259.
  • Miskowicz [2006] Miskowicz, M. (2006). Send-on-delta concept: an event-based data reporting strategy. sensors, 6(1):49–63.
  • Mühlenstädt and Kuhnt [2011] Mühlenstädt, T. and Kuhnt, S. (2011). Kernel interpolation. Computational Statistics & Data Analysis, 55(11):2962–2974.
  • Schimek [2013] Schimek, M. G. (2013). Smoothing and regression: approaches, computation, and application. John Wiley & Sons.
  • Schoenberg [1988] Schoenberg, I. J. (1988). Contributions to the problem of approximation of equidistant data by analytic functions. In IJ Schoenberg Selected Papers, pages 3–57. Springer.
  • Tibshirani [2011] Tibshirani, R. (2011). Regression shrinkage and selection via the lasso: a retrospective. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(3):273–282.
  • Wang and Fu [2014] Wang, B. and Fu, M. (2014). Comparison of periodic and event-based sampling for linear state estimation. IFAC Proceedings Volumes, 47(3):5508–5513.
  • Yan et al. [2016] Yan, W., Zhang, B., Wang, X., Dou, W., and Wang, J. (2016). Lebesgue-sampling-based diagnosis and prognosis for lithium-ion batteries. IEEE Trans. Industrial Electronics, 63(3):1804–1812.
  • Zhang et al. [2016] Zhang, X.-M., Han, Q.-L., and Yu, X. (2016). Survey on recent advances in networked control systems. IEEE Transactions on Industrial Informatics, 12(5):1740–1752.
  • Žukovič and Hristopulos [2008] Žukovič, M. and Hristopulos, D. (2008). Environmental time series interpolation based on spartan random processes. Atmospheric Environment, 42(33):7669–7678.

Appendix

The appendix reports the performance of each method on every dataset. Table 3 shows the following fields for each dataset: the number of rows and columns, the metric measuring how abrupt the dataset is, the threshold and resulting percentage of sampled points for the first experiment, and the threshold used to capture approximately 15% of the points in the second experiment, together with the resulting percentage.

N Data set Rows Columns Abrupt Exp I: Thres Exp I: Perc Exp II: Thres Exp II: Perc
1 Adiac 779 177 11.12 0.05 9.81 0.0305 14.93
2 ArrowHead 209 252 12.71 0.05 22.53 0.0794 14.97
3 Beef 58 471 14.46 0.05 8.8 0.0287 14.99
4 BeetleFly 38 513 149.5 0.05 31.21 0.1041 15
5 BirdChicken 38 513 135.95 0.05 14.52 0.0483 14.94
6 CBF 928 129 14.31 0.05 72.79 0.2205 14.77
7 Car 118 578 24.06 0.05 9.57 0.0312 14.95
8 ChlorineConcentration 4305 167 102.34 0.05 51.58 0.2256 14.94
9 Coffee 54 287 11.04 0.05 19.9 0.0664 14.98
10 Computers 498 721 303.2 0.05 13.77 0.0498 14.45
11 DiatomSizeReduction 320 346 27.74 0.05 15.38 0.0511 15
12 DistalPhalanxOutlineAgeGroup 537 81 7.99 0.05 63.06 0.2427 14.98
13 DistalPhalanxOutlineCorrect 874 81 15.28 0.05 67.61 0.2559 14.85
14 DistalPhalanxTW 537 81 7.08 0.05 19.01 0.0631 14.98
15 ECG200 198 97 50.06 0.05 36.25 0.1384 14.95
16 ECG5000 4998 141 42.17 0.05 20.23 0.0744 14.97
17 ECGFiveDays 882 137 30.57 0.05 15.16 0.0513 14.98
18 Earthquakes 459 513 656.47 0.05 28.9 0.5675 15
19 ElectricDevices 16635 97 37.83 0.05 9.2 0 11.67
20 FaceAll 2248 132 12.39 0.05 61.3 0.238 14.99
21 FaceFour 110 351 178.73 0.05 24.61 0.1003 14.98
22 FacesUCR 2248 132 13.84 0.05 43.34 0.1597 14.99
23 Fish 348 464 33.92 0.05 9.7 0.0323 14.96
24 FordA 4919 501 352.36 0.05 49.95 0.172 14.98
25 FordB 4444 501 449.73 0.05 49.24 0.1644 14.97
26 Ham 212 432 34.3 0.05 24.25 0.0852 14.99
27 HandOutlines 1368 2710 39.08 0.05 3.1 0.0102 14.85
28 Haptics 461 1093 20.63 0.05 3.4 0.0089 14.92
29 Herring 126 513 1688.73 0.05 14.17 0.0472 14.94
30 InlineSkate 648 1883 44.37 0.05 1.85 0.0067 14.7
31 InsectWingbeatSound 2198 257 15.12 0.05 13.81 0.0448 14.98
32 ItalyPowerDemand 1094 25 7.03 0.05 67.16 0.6175 14.95
33 LargeKitchenAppliances 748 721 107.06 0.05 3.98 0.0011 13.78
34 Mallat 2398 1025 25.6 0.05 7.65 0.0255 14.93
35 Meat 118 449 12.13 0.05 7.13 0.0225 14.97
36 MedicalImages 1139 100 8.26 0.05 16.37 0.0564 14.95
37 MiddlePhalanxOutlineAgeGroup 552 81 7.5 0.05 56.63 0.207 14.98
38 MiddlePhalanxOutlineCorrect 889 81 10.75 0.05 61.37 0.2272 14.98
39 MiddlePhalanxTW 551 81 6.86 0.05 20.57 0.0705 14.98
40 MoteStrain 1270 85 43.66 0.05 31.24 0.1458 14.98
41 OSULeaf 440 428 56.13 0.05 19.87 0.0667 14.92
42 OliveOil 58 571 19.73 0.05 10.76 0.0334 14.98
43 PhalangesOutlinesCorrect 2656 81 11.86 0.05 67.61 0.2559 14.85
44 Phoneme 2108 1025 50.35 0.05 30.43 0.1072 15
45 Plane 208 145 20.24 0.05 35.43 0.1227 14.99
46 ProximalPhalanxOutlineAgeGroup 603 81 7.61 0.05 58.35 0.2245 14.98
47 ProximalPhalanxOutlineCorrect 889 81 9.62 0.05 61.92 0.2511 14.94
48 ProximalPhalanxTW 603 81 6.9 0.05 27.81 0.1045 14.94
49 RefrigerationDevices 748 721 342.14 0.05 24.9 0.1111 11.2
50 ScreenType 748 721 107.37 0.05 18.69 0.0555 13.95
51 ShapeletSim 198 501 788.94 0.05 89.69 0.5078 14.67
52 ShapesAll 1198 513 22.77 0.05 11.24 0.0367 14.98
53 SmallKitchenAppliances 748 721 181.97 0.05 2.77 0 3.61
54 StarlightCurves 9234 1025 26.36 0.05 3.7 0.0116 14.96
55 Strawberry 981 236 10.78 0.05 18.62 0.0636 14.96
56 SwedishLeaf 1123 129 19.7 0.05 22.68 0.0769 14.97
57 Symbols 1018 399 50.19 0.05 7.75 0.0241 14.97
58 ToeSegmentation1 266 278 164.52 0.05 26.67 0.0945 14.98
59 ToeSegmentation2 164 344 164.01 0.05 20.95 0.07 14.95
60 Trace 198 276 37.55 0.05 5.12 0.0161 14.88
61 TwoLeadECG 1160 83 11.83 0.05 30.79 0.1219 14.97
62 UWaveGestureLibraryAll 4476 946 53.56 0.05 8.27 0.0275 14.97
63 Wafer 7162 153 297.74 0.05 8.46 0.0116 14.6
64 Wine 109 235 9.52 0.05 19.29 0.0628 14.96
65 Worms 256 901 72.27 0.05 13.17 0.0436 14.95
66 WormsTwoClass 256 901 79.41 0.05 13.17 0.0436 14.95
67 Yoga 3298 427 1217.33 0.05 15.55 0.0519 14.95

Table 3: Information about the datasets and the experiments.

max width= N ZeChipC ZeLiC ZeChip ZeLi Zero PCHIP Linear Nrst Shannon T-P Lasso Cubic Quintic Inv-mlt Mltqdc Gaussian 1 0.0159 0.018 0.0192 0.0226 0.0261 0.0193 0.0228 0.0268 0.2397 0.2737 0.1613 0.7989 8.433 0.7879 1.078 1.165 2 0.0148 0.0153 0.0162 0.018 0.0249 0.0169 0.0206 0.0281 0.21 0.0629 0.6052 0.0963 0.0957 0.1292 0.1119 0.4174 3 0.0209 0.0218 0.0242 0.025 0.0247 0.0258 0.0277 0.0333 0.2555 0.3284 0.4262 0.9974 2.568 0.5583 2.052 74442.7 4 0.0135 0.0146 0.0141 0.0156 0.0251 0.0139 0.0171 0.0241 0.1583 0.0196 0.5401 0.0212 0.0244 0.0328 0.0208 0.1151 5 0.0137 0.0141 0.0154 0.0171 0.0258 0.0152 0.0179 0.0246 0.1714 0.0243 0.5603 0.0384 0.0486 0.0571 0.0385 0.189 6 0.0008 0.0008 0.0008 0.0008 0.0006 0.0018 0.0017 0.0016 0.0066 0.0044 0.0209 0.0049 0.0064 0.0045 0.005 0.006 7 0.0041 0.0044 0.0047 0.0055 0.0085 0.0047 0.0055 0.0077 0.0405 0.0096 0.1527 0.0171 0.0223 0.0277 0.0243 0.0593 8 0.0002 0.0002 0.0002 0.0002 0.0002 0.0007 0.0007 0.0008 0.0018 0.0012 0.0025 0.0016 0.0039 0.0011 0.0015 0.0015 9 0.0127 0.0136 0.0141 0.0154 0.0173 0.0155 0.018 0.0223 0.143 0.0215 0.3963 0.0382 0.1034 0.0372 0.0392 2.14 10 0.0012 0.0012 0.0012 0.0012 0.001 0.0147 0.0171 0.0208 0.0456 0.3657 0.0222 1.906 683.2 3125.2 13844.8 1763246.9 11 0.0013 0.0016 0.0016 0.002 0.0032 0.0016 0.0021 0.0028 0.0138 0.0029 0.0495 0.0043 0.0041 0.0072 0.0053 0.0102 12 0.0012 0.0013 0.0012 0.0013 0.0013 0.0014 0.0018 0.002 0.0098 0.0032 0.035 0.0035 0.0036 0.0056 0.0032 0.0148 13 0.0007 0.0008 0.0007 0.0008 0.0007 0.0008 0.001 0.0012 0.0064 0.002 0.0248 0.0021 0.0022 0.0033 0.002 0.0092 14 0.0011 0.0012 0.0011 0.0012 0.0013 0.0012 0.0016 0.0019 0.0179 0.0133 0.0139 0.0477 0.4118 0.0064 0.0104 0.0137 15 0.0041 0.0042 0.0042 0.0042 0.0041 0.0058 0.0058 0.0068 0.0496 0.0096 0.091 0.0126 0.0297 0.0142 0.0136 0.0736 16 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002 0.0003 0.0003 0.002 0.0004 0.0052 0.0009 0.0033 0.0011 0.0031 0.1478 17 0.0007 0.0007 0.0007 0.0007 0.001 0.0019 0.0021 0.0025 0.0112 0.0097 0.0297 0.0273 0.3053 0.0875 0.4517 38.8 18 0.0003 0.0003 0.0003 0.0003 0.0003 0.026 0.0246 0.0296 0.0377 0.0488 0.0208 0.1096 1.14 0.6394 1.821 47.9 19 0 0 0 0 0 0.0006 0.0006 0.0007 0.0011 0.0045 0.0003 0.0166 1.112 0.0778 0.3465 1.149 20 0.0003 0.0004 0.0003 0.0004 0.0003 0.0004 0.0005 0.0005 0.0043 0.0008 0.0089 0.0008 0.001 0.0009 0.0008 0.0023 21 0.0083 0.0083 0.0083 0.0084 0.0068 0.0173 0.0181 0.0193 0.0891 0.0272 0.1725 0.0422 0.1634 0.0349 0.0506 0.2587 22 0.0004 0.0004 0.0004 0.0004 0.0003 0.0005 0.0006 0.0006 0.0033 0.0013 0.0059 0.0018 0.0041 0.0015 0.0018 0.003 23 0.0016 0.0017 0.0018 0.002 0.0029 0.0018 0.002 0.0027 0.0127 0.0049 0.0393 0.0081 0.0084 0.0119 0.0102 0.0187 24 0.0001 0.0002 0.0001 0.0002 0.0002 0.0002 0.0002 0.0003 0.002 0.0003 0.0041 0.0003 0.0003 0.0004 0.0003 0.0014 25 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002 0.0002 0.0003 0.0021 0.0003 0.0046 0.0003 0.0003 0.0005 0.0003 0.0015 26 0.0038 0.0042 0.0039 0.0044 0.0041 0.0049 0.006 0.007 0.0549 0.0326 0.0509 0.065 0.0666 0.0743 0.4342 618.3 27 0.0002 0.0002 0.0003 0.0004 0.0008 0.0003 0.0004 0.0006 0.0027 0.0133 0.0178 0.0305 0.0288 0.0242 0.033 0.0273 28 0.0015 0.0016 0.0019 0.0021 0.0021 0.002 0.0022 0.0025 0.0159 0.0211 0.0575 0.0406 1.867 531.1 5193.4 19604 29 0.0039 0.0041 0.0045 0.0052 0.0079 0.0046 0.0052 0.0072 0.0467 0.0053 0.1642 0.0063 0.0071 0.0112 0.0068 0.0406 30 0.0011 0.0012 0.0012 0.0014 0.0015 0.0012 0.0014 0.0017 0.0152 0.0563 0.0196 0.2342 4.325 0.5209 1.464 1745.2 31 0.0004 0.0004 0.0004 0.0005 0.0004 0.0005 0.0006 0.0007 0.0056 
0.008 0.0031 0.0216 0.2183 0.0103 0.0423 418 32 0.0006 0.0006 0.0006 0.0006 0.0005 0.0015 0.0016 0.0017 0.0054 0.0033 0.0208 0.0036 0.005 0.0034 0.0035 0.0057 33 0.0003 0.0004 0.0003 0.0004 0.0005 0.0072 0.0093 0.0113 0.0222 0.1143 0.0058 0.7677 155.5 9904 25632 2343150.8 34 0.0003 0.0003 0.0003 0.0004 0.0004 0.0004 0.0004 0.0005 0.0033 0.0061 0.005 0.0123 0.0168 0.0723 0.2349 139.5 35 0.0073 0.0085 0.0073 0.0085 0.0069 0.0075 0.0089 0.0109 0.1179 0.9908 0.1191 1.964 1.358 0.1303 0.5857 4794.5 36 0.0006 0.0006 0.0006 0.0006 0.0007 0.0007 0.0007 0.0008 0.011 0.024 0.0066 0.1826 30.6 0.023 0.1037 5.957 37 0.0012 0.0012 0.0012 0.0013 0.0014 0.0014 0.0017 0.0021 0.012 0.0027 0.0346 0.0031 0.0032 0.0051 0.0029 0.0142 38 0.0007 0.0008 0.0007 0.0008 0.0008 0.0009 0.0011 0.0013 0.008 0.0017 0.0246 0.0019 0.002 0.0035 0.0017 0.0103 39 0.0011 0.0013 0.0011 0.0013 0.0013 0.0012 0.0015 0.0019 0.0167 0.0121 0.0142 0.0422 0.3532 0.0062 0.0097 0.0132 40 0.0005 0.0005 0.0006 0.0006 0.0007 0.0025 0.0025 0.0029 0.0076 0.0056 0.0181 0.0107 0.039 0.0092 0.0162 0.0226 41 0.0014 0.0014 0.0015 0.0016 0.0022 0.0015 0.0017 0.0022 0.0133 0.0032 0.0378 0.0048 0.0053 0.0047 0.0052 0.0115 42 0.0127 0.0135 0.0133 0.0149 0.016 0.0144 0.0169 0.0205 0.2363 0.1952 0.1931 0.6276 1.673 2.832 39.4 1286461.6 43 0.0002 0.0003 0.0002 0.0003 0.0002 0.0003 0.0003 0.0004 0.0021 0.0007 0.0082 0.0007 0.0007 0.0011 0.0007 0.003 44 0.0004 0.0004 0.0004 0.0004 0.0004 0.0005 0.0006 0.0007 0.004 0.001 0.0097 0.0032 0.2783 0.0017 0.0012 812.3 45 0.0031 0.0034 0.0032 0.0034 0.0045 0.0035 0.0041 0.0055 0.0256 0.0044 0.0604 0.0044 0.0047 0.0057 0.0045 0.0112 46 0.0011 0.0012 0.0011 0.0012 0.0012 0.0013 0.0017 0.002 0.0155 0.0023 0.0308 0.0025 0.0026 0.0048 0.0023 0.0135 47 0.0007 0.0007 0.0007 0.0007 0.0008 0.0008 0.0011 0.0013 0.0108 0.0016 0.0235 0.0018 0.002 0.0033 0.0016 0.0098 48 0.001 0.0011 0.001 0.0012 0.0012 0.0011 0.0014 0.0017 0.0149 0.008 0.0151 0.0271 0.2449 0.0043 0.0051 0.0084 49 0.0008 0.0008 0.0008 0.0008 0.0007 0.0104 0.0115 0.0137 0.0255 0.0261 0.0239 0.0754 0.5158 0.0354 0.1558 75.8 50 0.0005 0.0005 0.0005 0.0005 0.0003 0.0069 0.0094 0.0114 0.0293 0.3088 0.0201 1.03 87.1 105.7 4119.2 6154367.6 51 0.0019 0.0019 0.0019 0.0019 0.0019 0.0131 0.0128 0.007 0.0721 0.049 0.1102 0.0508 0.0552 0.0497 0.0512 0.0516 52 0.0004 0.0004 0.0005 0.0006 0.0008 0.0005 0.0006 0.0008 0.0067 0.0011 0.0167 0.0026 0.0075 0.0027 0.0023 0.0072 53 0.0005 0.0005 0.0005 0.0005 0.0005 0.0149 0.0152 0.0185 0.0233 0.1386 0.0039 1.076 723.2 8955.2 10166.5 661524.3 54 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0008 0.0023 0.0018 0.0057 0.0081 0.003 0.0051 0.0215 55 0.0008 0.0009 0.0008 0.0009 0.001 0.0009 0.001 0.0012 0.0096 0.0055 0.016 0.0092 0.0085 0.0062 0.009 0.0333 56 0.0005 0.0005 0.0005 0.0006 0.0008 0.0006 0.0007 0.0009 0.0051 0.0015 0.0082 0.0024 0.0052 0.003 0.0038 0.0058 57 0.0004 0.0005 0.0005 0.0006 0.001 0.0005 0.0006 0.0009 0.0057 0.0087 0.0162 0.0177 0.0365 0.037 0.0637 0.2087 58 0.0031 0.0031 0.0031 0.0031 0.0032 0.0039 0.0042 0.0052 0.0352 0.0076 0.0632 0.0112 0.0412 0.0086 0.0162 0.2704 59 0.0048 0.0049 0.0049 0.0051 0.0053 0.0065 0.0068 0.0084 0.0604 0.0353 0.0978 0.0668 0.1487 0.0207 0.0476 3.394 60 0.0024 0.0028 0.0025 0.0028 0.0036 0.0036 0.0046 0.0058 0.0524 0.3275 0.078 1.29 10.1 6.28 22.8 1832.4 61 0.0005 0.0005 0.0006 0.0006 0.0007 0.0008 0.001 0.0012 0.0075 0.0032 0.0221 0.0064 0.0114 0.0031 0.0044 0.0101 62 0.0002 0.0002 0.0002 0.0002 0.0002 0.0003 0.0003 0.0004 0.0017 0.0046 0.003 0.0101 0.0132 0.0053 0.0101 
305.8 63 0.0001 0.0001 0.0001 0.0001 0.0001 0.0016 0.0016 0.0019 0.0026 0.0025 0.003 0.0052 0.0858 0.0108 0.0213 0.2354 64 0.0057 0.0061 0.0065 0.0071 0.0087 0.0068 0.0081 0.0104 0.1103 0.0088 0.1489 0.0114 0.0472 0.0198 0.0156 0.1706 65 0.0029 0.0029 0.003 0.0031 0.0035 0.0034 0.0037 0.0047 0.0322 0.0121 0.0794 0.0199 0.4521 0.0311 0.0813 10.2 66 0.0029 0.0029 0.003 0.0031 0.0035 0.0034 0.0037 0.0047 0.0322 0.0121 0.0794 0.0199 0.4521 0.0311 0.0813 10.2 67 0.0001 0.0002 0.0002 0.0002 0.0003 0.0002 0.0002 0.0003 0.0019 0.0007 0.0065 0.0012 0.0012 0.0009 0.001 0.0021 Avg 0.0029 0.003 0.0031 0.0034 0.004 0.0049 0.0055 0.0067 0.0406 0.0554 0.0766 0.1786 25.62 337.82 881.01 183786.02

Table 4: Experiment I: RMSE per data set of the top 12 methods using Lebesgue sampling with 0.05 Threshold.

max width= N L ZeChipC L ZeLiC L ZeChip L ZeLi L Zero L PCHIP L Linear R PCHIP R Quintic R Cubic L Nrst R TP R Mlt-qdc R Lin R Inv-mult R Gauss 1 0.0082 0.0096 0.0097 0.012 0.0159 0.0097 0.0123 0.0629 0.0607 0.069 0.0156 0.0745 0.0748 0.078 0.0833 0.0892 2 0.0223 0.0241 0.0278 0.0322 0.0402 0.0285 0.0356 0.0564 0.0551 0.0625 0.0454 0.0675 0.0669 0.0701 0.0729 0.0772 3 0.0102 0.0107 0.0119 0.0127 0.0141 0.0128 0.0144 0.0314 0.0317 0.0355 0.0179 0.0383 0.0377 0.0395 0.0418 0.0448 4 0.0269 0.0274 0.0339 0.0397 0.0541 0.0337 0.0411 0.0172 0.0189 0.0191 0.0508 0.0194 0.0194 0.0186 0.0199 0.0213 5 0.013 0.0133 0.0144 0.0158 0.0249 0.0144 0.017 0.0119 0.0128 0.0136 0.0235 0.0143 0.0142 0.0136 0.0153 0.0175 6 0.0049 0.0047 0.0048 0.0046 0.0042 0.0075 0.0068 0.0061 0.0063 0.0064 0.0084 0.0064 0.0065 0.0063 0.0066 0.0068 7 0.0027 0.0028 0.0029 0.0032 0.0052 0.003 0.0033 0.004 0.0041 0.0045 0.0048 0.0048 0.0048 0.0049 0.0054 0.0061 8 0.001 0.001 0.001 0.001 0.001 0.0017 0.0017 0.0012 0.0014 0.0013 0.0019 0.0013 0.0013 0.0012 0.0013 0.0013 9 0.0177 0.0189 0.0202 0.0221 0.0231 0.0224 0.0253 0.0159 0.0193 0.0188 0.0301 0.0187 0.0199 0.0181 0.0204 0.0227 10 0.0011 0.0011 0.0011 0.0011 0.0007 0.0142 0.0161 0.0071 0.0075 0.0074 0.0196 0.0073 0.0075 0.007 0.0074 0.0075 11 0.0013 0.0016 0.0017 0.0021 0.0032 0.0016 0.0021 0.0018 0.0019 0.0021 0.0029 0.0022 0.0022 0.0022 0.0026 0.0031 12 0.0065 0.0066 0.0072 0.0077 0.008 0.0077 0.0086 0.0095 0.0084 0.0094 0.01 0.01 0.0099 0.0108 0.0107 0.0113 13 0.0046 0.005 0.0053 0.0059 0.0054 0.0058 0.0065 0.0049 0.0043 0.0045 0.0071 0.0047 0.0046 0.0053 0.0049 0.0051 14 0.0017 0.002 0.0018 0.002 0.0019 0.0019 0.0023 0.0086 0.0077 0.0091 0.0026 0.0098 0.0098 0.0105 0.0109 0.0115 15 0.0117 0.0112 0.0118 0.0117 0.0121 0.013 0.0132 0.0184 0.0187 0.0189 0.016 0.019 0.0191 0.0185 0.0189 0.0186 16 0.0002 0.0002 0.0002 0.0002 0.0003 0.0003 0.0004 0.0005 0.0005 0.0006 0.0005 0.0006 0.0006 0.0007 0.0007 0.0007 17 0.0007 0.0007 0.0007 0.0007 0.001 0.002 0.0022 0.0035 0.0037 0.0037 0.0026 0.0037 0.0037 0.0035 0.0037 0.0039 18 0.0251 0.0242 0.0252 0.0242 0.0116 0.0344 0.0312 0.0217 0.0229 0.0226 0.0374 0.0222 0.0227 0.0211 0.0224 0.0227 19 0 0 0 0 0 0 0 0.0003 0.0003 0.0003 0.0001 0.0003 0.0003 0.0003 0.0003 0.0003 20 0.0018 0.0018 0.0019 0.0019 0.0018 0.0021 0.0021 0.0029 0.0031 0.003 0.0024 0.003 0.0031 0.0029 0.003 0.0031 21 0.0167 0.0161 0.0171 0.0168 0.0151 0.0252 0.0269 0.021 0.0211 0.021 0.031 0.0212 0.0209 0.0216 0.0211 0.0213 22 0.0013 0.0013 0.0014 0.0014 0.0012 0.0017 0.0016 0.0023 0.0024 0.0024 0.0019 0.0024 0.0025 0.0024 0.0026 0.0027 23 0.0009 0.0009 0.001 0.0011 0.0019 0.001 0.0012 0.002 0.002 0.0023 0.0017 0.0024 0.0024 0.0025 0.0028 0.0031 24 0.0006 0.0006 0.0007 0.0007 0.0006 0.0008 0.0008 0.0005 0.0005 0.0005 0.0009 0.0005 0.0005 0.0006 0.0005 0.0005 25 0.0006 0.0007 0.0007 0.0008 0.0007 0.0008 0.0009 0.0005 0.0005 0.0005 0.001 0.0005 0.0005 0.0006