Sensor networks are increasingly used in civil engineering applications. The data from these sensor networks are used to monitor and detect changes in the behaviour of instrumented physical structures (Cawley, 2018). In this study we are concerned with monitoring and detecting changes in data from a sensor network installed on a pedestrian bridge. The sensor network consists of two types of sensor, accelerometers and strain gauges, that measure the vertical deflection of the bridge. Both types of sensor record measurements at a high frequency. These measurements are recorded indefinitely, which inevitably poses computational storage and runtime issues (Bao et al., 2011). Data compression has therefore been increasingly utilised in the literature on instrumented infrastructure (Khoa et al., 2014; Bao et al., 2019; Bose et al., 2016). Despite the computational challenges associated with the sensor data acquisition, it is of interest to study the observed measurements during pedestrian-events, such as a person walking over the bridge, so that we can monitor how the response of the structure to this particular excitation changes over time.
We seek to develop a streaming method that is able to summarise the data corresponding to these pedestrian-events, whilst representing the data obtained from the sensors in a compressed form. The compressed version of the data will therefore retain features of the original data that the user prescribes as relevant. This is challenging for a number of reasons. First, constructing a sequential algorithm (for data compression) in the streaming data regime that can update at data-acquisition rate is difficult (Lau et al., 2018); for possibly indefinite data streams this means that such analyses need to be incrementally updated rather than re-computed every time new data is observed. Second, determining which features in the original data are important, so that they are retained in the compressed version, requires expert knowledge. Embodying this expert knowledge so that relevant parts of the data, with respect to the user's prescription, are preserved in the compressed version is not straightforward.
In this paper we propose a novel streaming method that summarises data in a compressed form, whilst preserving data relevant to pedestrian-events. The developed method is based on segmenting the time-series, that is, breaking the time-series up into parts of varying length. The segmentation (time) points in our method are determined by a relevance score (Moniz et al., 2016; Torgo and Ribeiro, 2007). This relevance score quantifies the importance of each data point in the time-series. We use two types of relevance score: one that is based only on the data, and another that uses a query shape. Using a query shape allows particular features in the data to be preserved in the compressed version of the data. The segmentation points can then be used to compress the time-series, e.g., by using a piecewise linear function between the segmentation points.
Segmentation is a commonly used method to compress time-series (Keogh et al., 2004; Fu, 2011). Typically, the segmentation points are computed using dynamic programming which minimises an error between the original series and the compressed version (Terzi and Tsaparas, 2006). In our method, we develop a segmentation method for streaming applications – where segmentation is done as the time-series is observed in real-time. The proposed method uses optimal transport (Villani, 2008)
and linear programming to compute the segmentation points. Moreover, the notion of finding patterns and features in time-series, as is done here using the relevance score, has received much attention (Cassisi et al., 2012; Keogh et al., 2004; Keogh, 1997). Our proposed method forms a bridge between this notion and data compression by finding a segmentation of a time-series that is probabilistically optimal with respect to representing these features. This allows us to compress the sensor data from the aforementioned pedestrian bridge, whilst retaining relevant data corresponding to pedestrian-events.
The remainder of this paper is organised as follows. Section 2 provides details of the sensor network installed on the pedestrian footbridge that we study, the accelerometers and strain gauges, and the data obtained from them. In Section 3, we introduce relevance scores that are used to weight each data point according to its importance. Sections 4 and 5 introduce the segmentation methodology using the relevance score designed for the sensor data. Section 6 reports a simulation study designed to gauge the performance of the method; the methodology is then applied to the sensor network data.
2 Strain and accelerometer data for instrumented infrastructure
The monitoring of civil infrastructure has typically been performed over finite periods of time. Engineering issues such as fatigue, corrosion and the general degradation of concrete and other materials can require long-term studies. Due to storage limitations, health monitoring periods tend to be significantly shorter than the lifetime of the structure. However, certain projects require more continuous monitoring to help ensure safety and to study new materials in the environment. To prepare for this form of continuous monitoring and to provide a testbed for the development of new algorithms, we outfitted an existing indoor pedestrian footbridge with a variety of sensors, including accelerometers and strain gauges. The bridge serves as a walkway over a machine shop in a building in San Francisco (Figure 1) and has a span of 55’ and a width of 88.75”. The design is a common steel truss bridge. To monitor strain, we installed foil strain gauges at the midpoint of the primary structural element of the bridge, as well as at half the distance between the middle and the ends, as seen in Figure 3. The strain gauges were set up to monitor the bending of the primary structural elements of the bridge in a standard Wheatstone bridge configuration, using two extra gauges for temperature compensation. To keep costs low, custom hardware was designed for the 24-bit analog-to-digital converter (ADC) as a cape for a Raspberry Pi single-board computer, which provided the primary interface to the ADC. The core ADC chip used was the Texas Instruments ADS1231, which is capable of supporting 80 samples per second over a millivolt-scale range. Accelerometers were manufactured by Analog Devices and wired to a separate 10-bit ADC cape for the Raspberry Pi. Accelerometers were placed in the geometric centre of each deck plate section, spaced approximately 5’ 1” apart, as seen in Figure 3.
No filtering or smoothing was performed for any of the sensors, so only raw voltage readings were stored in a remote cloud-based time-series database. Accelerometers were sampled at 40 Hz, while strain gauges were sampled at 80 Hz. All devices were synchronized using SNTP (Simple Network Time Protocol), and time-stamps were generated by the Raspbian operating system through a Python script at the time of sampling.
Due to environmental electrical noise in the machine shop, initial autoscaled plots are dominated by numerous short outliers with high amplitude. However, for the application of using this data to monitor pedestrian-events (such as a person walking) on the bridge, what an analyst of such strain-sensor data is concerned with are the small sinusoidal-like signals towards the start of the illustrated time-series. These represent the structural response of the bridge as a pedestrian-event occurs. It is the periods of data that contain recurrent loading, the pedestrian's traversal of the bridge, that are of interest for our application. Next, we consider the snippet of time-series data shown in Figure 7; this time-series shows measurements from an accelerometer (City-side: pi-pier9-bridge-accel-5-9-a-z-9). Figure 7 shows a segment of this time-series that contains a visibly high-magnitude oscillatory-like signal, representing a pedestrian-event occurring near the accelerometer. This particular pattern of data obtained from an accelerometer is therefore of interest to an analyst aiming to monitor the response of the bridge to pedestrian-events over time. Any compressed version of these two data-sets should aim to preserve relevant signals such as the ones discussed here, in order to monitor the pedestrian-events to a similar degree as one can with the raw data shown here. The next section explains how one can weight which data points, within the time-series obtained from the accelerometers and strain sensors considered in this section, are relevant with respect to monitoring these pedestrian-events.
3 Relevance scores for features in time-series
In this section we introduce the notion of relevance scores. A relevance score operates on the time-series and is used to characterise the importance of each data point. Two types of relevance score are selected for use with the accelerometer and strain data. Consider a univariate real-valued time-series observed at corresponding time-stamps. A relevance score transforms each value of the time-series into a real-valued score that quantifies its importance; the higher the score, the more important the data point. We now introduce two different types of relevance score.
Data-driven relevance scores
Define the following relevance scores, based only on the data in the time-series:
where H denotes the Heaviside function. The various relevance scores capture different features of the time-series. In (i), large scores are associated with high-magnitude values. In (ii), large-magnitude differences between contiguous pairs of the time-series lead to high relevance scores. In (iii), the score at an instance is non-zero only when the difference between contiguous values exceeds a threshold; otherwise the score is zero. This represents a procedure where only large differences between contiguous pairs of values lead to a non-zero score. The larger the tuning parameter in these relevance scores, the greater the difference between the scores of relevant and non-relevant points. The scores (i) and (ii) were used in Liu and Müller (2004). For the accelerometer data we shall use the relevance score presented in (ii). This choice is motivated by the oscillatory features in the data noted in Section 2.
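As an illustration, the three data-driven scores described above can be sketched in Python. The exact functional forms and the tuning parameters are assumptions here: a power of the magnitude or of the contiguous difference for (i) and (ii), and a Heaviside-thresholded difference for (iii).

```python
import numpy as np

def score_magnitude(y, alpha=2.0):
    """(i) High-magnitude values receive large scores (assumed form |y_t|^alpha)."""
    return np.abs(y) ** alpha

def score_difference(y, alpha=2.0):
    """(ii) Large differences between contiguous pairs receive large scores."""
    d = np.abs(np.diff(y, prepend=y[0]))  # first point has zero difference
    return d ** alpha

def score_threshold(y, c=1.0):
    """(iii) Heaviside score: non-zero only when a contiguous difference exceeds c."""
    d = np.abs(np.diff(y, prepend=y[0]))
    return np.heaviside(d - c, 0.0)
```

For a short series such as `[0, 0, 5, 0]`, score (i) highlights the single large value, while (ii) and (iii) highlight both sides of the jump.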
Query-based relevance scores
Another type of relevance score can take a query
shape as an additional input; the query shape captures a feature of the original data that we wish to retain in the compressed version of the data. A query shape is described by a real-valued vector of odd length. One such score is
where the distance between the query shape and each subsequence of the time-series is measured in the Euclidean norm. The relevance score in (1) is used for the strain sensor data so that the sinusoidal-wave type shape seen in the previous section is preserved in the compressed data. For scale and shift invariance, there are warping distances (Keogh and Ratanamahatana, 2005; Paparrizos and Gravano, 2015) that could be used instead; the Euclidean metric is sufficient for the scope of this paper. The better the subsequence of the time-series centred at a given point matches the query shape, the greater the importance associated with that point through the relevance score.
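A minimal sketch of a query-based score follows, assuming the score decays with the Euclidean distance between the query shape and the subsequence centred at each point; the exponential decay is an illustrative choice, not necessarily the exact expression in (1).

```python
import numpy as np

def query_relevance(y, q):
    """Query-based relevance (sketch): the score at time t decays with the
    Euclidean distance between the query shape q (odd length 2h+1) and the
    subsequence of y centred at t; boundary points receive a zero score."""
    y = np.asarray(y, dtype=float)
    q = np.asarray(q, dtype=float)
    h = len(q) // 2
    r = np.zeros(len(y))
    for t in range(h, len(y) - h):
        r[t] = np.exp(-np.linalg.norm(y[t - h:t + h + 1] - q))
    return r
```

An exact occurrence of the query inside the series scores 1, and the score falls off as the local shape departs from the query.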
The following example illustrates the behaviour of both types of relevance score. Figure 11 shows an example of an ECG time-series, together with the query shape used. Figure 11 also shows the relevance scores in (1) for this time-series. Note that the relevance score is higher for subsequences of the original ECG time-series that match the query shape well, e.g. during the three occurrences of peak signal. On the other hand, Figure 11 shows the relevance scores in (ii) for the time-series. Due to the high magnitude of the peak signals relative to the rest of the time-series, these relevance scores are similar to those obtained from (1), but exhibit slightly more variance.
The next section will introduce a method that divides a time-series based on its relevance scores. This segmentation of the time-series is used to generate a compressed version of the data.
4 Segmentation and compression
Relevance scores, introduced in Section 3, characterise the desired features in a time series that are of interest. This section introduces a method that divides the time-series based on its relevance scores into segmentation
(time) points. Interpolation between these segmentation time points leads to a compressed version of the time-series. First, Section 4.2 describes how the segmentation points are computed in the static case where the time-series is not a data stream. When streaming data is considered, the method that divides the time-series into segmentation points is required to be recursive, as it is assumed infeasible to re-compute this segmentation after every point is added to the time-series. Section 5 therefore introduces a method to incrementally compute an approximation to the segmentation points that one would obtain by following the static methodology in Section 4.2, for use in the streaming data setting.
4.1 Segmentation of time-series
The segmentation of a time-series leads to time-series data compression. As aforementioned, this compression is important for data acquired by sensor networks fitted to instrumented infrastructure, in order to reduce the complexity and increase the efficiency of any analysis. Segmentation compresses a time-series by breaking it up into segments; one can then reconstruct a compressed version of the original time-series using these segments alongside some interpolation method. The type of compressed reconstruction that this paper considers is known as a piecewise aggregate approximation (PAA) (Fu, 2011). For a time-series observed at given time-stamps, we denote the segmentation points by a strictly increasing sequence lying within the observation window. Since there are fewer segmentation points than original data points, the data is compressed. These points define a unique approximation, which is a compressed, reconstructed version of the original time-series. Exact forms for two such compressed reconstructions are given in Sec. 4.3. A common metric to assess the space-efficiency of the compression of the original time-series is the compression ratio, given by,
In practice, a segmentation algorithm will typically be implemented and evaluated by (a) specifying a desired compression ratio and then reporting the error of the approximation, or (b) specifying a condition for the segmentation to satisfy (e.g. a maximum approximation error) and then reporting the compression ratio. The methodology in this paper is in line with (a). A naive choice for the segmentation points would simply be evenly spaced points; however, for a relatively small number of segmentation points this would lead to important signals in the data being lost in the compression. The segmentation problem is therefore concerned with choosing the points to meet some predefined objective, such as minimizing the approximation error; our predefined objective here is to preserve key, relevant features in the compression. Dynamic programming can be implemented to find the particular segmentation which minimizes the total error of the reconstruction away from the original data, although at considerable computational expense (Terzi and Tsaparas, 2006). In the next two sections we propose a segmentation algorithm that focuses on points in the original time-series data that have high relevance to an analyst.
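A small sketch of the quantities above, taking the compression ratio to be the original length divided by the number of segmentation points (an assumed but standard definition, consistent with the later discussion of higher ratios in sparsely segmented periods), alongside the naive evenly spaced baseline:

```python
import numpy as np

def compression_ratio(n_original, n_segments):
    """Compression ratio, assumed here as original length over the number of
    segmentation points: fewer retained points give a higher ratio."""
    return n_original / n_segments

def even_segmentation(n, m):
    """Naive baseline: m evenly spaced segmentation points over n time-stamps."""
    return np.linspace(0, n - 1, m)
```

For example, keeping 50 points out of 1000 gives a ratio of 20; the evenly spaced baseline ignores where the relevant signals sit.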
4.2 Computing the segmentation points
The contribution of this work, a methodology for time-series segmentation that preserves relevant features of the original data, is now described for the case where the time-series is not being streamed. We consider the streaming case in Sec. 5. We seek a segmentation of a time-series constructed using a reduced number of segmentation points. Recall that the original time-series points were assigned relevance scores in Sec. 3 that describe their importance. These scores can be made into weights via normalization, so that they sum to one. The segmentation points are also assigned weights, and we would like each of them to be equally relevant in the compression; each segmentation point is therefore given the same, uniform weight.
We will now describe the method used to compute the segmentation points , based on the method of optimal transport (Villani, 2008). At first glance, it seems reasonable to resample the segmentation points, using any standard resampling method (Douc and Cappé, 2005)
, from the weighted time-series points. This approach is not ideal for the objectives of this paper, since these resampling methods offer few guarantees about the placement of any particular segmentation point. Instead, we use a deterministic linear transformation from the original time-stamps to the segmentation points. It will be shown later in this section and in Sec. 4.3 that this transformation allows one to guarantee particular placements of the segmentation points and to prove properties of the corresponding compressed reconstruction. Define the coupling matrix with the constraints,
Using the coupling matrix, segmentation points can be computed for this methodology via the following linear transformation,
for each segmentation point. We are interested in the particular coupling matrix, known as the optimal coupling, that solves the well-known Monge-Kantorovich optimization problem,
In our case, the scheme chooses segmentation points that are as close to the original time-stamps as possible whilst satisfying the constraints in (2). Linear programming can be used to numerically compute the optimal coupling. The optimal coupling matrix is sparse: its number of non-zero elements is at most the number of time-series points plus the number of segmentation points, minus one. The pseudocode of this algorithm is given in Algorithm 1 in Appendix A. Note that the time-stamps are not an input for this algorithm; this is because all time-stamps are assumed ordered. The segmentation scheme does not use information about the reconstruction itself, only about which features of the original time-series the reconstruction is required to preserve. This is why it can utilise linear programming to solve the problem, and hence be significantly cheaper than alternative segmentation methods. If it is not acceptable in a particular application for the segmentation points to take non-integer values, then the output of (3) can be rounded to the nearest time-stamp. The procedure in this section was proposed as a resampling scheme for non-parametric data assimilation in Reich (2013). The constraints in (2), which dictate the form that the optimal coupling matrix takes, are influenced by the relevance scores of the time-series points. By computing the segmentation points using the coupling matrix, we therefore designate more segmentation points to periods of time-stamps that correspond to time-series points with high relevance scores.
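The computation can be sketched as follows. Because the problem is one-dimensional, the Monge-Kantorovich optimal coupling under marginal constraints of the form in (2) is the monotone coupling, which the north-west corner rule constructs directly; a general linear-programming solver would return the same matrix. The uniform column marginal and the rescaling of the weighted means by the number of segmentation points are assumptions consistent with the surrounding text, not the paper's exact expressions.

```python
import numpy as np

def segmentation_points(t, w, m):
    """Sketch of the optimal-transport segmentation step. The coupling matrix T
    has rows summing to the normalised relevance weights and columns summing to
    the uniform weight 1/m (the assumed constraints (2)). In one dimension the
    optimal coupling is monotone, built here by the north-west corner rule.
    Each segmentation point is the coupling-weighted mean of the time-stamps,
    rescaled by m (the assumed transformation (3))."""
    t = np.asarray(t, float)
    w = np.asarray(w, float)
    w = w / w.sum()
    n = len(t)
    T = np.zeros((n, m))
    i = j = 0
    row_left, col_left = w[0], 1.0 / m
    while True:
        move = min(row_left, col_left)   # move as much mass as both margins allow
        T[i, j] += move
        row_left -= move
        col_left -= move
        if row_left < 1e-12:             # row i exhausted: advance to next point
            i += 1
            if i == n:
                break
            row_left = w[i]
        if col_left < 1e-12:             # column j full: advance to next segment
            j += 1
            if j == m:
                break
            col_left = 1.0 / m
    return T, m * (t @ T)                # s_j = m * sum_i T[i, j] * t_i
```

With uniform weights the segmentation points fall at the centres of equal-mass blocks of time-stamps; concentrating the weights drags the points towards the relevant region.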
This interval forms an important part of the extension of this methodology to the streaming data case considered in Sec. 5. The expression in (5) is also an important aspect of the proposed methodology, as it ensures that there will be a segmentation point within a certain period of data, even if none of the points within it are particularly relevant. This is useful in many applications where sensor data have long-term drifts in background noise; this guaranteed interval allows sparsely placed segmentation points to keep track of such drift. The next section considers how to reconstruct a compressed version of the original time-series using the segmentation points computed in this section. This reconstruction preserves highly relevant periods of time-series data in the compression to a greater extent than irrelevant periods.
4.3 Compressed reconstruction
This section explores the PAA (Fu, 2011) compressed reconstruction of the original time-series using the segmentation points computed within the methodology presented in the previous section. Let the segmentation points have corresponding indices in the original time-series. Two examples of PAA reconstructions are now given: define the piecewise constant approximation,
and piecewise linear approximation,
with both approximations defined between consecutive segmentation points. Another simple alternative to these approximations is the piecewise regression approximation.
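The two approximations can be sketched as follows, with the conventions assumed: the constant version holds each segment at the value observed at its left segmentation point, and the linear version interpolates between the values retained at consecutive segmentation points.

```python
import numpy as np

def reconstruct(t, y, seg_idx, kind="linear"):
    """PAA-style reconstruction from segmentation indices (a sketch).
    kind='constant': hold each segment at its left segmentation point's value.
    kind='linear':   interpolate between values at consecutive segmentation points."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    seg_idx = np.asarray(seg_idx, int)
    if kind == "constant":
        # assign every time index to its most recent segmentation index
        pos = np.searchsorted(seg_idx, np.arange(len(y)), side="right") - 1
        pos = np.clip(pos, 0, len(seg_idx) - 1)
        return y[seg_idx[pos]]
    # piecewise linear between the retained (t, y) pairs
    return np.interp(t, t[seg_idx], y[seg_idx])
```

On a linear ramp, the linear reconstruction from only the two end points is exact, while the constant one is not; this is why the choice of reconstruction matters for the error bounds below.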
We now consider the error between the relevance scores of the compressed reconstruction, utilising the segmentation points computed in the previous section, and the relevance scores of the original time-series. This error metric is of particular interest to the scope of this paper, since the proposed methodology is designed for settings where the practitioner would like to preserve relevant features in the compressed version. We shall assume a piecewise reconstruction satisfying
Recall that the original time-series points have relevance scores, and let the points of the compressed reconstruction be assigned relevance scores in the same way. Then,
for all points of the reconstruction. The derivation of this bound is given in Appendix B.
5 Streaming time-series segmentation
This section removes the assumption made in the previous section that the time-series is not streamed. In the streaming data case, new data points are added to the time-series sequentially, possibly indefinitely. We propose a recursive approximation to the segmentation points that one would obtain from the methodology presented in the previous section; this approximation is updated every time a new data point is added to the time-series. An approximation is required since the segmentation points are derived using the linear transformation in (3), which is affected by the constraints in (2). These constraints depend on the normalized weights; each time a data point is added to the time-series all previous weights change, and so the positions of all segmentation points change. Since the approximation is recursive, it is far more efficient than re-computing the segmentation points from scratch each time the time-series grows. There are two aspects to this approximation, discussed in this section. First, we explain how one can update the number of segmentation points used for the compression as the time-series increases in size. Second, we outline how we approximate the segmentation points; a user-defined approximation error controls how accurate the approximation should be. A general outline of the approximation is given below:
1. Initialize the algorithm by observing the first points of the time-series, setting the number of segmentation points to the user's choice, and initializing the synopsis vectors of time-stamps, relevance scores, and their products.
2. Prescribe a user-defined level of accuracy for the approximation.
3. Observe the new data point in the time-series (or using a buffer, if required for a query-driven relevance score) at its time-stamp, and compute the associated relevance score. Update the running totals of the synopsis.
4. Update and prune the synopsis vectors: (i) if the condition for adding a new segmentation point is met, increase the number of segmentation points by one; (ii) combine and sum together adjacent synopsis elements whose relevance scores are sufficiently low.
5. Update the approximate segmentation points: (i) maintain approximations to the end-points of the guaranteed interval in (5); (ii) compute each approximate segmentation point via a rolling weighted sum.
6. Return to step (3).
The procedure outlined in the steps above is given in more detail in Appendix C. The intuition behind the approximation is the following. The synopsis vectors keep a summary of the time-stamps, relevance scores, and products of time-stamps and relevance scores over the time-series. In step (4ii), some elements of these vectors are combined and summed together when the corresponding relevance scores are low; such elements are unlikely to have segmentation points on them. As this synopsis is pruned over time, generating approximations from its elements, instead of from the entire time-series, is efficient. Since each segmentation point is known to lie within the interval in (5), it is important to always maintain approximations to the end-points of this interval; this is done in step (5i). Using the condition in (10), the approximations to the end-points of the interval in (5) satisfy,
A consequence of this on the accuracy of the segmentation point approximation, given by a rolling weighted sum in step (5ii), is that,
This bound on the segmentation points is proved in Appendix D.
As an example to see how the number of segmentation points is updated in step (4) as data points are added to the time-series, consider the following time-series:
We assume initial values for the number of observed points and segmentation points. After the fifth data point enters the time-series, the condition in step (4) is checked. If the condition is met, we increase the number of segmentation points by one; otherwise, the number of segmentation points stays the same. The next section gives a numerical demonstration of this approximation to the segmentation points of a time-series, in the streaming data regime.
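As a purely hypothetical illustration of such an update rule (the paper's condition involves quantities not shown in this extract), one could grow the number of segmentation points so as to hold a target compression ratio as the stream lengthens:

```python
def update_num_segments(n, target_ratio):
    """Hypothetical rule for growing the number of segmentation points m with
    the stream length n: keep the compression ratio near a user target, so m
    increases by one roughly every target_ratio new observations. This is an
    assumed stand-in, not the condition used in the paper."""
    return max(1, n // target_ratio)
```

Under this rule, adding a new data point either leaves the count unchanged or increments it by one, matching the behaviour described in the example above.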
6 Numerical demonstrations
The following section demonstrates the methodology presented throughout the paper by applying the method to simulated streaming data and to data from the accelerometers and strain gauges instrumented on the pedestrian footbridge introduced in Sec. 2. These demonstrations show the effectiveness of the proposed compression technique for efficiently managing data from instrumented infrastructure, whilst preserving key features in the original sensor data.
6.1 Simulated streaming data
This section will investigate the effectiveness of the streaming data approximation, presented in Sec. 5, for the segmentation points obtained from the optimal transport algorithm introduced in Sec. 4. Recall that this approximation is required in the streaming data regime since it is assumed infeasible to re-compute the segmentation points every time a new element is added to the time-series. The implementation of the approximation is described in Appendix C and segmentation points are added on-the-fly when the condition in (9) is met. The simulated time-series considered in this problem is,
where the relevance score used is one of the data-driven scores of Sec. 3. This time-series is chosen to simulate frequent occurrences of a particular magnitude-based feature in the data, which should allow the segmentation points to shift periodically to the peaks of the sinusoidal waves as they enter the time-series. The algorithm is initialized as described in Sec. 5, and after all elements have been added to the time-series a final set of segmentation points remains.
First, Figure 13 shows the relative error of the approximate segmentation points,
after every 5000th element has been added to the time-series. The theoretical bound in (12) is also shown. The relative error stays approximately constant over time, and below the bound. Next, Figure 13 shows the runtime (in seconds) of computing the segmentation points via the approximation presented in Algorithm 5 within Appendix C, after every 5000th element has been added to the time-series. It shows this runtime in comparison to that of computing the actual segmentation points using an implementation of linear programming (Algorithm 1). Note that the runtime of the approximation is far less than that of re-implementing linear programming each time a new element is added to a long time-series. This shows the feasibility of applying the segmentation methodology (or an approximation of it) proposed in this paper to time-series obtained from a sensor acquiring data at a fast pace. Finally, Figure 14 shows the ratio of reconstruction errors from using a piecewise linear reconstruction, in (7), alongside both the approximate segmentation points and those obtained by continuously implementing Algorithm 1. Note that this ratio of errors is approximately equal to one over the data stream, showing that there is negligible loss in reconstruction accuracy when computing approximate segmentation points instead of using the linear programming algorithm in Algorithm 1.
6.2 Accelerometer data
This section applies the proposed compression methodology to a time-series generated by accelerometers instrumented on the pedestrian footbridge introduced in Sec. 2. Data from one of the sensors (City-side: pi-pier9-bridge-accel-5-9-a-z-9) in the described accelerometer network is considered here. This time-series is acquired at 40 Hz over a total time of 20 seconds. There are three signals in the time-series, seemingly corresponding to a pedestrian-event occurring on the bridge near the sensor three times. As aforementioned, accelerometer signals have an oscillatory-like shape, and therefore the relevance score used to generate the segmentation points in this example is the difference-based score in (ii). The intuition behind this choice is that oscillations in the time-series are larger during a signal than during background sensor noise. This can be seen in Figure 15, where segmentation points, obtained using the methodology presented in Sec. 4, are also shown.
Notice that the segmentation points gather around the points where the oscillations are largest, which could represent a pedestrian-event being detected by the accelerometer. On the other hand, they are more sparsely spread at times when there appears to be only sensor noise present. Interestingly, the third and final signal has the least dense concentration of segmentation points of all three signals, since it does not exhibit oscillations as large as the other two.
6.3 Strain sensor data
This section now applies the proposed compression methodology to a time-series obtained from a strain sensor (City-side / left: pi-pier9-bridge-strain-2-left-s-0) within the network instrumented on the pedestrian bridge introduced in Sec. 2. Relevant signals within the time-series, seemingly corresponding to a pedestrian-event occurring near the sensor, appear as a sinusoidal-like wave (see Figure 5). Inspired by the form of this signal, the relevance score used to generate the segmentation points in this example corresponds to that in (1), where the query shape is given by,
where the subsequences are normalized using their sample mean and standard deviation; this normalization aids the pattern-detection metric in (1). Figures 17 and 17 show the placements of the segmentation points for two snippets of data from the strain sensor (acquired at 80 Hz), each containing a single signal that seemingly corresponds to a pedestrian-event. In both cases, the segmentation points are very sparse for all times that are not in the immediate interval of the signal; instead they are concentrated on the signal itself.
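A sketch of these ingredients follows, with the sinusoidal query shape written as a generic one-period sine (a hypothetical stand-in for the paper's expression) and the z-normalization by the subsequence's sample statistics:

```python
import numpy as np

def sinusoidal_query(length=81):
    """Hypothetical sinusoidal query shape: one full period, odd length,
    standing in for the exact expression used for the strain data."""
    return np.sin(2 * np.pi * np.arange(length) / (length - 1))

def znorm(x):
    """Z-normalise a subsequence by its sample mean and standard deviation,
    as used before matching against the query shape."""
    x = np.asarray(x, float)
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()
```

Normalizing each subsequence before computing the distance in (1) makes the match insensitive to the local offset and amplitude of the strain signal.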
An interesting aspect of the piecewise linear compressed reconstruction from the segmentation points, obtained using the proposed methodology, is the relevance score of the reconstruction itself. A piecewise-linear approximation using the segmentation points computed in Figure 17 is obtained, and Figure 18 shows the value of the relevance score in (1) for this approximation alongside that of the original time-series. The relevance scores match well for large values (corresponding to values close to segmentation points), but match less well for the lower, less relevant values. This shows the benefit of the proposed methodology in preserving key features of the original time-series, as specified by the relevance score used. The error of the relevance score for compressed reconstructions using segmentation points obtained via the proposed methodology was analysed in Sec. 4.3.
6.3.1 Reconstruction from compression
We now assess how the compressed reconstruction of a time-series obtained from the strain sensor on the pedestrian footbridge introduced in Sec. 2, obtained using the compression methodology in this paper, performs at representing the original time-series in a lower-dimensional form. To do this, we concentrate on assessing the reconstruction error and space-efficiency within parts of the original strain sensor time-series that are highly relevant to the analyst: the signals corresponding to pedestrian-events. A 400-second-long time-series is obtained from the strain sensor. This snippet contains 5 pedestrian-event signals. Five 2-second-long intervals containing these signals were extracted manually, and the non-event periods in between these intervals were recorded separately. Segmentation points were computed as in the previous section, and a compressed reconstruction was obtained using a piecewise regression approximation to the original time-series. The reconstruction during one of the extracted event intervals containing a signal is shown in Figure 20, in addition to the reconstruction during one of the non-event periods in between the extracted intervals, also shown in Figure 20. Compression ratios, and the average relative squared reconstruction error computed over all indices within a particular time interval of the reconstruction, were obtained for each extracted event interval and each non-event period in between the extracted intervals. These are shown in Table 1. One can note from the aforementioned figures and this table that the reconstruction is refined, leading to lower error, in the periods of high relevance (signals seemingly caused by pedestrian-events). Compression ratios are lower in these periods than during the non-event periods, as more segmentation points have concentrated on them.
The higher compression ratios in the non-event periods, coupled with the lower error during the extracted event intervals, show that the reconstruction reproduces the original time-series closely during the relevant periods whilst using a much lower-dimensional representation.
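The quantities discussed above can be sketched in code. The piecewise-linear interpolation, and the definitions of compression ratio and average relative squared reconstruction error below, are plausible readings of the quantities reported for each period, not the paper's exact formulas; all function names are illustrative.

```python
import numpy as np

def piecewise_linear_reconstruction(x, seg_points):
    """Linearly interpolate between the values retained at the
    segmentation points to rebuild a full-length series."""
    t = np.arange(len(x))
    return np.interp(t, seg_points, np.asarray(x, dtype=float)[seg_points])

def compression_ratio(n_original, seg_points):
    # number of stored points in the original series vs. the compression
    return n_original / len(seg_points)

def avg_relative_squared_error(x, x_hat, interval):
    # average of ((x_t - xhat_t) / x_t)^2 over the indices in `interval`
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    idx = np.asarray(interval)
    return float(np.mean(((x[idx] - x_hat[idx]) / x[idx]) ** 2))
```

Evaluating the error separately over event intervals and non-event periods, as in Table 1, then amounts to calling `avg_relative_squared_error` with the index set of each period.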
[Table 1: Period | Compression Ratio | Relative Squared Reconstruction Error]
7 Conclusion and Discussion
This paper has presented a compression technique for data streamed from instrumented infrastructure, such as bridges, roads and tunnels fitted with sensor networks, for applications including condition and structural health monitoring. Especially when data is acquired frequently, relative to any changes exhibited in the structure, it is important to compress the data down to a manageable quantity for storage and analysis. Methodology is presented both for cases where data is given in a single batch, and for cases where data is acquired sequentially in an indefinite stream. The proposed compression technique produces a piecewise aggregate approximation (segmented time-series) that preserves user-defined patterns or features that exist in the original time-series. This paper uses the motivating example of particular patterns of signals from accelerometers and strain sensors instrumented on a pedestrian footbridge, which could represent a pedestrian-event (such as a person walking) in the vicinity of these sensors, as the features that one would like to preserve in a compression.
The methodology works as follows. A user-defined relevance score is first used to create weights for each data point in a time-series; each point is weighted according to how important it is to the appearance of the features or patterns that one would like to preserve in the compression. Then optimal transport is used to find the optimal piecewise segmentation that preserves sequences of points within the time-series that have high relevance. This can be done via linear programming, which can be solved quickly even for relatively large data-sets. In the case where the data is streamed sequentially over time (e.g. from a sensor network instrumented on an operational structure), a bounded approximation to the optimal piecewise segmentation can be maintained over time and queried in a significantly reduced runtime relative to re-computing the linear programming result.
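The relevance-weighting step can be illustrated with a minimal sketch. The paper solves for the segmentation via a linear program; the simpler construction below instead exploits the fact that one-dimensional optimal transport couplings are monotone, so placing boundaries at equal-mass quantiles of the cumulative relevance is a valid special case rather than the paper's general method. All names are illustrative.

```python
import numpy as np

def relevance_quantile_segmentation(relevance, n_segments):
    """Choose segmentation points so that each segment carries an equal
    share of the total relevance mass.  Regions of high relevance
    therefore receive more (shorter) segments."""
    w = np.asarray(relevance, dtype=float)
    cdf = np.cumsum(w / w.sum())              # cumulative relevance mass
    targets = np.arange(1, n_segments) / n_segments
    interior = np.searchsorted(cdf, targets)  # first index reaching each mass level
    return np.concatenate(([0], interior, [len(w) - 1]))
```

With a uniform relevance score this reduces to (approximately) equal-length segments; concentrating the relevance on an event interval concentrates the segmentation points there.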
The features that the compression should preserve inform the choice of the relevance score used alongside the proposed methodology. For example, similarity search and distance measures can be used to preserve a particular query shape or pattern in the time-series data. Future extensions of this work should explore the properties of compressions constructed using the proposed methodology alongside more exotic relevance scores (e.g. Markov models (Ge and Smyth, 2000) or probabilistic warping (Bautista et al., 2012)) and compositions of relevance scores. By composing relevance scores, for example, one could naturally extend this methodology to compressing a time-series whilst preserving multiple important features or patterns, such as data acquired from sensors that produce different signals for different types of events (e.g. on railway bridges, where different types of trains frequently pass over).
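A similarity-search relevance score of the kind mentioned above can be sketched as follows: each time-stamp is scored by how closely the window starting there matches a query shape. The distance measure (Euclidean) and the rescaling to [0, 1] are illustrative choices, not the paper's prescription.

```python
import numpy as np

def similarity_relevance(x, template):
    """Relevance = 1 for the window best matching the query shape,
    falling towards 0 as the Euclidean distance to the template grows."""
    x = np.asarray(x, dtype=float)
    m = len(template)
    n = len(x) - m + 1
    dist = np.array([np.linalg.norm(x[i:i + m] - template) for i in range(n)])
    score = dist.max() - dist                 # invert: high = similar
    if score.max() > 0:
        score = score / score.max()
    # pad so every time-stamp has a score (windows are left-aligned)
    return np.concatenate([score, np.full(m - 1, score[-1])])
```

Such a score could be fed directly into the weighting step of the methodology, so that windows resembling a pedestrian-event signal attract segmentation points.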
The motivating example of applying the proposed methodology to compress data-sets obtained from a pedestrian footbridge instrumented with strain sensors and accelerometers is considered via a series of numerical demonstrations towards the end of this paper. These demonstrations highlight the effectiveness of the compression at preserving key features (e.g. a sinusoidal-type wave of measurements) in the original data from the strain sensors and accelerometers that represent a pedestrian-event near the sensor location. Due to the choice of these relevance scores for the implementation of the proposed methodology, the compression ignores large-magnitude noise and outliers (possibly due to electrical currents) that often make traditional analyses of raw data obtained from strain sensors and accelerometers difficult. The reconstructed compressed data obtained from this methodology exhibits low error, with respect to the original data, during occurrences of pedestrian-events, and high compression ratios (a metric for the space-efficiency of the compression) during unimportant periods of data. These demonstrated properties are necessary for alleviating the high complexity of storing and analysing streaming sensor data from instrumented infrastructure. This work therefore contributes towards important research efforts to improve structural health and condition monitoring systems used alongside novel and contemporary sensing technologies.
- Bao et al. (2011) Bao Y, Beck JL and Li H (2011) Compressive sampling for accelerometer signals in structural health monitoring. Structural Health Monitoring 10(3): 235–246.
- Bao et al. (2019) Bao Y, Tang Z and Li H (2019) Compressive-sensing data reconstruction for structural health monitoring: A machine-learning approach. arXiv preprint arXiv:1901.01995.
- Bautista et al. (2012) Bautista MA, Hernández-Vela A, Ponce V, Perez-Sala X, Baró X, Pujol O, Angulo C and Escalera S (2012) Probability-based dynamic time warping for gesture recognition on RGB-D data. In: International Workshop on Depth Image Analysis and Applications. Springer, pp. 126–135.
- Bose et al. (2016) Bose T, Bandyopadhyay S, Kumar S, Bhattacharyya A and Pal A (2016) Signal characteristics on sensor data compression in IoT - an investigation. In: 2016 13th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). IEEE, pp. 1–6.
- Cassisi et al. (2012) Cassisi C, Montalto P, Aliotta M, Cannata A and Pulvirenti A (2012) Similarity measures and dimensionality reduction techniques for time series data mining. In: Advances in data mining knowledge discovery and applications. InTech.
- Cawley (2018) Cawley P (2018) Structural health monitoring: Closing the gap between research and industrial deployment. Structural Health Monitoring 17(5): 1225–1244.
- Douc and Cappé (2005) Douc R and Cappé O (2005) Comparison of resampling schemes for particle filtering. In: ISPA 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, 2005. IEEE, pp. 64–69.
- Fu (2011) Fu T (2011) A review on time series data mining. Engineering Applications of Artificial Intelligence 24(1): 164–181.
- Ge and Smyth (2000) Ge X and Smyth P (2000) Deformable Markov model templates for time-series pattern matching. In: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pp. 81–90.
- Greenwald and Khanna (2001) Greenwald M and Khanna S (2001) Space-efficient online computation of quantile summaries. In: ACM SIGMOD Record, volume 30. ACM, pp. 58–66.
- Keogh (1997) Keogh E (1997) A fast and robust method for pattern matching in time series databases. In: Proceedings of WUSS 97.1, volume 99.
- Keogh et al. (2004) Keogh E, Chu S, Hart D and Pazzani M (2004) Segmenting time series: A survey and novel approach. In: Data mining in time series databases. World Scientific, pp. 1–21.
- Keogh and Ratanamahatana (2005) Keogh E and Ratanamahatana CA (2005) Exact indexing of dynamic time warping. Knowledge and information systems 7(3): 358–386.
- Khoa et al. (2014) Khoa NLD, Zhang B, Wang Y, Chen F and Mustapha S (2014) Robust dimensionality reduction and damage detection approaches in structural health monitoring. Structural Health Monitoring 13(4): 406–417.
- Lau et al. (2018) Lau FDH, Butler L, Adams N, Elshafie M and Girolami M (2018) Real-time statistical modelling of data generated from self-sensing bridges. In: Proceedings of the Institution of Civil Engineers - Civil Engineering.
- Liu and Müller (2004) Liu X and Müller HG (2004) Functional convex averaging and synchronization for time-warped random curves. Journal of the American Statistical Association 99(467): 687–699.
- Moniz et al. (2016) Moniz N, Branco P and Torgo L (2016) Resampling strategies for imbalanced time series. In: Data Science and Advanced Analytics (DSAA), 2016 IEEE International Conference on. IEEE, pp. 282–291.
- Paparrizos and Gravano (2015) Paparrizos J and Gravano L (2015) k-shape: Efficient and accurate clustering of time series. In: Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data. ACM, pp. 1855–1870.
- Reich (2013) Reich S (2013) A nonparametric ensemble transform method for Bayesian inference. SIAM Journal on Scientific Computing 35(4): A2013–A2024.
- Terzi and Tsaparas (2006) Terzi E and Tsaparas P (2006) Efficient algorithms for sequence segmentation. In: Proceedings of the 2006 SIAM International Conference on Data Mining. SIAM, pp. 316–327.
- Torgo and Ribeiro (2007) Torgo L and Ribeiro R (2007) Utility-based regression. In: European Conference on Principles of Data Mining and Knowledge Discovery. Springer, pp. 597–604.
- Villani (2008) Villani C (2008) Optimal transport: old and new, volume 338. Springer Science & Business Media.
Appendix A Linear programming algorithm
The algorithm for linear programming is given in Algorithm 1.
Appendix B Proof of the error in relevance of reconstructions
This section explains the derivation of the error bound in the relevance scores of the compressed reconstruction with respect to that of the original time-series, given in (8). We start by assuming smoothness of the relevance score ,
where . Let . We will now assume that the piecewise linear reconstruction in (7) is used, and a relevance score that just depends on , for , is used. We then investigate two cases: (a) , (b) . For case (a), it is clear that at any point , the error of the relevance score for the reconstructed time-series is,
assuming . Now for case (b), we know that the last point in the interval will be the value of , where . Since a piecewise linear approximation is assumed, we have , and therefore
We also note that of course . Then at any time-stamp , the error of the relevance score for the reconstructed time-series is,
Therefore for either case, and for the assumptions placed on the reconstruction, we have that
for all .
Appendix C Streaming approximation to segmentation points
The approximation to the segmentation points outlined in Sec. 5 is explained in more detail here. The construction of the approximation is based on storing a synopsis of the data points in the time-series, and is inspired by the work of Greenwald and Khanna (2001). The synopsis is a set formed of the triples , for , where the values , for , are a succinct collection of time-stamps within the time-series. They are such that , with and . The values represent the sum of over all the time-stamps . Finally, the values represent the sum of the products of and , over all the time-stamps . The approximation is more efficient than re-computing the segmentation points via Algorithm 1, since the approximation operates only on the data points stored in this synopsis, and given that . The approximation starts by initializing the values , , and in step (1) of the outline in Sec. 5. These triples are maintained over time to generate the approximations , to the segmentation points , using Algorithms 2, 3 and 4 below. Then, an approximation to the segmentation points , for , can be queried at any time via Algorithm 5. The triples are maintained as follows. Every time a new element is added to the time-series at the time-stamp , Algorithm 4 is implemented to update the synopsis. This routine uses Algorithm 2 and Algorithm 3; the latter algorithm allows the synopsis to be cut down in size in order to make the approximation efficient.
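The synopsis of triples described above can be sketched as follows. The merge rule used here (combine neighbouring triples whenever their total weight fits a budget) is an illustrative simplification of the maintenance procedure in Algorithms 2-4, not a transcription of it; the class and attribute names are assumptions.

```python
class Synopsis:
    """Streaming synopsis of triples [t_i, g_i, h_i]: t_i a retained
    time-stamp, g_i the relevance weight accumulated since the previous
    retained time-stamp, and h_i the matching sum of weight-times-value
    products.  Merging bounds the synopsis size."""

    def __init__(self, weight_budget):
        self.weight_budget = weight_budget
        self.triples = []                     # list of [t, g, h]

    def insert(self, t, value, weight):
        self.triples.append([t, weight, weight * value])
        self._compress()

    def _compress(self):
        merged = []
        for t, g, h in self.triples:
            if merged and merged[-1][1] + g <= self.weight_budget:
                merged[-1][0] = t             # keep the later time-stamp
                merged[-1][1] += g
                merged[-1][2] += h
            else:
                merged.append([t, g, h])
        self.triples = merged

    def total_weight(self):
        return sum(g for _, g, _ in self.triples)
```

Because merging only ever sums the stored weights and products, totals over the stream are preserved exactly while the number of stored triples stays bounded, which is the property that makes querying the approximate segmentation points cheap.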
Appendix D Proof of streaming approximation error
This section provides a proof of the bound given in (12) for the error of the approximation to the segmentation points , for . First, let and recall that the indices and are the smallest and largest non-zero elements in respectively. Also recall that and . Assume that the approximations in Algorithm 5 to the indices and (corresponding to the ’th and ’th triple in respectively) are given by and . Note that due to the way that is constructed and maintained, we have that if we must have , and that if we must have . Also recall that
from (11). The error of the streaming approximation to the segmentation points , for , can be expressed as,