Sequential Detection of a Temporary Change in Multivariate Time Series

10/29/2021
by   V. Watson, et al.

In this work, we aim to provide a new and efficient recursive detection method for temporary changes in monitored signals. Motivated by the case of an event propagating through a field of sensors, we postulate that the change in the statistical properties of the monitored signals can only be temporary. Unfortunately, to the best of our knowledge, existing recursive and simple detection techniques such as those based on the cumulative sum (CUSUM) do not consider the temporary aspect of the change in a multivariate time series. In this paper, we propose a novel, simple and efficient sequential detection algorithm, named Temporary-Event-CUSUM (TE-CUSUM). Combined with a new adaptive way to aggregate local CUSUM variables from each data stream, we empirically show that the TE-CUSUM has a very good detection rate in the case of an event passing through a field of sensors in a very noisy environment.



1 Introduction

The multiplication of industrial sites near populated areas increases the danger for populations in case of an unexpected release of a hazardous compound. Densely populated areas can also be targeted by ill-intentioned people releasing toxic material to cause many victims. In such cases early detection can be crucial. If these sensitive areas are monitored, waiting for the level of a toxic compound to be high enough to be unambiguously registered by the sensors may mean responding too late to a threat. Sequential change-point detection uses the statistics of a data stream to detect an abnormality while the signal is still low. This means it could detect a small radiation level before it becomes a Seveso-like event, or detect abnormal residual radioactivity due to the presence of a dirty bomb that has not exploded yet. These sequential detection techniques can be used to detect the presence of a pollutant in a fluid, as in Rajaona et al. (2015). Early detection allows estimation techniques such as the one in Septier et al. (2020) to start monitoring the data at the right moment, which facilitates the convergence to a solution while decreasing the computational cost. It is also used for early seismic detection Popescu (2014) and early detection of infected people during a pandemic Braca et al. (2021), and can be applied to many other fields such as Aue and Horváth (2013) and Shbat and Tuzlukov (2019).

The CUSUM (CUmulative SUM) technique Page (1954) is a powerful univariate sequential change-point detection tool on which many detection techniques are based. The extension of the CUSUM to multivariate cases is not straightforward and has received much attention in the process-control community (see Golosnoy et al. (2009); Mei (2010); Kurt and Wang (2019); Tartakovsky et al. (2014); Rovatsos et al. (2020a); Banerjee and Veeravalli (2015); Xie and Siegmund (2013)). Moreover, the possible non-synchronicity of the monitoring between sensors in multivariate cases remains an open research question for the detection of temporary changes. Indeed, the existing ways to deal with these problems require giving up the recursive computation of the test statistics used to trigger detection. A temporary change is not a common consideration in the process-control community, since when a process gets out of control, it rarely gets back in control. When we extend sequential detection techniques to other physical problems such as the ones cited earlier, this back-in-control scenario is exactly what we expect, since a sensor may be exposed only for a limited duration.

In this paper we propose a new multivariate CUSUM-based technique to deal with temporary changes without losing the recursive computation. Indeed, the Temporary-Event-CUSUM (TE-CUSUM) does not require the change to be permanent or synchronous between data streams to be detected. We also develop a new adaptive method to combine the local test statistics, which increases the performance of the TE-CUSUM when the subset of sensors affected by the signal is unknown.

This paper is organised as follows. In the second section, we introduce our detection problem and the CUSUM technique seen as a generalised likelihood ratio test (GLRT); we set out the principle of the proposed TE-CUSUM and show its equivalence with the CUSUM in univariate cases. In the third section, we extend the model to multivariate temporary events and give a quick review of the existing methods for multivariate sequential change-point detection; we then introduce a new strategy to combine local statistics with a novel adaptive censoring method and demonstrate the efficiency of the TE-CUSUM in multivariate sequential detection cases. The fourth section is a validation test in which we compare the efficiency of the different techniques for monitoring the spread of a compound over a field of sensors.

2 Sequential change-point detection in univariate time series

The change-point detection problem in univariate time series can be formulated as the following hypothesis test:

$\mathcal{H}_0 : x_i \sim f_0,\ \forall i \ge 1 \qquad \mathcal{H}_1 : \exists\,\tau \text{ such that } x_i \sim f_0 \text{ for } i < \tau \text{ and } x_i \sim f_1 \text{ for } i \ge \tau$   (1)

This represents a two-case scenario: under the hypothesis $\mathcal{H}_0$, every sample $x_i$ follows the in-control distribution $f_0$; under $\mathcal{H}_1$, there is a time $\tau$ such that $x_i$ starts to follow $f_1$ from $i = \tau$ onwards.

This leads us to the associated likelihood ratio:

$\Lambda_n(\tau) = \prod_{i=\tau}^{n} \dfrac{f_1(x_i)}{f_0(x_i)}$   (2)

Comparing $\Lambda_n(\tau)$ to a threshold allows us to define a statistical test, computed sequentially, to decide between the two hypotheses.

2.1 Generalised likelihood ratio test and CUSUM

One problem with the likelihood ratio test of Equation (2) is that knowledge of the change-point $\tau$ is needed. In such a situation (unknown parameter in the likelihood), it is common to use a generalised likelihood ratio test (GLRT) Trees (1992), which is defined in our problem as:

$\Lambda_n = \max_{1 \le \tau \le n} \Lambda_n(\tau)$   (3)

with,

$\Lambda_n(\tau) = \prod_{i=\tau}^{n} \dfrac{f_1(x_i)}{f_0(x_i)}$   (4)

while the change point can be estimated with:

$\hat{\tau}_n = \operatorname{arg\,max}_{1 \le \tau \le n} \Lambda_n(\tau)$   (5)

Each factor $f_1(x_i)/f_0(x_i)$ tends to be greater than 1 when $x_i$ follows $f_1$ and smaller than 1 when $x_i$ follows $f_0$. If $\mathcal{H}_1$ is true, the ratio has a better chance of being greater than 1, so $\Lambda_n$ will increase overall even if monotonicity is far from guaranteed. We can then compare $\Lambda_n$ to a threshold to trigger detection sequentially each time a new observation is received. In such a sequential setting, since we are interested in the quickest detection method, it is also important to consider the detection delay instead of just the probability of detection Tartakovsky et al. (2014).

The generalised likelihood ratio in Equation (3) can be rewritten in the following recursive form, which allows its integration in online systems:

$\Lambda_n = \max(1,\ \Lambda_{n-1})\,\dfrac{f_1(x_n)}{f_0(x_n)}$   (6)

2.1.1 CUSUM principle

The CUSUM technique was first introduced by Page in 1954 Page (1954). This algorithm was proposed in order to optimise both the detection delay and the average run length to false alarm (ARL2FA), which is the average time between two false alarms Tartakovsky et al. (2014). This procedure can be seen as a sequential algorithm to recursively compute the GLRT defined by Equation (3). By taking the logarithm of Equation (6), the CUSUM test statistic is indeed simply given by:

$V_n = \max\!\left(0,\ V_{n-1} + \ln\dfrac{f_1(x_n)}{f_0(x_n)}\right), \qquad V_0 = 0$   (7)

By computing a simple sum at each time sample and comparing $V_n$ to a threshold, one obtains a robust online detection technique. The change point can be estimated easily by expanding Equation (5) as:

$\hat{\tau}_n = n - N_n + 1$   (8)

with:

$N_n = (N_{n-1} + 1)\,\mathbb{1}\{V_n > 0\}, \qquad N_0 = 0$   (9)
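As a sketch of this procedure (with illustrative function and parameter names, and the log-likelihood increment specialised to a unit-variance Gaussian mean shift as in the example below), the statistic of Equation (7) and the change-point estimate of Equations (8) and (9) can be maintained jointly, the estimate being the sample that follows the last reset of the statistic to zero:

```python
def cusum_with_changepoint(xs, mu0=0.0, mu1=0.4, sigma=1.0):
    """CUSUM statistic with a change-point estimate: tau is the 1-based
    index of the sample following the last reset of V to zero."""
    v, tau = 0.0, 1
    for n, x in enumerate(xs, start=1):
        z = (mu1 - mu0) / sigma**2 * (x - 0.5 * (mu0 + mu1))
        v = max(0.0, v + z)
        if v == 0.0:          # statistic reset: no evidence of a change yet
            tau = n + 1
    return v, tau
```

Both quantities are updated in constant time per sample, which is what makes the CUSUM attractive for online use.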

2.1.2 Example

To illustrate the CUSUM, let us consider a change of mean in a single data stream composed of independent Gaussian random variables:

$\mathcal{H}_0 : x_i \sim \mathcal{N}(\mu_0, \sigma^2)\ \forall i \qquad \mathcal{H}_1 : \exists\,\tau,\ x_i \sim \mathcal{N}(\mu_1, \sigma^2) \text{ for } i \ge \tau$   (10)

In this case the CUSUM test statistic can be computed as:

$V_n = \max(0,\ V_{n-1} + z_n)$   (11)

with,

$z_n = \dfrac{\mu_1 - \mu_0}{\sigma^2}\left(x_n - \dfrac{\mu_0 + \mu_1}{2}\right)$   (12)
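The recursion of Equations (11) and (12) takes only a few lines of code; the following is a minimal sketch (the function name and the default parameter values are illustrative, not taken from the paper):

```python
def cusum_gaussian(xs, mu0=0.0, mu1=0.4, sigma=1.0):
    """CUSUM statistic of Eq. (11) for a change of mean mu0 -> mu1
    in i.i.d. Gaussian noise, using the increment z_n of Eq. (12)."""
    v, out = 0.0, []
    for x in xs:
        z = (mu1 - mu0) / sigma**2 * (x - 0.5 * (mu0 + mu1))  # Eq. (12)
        v = max(0.0, v + z)                                   # Eq. (11)
        out.append(v)
    return out
```

On pure noise the statistic hovers near zero; once the mean shifts to $\mu_1$, the increment $z_n$ has positive expectation and $V_n$ drifts upwards until it crosses the detection threshold.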
Figure 1: Example of detection of a change of mean in a single data stream with Gaussian noise. At the top is plotted the data stream with the change-point time marked; in the centre is the CUSUM variable with the value of the chosen threshold and the time of detection marked in green; at the bottom is the counter from Equation (9), with the estimated change point in red.

Figure 1 empirically shows that the CUSUM technique is able to detect a change in the mean of a signal that is not obvious from the time series alone. It takes several time samples for the method to detect the change-point (51 in this case), but it is able to detect it nonetheless. At the bottom of the figure we can see that the counter defined by Equation (9) gives us an estimate of the change point. The CUSUM can be used to detect any changing parameter Lee et al. (2003), even if it is most commonly used to detect a change of mean or variance. As in any detection technique, there is a balance to strike between detection rate and false alarms, here between the average detection delay and the ARL2FA. A way to tune the method is to determine what ARL2FA is tolerable, set the detection threshold (to which the CUSUM statistic is compared) to reach that ARL2FA, and then check what average detection delay is obtained. When comparing several methods, one can set the thresholds so that the ARL2FA is the same for all of them and compare the average detection delays to determine which gives the quickest detection.
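This calibration procedure can be sketched with a small Monte-Carlo simulation: run the CUSUM on pure noise, record the time of the first threshold crossing, and average the run lengths. This is a rough illustration with illustrative parameter values (far more and longer runs would be needed for an accurate estimate):

```python
import random

def cusum_arl2fa(threshold, n_runs=100, max_len=3000, mu1=0.4, seed=0):
    """Monte-Carlo estimate of the ARL2FA: run the CUSUM on pure
    N(0, 1) noise and average the time of the first threshold crossing
    (runs that never cross are truncated at max_len)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        v, t = 0.0, max_len
        for n in range(max_len):
            z = mu1 * (rng.gauss(0.0, 1.0) - mu1 / 2)  # LLR increment under H0
            v = max(0.0, v + z)
            if v > threshold:                          # false alarm
                t = n + 1
                break
        total += t
    return total / n_runs
```

One would raise the threshold until the estimated ARL2FA reaches the tolerated value, then measure the average detection delay at that threshold.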

2.2 Finite moving average (FMA)

Concerned by the cases in which the change is temporary, Tartakovsky et al. (2021) propose a method to detect such a change by computing a likelihood ratio test on a moving window of the signal, defined as:

$F_n = \sum_{i=n-\ell+1}^{n} \ln\dfrac{f_1(x_i)}{f_0(x_i)}$   (13)

which depends on the window length $\ell$. This test statistic is then compared to a threshold to decide whether to trigger a detection. In Tartakovsky et al. (2021), the authors compared this approach to the CUSUM when the amplitude of a change in mean is lower than expected, when the change duration is finite, and even when the change is intermittent. These characteristics are of the utmost interest for our purpose.

This technique requires either memorising the last $\ell$ values of the log-likelihood ratio or computing $\ell$ times more operations at every time sample than the CUSUM technique. Also, this method seems to be sensitive to the difference between the length of the window and the duration of the signal to detect.
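A sketch of the window statistic of Equation (13), with the log-likelihood increment specialised to a unit-variance Gaussian mean shift and illustrative parameter values; the bounded deque makes the memory cost of the window explicit:

```python
from collections import deque

def fma_stat(xs, window=50, mu1=0.4):
    """FMA statistic: sum of the last `window` log-likelihood-ratio
    increments (unit-variance Gaussian mean-shift case)."""
    buf = deque(maxlen=window)  # bounded buffer: old increments fall out
    out = []
    for x in xs:
        buf.append(mu1 * (x - mu1 / 2))
        out.append(sum(buf))
    return out
```

Unlike the CUSUM recursion, every update touches the whole window, which is the extra cost mentioned above.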

2.3 Temporary-Event-CUSUM

In this paper we introduce a new technique called Temporary-Event-CUSUM (TE-CUSUM). Because the change is transitory, the model we consider is defined through the two hypotheses:

$\mathcal{H}_0 : x_i \sim f_0\ \forall i \qquad \mathcal{H}_1 : \exists\,\tau \le \tau',\ x_i \sim f_1 \text{ for } \tau \le i \le \tau' \text{ and } x_i \sim f_0 \text{ otherwise}$   (14)

Proposition 1.

The test statistic obtained by solving the generalised likelihood ratio test of Equation (14) can be recursively obtained as follows:

$V'_n = \max(V'_{n-1},\ V_n), \qquad V'_0 = 0$   (15)

with $V_n$ being the CUSUM test statistic defined in Equation (7).

Proof.

The GLRT of Equation (14) can be written as:

$\Lambda_n = \max_{1 \le \tau \le \tau' \le n} \prod_{i=\tau}^{\tau'} \dfrac{f_1(x_i)}{f_0(x_i)}$   (16)

which leads straightforwardly to the recursive form introduced in Prop. 1. ∎

In Equation (16) it is implicit that $\tau$ is the last change-point before $\tau'$. Moreover, causality forces $\tau' \le n$ when $\Lambda_n$ is computed.

In univariate cases, the TE-CUSUM is strictly equivalent to the standard CUSUM because a test on $V_n$ is equivalent to a test on $V'_n$. From Equation (15) we have $V'_n = \max_{k \le n} V_k$. Therefore if $V_n \ge h$ then $V'_n \ge h$, and if $V'_n \ge h$ then there is a $k \le n$ such that $V_k \ge h$, $h$ being the detection threshold. Neither statistic can cross the threshold without the other having crossed it.
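A sketch of the recursion (Gaussian mean-shift increments, illustrative parameter values); note that the running maximum is the only addition with respect to the plain CUSUM:

```python
def te_cusum(xs, mu1=0.4):
    """TE-CUSUM sketch: the local CUSUM V_n plus its running maximum
    V'_n = max(V'_{n-1}, V_n), which holds its level after the event ends."""
    v, g, out = 0.0, 0.0, []
    for x in xs:
        v = max(0.0, v + mu1 * (x - mu1 / 2))  # standard CUSUM (Eq. 7)
        g = max(g, v)                          # TE-CUSUM (Eq. 15)
        out.append(g)
    return out
```

The first threshold crossing of the running maximum coincides with the first crossing of the CUSUM itself, which is the univariate equivalence argued above; the difference only matters once several streams are aggregated.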

3 Multivariate detection

In this section, we consider a multi-sensor network consisting of a collection of $L$ indexed sensors, each of which observes a realization from the previously discussed model. More specifically, under normal conditions, the behaviour of each sensor is governed by the distribution $f_0$. At a random time and for some random duration, a change can occur which affects a subset $\mathcal{S}$ of the sensors. The detection problem can thus be formulated using the following binary hypothesis test model:

$\mathcal{H}_0 : x_i^l \sim f_0\ \forall l, \forall i \qquad \mathcal{H}_1 : \exists\,\mathcal{S},\ \{\tau_l \le \tau'_l\}_{l \in \mathcal{S}} \text{ such that } x_i^l \sim f_1 \text{ for } l \in \mathcal{S},\ \tau_l \le i \le \tau'_l,\ \text{and } x_i^l \sim f_0 \text{ otherwise}$   (17)

3.1 A brief review of existing procedures

Solving the problem set by Equation (17) would require testing all combinations of change-points for each possible subset of sensors. Some optimisation approaches have been used by Kurt and Wang (2019), Rovatsos et al. (2020a) and Rovatsos et al. (2020b). The major drawback of these approaches is that they rapidly become too computationally expensive and lose the possibility of a recursive computation. Golosnoy et al. (2009) see the multivariate CUSUM variable as the norm of the sum of the local test statistics, and Xie and Siegmund (2013) take into account the case where the size of the subset of affected sensors is roughly known. In some cases, when the number of sensors is very large, Banerjee and Veeravalli (2015) proposed to transmit only binary decisions to the decision centre, so that detection is triggered by their number and not by an aggregation of local values.

Except for Rovatsos et al. (2020a) and Rovatsos et al. (2020b), all the developed methods consider that the change is permanent (i.e. $\tau'_l = \infty$) for all the affected sensors.

In the work we present here, we intend to remove this limitation (thus allowing some sensors to stop monitoring the change at some point) without increasing the computational power required for the detection.

Two basic ways to adapt the CUSUM to multivariate cases are to compute the sum of the local variables or to extract the maximum value among the local CUSUM variables.

The SumCUSUM Mei (2010) aggregates the local CUSUM variables as follows:

$U_n = \sum_{l=1}^{L} V_n^l$   (18)

where $U_n$ is the global SumCUSUM variable, i.e. the sum of the local CUSUM variables of the $L$ data streams, and $V_n^l$ is the CUSUM variable of sensor $l$ at time $n$. This variable is compared to an adapted threshold $h$ to decide on a detection when $U_n \ge h$.

The MaxCUSUM extracts the highest value among the local CUSUM variables, as shown by Equation (19):

$M_n = \max_{1 \le l \le L} V_n^l$   (19)

It appears that the SumCUSUM is relevant when all or almost all of the data streams are affected by the signal, while the MaxCUSUM is relevant when only one or a few of the data streams are affected.
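The two aggregations can be computed side by side from the same local statistics; a minimal sketch (Gaussian mean-shift increments, illustrative parameters):

```python
def sum_and_max_cusum(streams, mu1=0.4):
    """Run one CUSUM per data stream and aggregate by sum (SumCUSUM,
    Eq. 18) and by max (MaxCUSUM, Eq. 19).
    `streams` is a list of equal-length lists of observations."""
    v = [0.0] * len(streams)   # local CUSUM statistics
    sums, maxes = [], []
    for n in range(len(streams[0])):
        for l, stream in enumerate(streams):
            v[l] = max(0.0, v[l] + mu1 * (stream[n] - mu1 / 2))
        sums.append(sum(v))
        maxes.append(max(v))
    return sums, maxes
```

When every stream carries the signal the sum grows $L$ times faster than any single statistic, while with a single affected stream the max is not diluted by the noise of the others, which is the trade-off described above.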

Mei (2010) proposed to select ("censor") sensors and compute a partial and optimised SumCUSUM with a low computational cost. It seems to be a very effective way to merge the data for an online use of the method. The SumCUSUM variable is thus transformed as:

$U_n = \sum_{l=1}^{L} V_n^l\,\mathbb{1}\{V_n^l \ge h_c\}$   (20)

with $h_c$ a threshold based on prior rough knowledge of the value $V_n^l$ would take if sensor $l$ were affected by the signal. Figure 2 shows the results of the three methods depending on the proportion of sensors affected by the signal; the average run length to false alarm (ARL2FA) of all three methods has been set to 30,000. We can see that when 1 or 2 out of 10 sensors are affected, the MaxCUSUM shows lower detection delays, while the SumCUSUM gives better results when 3 or more sensors are affected. We can also infer from Figure 2 that the censored SumCUSUM is a good compromise between the SumCUSUM and the MaxCUSUM. However, Mei (2010) shows that the best choice for $h_c$ depends on the number of affected sensors. While in some cases the proportion of affected sensors can be roughly predicted, in most cases it is completely unknown. In the example of Figure 2, $h_c$ has been set to 60% of the global threshold $h$.

Figure 2:

Average detection delay of SumCUSUM, MaxCUSUM and censored SumCUSUM techniques on a change in the mean of a Gaussian distribution with signal to noise ratio of -6dB. Change-point occurs at time sample 1000.

3.2 A novel adaptive censoring technique

To overcome the limitation of requiring some prior knowledge of the expected values of $V_n^l$ in order to carefully choose the absolute threshold $h_c$ of Mei (2010), we propose a relative threshold $h_n$, computed at every time sample by:

$h_n = c \cdot \max_{1 \le l \le L} V_n^l$   (21)

with $c$ a factor such that $0 \le c \le 1$.

In both cases the censoring technique is a compromise between the SumCUSUM and the MaxCUSUM. The results of the two can be retrieved using particular parameter values ($c = 0$ for the SumCUSUM and $c = 1$ for the MaxCUSUM, and likewise $h_c = 0$ or $h_c = h$ for the absolute threshold).
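The adaptive censoring step is a one-liner once the local statistics are available; a sketch with an illustrative value of $c$:

```python
def censored_sum(local_stats, c=0.6):
    """Adaptive censoring (Eq. 21): keep only the local statistics above
    the relative threshold h_n = c * max_l V_n^l, then sum them."""
    h = c * max(local_stats)
    return sum(v for v in local_stats if v >= h)
```

With $c = 0$ every sensor is kept (SumCUSUM behaviour); with $c = 1$ only the largest local statistic survives (MaxCUSUM behaviour); intermediate values discard the streams that look like pure noise relative to the strongest one.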

In order to assess the difference in behaviour of the two threshold types when the number of affected sensors is unknown, we conducted an experiment whose results are shown in Figure 3. In this experiment, the global threshold is set such that an average run length to false alarm (ARL2FA) of 10,000 is obtained, and the values of $c$ and $h_c$ are set to those which give the overall quickest detection for an unknown number of affected sensors between 1 and 20. The results empirically show that the proposed adaptive censoring technique outperforms the classical one, and the gap in performance increases with the number of affected sensors.

Figure 3: Average detection delay of the standard and adaptive censored SumCUSUM at -12dB (with optimised censoring parameters). A change of mean in the Gaussian distributions of a subset of sensors (in abscissa) appears at time 1000, while the other sensors keep the centred Gaussian distribution.

Because we cannot know in advance the number of sensors that will be affected, and because the relative threshold does not require knowledge of the expected values of $V_n^l$, the relative threshold is consequently the more relevant choice.

Some clues to explain this difference in behaviour can be found by examining the differences between the two methods in some particular cases.

Case 1: all the local $V_n^l$ have values close to one another and relatively far from the censoring threshold. In this case both methods compute the same $U_n$.

Case 2: the local $V_n^l$ have very different values. The standard method then includes more low values of $V_n^l$ when computing $U_n$, which lowers the contribution of the affected sensors relative to the global threshold and slows the detection.

Case 3: all the local $V_n^l$ have values close to one another and also close to the censoring thresholds. In this case it is the standard method that computes the higher value of $U_n$, but this case implies that all the values are close to the noise level, so detection does not happen with either method unless $h_c$ is chosen close to the global threshold $h$, in which case the behaviour is close to that of the MaxCUSUM.

In all that follows we apply this optimised relative censoring technique to all local statistics (CUSUM, TE-CUSUM, FMA) and keep the SumCUSUM and the MaxCUSUM as benchmarks.

3.3 Asynchronous monitoring and Temporary-Event-CUSUM on multivariate cases

In the previous section we considered that the signal appears simultaneously on all the affected sensors: all the local test variables are computed simultaneously, and from these we compute the global variable at time $n$ and make a decision regarding the detection.

In many practical cases the signal can be monitored by the sensors with a delay. Moreover, some sensors can cease to be affected by the signal before others begin to be, so the sensors are not affected at the same time. One could try to find the best synchronisation of the data streams, i.e. the one which maximises the associated CUSUM variable, but this is a combinatorial problem.

By using locally the novel TE-CUSUM test statistic defined in Proposition 1, the Sum-TE-CUSUM allows us to get the best synchronisation without having to test all the combinations, thus saving a lot of computational resources. This time the global test variable becomes:

$\Lambda_n = \max_{\{\tau_l \le \tau'_l \le n\}} \prod_{l=1}^{L} \prod_{i=\tau_l}^{\tau'_l} \dfrac{f_1(x_i^l)}{f_0(x_i^l)}$   (22)

where $\tau_l$ is the change-point for data stream $l$ and $\tau'_l$ is the end of the signal presence in data stream $l$.

As a reminder of Equation (15), the local variable is:

$V'^{\,l}_n = \max(V'^{\,l}_{n-1},\ V_n^l)$   (23)

From Equation (16), with the statistic of data stream $l$ rewritten as $V'^{\,l}_n$, the global variable is:

$U'_n = \sum_{l=1}^{L} V'^{\,l}_n$   (24)

The censoring technique can be applied to Equation (24) simply by adding a threshold as in Equation (20):

$U'_n = \sum_{l=1}^{L} V'^{\,l}_n\,\mathbb{1}\{V'^{\,l}_n \ge h_n\}$   (25)
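A sketch of the uncensored aggregation of Equations (23) and (24), with Gaussian mean-shift increments and illustrative parameters: each stream keeps the maximum of its own CUSUM, so an event that hops from one stream to the next keeps pushing the global statistic upwards instead of letting it decay between appearances:

```python
def sum_te_cusum(streams, mu1=0.4):
    """Sum-TE-CUSUM sketch: per-stream running maximum of the local
    CUSUM, aggregated by summation (Eqs. 23-24)."""
    L = len(streams)
    v = [0.0] * L   # local CUSUM statistics
    g = [0.0] * L   # local TE-CUSUM statistics (running maxima)
    out = []
    for n in range(len(streams[0])):
        for l in range(L):
            v[l] = max(0.0, v[l] + mu1 * (streams[l][n] - mu1 / 2))
            g[l] = max(g[l], v[l])
        out.append(sum(g))
    return out
```

The update remains constant-time per sensor and per sample, which is the recursive property the paper is after.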

Here is an example to illustrate the Sum-TE-CUSUM. An event is monitored in three data streams, but with such a delay that there is no overlap. In Figure 4, we can see the three data streams with and without noise.

Figure 4: Three data streams monitoring a short event without overlapping (noiseless in red, noise+signal in blue)

Figure 5 shows the evolution of the test variable of the standard SumCUSUM technique and the TE-CUSUM.

Figure 5: Evolution of test variables on the data streams of figure 4

We can see in Figure 5 that the SumCUSUM decreases between each appearance of the signal, while the TE-CUSUM stands by and increases again as soon as the signal appears on another data stream. With the TE-CUSUM, we can therefore detect the presence of the event with a higher threshold. If we set both thresholds to have a probability of false alarm of 1% on this interval and run 10,000 simulations, we obtain a detection rate of 41% with the SumCUSUM and 83% with the TE-CUSUM.

Remark: in order to compare it to the other methods, we can also use the censoring technique to extend the FMA technique to multivariate cases:

$F^{\Sigma}_n = \sum_{l=1}^{L} F_n^l\,\mathbb{1}\{F_n^l \ge h_n\}$   (26)

where $F_n^l$ is the local FMA statistic of Equation (13) computed on data stream $l$.

4 Validation

In this section we compare the different detection methods presented previously. The studied methods are used to detect a change in mean affecting only a subset of sensors. The measurement noise at each sensor is assumed to be normally distributed with zero mean and standard deviation $\sigma$. This experiment is conducted on several cases with two different signal-to-noise ratios (SNR), defined as $\mathrm{SNR} = 20\log_{10}(\mu_1/\sigma)$, $\mu_1$ being the amplitude of the change. Ten sensors are considered, among which 3, 5 or 7 monitor the event. The censoring technique is applied to the TE-CUSUM and the FMA. For each method the global threshold is set to obtain an ARL2FA of 30,000 time samples, and the censoring parameter is set (except for the SumCUSUM and the MaxCUSUM) to give the quickest detection for a random number of affected signals when all of them are synchronised.

Two window sizes have been chosen for the FMA technique, 50 and 200 time samples, as the duration of the event is supposed to be unknown. In the first scenario this gives one case where the time window is shorter than the exposure and one where it is longer; in the second scenario, the duration of the exposure will, fortunately for the FMA200, be exactly 200 time samples.

The numerical experiments are divided into four cases: in the first, all the signals are monitored simultaneously; in the second, there is a drift of half the signal length between each sensor which monitors the signal; in the third, the drift is of a full signal length; and in the fourth, the drift is of one and a half signal lengths. This last simulation represents a diffuse event passing through a field of sensors, so that the sensors do not monitor the event at the same time and only a small part of them can monitor the signal at all.

4.1 First scenario

In this first scenario, all the studied techniques are set to expect an offset of 0.4 in amplitude and to have an ARL2FA of 30,000 time samples. When a sensor is affected by the signal, its mean value is affected by an offset of 0.4 for a duration of 100 time samples.

(a) Results when 3 sensors out of 10 are affected
(b) Results when 7 sensors out of 10 are affected
Figure 6: Cumulative detection rate as a function of time for each of the presented methods when the streams are synchronised, with a signal-to-noise ratio of -8dB. The end of exposure marks the time until which at least one sensor is monitoring the change in amplitude; the exposure time is 100 time samples for the affected sensors. (SC = SumCUSUM, MC = MaxCUSUM, cSC = Censored-SumCUSUM, cSTEC = Censored-Sum-Temporary-Event-CUSUM, cFMA50 = Censored Finite Moving Average with a window of 50 time samples, cFMA200 = Censored Finite Moving Average with a window of 200 time samples.)

From Figures 6(a) and 6(b) we can see that when all the exposures are synchronised, the SumCUSUM technique gives the best results; however, the censored Sum-TE-CUSUM and the MaxCUSUM also give rather good results in this case. When the data streams stop monitoring the event, at the time marked "end of exposure", the detection rate is over 80% for the four best methods when 3 sensors are exposed, and over 90% when 7 sensors are exposed. When 7 sensors are exposed, the censored TE-CUSUM as well as the SumCUSUM reach almost 100% detection at the end of exposure. The FMA technique seems to give slower detection and does not manage to reach the detection rate of the other techniques.

(a) Results when 3 sensors out of 10 are affected
(b) Results when 7 sensors out of 10 are affected
Figure 7: Cumulative detection rate as a function of time for each of the presented methods when the streams are slightly out-of-sync, with a signal-to-noise ratio of -8dB. The end of exposure marks the time until which at least one sensor is monitoring the change in amplitude. Here the delay between the beginning of the exposure of one sensor and the next is 50 time samples, while the exposure time per sensor is 100 time samples. (Abbreviations as in Figure 6.)

The results presented in Figures 7(a) and 7(b) are obtained under the same conditions, except that this time the start of monitoring of each data stream is delayed by 50 time samples from the previous one. Here the censored TE-CUSUM shows how it manages to give similar results in more difficult conditions: at the end of the exposure in Figure 7(a), the total energy transmitted by the event to the system is the same as at the end of exposure in Figure 6(a). The censored TE-CUSUM gives in both cases a 90% detection rate at the "end of exposure", where the SumCUSUM falls from about 95% when the signals are synchronised to a little less than 90% when they are slightly out-of-sync. These figures also show that the longer the overall exposure, the better the FMA results seem to be.

(a) Results when 3 sensors out of 10 are affected
(b) Results when 7 sensors out of 10 are affected
Figure 8: Cumulative detection rate as a function of time for each of the presented methods when the streams are totally out-of-sync, with a signal-to-noise ratio of -8dB. The end of exposure marks the time until which at least one sensor is monitoring the change in amplitude; in (b) the end of exposure happens after the end of the plotted time range. Here the delay between the beginning of the exposure of one sensor and the next is 150 time samples, while the exposure time per sensor is 100 time samples. (Abbreviations as in Figure 6.)

In the cases of Figures 8(a) and 8(b), there is a gap of 50 time samples between the exposures of successive data streams during which no data stream monitors the signal. One can see that the detection rate tends to plateau between the exposures of the data streams. In Figure 8(b) the end of exposure happens at time sample 1100 and is not marked on the figure. In these two last cases we can see that the censored FMA technique can give good results when the exposure lasts. It is notable that in all the non-synchronised cases, the censored TE-CUSUM gives the best early detection and the best detection rate. This time the SumCUSUM, because it only considers instantaneous CUSUM variables, has a very low detection rate compared to the other techniques.

Results for more cases are presented in Appendix A.

4.2 Second scenario

In this second scenario, we run the same tests but with a lower SNR. The amplitude of the signal expected by the methods is still 0.4, but the real signal has an amplitude of only 0.2. Again, the detection thresholds of all methods are set to obtain an ARL2FA of 30,000 time samples. When a sensor is affected by the signal, its mean value is offset by 0.2 for a duration of 200 time samples.

(a) Results when 3 sensors out of 10 are affected
(b) Results when 7 sensors out of 10 are affected
Figure 9: Cumulative detection rate as a function of time for each of the presented methods when the streams are synchronised, with a signal-to-noise ratio of -14dB. The end of exposure marks the time until which at least one sensor is monitoring the change in amplitude; the exposure time is 200 time samples for the affected sensors. (Abbreviations as in Figure 6.)

Figures 9(a) and 9(b) show that, this time, nearly all the methods have great difficulty reaching high detection rates. The exposure of more data streams in the case of Figure 9(b) improves the detection rates, but the SumCUSUM is the only method which is fully satisfying.

(a) Results when 3 sensors out of 10 are affected
(b) Results when 7 sensors out of 10 are affected
Figure 10: Cumulative detection rate as a function of time for each of the presented methods when the streams are slightly out-of-sync, with a signal-to-noise ratio of -14dB. The end of exposure marks the time until which at least one sensor is monitoring the change in amplitude. Here the delay between the beginning of the exposure of one sensor and the next is 100 time samples, while the exposure time per sensor is 200 time samples. (Abbreviations as in Figure 6.)

Figures 10(a) and 10(b) show that when the signals are slightly out-of-sync, the censored FMA (with a time window of 200) and the censored TE-CUSUM give similar results. While these results are not fully satisfying, it is important to remember that the SNR is very low: in terms of power, the signal-to-noise ratio is -14dB on a data stream when the signal is present.

(a) Results when 3 sensors out of 10 are affected
(b) Results when 7 sensors out of 10 are affected
Figure 11: Cumulative detection rate as a function of time for each of the presented methods when the streams are totally out-of-sync, with a signal-to-noise ratio of -14dB. The end of exposure marks the time until which at least one sensor is monitoring the change in amplitude; in (b) the end of exposure happens after the end of the plotted time range. Here the delay between the beginning of the exposure of one sensor and the next is 300 time samples, while the exposure time per sensor is 200 time samples. (Abbreviations as in Figure 6.)

In this last example, Figures 11(a) and 11(b) show that the censored TE-CUSUM and the censored FMA can reach the same detection rate as when the signals are synchronised or slightly out-of-sync, even if these detection rates are reached later. The SumCUSUM gives very good results when most sensors monitor the event and when these are synchronised; however, when one of these conditions is not met, its performance degrades very quickly.

Results with subsets of affected sensors and one additional asynchronous case are presented in Appendix A.

5 Conclusion

In this paper, we have addressed the problem of detecting an event which appears on only a subset of the data streams, and whose appearances can be delayed relative to one another, so that the data streams monitoring the same event are perceived by the system as out-of-sync.

Existing methods have already exploited the fact that an event may be monitored by only a portion of the data streams, and they can deal with the fact that the change point does not occur at the same time on every data stream. What standard CUSUM methods fail to consider, however, is that a sensor can in some cases cease to monitor the event while another one still does.
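As a brief illustration of the two standard aggregation schemes discussed in this paper, the SumCUSUM and MaxCUSUM global statistics combine the local per-stream CUSUM variables as sketched below. The function names and the toy values are our own illustrative choices, not the authors' notation.

```python
def sum_cusum(local_stats):
    """SumCUSUM: aggregate the L local CUSUM statistics by summing them;
    sensitive when many streams are affected by the change."""
    return sum(local_stats)

def max_cusum(local_stats):
    """MaxCUSUM: take the largest local CUSUM statistic; sensitive when
    only one (or few) streams are affected."""
    return max(local_stats)

# Toy local CUSUM values for L = 4 streams, one strongly affected.
g = [0.1, 0.0, 3.2, 0.4]
global_sum = sum_cusum(g)  # dominated by the sum over all streams
global_max = max_cusum(g)  # dominated by the single affected stream
```

Either global statistic would then be compared to a threshold calibrated for a target false-alarm rate.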

The method we propose takes into consideration all possible delays between the change points on the different data streams, as well as the fact that some streams can cease to monitor the event before the end of the system's exposure. We have shown that if the system is composed of only one data stream, the TE-CUSUM is equivalent to a standard CUSUM procedure.

We have also shown that, apart from the cases where all signals are synchronised, the censored TE-CUSUM gives the best results, even if those can be similar to the FMA's at very low SNR when the FMA window is adapted to the signal length. It is important to note, however, that the FMA was not originally designed for multivariate out-of-sync cases. A major advantage of the censored TE-CUSUM is that it keeps the recursive computation of the CUSUM: it only adds, on top of the SumCUSUM technique, a comparison of the SumCUSUM variable to its last maximum, whereas the FMA computes the likelihood ratio over a signal portion which can be rather long, and must therefore store many values (Appendix B) to compute the test variable. The FMA also requires choosing a window length, a parameter that is sensitive to the expected signal length. This sensitivity gives the TE-CUSUM the advantage of being easier to tune when the length of exposure is not known.
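Based on the description above, the recursive structure of the censored TE-CUSUM can be sketched as follows. This is a minimal, hypothetical Python sketch: the state layout, the name `local_threshold`, and the toy log-likelihood ratios are our own illustrative choices, not the authors' exact formulation.

```python
def cusum_update(g, llr):
    """One-step CUSUM recursion (Page, 1954): g_t = max(0, g_{t-1} + llr_t)."""
    return max(0.0, g + llr)

def censored_sum(gs, local_threshold):
    """Censored aggregation: only streams whose local CUSUM exceeds a
    per-stream threshold contribute to the global statistic."""
    return sum(g for g in gs if g > local_threshold)

def te_cusum_step(state, llrs, local_threshold):
    """Update each local CUSUM recursively, aggregate the censored sum,
    and compare it to its running maximum so that a temporary change
    (a rise followed by a fall) is not forgotten."""
    gs, running_max = state
    gs = [cusum_update(g, llr) for g, llr in zip(gs, llrs)]
    G = censored_sum(gs, local_threshold)
    running_max = max(running_max, G)  # the extra comparison added to SumCUSUM
    return (gs, running_max), running_max

# Toy run: 3 streams, a temporary change on stream 0 only (5 samples long).
state = ([0.0, 0.0, 0.0], 0.0)
for t in range(10):
    llrs = [1.0 if t < 5 else -1.0, -0.2, -0.2]
    state, stat = te_cusum_step(state, llrs, local_threshold=0.5)
# stat retains the peak reached during the change, even after it vanishes
```

An alarm would then be raised by comparing `stat` to a global threshold calibrated for a target false-alarm rate; note that the per-step cost and state size stay fixed, in line with the recursive nature of the method.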

In this paper we have introduced the new TE-CUSUM method, a light and simple detection technique which covers a greater range of cases by allowing the event to be temporarily, and not simultaneously, monitored by the different data streams, while still giving rather good results in the standard case where data streams are synchronised. The proposed procedure has many practical applications, e.g. when a network of sensors monitors a localised event passing through: for instance, a plume travelling through the air and containing a chemical compound one wishes to detect, or a furtive object passing through several radar-monitored areas one after the other.

Appendix A Experimental results

Table 1: Results with 3/10 affected data streams in the first column, 5/10 in the second and 7/10 in the third; with synchronised signals in the first row, a 50 time sample delay in the second, a full signal length delay in the third, and a 50 sample gap between two exposures in the fourth row. The signal amplitude is 0.4 and there are 100 time samples per exposure.
Table 2: Results with 3/10 affected data streams in the first column, 5/10 in the second and 7/10 in the third; with synchronised signals in the first row, a 100 time sample delay in the second, a full signal length delay in the third, and a 100 sample gap between two exposures in the fourth row. The signal amplitude is 0.2 and there are 200 time samples per exposure.

Appendix B Computational resources

Method       Description of computations         Number of      Number of
                                                 computations   stored variables
MaxCUSUM     L access g + L tests                2L             L
SumCUSUM     L access g + L sums                 2L             L
CensoredSC   L access g + L tests + sums                        L
TE-CUSUM     same as cSC + L tests on G                         2L
FMA          (access + sums) + L tests + sums
Table 3: Estimation of the computational cost of each method (L is the number of sensors and w is the window length for the FMA). In our cases . The computation of the likelihood ratio is not displayed here, as it is common to all the techniques; the global decision test is not displayed for the same reason.
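The storage argument summarised in Table 3 can be illustrated with a small sketch: a recursive CUSUM keeps a single scalar of state per stream regardless of run length, whereas an FMA must retain the last w log-likelihood ratios per stream. The class names and the toy values below are our own illustrative choices, not the authors' implementation.

```python
from collections import deque

class CusumStream:
    """Recursive CUSUM: one scalar of state, independent of run length."""
    def __init__(self):
        self.g = 0.0
    def step(self, llr):
        self.g = max(0.0, self.g + llr)
        return self.g
    def storage(self):
        return 1  # one stored variable per stream

class FmaStream:
    """Finite Moving Average: retains the last w log-likelihood ratios,
    so per-stream storage grows with the window length w."""
    def __init__(self, w):
        self.buf = deque(maxlen=w)
    def step(self, llr):
        self.buf.append(llr)
        return sum(self.buf)  # likelihood ratio summed over the window
    def storage(self):
        return len(self.buf)

# After many samples, the CUSUM state stays O(1) while the FMA buffer is O(w).
cusum, fma = CusumStream(), FmaStream(w=50)
for _ in range(120):
    cusum.step(0.1)
    fma.step(0.1)
```

For L streams this gives O(L) total storage for the recursive schemes versus O(Lw) for the FMA, matching the comparison made in the conclusion.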
