Event-based State Estimation: An Emulation-based Approach

03/24/2017 ∙ by Sebastian Trimpe, et al. ∙ Max Planck Society

An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor agents observe a dynamic process and sporadically transmit their measurements to estimator agents over a shared bus network. Local event-triggering protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. The event-based design is shown to emulate the performance of a centralized state observer design up to guaranteed bounds, but with reduced communication. The stability results for state estimation are extended to the distributed control system that results when the local estimates are used for feedback control. Results from numerical simulations and hardware experiments illustrate the effectiveness of the proposed approach in reducing network communication.


1 Introduction

In almost all control systems today, data is processed and transferred between the system’s components periodically. While periodic system design is often convenient and well understood, it involves an inherent limitation: data is processed and transmitted at predetermined time instants, irrespective of the current state of the system or the information content of the data. That is, system resources are used regardless of whether there is any need for processing and communication or not. This becomes prohibitive when resources are scarce, such as in networked or cyber-physical systems, where multiple agents share a communication medium.

Owing to the limitations of traditional design methodologies for resource-constrained problems, aperiodic or event-based strategies have recently received a lot of attention [1, 2]. With event-based methods, data is transmitted or processed only when certain events indicate that an update is required, for example, to meet some control or estimation specification. Thus, resources are used only when required and saved otherwise.

Figure 1: Distributed state estimation problem. Multiple distributed sensors make observations of a dynamic system and communicate to estimator nodes via a common bus network. The objective of this article is the development of an event-based scheme that allows all estimators to estimate the full system state x(k) with limited inter-agent communication.

In this article, a novel event-based scheme for distributed state estimation is proposed. We consider the system shown in Fig. 1, where multiple sensors observe a dynamic system and transmit data to estimator agents over a common bus. Each estimator agent shall estimate the full state of the dynamic system, for example, for the purpose of monitoring or control. In order to limit network traffic, local event triggers on each sensor ensure that updates are sent only when needed. The common bus ensures that transmitted data reaches all agents in the network, which will allow for efficient triggering decisions and the availability of full state information on all agents.

The proposed approach for distributed event-based estimation emulates a classic discrete-time state observer design up to guaranteed bounds, but with limited communication. Emulation-based design is one common approach in event-based control literature (see [2]), where an event-based control system is designed so as to emulate the behavior of a given continuous or periodic control system. However, to the best of the author’s knowledge, emulation-based design has not been considered for state estimation. While the focus of this article is on state estimation, we also show stability of the event-based control system resulting when local estimates are used for feedback control.

In particular, this article makes the following main contributions:

  1. First emulation-based design for distributed event-based state estimation replicating a centralized discrete-time linear observer.

  2. Stability proofs for the resulting distributed and switching estimator dynamics under generic communication or computation imperfections (bounded disturbances).

  3. Extension to distributed event-based control, where local estimates are used for feedback.

  4. Experimental validation on an unstable networked control system.

Preliminary versions of the results herein were presented in the conference papers [3, 4]; this article has been completely rewritten and new results added.

1.1 Related work

Early work on event-based state estimation (EBSE) concerned problems with a single sensor and estimator node (see [1] and references therein). Typically, the optimal triggering strategies have time-varying thresholds for finite-horizon problems, and constant thresholds for infinite-horizon problems, [1, p. 340]. Because long-term behavior (stability) is of primary interest herein, we consider constant thresholds.

Different types of stationary triggering policies have been suggested in the literature. With the send-on-delta (SoD) protocol [5], transmissions are triggered based on the difference of the current and the last-transmitted measurement. Innovation-based triggering [6] places a threshold on the measurement innovation; that is, the difference of the current measurement and its prediction based on a process model. Wu et al. [7] use the same trigger, but apply a transformation to decorrelate the innovation. Considering the variance of the innovation instead yields variance-based triggering [8]. Marck and Sijs [9] proposed relevant sampling, where the relative entropy of prior and posterior state distribution is employed as a measure of information gain. We use innovation-based triggers herein, which have been shown to be effective for EBSE, [10].

Different estimation algorithms have been proposed for EBSE, with particular emphasis on how to (approximately) incorporate the information contained in ‘negative’ events (instants when no data is transmitted), [11, 12, 13]. If one ignores the extra information from negative events in favor of a straightforward implementation, a time-varying Kalman filter (KF) can be used (e.g. [6]). Herein, we use the same structure as the standard KF, but with pre-computed switching gains, thus achieving the lowest computational complexity of all mentioned algorithms.

To the best of the author’s knowledge, distributed EBSE with multiple sensor/estimator nodes and general coupled dynamics was first studied in [6]. While Yook et al. [14] had previously proposed the use of state estimators for the purpose of saving communication, they do not employ state estimation in the usual sense. Instead of fusing model-based predictions with incoming data, they reset parts of the state vector. Later results on distributed EBSE include [15, 16, 13, 17, 18, 19]. In contrast to the scenario herein, they consider either a centralized fusion node, or simpler SoD-type triggers, which are less effective for estimation, [10]. None of the mentioned references treats the problem of emulating a centralized observer design with a distributed and event-triggered implementation.

When the event-based state estimators are connected to state-feedback controllers (discussed in Section 5), this represents a distributed event-based control system. Wang et al. [20] and Mazo Jr. et al. [21] were among the first to discuss distributed or decentralized event-based control. In contrast to these works, we neither assume perfect state measurements, nor a centralized controller as in [21], nor restrict the dynamic couplings as in [20]; however, we rely on a common bus network supporting all-to-all communication.

1.2 Notation

The terms state observer and state estimator are used synonymously in this article. ℝ, ℕ, and ℕ₀ denote the real numbers, the positive integers, and the set ℕ ∪ {0}, respectively. Where convenient, vectors are expressed as tuples (v_1, v_2, …), where the components v_i may be vectors themselves, with dimension and stacking clear from context. For a vector v and a matrix M, ‖v‖ denotes some vector Hölder norm [22, p. 344], and ‖M‖ the induced matrix norm. For a sequence x = (x(0), x(1), …), ‖x‖_∞ := sup_{k≥0} ‖x(k)‖ denotes its supremum norm. For an estimate of x(k) computed from measurement data up to time ℓ, we write x̂(k|ℓ), and we abbreviate x̂(k) := x̂(k|k). A matrix is called stable if all its eigenvalues have magnitude strictly less than one. Expectation is denoted by E[·].

2 Problem statement: distributed state estimation with reduced communication

We introduce the considered networked dynamic system and state the estimation problem addressed in this article.

2.1 Networked dynamic system

We consider the networked estimation scenario in Fig. 1. The dynamic system is described by linear discrete-time dynamics

x(k) = A x(k-1) + B u(k-1) + v(k-1),   (1)
y(k) = C x(k) + w(k),   (2)

with sampling time T_s, state x(k), control input u(k), measurement y(k), disturbances v(k), w(k), and all matrices of corresponding dimensions. We assume that (A, B) is stabilizable and (A, C) is detectable. No specific assumptions on the characteristics of the disturbances v and w are made; they can be random variables or deterministic disturbances.

Each of the N sensor agents (cf. Fig. 1) observes part of the dynamic process through its local measurements y_i(k), i ∈ {1, …, N}. The vector y(k) thus represents the collective measurements of all agents,

y(k) = ( y_1(k), y_2(k), …, y_N(k) ),   (3)
y_i(k) = C_i x(k) + w_i(k),   (4)

with C = [C_1^T, C_2^T, …, C_N^T]^T and w(k) = ( w_1(k), …, w_N(k) ). Agents can be heterogeneous with different types and dimensions of measurements, and no local observability assumption is made (i.e., the pair (A, C_i) may not be detectable).

Each of the estimator agents (cf. Fig. 1) shall reconstruct the full state x(k) for the purpose of, for example, having full information at different monitoring stations, distributed optimal decision making, or local state-feedback control. Overall, there are N sensor agents and M estimator agents; we use i to index the sensor agents and j to index the estimator agents.

While the primary concern is the development of an event-based approach to the distributed state estimation problem in Fig. 1, we shall also address distributed control when the local estimates are used for feedback. For this, we consider the control input decomposed as

u(k) = ( u_1(k), u_2(k), …, u_M(k) ),   (5)

with the input u_j(k) computed on estimator agent j.

All agents are connected over a common-bus network; that is, if one agent communicates, all agents will receive the data. We assume that the network bandwidth is such that, in the worst case, all agents can communicate in one time step T_s, and contention is resolved via low-level protocols. Moreover, agents are assumed to be synchronized in time, and network communication is abstracted as instantaneous.

Remark 1.

The common bus is a key component of the developed event-based approach. It will allow the agents to compute consistent estimates and use these for effective triggering decisions (while inconsistencies can still happen due to data loss or delay). Wired networks with a shared bus architecture such as Controller Area Network (CAN) or other fieldbus systems are common in industry [23]. Recently, Ferrari et al. [24] have proposed a common bus concept also for multi-hop low-power wireless networks.

2.2 Reference design

We assume that a centralized, discrete-time state estimator design is given, which we seek to emulate with the event-based design to be developed herein:

x̂_c(k|k-1) = A x̂_c(k-1) + B u(k-1),   (6)
x̂_c(k) = x̂_c(k|k-1) + L ( y(k) - C x̂_c(k|k-1) ),   (7)

where the estimator gain L has been designed to achieve desired estimation performance, and the estimator is initialized with some x̂_c(0). For example, (6), (7) can be a Kalman filter representing the optimal Bayesian estimator for Gaussian noise, or a Luenberger observer designed via pole placement to achieve a desired dynamic response. At any rate, a reasonable observer design will ensure stable dynamics of the estimation error ε_c(k) := x(k) - x̂_c(k):

ε_c(k) = (I - LC) A ε_c(k-1) + (I - LC) v(k-1) - L w(k).   (8)

We thus assume that (I - LC)A is stable, which is always possible since (A, C) is detectable. It follows [25, p. 212–213] that there exist m ≥ 1 and ρ ∈ (0, 1) such that, for all k ≥ 0,

‖((I - LC)A)^k‖ ≤ m ρ^k.   (9)
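For concreteness, the following Python sketch shows one way such a reference design can be obtained numerically: a steady-state Kalman gain L computed from the dual discrete-time algebraic Riccati equation, followed by a check of (9) via the spectral radius of (I - LC)A. The system matrices and noise weights are illustrative placeholders, not taken from this article.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder system (not from the article): a double integrator with position measurement.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)   # assumed process-noise weight
R = 0.1 * np.eye(1)    # assumed measurement-noise weight

# Steady-state Kalman gain via the dual Riccati equation (one possible reference design).
P = solve_discrete_are(A.T, C.T, Q, R)
L = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

# The emulation-based design requires (I - L C) A to be stable (cf. (8), (9)).
closed = (np.eye(2) - L @ C) @ A
rho = max(abs(np.linalg.eigvals(closed)))
print("estimator gain L =", L.ravel(), " spectral radius of (I-LC)A =", rho)
assert rho < 1.0, "reference observer must yield stable error dynamics"
```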

2.3 Problem statement

The main objective of this article is an EBSE design that approximates the reference design of Section 2.2 with guaranteed bounds:

Problem 1.

Develop a distributed EBSE design for the scenario in Fig. 1, where each estimator agent j locally computes an estimate x̂_j(k) of the state x(k), and each sensor agent i makes individual transmit decisions for its local measurements y_i(k). The design shall emulate the centralized estimator (6), (7), bounding the difference x̂_c(k) - x̂_j(k), but with reduced communication of sensor measurements.

Furthermore, we address distributed control based on the EBSE design:

Problem 2.

Design distributed control laws for computing the control inputs u_j(k) (cf. (5)) locally from the event-based estimates x̂_j(k) so as to achieve stable closed-loop dynamics (bounded state x(k)).

For state estimation in general, both the measurement signal and the control input must be known (cf. (6), (7)). For simplicity, we first focus on the reduction of sensor measurements and assume

Assumption 1.

The input u(k) is known by all agents.

This is the case, for example, when estimating a process without control input (i.e., u(k) = 0), when u(k) is an a-priori known reference signal, or when u(k) is broadcast periodically over the shared bus. In particular, if the components u_j(k) are computed by different agents as in Problem 2, Assumption 1 requires the agents to exchange their inputs over the bus at every step k. Reducing measurement communication, but periodically exchanging inputs, may be a viable solution when there are more measurements than control inputs (as is the case for the experiment presented in Section 6.2).

Later, in Section 5, an extension of the results is presented, which does not require Assumption 1 and periodic exchange of inputs by employing event-triggering protocols also for the inputs.

3 Event-based state estimation with a single sensor-estimator link

In order to develop the main ideas of the EBSE approach, we first consider Problem 1 for the simpler, but relevant, special case of a single sensor and a single estimator agent; that is, a single sensor transmits data over a network link to a remote estimator (also considered in [1, 9, 11, 7, 10], for instance). For the purpose of this section, we make the simplifying assumption of a perfect communication link:

Assumption 2.

Transmission from sensor to estimator is instantaneous and no data is lost.

For a sufficiently fast network link, this may be ensured by low-level protocols using acknowledgments and re-transmissions. However, this assumption is made for the sake of simplicity in this section, and omitted again in the later sections.

We propose the event-based architecture depicted in Fig. 2(a). The key idea is to replicate the remote state estimator at the sensor; the sensor agent then knows what the estimator knows, and thus also when the estimator is in need of new data. The State Estimator and Event Trigger, which together form the EBSE algorithm, are explained next.

(a) Single sensor/estimator agent
(b) Multiple sensor/estimator agents
Figure 2: Proposed event-based state estimation architectures. Dashed arrows indicate event-based communication, while solid ones indicate periodic communication. (a) Single sensor/estimator case: The sensor agent implements a copy of the remote State Estimator to trigger a data transmission (Event Trigger) whenever an update is needed at the remote agent. (b) Multiple sensor/estimator case: Each agent implements a copy of the State Estimator for making transmit decisions (Sensors) or having full state information available (Estimators). The common bus supports data exchange between all agents; at time k, the set of triggered measurements is broadcast to all agents. Disturbances d_j model differences in the agents’ estimates, e.g. from imperfect communication.

3.1 State estimator

Both the sensor and the remote agent implement the state estimator (cf. Fig. 2(a)). The estimator recursively computes an estimate x̂_a(k) of the system state x(k) from the available measurements:

x̂_a(k|k-1) = A x̂_a(k-1) + B u(k-1),   (10)
x̂_a(k) = x̂_a(k|k-1) + γ(k) L ( y(k) - C x̂_a(k|k-1) ),   (11)

with a = S for the sensor, a = E for the estimator, L as in (7), and γ(k) ∈ {0, 1} denoting the sensor’s decision of transmitting (γ(k) = 1), or not (γ(k) = 0).

By Assumption 2, both estimators have the same input data. If, in addition, they are initialized identically, both estimates are identical, i.e., x̂_S(k) = x̂_E(k) for all k. Hence, the sensor has knowledge about the estimator and can exploit this for the triggering decision.

3.2 Event trigger

The sensor transmits a measurement if, and only if, the remote estimator cannot predict the measurement accurately enough based on its state prediction. Specifically, y(k) is transmitted when the remote prediction C x̂_E(k|k-1) deviates from y(k) by more than a tolerable threshold δ ≥ 0. Since x̂_S(k|k-1) = x̂_E(k|k-1), the sensor can make this decision without requiring communication from the remote estimator:

γ(k) = 1  (i.e., transmit y(k))  ⟺  ‖y(k) - C x̂_S(k|k-1)‖ ≥ δ.   (12)

Tuning δ allows the designer to trade off the sensor’s frequency of events (and, hence, the communication rate) for estimation performance. This choice of the trigger will be instrumental in bounding the difference between the event-based and the centralized estimator, as will be seen in the subsequent stability analysis. The trigger is also called an innovation-based trigger and was previously proposed in different contexts in [14, 6, 7]. The innovation trigger (12) can also be realized without the local state estimator on the sensor by periodically communicating estimates from the remote estimator to the sensor. However, the proposed architecture avoids this additional communication.
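As a minimal sketch of (10)–(12) (not the authors’ implementation), the following Python code mirrors the architecture of Fig. 2(a): both the sensor and the remote agent run the same estimator copy, and the sensor evaluates the innovation trigger locally. The matrices A, B, C, the gain L, and the threshold delta are assumed given.

```python
import numpy as np

class EventBasedEstimator:
    """Switching estimator (10), (11): the measurement update with gain L is applied
    only when a measurement is received (gamma(k) = 1)."""

    def __init__(self, A, B, C, L, x0):
        self.A, self.B, self.C, self.L = A, B, C, L
        self.x = np.array(x0, dtype=float)   # current estimate x_hat(k|k)
        self.x_pred = self.x.copy()          # x_hat(k|k-1)

    def predict(self, u):
        # Time update, eq. (10)
        self.x_pred = self.A @ self.x + self.B @ u
        return self.x_pred

    def correct(self, y=None):
        # Measurement update, eq. (11); y is None when nothing was transmitted (gamma = 0)
        if y is not None:
            self.x = self.x_pred + self.L @ (y - self.C @ self.x_pred)
        else:
            self.x = self.x_pred.copy()
        return self.x


def sensor_step(sensor_copy, remote_copy, y, u, delta):
    """One step of the single-link scheme of Fig. 2(a): the trigger (12) is evaluated
    at the sensor, and both estimator copies apply the identical update (Assumption 2)."""
    y_pred = sensor_copy.C @ sensor_copy.predict(u)
    remote_copy.predict(u)
    transmit = np.linalg.norm(y - y_pred) >= delta   # innovation trigger (12)
    sensor_copy.correct(y if transmit else None)
    remote_copy.correct(y if transmit else None)     # receives y only if it was transmitted
    return transmit
```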

3.3 Stability analysis

The estimator update equations (10), (11) and the triggering rule (12) together constitute the proposed event-based state estimator. The estimator (10), (11) is a switching observer, whose switching modes are governed by the event trigger (12). For arbitrary switching, stability of the switching observer is not implied by stability of the centralized design (see e.g. [26]). Hence, proving stability is an essential, non-trivial requirement for the event-based design.

3.3.1 Difference to centralized estimator

Addressing Problem 1, we first prove a bounded difference e(k) := x̂_c(k) - x̂_E(k) between the event-based and the centralized reference estimator. Using (6), (7), (10), and (11), the difference can be written as

e(k) = (I - LC) A e(k-1) + (1 - γ(k)) L ( y(k) - C x̂_E(k|k-1) ),   (13)

where the last term was obtained by adding and subtracting L ( y(k) - C x̂_E(k|k-1) ). The error e(k) is governed by the stable centralized estimator dynamics with an extra input term, which is bounded by the choice of the event trigger (12): for γ(k) = 0, ‖y(k) - C x̂_E(k|k-1)‖ < δ holds by (12), and for γ(k) = 1, the extra term vanishes. We thus have the following result:

Theorem 1.

Let Assumptions 1 and 2 be satisfied, (I - LC)A be stable, and ‖x̂_c(0) - x̂_E(0)‖ ≤ e_0 for some e_0 ≥ 0. Then, the difference between the centralized estimator (6), (7) and the EBSE (10), (11), (12) is bounded for all k ≥ 0 by

‖x̂_c(k) - x̂_E(k)‖ ≤ m ρ^k e_0 + m ‖L‖ δ / (1 - ρ).   (14)
Proof.

From the assumptions, it follows that ‖e(0)‖ ≤ e_0 and x̂_S(k|k-1) = x̂_E(k|k-1) for all k. From the previous argument, we have, for all k,

‖(1 - γ(k)) L ( y(k) - C x̂_E(k|k-1) )‖ ≤ ‖L‖ δ.   (15)

The bound (14) then follows from [25, p. 218, Thm. 75] and exponential stability of (I - LC)A (cf. (9)). ∎

The first term in (14), m ρ^k e_0, is due to possibly different initial conditions between the EBSE and the centralized estimator, and the second term, m ‖L‖ δ / (1 - ρ), represents the asymptotic bound. By choosing δ small enough, the difference ‖x̂_c(k) - x̂_E(k)‖ can hence be made arbitrarily small as k → ∞, and, for δ = 0, the performance of the centralized estimator is recovered.

The bound (14) holds irrespective of the nature of the disturbances v and w in (1), (2) (no assumption on v, w is made in Theorem 1). In particular, it also holds for the case of unbounded disturbances such as Gaussian noise.
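The following self-contained simulation sketch illustrates the emulation property on a made-up second-order system (all numbers are placeholders, not the article’s experiments): the event-based estimate tracks the centralized one while transmitting only a fraction of the measurements.

```python
import numpy as np
rng = np.random.default_rng(0)

# Placeholder system and observer gain (illustrative only); (I - L C) A is stable here.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.8]])
delta = 0.2                               # trigger threshold of (12)

x = np.zeros(2)                           # true state
xc = np.zeros(2)                          # centralized estimate x_hat_c
xe = np.array([0.3, 0.0])                 # event-based estimate, different initialization
n_events = 0

for k in range(200):
    x = A @ x + rng.normal(scale=0.05, size=2)       # process noise v(k-1), no input
    y = C @ x + rng.normal(scale=0.05, size=1)       # measurement noise w(k)
    xc_pred, xe_pred = A @ xc, A @ xe                # predictions (6), (10)
    xc = xc_pred + L @ (y - C @ xc_pred)             # centralized update (7)
    if np.linalg.norm(y - C @ xe_pred) >= delta:     # innovation trigger (12)
        xe = xe_pred + L @ (y - C @ xe_pred)         # event-based update (11), gamma = 1
        n_events += 1
    else:
        xe = xe_pred                                 # gamma = 0: prediction only

print("communication rate:", n_events / 200,
      "  final |x_hat_c - x_hat_E|:", np.linalg.norm(xc - xe))
```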

3.3.2 Estimation error

The actual estimation error of the estimator agent is

ε(k) := x(k) - x̂_E(k) = ε_c(k) + ( x̂_c(k) - x̂_E(k) ).   (16)

Theorem 1 can be used to deduce properties of the estimation error from properties of the centralized estimator. We exemplify this for the case of bounded, as well as stochastic, disturbances v and w.

Corollary 1.

Let v, w, and ε_c(0) be bounded, ‖x̂_c(0) - x̂_E(0)‖ ≤ e_0, and (I - LC)A be stable. Then, the event-based estimation error (16) is bounded by

‖x(k) - x̂_E(k)‖ ≤ ε̄_c + m ρ^k e_0 + m ‖L‖ δ / (1 - ρ),   (17)

with ε̄_c a bound on the centralized estimation error, ‖ε_c(k)‖ ≤ ε̄_c for all k.

Proof.

The bound on the centralized estimation error follows directly from (8), exponential stability (9), and [25, p. 218, Thm. 75]. The result (17) is then immediate from (16). ∎

Corollary 2.

Let v(k), w(k), and x(0) be random variables with E[v(k)] = 0, E[w(k)] = 0, E[x(0)] = x̄_0, and the centralized estimator be initialized with x̂_c(0) = x̄_0. Let ‖x̂_c(0) - x̂_E(0)‖ ≤ e_0 be bounded, and (I - LC)A be stable. Then, the expected event-based estimation error (16) is bounded by

‖E[ x(k) - x̂_E(k) ]‖ ≤ m ρ^k e_0 + m ‖L‖ δ / (1 - ρ).   (18)
Proof.

From (8), it follows that E[ε_c(k)] = (I - LC) A E[ε_c(k-1)], and thus E[ε_c(k)] = 0 by recursion from E[ε_c(0)] = 0. Therefore,

‖E[ x(k) - x̂_E(k) ]‖ = ‖E[ε_c(k)] + E[ x̂_c(k) - x̂_E(k) ]‖ = ‖E[ x̂_c(k) - x̂_E(k) ]‖ ≤ E[ ‖x̂_c(k) - x̂_E(k)‖ ] ≤ m ρ^k e_0 + m ‖L‖ δ / (1 - ρ),   (19)

where the first inequality follows from Jensen’s inequality, and the last from (14). ∎

4 Event-based state estimation with multiple agents

We extend the ideas of the previous section to the general multi-agent case in Problem 1. While the assumption of perfect communication (Assumption 2) may be realizable for a few agents, it becomes unrealistic as the number of agents increases. Thus, we generalize the stability analysis to the case where agents’ estimates can differ.

4.1 Architecture

We propose the distributed event-based architecture depicted in Fig. 2(b) for the multi-agent problem. Adopting the key idea of the single agent case (cf. Fig. 2(a)), each agent implements a copy of the state estimator for making transmit decisions. The common bus network ensures that, if a measurement is transmitted, it is broadcast to all other units. For the estimators to be consistent, the sensor agents also listen to the measurement data broadcast by other units.

The proposed EBSE scheme is distributed in the sense that data from distributed sensors is required for stable state estimation, and that transmit decisions are made locally by each agent.

4.2 Event trigger

In analogy to the single agent case (12), each sensor agent i, i ∈ {1, …, N}, uses the following event triggering rule:

γ_i(k) = 1  (i.e., transmit y_i(k))  ⟺  ‖y_i(k) - C_i x̂_i(k|k-1)‖ ≥ δ_i.   (20)

The prediction C_i x̂_i(k|k-1) computed by sensor agent i is representative also for all other agents’ predictions of the same measurement, C_i x̂_j(k|k-1), as long as x̂_i(k|k-1) ≈ x̂_j(k|k-1), which is to be established in the stability analysis below. Being able to approximately represent the other agents’ knowledge is the basis for making effective transmit decisions in the proposed approach.

For later reference, we introduce the index sets I(k) and I^c(k) of transmitting and not-transmitting sensor agents:

I(k) := { i ∈ {1, …, N} : γ_i(k) = 1 },   (21)
I^c(k) := { i ∈ {1, …, N} : γ_i(k) = 0 }.   (22)

4.3 State estimator

Extending the event-based estimator (10), (11) to the multi-sensor case, we propose the following estimator update for all agents j:

x̂_j(k|k-1) = A x̂_j(k-1) + B u(k-1),   (23)
x̂_j(k) = x̂_j(k|k-1) + Σ_{i ∈ I(k)} L_i ( y_i(k) - C_i x̂_j(k|k-1) ),   (24)

where L_i is the submatrix of the centralized gain L corresponding to the measurement y_i(k). Rewriting (7) as

x̂_c(k) = x̂_c(k|k-1) + Σ_{i=1}^{N} L_i ( y_i(k) - C_i x̂_c(k|k-1) ),   (25)

we see that (24) is the same as the centralized update, but only updating with a subset of all measurements. If, at time k, no measurement is transmitted (i.e., I(k) = ∅), then the summation in (24) vanishes; that is, x̂_j(k) = x̂_j(k|k-1).
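A sketch of the multi-agent update (23), (24) in Python: the centralized gain L is partitioned column-wise into per-sensor blocks L_i, and each agent applies only the blocks of the sensors that transmitted. The bus is abstracted as a plain list of (index, measurement) pairs; this is an illustrative assumption, not the article’s implementation.

```python
import numpy as np

def split_gain(L, meas_dims):
    """Partition the centralized gain L column-wise into blocks L_i, one per sensor (cf. (25))."""
    blocks, col = [], 0
    for p_i in meas_dims:
        blocks.append(L[:, col:col + p_i])
        col += p_i
    return blocks

def agent_update(x_hat, u, bus, A, B, C_blocks, L_blocks):
    """Estimator update (23), (24) for one agent.

    bus : list of (i, y_i) pairs broadcast at this step (empty if nothing was triggered).
    """
    x_pred = A @ x_hat + B @ u                                   # eq. (23)
    x_new = x_pred.copy()
    for i, y_i in bus:                                           # only transmitted measurements, i in I(k)
        x_new += L_blocks[i] @ (y_i - C_blocks[i] @ x_pred)      # eq. (24)
    return x_new

def sensor_trigger(i, x_pred, y_i, C_blocks, delta_i):
    """Event trigger (20), evaluated locally at sensor agent i on its own prediction."""
    return np.linalg.norm(y_i - C_blocks[i] @ x_pred) >= delta_i
```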

To account for differences in any two agents’ estimates, e.g. from unequal initialization, different computation accuracy, or imperfect communication, we introduce a generic disturbance signal d_j(k) acting on each estimator (cf. Fig. 2(b)). For the stability analysis, we thus replace (24) with

x̂_j(k) = x̂_j(k|k-1) + Σ_{i ∈ I(k)} L_i ( y_i(k) - C_i x̂_j(k|k-1) ) + d_j(k).   (26)

The disturbances d_j are assumed to be bounded:

Assumption 3.

For all k and all agents j, ‖d_j(k)‖ ≤ d̄ for some d̄ ≥ 0.

This assumption is realistic when the d_j represent imperfect initialization or different computation accuracy, for example. Even though the assumption may not hold for modeling packet drops in general, the developed method was found to be effective also for this case in the example of Section 6.1.

4.4 Stability analysis

We discuss stability of the distributed EBSE system given by the process (1), (4), the (disturbed) estimators (23), (26), and the triggering rule (20). We first consider the difference between the centralized and the event-based estimate, e_j(k) := x̂_c(k) - x̂_j(k). By straightforward manipulation using (6), (23), (25), and (26), we obtain

e_j(k) = (I - LC) A e_j(k-1) + Σ_{i ∈ I^c(k)} L_i ( y_i(k) - C_i x̂_j(k|k-1) ) - d_j(k)   (27)
  = (I - LC) A e_j(k-1) + Σ_{i ∈ I^c(k)} L_i ( y_i(k) - C_i x̂_i(k|k-1) ) - d_j(k) + Σ_{i ∈ I^c(k)} L_i C_i A ε_{ij}(k-1),   (28)

where ε_{ij}(k) := x̂_i(k) - x̂_j(k) is the inter-agent error, and we used x̂_i(k|k-1) - x̂_j(k|k-1) = A ε_{ij}(k-1). The error dynamics (28) are governed by the stable matrix (I - LC)A with three input terms. The term Σ_{i ∈ I^c(k)} L_i ( y_i(k) - C_i x̂_i(k|k-1) ) is analogous to the last term in (13) and bounded by the event triggering (20) (cf. (22)). The last two terms are due to the disturbance d_j(k) and the resulting inter-agent differences ε_{ij}(k-1). To bound e_j(k), the inter-agent error ε_{ij}(k) must also be bounded, which is established next.

4.4.1 Inter-agent error

The inter-agent error can be written as

ε_{ij}(k) = A_{I(k)} ε_{ij}(k-1) + d_i(k) - d_j(k),   (29)

where, for a subset S ⊆ {1, …, N}, the matrix A_S is defined by

A_S := ( I - Σ_{i ∈ S} L_i C_i ) A.   (30)

Hence, the inter-agent error is governed by the time-varying dynamics A_{I(k)}, switching according to the triggering decisions. Unfortunately, one cannot, in general, infer stability of the inter-agent error (and thus of the event-based estimation error (28)) from stability of the centralized design. A counterexample is presented in [4].

A sufficiency result for stability of the inter-agent error can be obtained by considering the dynamics (29) under arbitrary switching; that is, with A_{I(k)} replaced by A_S for an arbitrary subset S ⊆ {1, …, N} at every step. The following result is adapted from [27, Lemma 3.1].

Lemma 1.

Let Assumption 3 hold, and let the matrix inequality

A_S^T P A_S - P < 0   (31)

be satisfied for some positive definite matrix P and for all subsets S ⊆ {1, …, N}. Then, for given initial errors ε_{ij}(0), there exists ε̄, 0 ≤ ε̄ < ∞, such that

‖ε_{ij}(k)‖ ≤ ε̄  for all k ≥ 0 and all agent pairs (i, j).   (32)
Proof.

Under (31), the error dynamics (29) are input-to-state stable (ISS) following the proof of [27, Lemma 3.1] (with the switching matrix there replaced by A_{I(k)}). With Assumption 3, ISS guarantees boundedness of the inter-agent error and thus the existence of ε̄_{ij} (possibly dependent on the initial error ε_{ij}(0)) such that

‖ε_{ij}(k)‖ ≤ ε̄_{ij}  for all k ≥ 0.   (33)

Finally, (32) is obtained by taking the maximum over all agent pairs (i, j). ∎

The stability test is conservative because the event trigger (20) will generally not permit arbitrary switching. Since the collection of subsets also includes the empty set (i.e., A_∅ = A), the test can only be used for open-loop stable dynamics (1). In Section 4.4.3, we present an alternative approach to obtain bounded inter-agent errors for arbitrary systems.
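The sufficient condition of Lemma 1 can be checked numerically by searching for a common Lyapunov matrix P with a small semidefinite program over all subset matrices A_S from (30). The sketch below uses CVXPY as one possible SDP front end (an assumption; any SDP solver works) and is only a feasibility check for (31); as noted above, its failure does not imply instability of the actual event-based system.

```python
import itertools
import numpy as np
import cvxpy as cp

def subset_matrix(S, A, L_blocks, C_blocks):
    """A_S = (I - sum_{i in S} L_i C_i) A: update matrix when exactly the sensors in S transmit (cf. (30))."""
    n = A.shape[0]
    M = np.eye(n)
    for i in S:
        M = M - L_blocks[i] @ C_blocks[i]
    return M @ A

def find_common_lyapunov(A, L_blocks, C_blocks, eps=1e-6):
    """Search for P > 0 satisfying A_S^T P A_S - P < 0 for all subsets S (inequality (31))."""
    n = A.shape[0]
    N = len(L_blocks)
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n)]
    # r = 0 includes the empty set, i.e. A_empty = A (hence A must be stable for this test).
    for r in range(N + 1):
        for S in itertools.combinations(range(N), r):
            A_S = subset_matrix(S, A, L_blocks, C_blocks)
            expr = A_S.T @ P @ A_S - P
            # symmetrize explicitly so the semidefinite constraint is well posed numerically
            constraints.append(0.5 * (expr + expr.T) << -eps * np.eye(n))
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return P.value if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE) else None
```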

4.4.2 Difference to centralized estimator

With the preceding lemma, we can now establish boundedness of the estimation error (28).

Theorem 2.

Let Assumptions 1 and 3 and the conditions of Lemma 1 be satisfied, and let (I - LC)A be stable. Then, the difference between the centralized estimator and the EBSE (20), (23), (26) is bounded for all k ≥ 0 by

‖x̂_c(k) - x̂_j(k)‖ ≤ m ρ^k ‖x̂_c(0) - x̂_j(0)‖ + ( m / (1 - ρ) ) ( Σ_{i=1}^{N} ‖L_i‖ δ_i + ε̄ Σ_{i=1}^{N} ‖L_i C_i A‖ + d̄ ),   (34)

with m, ρ as in (9), ε̄ as in (32), and d̄ as in Assumption 3.

Proof.

We can establish the following bounds (for all k):

‖Σ_{i ∈ I^c(k)} L_i ( y_i(k) - C_i x̂_i(k|k-1) )‖ ≤ Σ_{i=1}^{N} ‖L_i‖ δ_i,   (35)
‖Σ_{i ∈ I^c(k)} L_i C_i A ε_{ij}(k-1)‖ ≤ ε̄ Σ_{i=1}^{N} ‖L_i C_i A‖,   (36)
‖d_j(k)‖ ≤ d̄.   (37)

The result (34) then follows from (28), stability of (I - LC)A, and [25, p. 218, Thm. 75]. ∎

4.4.3 Synchronous estimator resets

We present a straightforward extension of the event-based communication scheme, which guarantees stability even if the inter-agent error dynamics (29) cannot be shown to be stable (e.g., if Lemma 1 does not apply).

Since the inter-agent error ε_{ij}(k) is the difference between the state estimates of agents i and j, we can make it zero by resetting the two agents’ state estimates to the same value, for example, their average. Therefore, a straightforward way to guarantee bounded inter-agent errors is to periodically reset all agents’ estimates to their joint average. Clearly, this strategy increases the communication load on the network. If, however, the disturbances d_j are small or occur only rarely, the required resetting period can typically be large relative to the underlying sampling time T_s.

We assume that the resetting happens after all agents have made their estimator updates (26). Let x̂_j⁻(k) and x̂_j(k) denote agent j’s estimate at time k before and after resetting, let N_a denote the total number of agents, and let K ∈ ℕ be the fixed resetting period. Each agent j implements the following synchronous averaging:

If k is a multiple of K:  transmit x̂_j⁻(k);   (38)
    receive x̂_ℓ⁻(k) from all other agents ℓ;
    set x̂_j(k) := (1/N_a) Σ_{ℓ=1}^{N_a} x̂_ℓ⁻(k).

We assume that the network capacity is such that the mutual exchange of the estimates can happen within one time step, and that no data is lost in the transfer. In other scenarios, one could take several time steps to exchange all estimates, at the expense of a delayed reset. The resetting period K can be chosen from simulations assuming a model for the inter-agent disturbances (e.g. packet drops).
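A minimal sketch of the synchronous averaging (38), under the stated assumption that all estimates can be exchanged within one time step; `estimates` holds every agent’s current estimate and `K` is the resetting period.

```python
import numpy as np

def synchronous_reset(estimates, k, K):
    """Synchronous averaging (38): every K steps all agents broadcast their current
    estimates and overwrite them with the joint average, which zeroes the inter-agent errors."""
    if k % K == 0:
        x_avg = np.mean(estimates, axis=0)           # average over all agents' estimates
        return [x_avg.copy() for _ in estimates]     # every agent adopts the same value
    return estimates
```

In a simulation loop, the call would be placed right after all agents have applied the update (26), e.g. `estimates = synchronous_reset(estimates, k, K=50)`.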

We have the following stability result for EBSE with synchronous averaging (38).

Theorem 3.

Let Assumptions 1 and 3 be satisfied and (I - LC)A be stable. Then, the difference between the centralized estimator and the EBSE with synchronous averaging given by (20), (23), (26), and (38) is bounded.

Proof.

Since the agent error (28) is affected by the resetting (38), we first rewrite e_j(k) = x̂_c(k) - x̂_j(k) in terms of the average estimate x̄(k) := (1/N_a) Σ_{ℓ=1}^{N_a} x̂_ℓ(k). Defining ē(k) := x̂_c(k) - x̄(k) and ẽ_j(k) := x̄(k) - x̂_j(k), we have e_j(k) = ē(k) + ẽ_j(k) and will establish the claim by showing boundedness of ē(k) and ẽ_j(k) separately.

For the average estimate x̄(k), we obtain from (23), (26),

x̄(k) = A x̄(k-1) + B u(k-1) + Σ_{i ∈ I(k)} L_i ( y_i(k) - C_i x̄(k|k-1) ) + d_avg(k),

where d_avg(k) := (1/N_a) Σ_{ℓ=1}^{N_a} d_ℓ(k) and x̄(k|k-1) := A x̄(k-1) + B u(k-1). The dynamics of the error ẽ_j(k) are described by

ẽ_j(k) = A_{I(k)} ẽ_j(k-1) + d_avg(k) - d_j(k)   for k not a multiple of K,   (39)
ẽ_j(k) = 0   for k a multiple of K,   (40)

where (39) is obtained by direct calculation analogous to (29), and (40) follows from (38). Since d_avg(k), d_j(k), and the finitely many matrices A_S are all bounded, and ẽ_j is reset to zero at least every K steps, boundedness of ẽ_j(k) for all k follows.

Since x̄(k) is the average of the agents’ estimates, we obtain from (28)

ē(k) = (I - LC) A ē(k-1) + Σ_{i ∈ I^c(k)} L_i ( y_i(k) - C_i x̂_i(k|k-1) ) - d_avg(k) - Σ_{i ∈ I^c(k)} L_i C_i A ẽ_i(k-1),   (41)

where we used (1/N_a) Σ_{j=1}^{N_a} ε_{ij}(k-1) = -ẽ_i(k-1). Note that (41) fully describes the evolution of ē(k). In particular, the resetting (38) does not affect ē(k) because, at k a multiple of K, it holds that

x̄(k) = (1/N_a) Σ_{ℓ=1}^{N_a} x̂_ℓ(k) = (1/N_a) Σ_{ℓ=1}^{N_a} x̂_ℓ⁻(k) = x̄⁻(k).   (42)

All input terms in (41) are bounded: d_avg(k) by Assumption 3, the triggering term by (22), and ẽ_i(k-1) by the previous argument. The claim then follows from stability of (I - LC)A. ∎

4.4.4 Estimation error

By means of (16) with Theorem 2 or Theorem 3, properties of agent j’s estimation error x(k) - x̂_j(k) can be derived given properties of the disturbances v, w, and the centralized estimator. For example, Corollaries 1 and 2 apply analogously also for the multi-agent case.

5 Distributed control

In this section, we address Problem 2; that is, the scenario where the local estimates on the estimators are used for feedback control.

Recall the decomposition (5) of the control input, where u_j(k) is the input computed on estimator agent j. Assume a centralized state-feedback design is given,

u(k) = F x(k),   (43)

with controller gain F such that A + BF is stable. We propose the distributed state-feedback control law

u_j(k) = F_j x̂_j(k),   (44)

where F_j is the part of the gain matrix F in (43) corresponding to the local input u_j(k). As for the emulation-based estimator design in the previous sections, the feedback gains do not need to be specifically designed, but can simply be taken from the centralized design (43).
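As a small illustration of the emulation idea for control (an illustrative sketch, not the article’s code), the centralized gain F of (43) can be partitioned row-wise according to an assumed ordering of the local inputs, and each agent evaluates only its own block on its local event-based estimate:

```python
import numpy as np

def split_controller(F, input_dims):
    """Partition the centralized gain F of (43) row-wise into blocks F_j, one per estimator agent."""
    blocks, row = [], 0
    for q_j in input_dims:
        blocks.append(F[row:row + q_j, :])
        row += q_j
    return blocks

def local_input(F_blocks, j, x_hat_j):
    """Distributed control law (44): agent j evaluates its block of the gain on its own estimate."""
    return F_blocks[j] @ x_hat_j
```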

5.1 Closed-loop stability analysis

Using (16) and (44), the state equation (1) can be rewritten as

x(k) = (A + BF) x(k-1) - Σ_{j=1}^{M} B_j F_j ε_j(k-1) + v(k-1),   (45)

where ε_j(k) := x(k) - x̂_j(k) are the estimation errors of the estimator agents (cf. Section 4.4.4), and B_j denotes the columns of B associated with the input u_j(k). Closed-loop stability can then be deduced leveraging the results of Section 4.

Theorem 4.

Let the assumptions of either Theorem 2 or Theorem 3 be satisfied, A + BF be stable, and v and w be bounded. Then, the state x(k) of the closed-loop control system given by (1), (2), (20), (23), (26), (44), and (possibly) (38) is bounded.

Proof.

Since (I - LC)A is stable and v, w are bounded, it follows from (8) that the estimation error ε_c(k) of the centralized observer is also bounded. Thus, (16) and Theorem 2 or 3 imply that all ε_j(k), j ∈ {1, …, M}, are bounded. Hence, it follows from (45), stability of A + BF, and boundedness of v that x(k) is also bounded. ∎

Satisfying Assumption 1 for the above result requires the periodic communication of all inputs over the bus. While this increases the network load, it can be a viable option if the number of inputs is comparably small. Next, we briefly present an alternative scheme, where the communication of inputs is reduced also by means of event-based protocols.

5.2 Event-based communication of inputs

Each estimator agent j computes u_j(k) according to (44) and broadcasts an update to the other agents whenever there has been a significant change:

transmit u_j(k)  ⟺  ‖u_j(k) - u_j^b(k-1)‖ ≥ δ_j^u,   (46)

where δ_j^u ≥ 0 is a tuning parameter, and u_j^b(k-1) is the last input that was broadcast by agent j.

Each agent maintains an estimate û(k) of the complete input vector u(k); the agents’ common estimate of agent j’s input is

û_j(k) = u_j(k) if u_j(k) is broadcast at time k, and û_j(k) = u_j^b(k-1) otherwise.   (47)

The agent then uses û(k) = ( û_1(k), …, û_M(k) ) instead of the true input u(k) for the estimator update (23). Since the error