I. Introduction
The next-generation electrical power grid, i.e., the smart grid, is vulnerable to a variety of cyber/physical system faults and hostile cyber threats [1, 2, 3, 4]. In particular, random anomalies such as node and topology failures might occur throughout the network. Moreover, malicious attackers can deliberately manipulate the network operation and tamper with the network data at sources such as smart meters, control centers, the network database, and network communication channels (see Fig. 1). False data injection (FDI), jamming, and denial of service (DoS) attacks are well-known attack types [2, 5, 6, 7, 8]. Furthermore, Internet of Things (IoT) botnets can be used to target critical infrastructures such as the smart grid [9, 10, 11].
In the smart grid, online state estimates are utilized to make timely decisions in critical tasks such as load-frequency control and economic dispatch [12]. Hence, a fundamental task in the grid is reliable state estimation based on online measurements. On the other hand, the main objective of the adversaries is to damage/mislead the state estimation mechanism in order to cause wrong/manipulated decisions, resulting in power blackouts or manipulated electricity prices [13]. Additionally, random system faults may degrade the state estimation performance. Our objective in this study is to design a highly secure and resilient state estimation mechanism for wide-area (multi-area) smart grids that provides reliable state estimates in a fully-distributed manner, even in the case of cyber-attacks and other network anomalies.
I-A. Background and Related Work
I-A.1 Secure Dynamic State Estimation
Feasibility of dynamic modeling and efficiency of dynamic state estimation have been widely discussed, and various dynamic models have been proposed for power grids [14, 15, 16, 17]. The general consensus is that dynamic modeling better captures the time-varying characteristics of the power grid, and that dynamic state estimators track the system state more effectively than conventional static least squares (LS) estimators. Moreover, the state forecasting capability of dynamic estimators is quite useful for real-time operation and security of the grid [15, 5]. In [14], a quasi-static state model is proposed in which the power system state is periodic over a day. In the simplest case, the system state is perturbed with an additive white Gaussian noise (AWGN) process with a small variance, so the state varies over a small dynamic range. In [17], a linear exponential smoothing model is proposed, where the effects of past measurements on the present state estimates are reduced over time.
In the literature, various techniques have been proposed to make the dynamic state estimation mechanism secure/robust against outliers, heavy-tailed noise processes, model uncertainties including unknown noise statistics, rank-deficient observation models, cyber-attacks, etc.
[18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. For instance, robust statistics-based approaches [18, 19, 20, 21] aim to suppress the effects of outliers by assigning smaller weights to more significant outliers. Since the outliers are still incorporated into the state estimation process, their effects are not completely eliminated; due to the recursive nature of dynamic state estimators, errors accumulate over time and the corresponding state estimator breaks down at some point, i.e., it fails to keep track of the system operating point. The estimator also breaks down in the case of gross outliers. Furthermore, this approach requires the solution of an iterative weighted LS problem at each time step, which might be prohibitive for real-time processing. Another method to deal with outliers is modeling the system noise with a heavy-tailed distribution, e.g., Student's t or Laplace [22, 23, 24]. This method can handle (nominal) outliers observed during regular system operation; however, it is expected to be ineffective against attacks and faults that behave significantly differently from nominal outliers. Further, some studies assume that attacks are sparse over the network or have bounded magnitudes [25, 26, 27, 28, 29], which significantly limits the robustness of the corresponding state estimators, because imposing restrictions on the capabilities or strategies of attackers makes the corresponding state estimation mechanism robust against only a certain subset of attacks. Yet another approach is to completely ignore the outliers in the state estimation process [30, 31, 32, 33]. In the literature, this approach is usually based on the sample-by-sample classification of observations as either outlier or non-outlier. Although this might be useful for detecting and eliminating the effects of gross outliers, the corresponding state estimator is expected to fail if small/moderate-level (difficult-to-detect) outliers are observed persistently.
Distributed dynamic state estimation has also been studied extensively; see, e.g., [34] for a review of distributed Kalman filtering techniques. Particularly, two main architectures are considered: hierarchical and fully-distributed. In the former, a central controller coordinates multiple local controllers, while in the latter, no central controller exists. In hierarchical schemes, the information filter, an algebraic equivalent of the centralized Kalman filter, can be used to fuse the information processed across the local controllers in a simple manner [35, 36]. Fully-distributed schemes usually require an iterative consensus mechanism to reduce the disagreement of the local controllers on the estimates of common state variables [34, 37]. In this work, we propose a near-optimal fully-distributed dynamic state estimation mechanism in which the local controllers exchange the necessary information only once in each measurement sampling interval.
I-A.2 Secure Distributed System Design
Traditionally, the power grid is controlled in a centralized manner through the supervisory control and data acquisition (SCADA) system. In particular, the system-wide data are collected, stored, and processed at a single node. Considering the increasing speed and size of the measurement data, collecting and processing such a huge volume of data in real-time at a single center seems practically infeasible in modern power grids [38]. Moreover, the traditional implementation is based on the assumption that the centralized node is completely trustworthy. In practice, however, the centralized node can be the weakest point of the network in terms of security: by hacking only the centralized node, adversaries can arbitrarily modify the control decisions and the network database. On the contrary, hacking a distributed system is usually more difficult for attackers, especially for the smart grid, which is distributed over a geographically wide region. Therefore, distributing the computation and the trust over the network can be useful to achieve a more feasible and a more secure grid.
Blockchain (BC) is an emerging secure distributed database technology, operating on a peer-to-peer (P2P) network with the following key components [39, 40, 41]: (i) a distributed ledger (a chronologically ordered sequence of blocks) that is shared and synchronized over the network, where each block is cryptographically linked to the previous blocks and the ledger is resistant/immune to modifications, (ii) advanced cryptography that enables secure data exchanges and secure data storage, and (iii) a mutual consensus mechanism that enables collective verification/validation of the integrity of exchanged and stored data, and thereby distributes trust over the network instead of relying on a single node/entity. BC technology was first used in financial applications [42, 43], but due to its security-by-design and its distributed nature without the need for any trusted third party, it has been applied to many fields such as vehicular networks, supply chain management, cognitive radio, and insurance [44, 45, 46].
The smart grid critically relies on a database and on network communication channels, both of which are quite vulnerable to attacks and manipulations. As a countermeasure, an effective approach is the detection and then mitigation of such threats. A significantly better approach, however, is the prevention of the threats as much as possible. In this direction, BC technology has great potential due to its advanced data protection/attack prevention capabilities. Hence, we aim to integrate some salient features of BC technology into the smart grid in order to improve the resilience of the system, particularly to protect the system database and the communication channels.
Research on the integration of BC into smart grids has so far mainly focused on secure energy transactions/trade [47, 48, 49, 4], and only a few studies have examined the integration of BC to make the power supply system more secure; see [50] and [51], where in both studies the grid is protected by securely storing all the system-wide measurements at every smart meter. However, this seems infeasible in many aspects: e.g., smart meters are small-size devices with limited memory, power, and processing capabilities [48] and hence are not suitable to perform advanced computations such as encryption/decryption, to store the distributed ledger, or to constantly communicate with all other meters in the network. Moreover, since the measurements are collected mainly to estimate the system state, the BC-based system can be designed to protect the state estimation mechanism in a more direct way, rather than to protect the entire history of raw measurement data that then enables secure state estimation.
Although BC can be useful to secure the network database and the communication channels, the online meter measurements are still vulnerable to attacks and faults. We would like to design a state estimation mechanism that is secure against all types of anomalies. Towards this goal, as a complement to the BC-based data protection, robust bad data detection and mitigation, i.e., state recovery, schemes need to be integrated into the state estimation mechanism. We have recently proposed in [7] a robust dynamic state estimation scheme for the smart grid for the case where the attack models are known (with some unknown parameters). In practice, unknown attacks/anomalies may occur in the smart grid, as it has many vulnerabilities and attackers might have arbitrary strategies. Hence, in general, anomaly/attack models need to be assumed unknown and the state estimation mechanism should be designed accordingly. Furthermore, in a BC-based distributed system, there is no centralized trusted node to check and recover a node that is faulty or hacked by a malicious entity. Hence, a distributed trust management mechanism needs to be employed over the network to evaluate the trustworthiness of each node against the possibility of misbehaving nodes.
I-B. Contributions
In this study, we propose a novel BC-based resilient system design to achieve secure distributed dynamic state estimation in wide-area smart grids. Our aim is to reduce the risks in each part of this highly complex network, specifically the network database, smart meters, local control centers, and network communication channels (see Fig. 1), without modeling anomalies.
Firstly, assuming regular system operation (no anomaly), we propose a fully-distributed dynamic state estimation scheme that achieves near-optimal performance thanks to local Kalman filters and the exchange of necessary information between local centers. Then, to improve the resilience of the proposed mechanism, we propose to (i) use salient features of the emerging BC technology to secure both the network database and the network communication channels against attacks and manipulations, (ii) embed online anomaly detection schemes into the state estimation mechanism to make it secure against measurement anomalies, and (iii) detect and eliminate the effects of misbehaving nodes in real-time via a novel distributed trust management mechanism over the network. As a result, we obtain a highly secure distributed mechanism that is able to deliver reliable state estimation performance under adversarial settings. Moreover, we provide theoretical guarantees regarding the false alarm rates of the proposed online detection schemes, where the false alarms can be easily controlled by the system designer.
I-C. Organization and Notations
The remainder of the paper is organized as follows. Sec. II presents the system model. Sec. III describes the BC-based secure system design. Sec. IV discusses the proposed distributed state estimation mechanism under regular (non-anomalous) network operation. Sec. V explains the proposed online anomaly detection scheme against measurement anomalies and the corresponding state recovery scheme. Sec. VI discusses the proposed distributed trust management scheme against misbehaving nodes. Sec. VII then summarizes the proposed mechanism. Sec. VIII illustrates the advantages of the proposed mechanism on a simulation setup. Finally, Sec. IX concludes the paper.
Notations:
Boldface letters denote vectors and matrices. $\mathbb{R}$ denotes the set of real numbers. $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ denotes the Gaussian probability density function (pdf) with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. $\mathbf{I}$ denotes an identity matrix. $\mathrm{P}(\cdot)$ and $\mathrm{E}[\cdot]$ denote the probability and the expectation operators, respectively. $\mathbb{1}\{\cdot\}$ denotes the indicator function. $\log$ denotes the natural logarithm and $e$ denotes Euler's number. $|\mathcal{S}|$ denotes the cardinality of a set $\mathcal{S}$. $\emptyset$ denotes the empty set. $\mathcal{S}_1 \setminus \mathcal{S}_2$ denotes the set of elements belonging to $\mathcal{S}_1$ but not belonging to $\mathcal{S}_2$. $\inf$, $\sup$, and $\max$ denote the infimum, supremum, and maximum operators, respectively. Finally, $(\cdot)^T$ denotes the transpose operator.

II. System Model
We consider a smart power grid with $N$ buses and $K$ smart meters, usually with $K > N$ to provide the necessary measurement redundancy against system noise [52]. The system state at time $t$, denoted by $\mathbf{x}_t$, represents the voltage phase angles of the buses, where one bus is chosen as the reference. The measurement taken at meter $m$ at time $t$ is denoted by $y_{t,m}$ and the measurement vector is denoted by $\mathbf{y}_t = [y_{t,1}, \dots, y_{t,K}]^T$. Based on the widely used approximate DC model [52, 14], we model the grid as a discrete-time linear dynamic system as follows:
(1)  $\mathbf{x}_t = \mathbf{A}\,\mathbf{x}_{t-1} + \mathbf{w}_t$
(2)  $\mathbf{y}_t = \mathbf{H}\,\mathbf{x}_t + \mathbf{v}_t$
where $\mathbf{A}$ is the state transition matrix, $\mathbf{H}$ is the measurement matrix determined by the network topology, $\mathbf{w}_t$ is the process noise vector, and $\mathbf{v}_t$ is the measurement noise vector. We assume that $\mathbf{w}_t \sim \mathcal{N}(\mathbf{0}, \sigma_w^2\,\mathbf{I})$ and $\mathbf{v}_t \sim \mathcal{N}(\mathbf{0}, \sigma_v^2\,\mathbf{I})$ are independent AWGN processes.
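To make the model concrete, the following minimal Python sketch simulates the linear dynamics (1)–(2) for a toy system with a diagonal state transition matrix. The dimensions, noise levels, and topology matrix are illustrative assumptions, not values from the paper.

```python
import random

def simulate(A_diag, H, sigma_w, sigma_v, T, seed=0):
    """Simulate x_t = A x_{t-1} + w_t and y_t = H x_t + v_t
    for a diagonal state transition matrix A (given by its diagonal)."""
    rng = random.Random(seed)
    x = [0.0] * len(A_diag)  # initial state (voltage phase angles)
    states, measurements = [], []
    for _ in range(T):
        # state transition with process noise w_t ~ N(0, sigma_w^2 I)
        x = [a * xi + rng.gauss(0.0, sigma_w) for a, xi in zip(A_diag, x)]
        # measurements y_t = H x_t + v_t with v_t ~ N(0, sigma_v^2 I)
        y = [sum(h_mn * xn for h_mn, xn in zip(row, x)) + rng.gauss(0.0, sigma_v)
             for row in H]
        states.append(x)
        measurements.append(y)
    return states, measurements

# Toy example: 2 state variables observed by 3 meters (K > N for redundancy).
H = [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]]
xs, ys = simulate([1.0, 1.0], H, sigma_w=0.01, sigma_v=0.05, T=100)
```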
The wide-area smart grid is composed of geographically separated subregions (see Fig. 5). Each subregion contains a set of smart meters supervised by a local (control) center. Since the meters are distributed over the network, each local center partially observes the measurement vector $\mathbf{y}_t$. Assuming that the grid is composed of $G$ subregions and the subset of meters in the $k$th subregion is denoted by $\mathcal{R}_k$, the measurement vector $\mathbf{y}_t$ is decoupled into $G$ subvectors $\mathbf{y}_{t,k}$, $k = 1, \dots, G$, where $\mathbf{y}_{t,k}$ denotes (with an abuse of notation) the measurement vector of the $k$th local center at time $t$ and $R_k = |\mathcal{R}_k|$ is the number of meters in the $k$th subregion. Since each meter belongs to only one subregion, for any two subregions $k$ and $l$, $\mathcal{R}_k$ and $\mathcal{R}_l$ do not overlap and we have $\sum_{k=1}^{G} R_k = K$.
The smart grid is an interconnected system in which there exist tie-lines between neighboring subregions (see Fig. 5), which leads to some common (shared) state variables between neighboring local centers. Hence, denoting the state vector of the $k$th local center at time $t$ by $\mathbf{x}_{t,k}$, for any two neighboring local centers $k$ and $l$, $\mathbf{x}_{t,k}$ and $\mathbf{x}_{t,l}$ might overlap. This implies that the sizes of the local state vectors sum to at least the size of $\mathbf{x}_t$. In general, if the state transition matrix $\mathbf{A}$ is non-diagonal, additional state variables might be shared between neighboring or non-neighboring local centers due to dependencies between state variables over time through the state transition matrix. In this study, for simplicity of presentation, we assume $\mathbf{A}$ is diagonal, as in, e.g., [14, 31]. Under this assumption, we next determine the state vector of a local center, say the $k$th one. For the case of non-diagonal $\mathbf{A}$, please see [5]. Note that in the non-diagonal case, the proposed system design directly extends, where the only difference is that the sizes of the local state vectors might be larger.
Let $\mathbf{h}_m^T$ be the $m$th row of the measurement matrix $\mathbf{H}$. Then, using (2), each measurement can be written as follows:

(3)  $y_{t,m} = \mathbf{h}_m^T\,\mathbf{x}_t + v_{t,m}$

Based on (3), we can argue that $y_{t,m}$ depends on, equivalently bears information about, the following state variables: $\mathcal{S}_m \triangleq \{ x_{t,n} : h_{m,n} \neq 0 \}$, where $h_{m,n}$ denotes the $n$th entry of $\mathbf{h}_m$. Then, the local state vector $\mathbf{x}_{t,k}$ consists of the union of all such state variables over the meters in subregion $k$: $\bigcup_{m \in \mathcal{R}_k} \mathcal{S}_m$.
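As an illustration, determining which state variables a subregion's meters bear information about amounts to collecting the nonzero-support of the corresponding rows of the measurement matrix and taking their union. The topology matrix and subregion assignment below are hypothetical.

```python
def informative_state_vars(H, meters):
    """Indices of state variables that the given meters' measurements
    depend on: the union over meters m of {n : h_{m,n} != 0}."""
    support = set()
    for m in meters:
        support |= {n for n, h_mn in enumerate(H[m]) if h_mn != 0.0}
    return sorted(support)

# Toy 4-meter, 3-state topology; subregion R_1 contains meters {0, 1}.
H = [[1.0, -1.0,  0.0],
     [0.0,  1.0,  0.0],
     [0.0,  1.0, -1.0],
     [0.0,  0.0,  1.0]]
local_vars = informative_state_vars(H, [0, 1])  # state variables of center 1
```

Here `local_vars` is `[0, 1]`: meter 0 couples states 0 and 1, and meter 1 observes state 1 directly.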
For each local center $k$, we then have the following local state transition model:

(4)  $\mathbf{x}_{t,k} = \mathbf{A}_k\,\mathbf{x}_{t-1,k} + \mathbf{w}_{t,k}$

and the following local measurement model:

(5)  $\mathbf{y}_{t,k} = \mathbf{H}_k\,\mathbf{x}_{t,k} + \mathbf{v}_{t,k}$

where the local state transition matrix $\mathbf{A}_k$ and the local measurement matrix $\mathbf{H}_k$ can easily be obtained from $\mathbf{A}$ and $\mathbf{H}$, respectively. Moreover, the local process noise vector $\mathbf{w}_{t,k}$ is the subvector of $\mathbf{w}_t$ corresponding to $\mathbf{x}_{t,k}$. Similarly, the local measurement noise vector $\mathbf{v}_{t,k}$ is the subvector of $\mathbf{v}_t$ corresponding to $\mathbf{y}_{t,k}$.
III. Blockchain-Based Secure System Design
III-A. Overview of the Proposed System
We consider a distributed P2P network of local centers where each node (local center) can communicate with all other nodes (see Fig. 1). We aim to design a system in which the nodes collaborate with each other to perform the state estimation task in a safe and reliable manner. For reliable distributed dynamic state estimation, particularly with the Kalman filter, we need safe updates, and hence the following three items must be secure/reliable at each time $t$:

(i) the state estimates of the previous time $t-1$,

(ii) the meter measurements acquired at the current time $t$, and

(iii) the nodes functioning in the state estimation process, i.e., the local centers.
In other words, at each time, we need to make sure that the previous state estimates are not modified, the online meter measurements are not anomalous, and the nodes are working according to the predesigned network rules. Furthermore, in case of an anomaly over the network, the state estimates can be recovered using the previous reliable state estimates, and hence we also need to protect the previous state estimates against tampering. Considering these requirements, our proposed system is composed of the following three main components:

(i) BC-based data protection/attack prevention: BC enhances the security of the grid by reducing the risk of manipulations at the network database and the network communication channels. In particular, to protect the previous state estimates against tampering, and to make them widely available and accessible over the network against the possibility of node failures and hacking, we record them in a shared, transparent, distributed ledger that is resistant to alterations. Moreover, we secure the inter-node data exchanges via cryptography against attacks and manipulations.

(ii) Secure state estimation against measurement anomalies: Each local center quickly and reliably detects local measurement anomalies and then employs a state recovery mechanism.

(iii) Distributed trust management: All nodes collectively (via voting/consensus) evaluate the trustworthiness of each node, specifically whether the local state estimates provided by a node exhibit an anomalous pattern over time.
The following subsection explains how we use BC technology to enhance the security of the state estimation mechanism.
III-B. Blockchain Mechanism
The BC operates on the P2P network of local centers. Since each node is pre-specified and pre-authenticated, we have a permissioned (private) BC mechanism [48, 50]. In BC-based systems, the duties of each node and the interactions between nodes are determined via a smart contract, which is a software code specifying the predefined rules of network operation. In our proposed mechanism, each node collects and analyzes the meter measurements in its subregion, estimates its local state vector, exchanges information with other nodes, performs encryption/decryption, participates in voting/consensus procedures, and stores the distributed ledger in its memory. The details regarding the duties of the nodes will become clearer in the subsequent sections. Next, we explain the proposed BC mechanism in more detail.
III-B.1 Data Exchanges
In all inter-node data exchanges, we use asymmetric encryption based on the public key infrastructure (see Fig. 2). In this mechanism, each node owns a public-private key pair that forms the digital identity of the node. The public key is available at every node; the private key, on the other hand, is available only to its owner. Moreover, a secure hash algorithm (SHA), e.g., SHA-256 [53, 54], is used in the data encryption process. Particularly, in every data exchange, the sender node first processes its message via the SHA and obtains the message digest. It then encrypts the message digest with its private key using a signature algorithm, e.g., the Elliptic Curve Digital Signature Algorithm (ECDSA) [40, 47], and obtains the digital signature. Finally, it transmits the data package consisting of the message and the corresponding digital signature. The receiver node then decrypts the received digital signature via the public key of the sender node and obtains a message digest. Moreover, it processes the received message via the SHA and obtains another message digest. Only if these two message digests exactly match is the integrity of the received message verified.
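The digest-then-sign flow above can be sketched in a few lines of Python. A real deployment would use an asymmetric scheme such as ECDSA; since Python's standard library offers none, this sketch stands in a keyed HMAC for the sign/verify pair purely to illustrate the digest computation and comparison, and the key name is hypothetical.

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> bytes:
    """Hash the message, then 'sign' the digest (HMAC stands in for ECDSA)."""
    digest = hashlib.sha256(message).digest()  # message digest via SHA-256
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes, key: bytes) -> bool:
    """Recompute the digest from the received message and compare."""
    digest = hashlib.sha256(message).digest()
    expected = hmac.new(key, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

key = b"node-7-key"  # hypothetical key material
package = (b"local state estimates of node 7",
           sign(b"local state estimates of node 7", key))
ok = verify(package[0], package[1], key)          # intact package verifies
tampered = verify(b"tampered estimates", package[1], key)  # caught
```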
In this procedure, the SHA provides security since it is computationally intractable to obtain the same message digest from two different messages [53, 54]. The SHA is a one-way function that outputs a fixed-length message digest for an arbitrary-size input message. Let $h(\cdot)$ denote the SHA and let its output be $L$ bits. Then, given a message $u$, the time complexity of finding $u' \neq u$ such that $h(u') = h(u)$ is $\mathcal{O}(2^L)$ via brute-force search. This property implies that over a data exchange, if a malicious adversary aims to replace the actual message $u$ with a fake message $u'$ without being noticed ($h(u') = h(u)$ so that the receiver verifies the integrity of the message), and moreover if the adversary has the computational power to query $q$ possible fake messages, then the probability of a successful fake message is $q/2^L$. Here we assume that the adversary knows the SHA so that it can check whether $h(u') = h(u)$ while trying different fake messages $u'$. The probability of success is negligible in practical settings where $L = 256$, $L = 512$, etc. For example, if an adversary can query $q = 2^{80}$ fake messages and $L = 256$, the probability of success is $2^{-176}$ for a single data package.
Furthermore, since the received digital signature can only be decrypted via the public key of the sender node, the receiver can verify the identity of the sender. Assuming (reasonably) that the private keys are kept secret and the digital signatures are $L_s$ bits, the time complexity of generating a successful fake signature is $\mathcal{O}(2^{L_s})$ via brute-force search [40]. Then, if an adversary does not know the public key of the sender (so that it cannot check whether a fake signature is decrypted via the public key), the probability of generating a successful fake signature for a chosen fake message is $2^{-L_s}$. On the other hand, obtaining public keys might be easier than obtaining private keys because the public keys are distributed over the network and all public keys can be accessed by hacking only one node. Then, if an adversary knows the public key of the sender, it can try different fake signatures for a chosen fake message and check whether the fake message is verified. In this case, if the adversary has the computational power to query $q$ possible fake signatures, then the probability of success is $q/2^{L_s}$. Again, choosing $L_s$ sufficiently large makes the success probability practically negligible. Notice, however, that if the private key of the sender is stolen, then the fake messages cannot be noticed at the receiver.
Over a data exchange, in case either the integrity of the received data package or the identity of the sender cannot be validated, the received message is ignored and a retransmission can take place. Thereby, thanks to the asymmetric encryption procedure, the inter-node data exchanges are secured against attacks that manipulate either the message or the identity of the sender, such as man-in-the-middle and IP spoofing attacks.
III-B.2 Distributed Ledger and Consensus Mechanism
The ledger is a chronologically ordered sequence of blocks, stored at every node and synchronized over the network. In BC-based systems, the block content is application-specific. In our case (see Fig. 3), at each measurement sampling interval, a new block is produced, which includes (i) the state estimates of the current time and (ii) a header consisting of the discrete timestamp, the hash value of the previous block that cryptographically links the current block to the previous one, and a random number called the nonce, which is the solution to a puzzle problem. Particularly, the nonce is determined, via brute-force search, such that the hash value of the current block satisfies a certain condition [40, 50]. We next explain the process of producing a new block and the corresponding update of the distributed ledger.
At each time $t$, after computing its local state estimates, each node broadcasts a data package containing its local state estimates as the message to the entire network. Then, as explained above, every node checks the validity of each received data package. A broadcasted data package is verified over the network only if the majority of the nodes validate it. After all the broadcasted data packages are verified, all the local state estimates of the current time are recorded into a new block at each node.
In this process, to prevent inconsistencies in the distributed ledger, we need to make sure that the ledger is synchronously updated over the network. Moreover, we seek a mutual consensus over the network for the update of the ledger. Using the common Proof-of-Work consensus mechanism [40], some nodes act as "miners", competing with each other to solve the puzzle and obtain the nonce value. Public BC-based systems such as Bitcoin [42] provide an incentive to miners, where the miner that solves the puzzle first gets a financial reward. In our case, however, this procedure is completely autonomous: at each time, among all the local centers, a few nodes are randomly assigned as the miners. The miner that solves the puzzle first then broadcasts the nonce value to the entire network, and each node checks the puzzle solution. If the majority of nodes verify the solution, the new block is produced and simultaneously connected to the existing ledger at every node. Here, the random assignment of the miners provides a higher level of security to the BC mechanism compared to permanently assigned miner nodes, which would be open targets for adversaries.
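The brute-force nonce search in the Proof-of-Work puzzle can be sketched as follows. The puzzle condition used here, that the SHA-256 hash of the block concatenated with the nonce must begin with a given number of zero bits, is one common instantiation; the block payload and difficulty are illustrative.

```python
import hashlib

def find_nonce(block_bytes: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(block || nonce) starts with
    `difficulty_bits` zero bits (the puzzle condition)."""
    target = 1 << (256 - difficulty_bits)  # hash must be below this value
    nonce = 0
    while True:
        h = hashlib.sha256(block_bytes + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

# Expected work grows as 2^difficulty_bits; 12 bits keeps this fast.
nonce = find_nonce(b"block: state estimates at t=42", difficulty_bits=12)
```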
As explained before, for reliable dynamic state estimation, we require that the most recent state estimates be secure. Moreover, in case of an anomaly over the network, e.g., a failure or an attack, we can recover the system state from the recent reliable state estimates (the details are presented in Sec. V and Sec. VI). For these reasons, in the distributed ledger, we propose to store a finite number of blocks that contain the recent state estimates in order to protect them against all kinds of manipulations. Let the number of blocks in the ledger be $B$. Then, at each time, while a new block is connected to the existing ledger, the oldest block is pruned (see Fig. 3). This, in fact, also solves the problem of monotonically increasing storage costs in conventional BCs. We will explain how to choose $B$ in the subsequent sections. Further, in our case, at each time interval, only one new block is generated, so we have a main chain of blocks without any forks, unlike the well-known public BCs such as Bitcoin [42] and Ethereum [43], in which the ledger keeps a record of the transactions between nodes and it is possible to observe multiple transactions at the same time. Finally, we assume that the update of the ledger is completed within one measurement sampling interval.
The distributed ledger is resistant to modifications for the following reasons: (i) since each block is cryptographically linked to the previous block, to modify a single block without being noticed, all the subsequent blocks must be modified accordingly, and (ii) since updating the ledger requires mutual consensus over the network, in order to modify the ledger, malicious entities need to control the majority of the nodes in the network. This also implies that the security improvements introduced with the BC-based system design are valid only if the majority of the nodes in the network are reliable. We expect that this condition is easily satisfied in large-scale smart grids with many nodes distributed over a geographically wide region, for which hacking the majority of the nodes is practically quite difficult. From an attacker's perspective, letting $M$ be the minimum number of nodes to be hacked in order to control the majority, and hence to be able to arbitrarily modify the ledger, the best strategy would be to attack the least costly set (in terms of the required efforts and resources for hacking) among the possible sets of $M$ nodes. Furthermore, an attacker may replace a block, say $b$, with a fake block $b'$ such that $h(b') = h(b)$ with a practically negligible probability (similar to the analysis in Sec. III-B1); however, the fake block can still be noticed by comparing the modified ledger with the other copies of the ledger in the network. Hence, for a successful fake block, the attacker still needs to control the majority of the nodes.
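A fixed-length, hash-linked ledger with the tamper-evidence property described above can be sketched as follows. The block fields and the use of a plain string hash over the block's textual representation are simplifying assumptions for illustration; mechanism (i), the broken hash link after a modification, is what the sketch demonstrates.

```python
import hashlib
from collections import deque

def block_hash(block: dict) -> str:
    """Hash a block's textual representation (a stand-in for real block serialization)."""
    return hashlib.sha256(repr(block).encode()).hexdigest()

class Ledger:
    """Fixed-length chain of B blocks: each block stores the hash of its
    predecessor; appending beyond maxlen prunes the oldest block."""
    def __init__(self, maxlen: int):
        self.chain = deque(maxlen=maxlen)

    def append(self, t: int, estimates: dict) -> None:
        prev = block_hash(self.chain[-1]) if self.chain else "0" * 64
        self.chain.append({"t": t, "estimates": estimates, "prev_hash": prev})

    def verify(self) -> bool:
        # A modified block breaks the hash link to its successor.
        blocks = list(self.chain)
        return all(b["prev_hash"] == block_hash(a)
                   for a, b in zip(blocks, blocks[1:]))

ledger = Ledger(maxlen=5)          # B = 5 recent blocks are kept
for t in range(8):                 # the 3 oldest blocks get pruned
    ledger.append(t, {"node_1": [0.1 * t]})
```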
IV. Distributed State Estimation Assuming Regular System Operation
In this section, assuming that the grid operation always fits the nominal system model (see (1) and (2)), we aim to perform optimal state estimation in a fully-distributed manner, where each node $k$ only estimates its local state vector $\mathbf{x}_{t,k}$.
For optimal state estimation, node $k$ needs to use all the measurements that bear information about $\mathbf{x}_{t,k}$, either fully or partially. Node $k$ has access only to the local measurements $\mathbf{y}_{t,k}$, which are clearly informative about $\mathbf{x}_{t,k}$ (see (5)). On the other hand, due to the shared state variables, some measurements collected at the other nodes may also be informative about $\mathbf{x}_{t,k}$. Let $\mathcal{P}_k$ be the set of nodes that share at least one state variable with the $k$th node. Let $l \in \mathcal{P}_k$ and let $\mathbf{y}_{t,l\to k}$ be the subvector of $\mathbf{y}_{t,l}$ that is informative about $\mathbf{x}_{t,k}$, i.e., each entry of $\mathbf{y}_{t,l\to k}$ depends on at least one state variable shared between nodes $l$ and $k$. Moreover, let $\mathbf{x}_{t,l\setminus k}$ denote the vector of state variables of node $l$ that are not shared with node $k$. Then, we can decompose $\mathbf{y}_{t,l\to k}$ as

(6)  $\mathbf{y}_{t,l\to k} = \mathbf{H}_{l\to k}\,\mathbf{x}_{t,k} + \bar{\mathbf{H}}_{l\to k}\,\mathbf{x}_{t,l\setminus k} + \mathbf{v}_{t,l\to k}$

where the matrices $\mathbf{H}_{l\to k}$ and $\bar{\mathbf{H}}_{l\to k}$ are easily determined such that the equality in (6) is satisfied for all $t$. Moreover, $\mathbf{v}_{t,l\to k}$ is the subvector of $\mathbf{v}_{t,l}$ corresponding to $\mathbf{y}_{t,l\to k}$.

In (6), the term $\bar{\mathbf{H}}_{l\to k}\,\mathbf{x}_{t,l\setminus k}$ is clearly non-informative and irrelevant to the state estimator of the $k$th node. On the other hand, the $l$th node estimates $\mathbf{x}_{t,l}$ (and hence $\mathbf{x}_{t,l\setminus k}$) at each time $t$. Then, denoting the estimate of $\mathbf{x}_{t,l\setminus k}$ by $\hat{\mathbf{x}}_{t,l\setminus k}$, based on (6), we can write

(7)  $\tilde{\mathbf{y}}_{t,l\to k} \triangleq \mathbf{y}_{t,l\to k} - \bar{\mathbf{H}}_{l\to k}\,\hat{\mathbf{x}}_{t,l\setminus k} = \mathbf{H}_{l\to k}\,\mathbf{x}_{t,k} + \tilde{\mathbf{v}}_{t,l\to k}$

where

(8)  $\tilde{\mathbf{v}}_{t,l\to k} \triangleq \mathbf{v}_{t,l\to k} + \bar{\mathbf{H}}_{l\to k}\,(\mathbf{x}_{t,l\setminus k} - \hat{\mathbf{x}}_{t,l\setminus k})$

We propose that the $l$th node subtracts $\bar{\mathbf{H}}_{l\to k}\,\hat{\mathbf{x}}_{t,l\setminus k}$ from $\mathbf{y}_{t,l\to k}$ to compute $\tilde{\mathbf{y}}_{t,l\to k}$ and then transmits it to the $k$th node at each time $t$ in order to facilitate the local state estimation at the $k$th node. Henceforth, we call $\tilde{\mathbf{y}}_{t,l\to k}$ the processed measurements (at the $l$th node for the $k$th node).
Notice that for each node $k$, (4) defines the local state transition model. Moreover, the local measurements $\mathbf{y}_{t,k}$ (see (5)) and the processed measurements $\tilde{\mathbf{y}}_{t,l\to k}$, $l \in \mathcal{P}_k$, to be received from the other nodes together form the overall measurement vector of the $k$th node (for the local state estimation task). Let the overall measurement vector of the $k$th node be denoted by $\boldsymbol{\psi}_{t,k}$ and, as an example, let $\mathcal{P}_k = \{l_1, l_2\}$. Then, $\boldsymbol{\psi}_{t,k}$ is simply obtained as follows:

$\boldsymbol{\psi}_{t,k} = [\mathbf{y}_{t,k}^T,\ \tilde{\mathbf{y}}_{t,l_1\to k}^T,\ \tilde{\mathbf{y}}_{t,l_2\to k}^T]^T$

For the $k$th node, we then have the following linear state-space equations:

(9)  $\mathbf{x}_{t,k} = \mathbf{A}_k\,\mathbf{x}_{t-1,k} + \mathbf{w}_{t,k}$,  $\boldsymbol{\psi}_{t,k} = \boldsymbol{\Omega}_k\,\mathbf{x}_{t,k} + \boldsymbol{\nu}_{t,k}$

where $\boldsymbol{\Omega}_k$ is determined based on $\mathbf{H}_k$, $\mathbf{H}_{l_1\to k}$, and $\mathbf{H}_{l_2\to k}$, and $\boldsymbol{\nu}_{t,k}$ is the noise vector corresponding to $\boldsymbol{\psi}_{t,k}$ (see (5) and (7)):

(10)  $\boldsymbol{\nu}_{t,k} = [\mathbf{v}_{t,k}^T,\ \tilde{\mathbf{v}}_{t,l_1\to k}^T,\ \tilde{\mathbf{v}}_{t,l_2\to k}^T]^T$
The Kalman filter is an iterative realtime estimator consisting of prediction and measurement update steps at each iteration. For the linear system given in (9), the following equations describe the Kalman filter iteration at time , where denotes the state estimates of the th node at time ( for prediction and for measurement update):
Prediction:
(11) 
Measurement Update:
(12) 
where and denote the estimates of the state covariance matrix of the th node at time based on the measurements up to and , respectively. Moreover, is the Kalman gain matrix of the th node at time and denotes the covariance matrix of .
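For concreteness, one iteration of the generic linear-Gaussian Kalman recursion in (11)–(12) can be sketched as follows (a minimal Python/NumPy sketch; variable names are illustrative, not the paper's):

```python
import numpy as np

def kalman_iteration(x_prev, P_prev, z, F, H, Q, R):
    """One prediction + measurement-update step of a linear Kalman filter.
    F: state transition, H: measurement matrix, Q/R: process/measurement
    noise covariances (illustrative names)."""
    # Prediction step
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + Q
    # Measurement update step
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_upd = x_pred + K @ (z - H @ x_pred) # correct with the innovation
    P_upd = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_upd, P_upd
```

Note that the covariance recursion (second and last lines of the body) does not involve the measurement vector itself, which is what allows it to be computed offline, as remarked later in the section.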
We now return to the process of obtaining at the th node. We see through (7) that this process involves estimation errors due to the term . Our aim is to statistically characterize these estimation errors in order to compute the statistics of (see (8)), which is, in fact, required for optimal state estimation at the th node. Note that the th node estimates its local state vector (and hence ) via its local Kalman filter. We propose that
where is the subvector of corresponding to . Then, the following lemma states the distribution of .
Lemma 1:
(13) 
where
(14) 
where is the estimate of the state covariance matrix of , which can be obtained from , and denotes the size of the vector .
Proof.
See Appendix A. ∎
Through (4) and (5), we know that and are independent zero-mean multivariate Gaussian noise vectors. Furthermore, from Lemma 1, we know that the noise term for the processed measurements (see (7)) is also zero-mean multivariate Gaussian. This implies that is a zero-mean multivariate Gaussian vector (see (10)). Then, since all the noise terms are Gaussian for the linear system in (9), the local Kalman filter given in (11)–(12) is the optimal state estimator for the th node in terms of minimizing the mean squared state estimation error [55].
Here, we observe that the computation of requires the correlation between and . Using (8), we can write
which requires the correlation between the state estimation errors for and . However, since it is possible that or , and the correlations for state variables that belong to different local centers are not computed over the network, we approximate the correlation terms involving the state variables in and as zero. This is the only point at which we make an approximation and hence lose optimality. Through simulations, we observe that this approximation only slightly increases the state estimation error compared to the optimal centralized Kalman filter. Hence, in the rest of the design and analysis (in Sec. V and Sec. VI), we assume that the proposed distributed dynamic state estimator achieves near-optimal performance. Finally, we note that the main contribution of the proposed distributed dynamic state estimator lies in the processing of the measurements that are acquired at the other local centers and are only partially relevant to the local state vector .
Remark 1: Each node needs knowledge of for the measurement update step of its local Kalman filter, where computing requires (see (15)). We observe through (14) that depends on and . Here, is determined based on the network topology, which is available at every node. On the other hand, is extracted from the state covariance matrix of the th node, , which is primarily computed at the th node. Nevertheless, we see through (11) and (12) that the (iterative) computation of does not depend on online meter measurements and hence can be performed offline at each node in the network. Moreover, in the proposed distributed trust management scheme (see Sec. VI), each node needs to compute the state covariance matrices of all other nodes. Hence, the proposed state estimation mechanism does not introduce further computational complexity beyond the trust management scheme.
V Secure State Estimation against Measurement Anomalies
The state estimator proposed in the previous section is based on the assumption that the network operation always fits the nominal system model. In practice, however, various kinds of anomalies may appear across the network, e.g., measurement anomalies due to cyberattacks or network faults. We would like to achieve secure state estimation against such measurement anomalies. Towards this goal, we propose to detect them quickly and reliably and then to eliminate their effects as much as possible.
Considering that attackers can be advanced, strategic, or adaptive to the system and detector dynamics, it is hard to model all attack types [7, 8]. Moreover, considering the complex cyber-physical nature of the smart grid, it is also difficult to model all types of network faults. Hence, in this study, we do not focus on particular anomaly types; rather, we assume the anomaly type is entirely unknown. On the other hand, since the anomaly type is unknown, detecting an anomaly does not allow us to recover the useful part of the anomalous measurements (if any). Our anomaly mitigation strategy is then to reject/neglect the anomalous measurements in the state estimation process and to predict the system state, until the system is recovered back to regular operating conditions, based on (i) the previous reliable state estimates, securely recorded in the distributed ledger, and (ii) the nominal system model. As the meters are distributed over the network, each node analyzes only its local measurements. We next explain the proposed measurement anomaly detection scheme at the th node and then the corresponding state recovery over the network.
V-A Real-time Detection of Local Measurement Anomalies
During regular system operation, it is possible to observe infrequent outliers, and the Kalman filter is known to be effective in compensating for (suppressing) small errors due to such infrequent outliers [56]. Hence, we are particularly interested in long-term anomalies in which there exists a temporal relation between the anomalous measurements. Our aim is to detect such anomalies in a timely and reliable manner using the measurements that become available sequentially over time. In this problem, although we can statistically characterize the nominal measurements sufficiently accurately based on the nominal system model and the online state estimates under regular system operation, the measurements can take various unknown statistical forms in case of an anomaly. Hence, we follow a solution strategy in which we derive and monitor (over time) a univariate statistic that is informative about a possible deviation of the online measurements from their nominal model, as detailed next.
At each time , the th node locally observes (see (5)). Moreover, based on the local Kalman filter, we have
(16) 
Using (5) and (16), we can write
(17) 
where
Then, based on (17),
(18) 
is a chi-squared random variable with
degrees of freedom. Notice that has a time-invariant distribution under regular system operation. Let
be the cumulative distribution function (cdf) of the chi-squared random variable with
degrees of freedom. If the right tail probability corresponding to satisfies
(19) 
then the corresponding local measurement vector is considered an outlier at the significance level . In case of an anomaly, we expect the chi-squared statistic to take higher values compared to its nominal values and hence we expect to observe more frequent outliers. We can then model an anomaly as persistent outliers, as in [57] and [58].
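The outlier test above can be sketched in a few lines (a Python sketch under stated assumptions: names are illustrative, and the closed-form survival function below is exact only for an even number of degrees of freedom; for general degrees of freedom one would use a library routine such as scipy.stats.chi2.sf):

```python
import math

def chi2_sf_even(x, dof):
    """Right-tail probability (survival function) of a chi-squared random
    variable with an EVEN number of degrees of freedom, via the exact
    Poisson-sum identity: sf(x) = exp(-x/2) * sum_{k<dof/2} (x/2)^k / k!."""
    assert dof % 2 == 0
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k)
                                 for k in range(dof // 2))

def is_outlier(stat, dof, alpha):
    """Flag the measurement vector as an outlier if the p-value of its
    chi-squared statistic falls below the significance level alpha."""
    return chi2_sf_even(stat, dof) < alpha
```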
Based on (19), for an outlier , we have
(20) 
and similarly, for a non-outlier , we have . Hence, we can consider as (positive/negative) statistical evidence for an anomaly at time . Then, similar to the accumulation of the log-likelihood ratios in the well-known cumulative sum (CUSUM) test, we can accumulate 's over time and declare a measurement anomaly only if there is strong/reliable evidence supporting it, which results in the following CUSUM-like test [57]:
(21) 
where denotes the stopping time at which an anomaly is detected at the th node and .
Let be the unknown changepoint at which an anomaly starts to affect the local measurements of the th node and continues thereafter. The CUSUM test always keeps an estimate of the changepoint and updates it as the measurements become available over time [59, Sec. 2.2]. Let be the changepoint estimate of the proposed test. Initializing at , whenever the decision statistic reaches zero, we make the following update: . In other words, is the latest time instant at which the decision statistic reaches zero. The final changepoint estimate is determined when an anomaly is declared at the stopping time . Hence, we have
The changepoint estimate will be useful for state recovery (see Sec. VB).
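The CUSUM-like recursion in (21) together with the changepoint tracking described above can be sketched as follows (a minimal Python sketch; variable names are illustrative, since the paper's symbols are not reproduced here):

```python
def cusum_like_test(evidence_stream, threshold):
    """CUSUM-like test: accumulate per-sample evidence (positive for
    outliers, negative for non-outliers), clip the statistic at zero, and
    stop when it exceeds the threshold. Also tracks the changepoint
    estimate as the latest time the statistic was zero."""
    g = 0.0               # decision statistic
    changepoint_est = 0   # latest time instant at which g hit zero
    for t, beta in enumerate(evidence_stream, start=1):
        g = max(g + beta, 0.0)
        if g == 0.0:
            changepoint_est = t
        if g > threshold:
            return t, changepoint_est  # stopping time, changepoint estimate
    return None, changepoint_est       # no alarm within the stream
```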
For the CUSUM-like test in (21), to achieve a lower false alarm rate (equivalently, a larger average false alarm period), the significance level is chosen smaller and/or the test threshold is chosen higher, which, on the other hand, leads to larger detection delays (see (20) and (21)). Let be the average false alarm period, i.e., the average stopping time when no change happens at all (). The following corollary (to Theorem 2 of [57]) describes how to choose and to obtain a desired lower bound on the average false alarm period.
Corollary 1: For a chosen and
(22) 
we have
where denotes the Lambert-W function (there exists a built-in MATLAB function, lambertw).
Proof.
See Appendix B. ∎
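Corollary 1 gives an analytic way to pick the parameters; in practice the resulting average false alarm period can also be checked empirically, as done via Monte Carlo simulation in Sec. VIII. A rough sketch of such a check is given below. This is purely illustrative: the evidence form beta = alpha - p (negative on average under regular operation, positive for outliers) and all names are assumptions made here, not the paper's exact definitions, and the closed-form survival function is valid only for even degrees of freedom.

```python
import math
import random

def chi2_sf_even(x, dof):
    """Right-tail probability of a chi-squared variable, even dof only."""
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k)
                                 for k in range(dof // 2))

def average_false_alarm_period(dof, alpha, threshold,
                               runs=200, horizon=10_000, seed=0):
    """Monte Carlo estimate of the average stopping time of the CUSUM-like
    test when no anomaly ever occurs (nominal chi-squared samples only)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        g, t = 0.0, 0
        while t < horizon:
            t += 1
            # A nominal chi-squared sample with even dof equals the sum of
            # dof/2 exponential draws with mean 2 (rate 1/2).
            x = sum(rng.expovariate(0.5) for _ in range(dof // 2))
            beta = alpha - chi2_sf_even(x, dof)  # assumed evidence form
            g = max(g + beta, 0.0)
            if g > threshold:
                break  # false alarm
        total += t
    return total / runs
```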
V-B State Recovery
Once the proposed CUSUM-like detection scheme in (21) declares an anomaly, our purpose is to recover the current (and future) state estimates. Since the local measurements observed after the (unknown) changepoint, i.e., , are not reliable, we estimate the changepoint and recover the state estimates from the latest reliable state estimates at the changepoint estimate .
The smart grid is a highly interconnected network; in the proposed mechanism, local state estimation is performed using the local measurements as well as the processed measurements received from some other nodes. Hence, if an anomaly happens at a node, the whole network is affected by the anomaly to some extent. Then, whenever a measurement anomaly is detected at a node, say the th one at time , the th node immediately broadcasts to the entire network. Every node then makes the following state recovery:
(23) 
which essentially corresponds to the case where we replace all the measurements during the anomaly interval with the corresponding pseudo measurements , making the measurement innovation signal zero (see (11) and (12)).
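The recovery in (23) amounts to propagating the latest reliable estimate forward with the nominal transition model only, i.e., running the prediction step with no measurement update. A minimal sketch (illustrative names, assuming a linear transition matrix as in the nominal model):

```python
import numpy as np

def recover_states(x_reliable, F, steps):
    """Predict the state forward from the latest reliable estimate at the
    changepoint estimate, using only the nominal transition model F; the
    measurement innovation is effectively zero during the anomaly interval."""
    x = np.asarray(x_reliable, dtype=float).copy()
    trajectory = [x.copy()]
    for _ in range(steps):
        x = F @ x  # pure prediction; no measurement update
        trajectory.append(x.copy())
    return trajectory
```

Note that with an identity transition matrix, as used in the simulations of Sec. VIII, this simply holds the last reliable estimate.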
The regular network operation requires the participation of every node in the state estimation process. Hence, whenever a measurement anomaly is detected at the th node, we propose to raise an alarm flag, calling for further investigation at the th subregion and the neighboring subregions considering the possibility that the processed measurements received from the neighboring nodes may also lead to an anomaly in the local state estimation process. The investigation should be performed considering also the possibility of false alarms. After the investigation process and possibly the recovery of the system, the predesigned regular network operation is restarted. Our main purpose here is to decrease the state estimation errors due to anomalies and hence to provide more reliable state estimates during the anomaly mitigation/system recovery period. The identification of anomalies (types and causes) and the development of the corresponding mitigation/recovery strategies are needed to achieve a completely autonomous network operation, which are beyond the scope of the current work.
Remark 2: For the state recovery, the distributed ledger consisting of the recent blocks needs to include the state estimates of time , where is not known ahead of time. However, since we expect quick detection, we do not expect to observe a that is far away from the stopping time . Hence, in practice can be chosen reasonably small. In the case where the ledger does not contain the state estimates of time , we can recover the state estimates from the oldest state estimates available in the ledger, considering that they are more reliable than the other alternatives.
VI Distributed Trust Management
In BC-based distributed networks, malicious adversaries may obtain illegitimate access to the system, e.g., by stealing the digital identity of some nodes, via malware propagation, etc. [60, 11], and additionally some nodes may become faulty during system operation. Moreover, as the network is fully distributed, there is no centralized trusted node to check whether all nodes are safe and trustable, i.e., whether the nodes are functioning according to the predesigned network rules. Therefore, against the possibility of misbehaving nodes, we need a distributed trust management mechanism over the network, in which all nodes collectively verify the trustability of each node. Recall that every node knows (i) the nominal system model and the network configuration, and (ii) a finite history of recent state estimates of all nodes stored in the shared distributed ledger (see Sec. III-B2). Using only (i) and (ii), each node votes on the trustability of all other nodes. Then, at each time, the trustability of each node is decided via majority voting. We explain below how the th node is evaluated by the other regular (non-misbehaving) nodes.
Suppose that at an unknown time , an unexpected event happens at the th node: the node becomes faulty or an attacker hacks and takes control of the node. Then, we can no longer expect the behavior of the th node to fit its predefined regular operation. Furthermore, similar to the measurement anomalies, it is quite difficult to model the (anomalous) behavior of the th node after time . Our objective is to detect misbehaving nodes as quickly as possible in order to timely mitigate the corresponding effects on the state estimation process. For the evaluation of the th node, we propose that each node decides whether the state estimates provided by the th node exhibit an anomalous pattern over time. In this direction, we next derive the nominal evolution (over time) model of the local state estimates of the th node. Then, similar to Sec. V-A, we derive a univariate statistic that is informative about a possible deviation of the local state estimates of the th node from the nominal evolution model and monitor this statistic over time.
Based on the local Kalman filter iteration of the th node at time (see (11) and (12)), we can write
(24)  
(25) 
where
(26) 
Here, (24) is obtained using (9) and (11). Moreover, is obtained using (see (16)), , and approximating the correlation terms involving the state variables belonging to different local centers as zero, as in Sec. IV.
Notice that (25) statistically characterizes the local state estimates at time , given the local state estimates at time , under regular system operation. Based on (25), we can write
which implies that
(27) 
is a chi-squared random variable with degrees of freedom under regular system operation.
If node is misbehaving, we expect the state estimates it provides to deviate from the nominal evolution model given in (25), which makes larger than its nominal values. Then, similar to the detection of measurement anomalies in Sec. V-A, the following CUSUM-like detection scheme can be employed at a regular node to decide on the trustability of the th node:
(28) 
where is the decision statistic at time , , is the decision threshold, is the statistical evidence at time , is the significance level, is the right tail probability corresponding to , and denotes the cdf of the chi-squared random variable with degrees of freedom. A regular node then evaluates the th node as trustable until time , and as misbehaving after . Furthermore, as before, the unknown changepoint is estimated by the th node as the latest time instant at which the decision statistic reaches zero before time :
The overall decision on the trustability of the th node is made by the majority of the nodes. Let the vote of the th node on the trustability of the th node at time be denoted with a binary variable , where or if the th node evaluates the th node as trustable or misbehaving, respectively. Notice that if the th node is regular, then it votes based on the test in (28), which gives rise to . The time at which the th node is declared misbehaving over the network is determined as follows:
(29) 
Notice that this decision mechanism works as intended unless the majority of the nodes are misbehaving. In other words, as long as the majority of the nodes regularly employ the proposed detection scheme in (28) and vote accordingly, then the trustability of the th node is evaluated reliably over the network, for every node .
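The majority-voting rule can be sketched as follows (a minimal Python sketch; the tie-breaking toward "trustable" is an assumption made here, as the paper does not specify it):

```python
def node_trusted(votes):
    """Majority vote on trustability: votes is a list of binary values
    (1 = trustable, 0 = misbehaving), one per evaluating node. The node
    is declared misbehaving when a strict majority votes 0; ties break
    toward trust (an assumption, not specified in the text)."""
    return 2 * sum(votes) >= len(votes)
```

As noted above, this rule is reliable as long as the majority of the voting nodes are regular and follow the detection scheme in (28).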
Under the nominal system operation, all nodes are regular and hence the detector (for the th node) in (28) is identical at all the nodes in the network. Then, the false alarm rate of (29) is equal to the false alarm rate of (28). Hence, the proposed trust management scheme achieves the same false alarm guarantees through Corollary 1 (after replacing the parameters and in the corollary with and , respectively).
If (29) declares the th node as misbehaving, an alarm flag is raised, calling for an investigation at the th node and the neighboring nodes . Moreover, due to the inter-node data exchanges, a node misbehavior affects all the nodes in the network to some extent. Then, as before, until the system is recovered back to regular operating conditions, the local states of each node can be predicted based on the nominal system model and the latest reliable estimates at time as follows:
where can be obtained from the distributed ledger.
The proposed trust management scheme requires that each node computes and for every other node (see (26)–(29)). Since the nominal system model is known by each node and the local state estimates provided by the other nodes are available (via the distributed ledger) to each node, the node already knows , , , and at each time for every node . On the other hand, , , and are not directly available to the th node. Fortunately, the Kalman gain matrix and the estimate of the state covariance matrix can be computed offline without requiring online meter measurements. Moreover, is computed based on the estimates of the state covariance matrices (see also Remark 1). Hence, at each node , we propose to compute and (iteratively through (11) and (12)) for every other node in the network.
VII Summary of the Proposed Mechanism
We summarize the proposed procedure at the th node in Fig. 4, where the procedure is identical at every node. Since the proposed mechanism requires an investigation after the detection of a measurement anomaly at any node or a misbehavior of any node, the overall stopping time of the network is given by
(30) 
If several detection mechanisms give alarms simultaneously, we can recover the state estimates based on the oldest among the corresponding changepoint estimates. Moreover, if the state estimates corresponding to the (oldest) changepoint estimate are not included in the distributed ledger consisting of the recent blocks, then we can choose as the state recovery point, which corresponds to the oldest state estimates available in the ledger; the state recovery point is denoted with in Fig. 4. In case an anomaly is declared over the network, the proposed mechanism can be restarted after an investigation and possibly the recovery of the system. Finally, the number of blocks in the distributed ledger can be chosen based on the maximum expected detection delay and the corresponding changepoint estimate obtained via an offline simulation.
VIII Simulation Results
In this section, we evaluate the performance of the proposed mechanism via simple case studies over an IEEE-14 bus power system, which consists of subregions, buses, and meters (see Fig. 5). The bus is chosen as the reference bus, the state transition matrix is chosen to be the identity matrix, i.e., , and the measurement matrix is determined based on the network topology. The noise variances are chosen as , and the initial state variables (voltage phase angles) are determined via the DC optimal power flow algorithm for case14 in MATPOWER [61]. For the proposed detection schemes, to achieve , we choose and (see (22)). We then obtained via a Monte Carlo simulation that the average false alarm period of the network (see (30)) is . Moreover, we choose the number of blocks in the distributed ledger as . In the following, we present simulation results first for a measurement anomaly case and then for a node misbehavior case.
VIII-A Case 1: Measurement Anomalies
As an example of measurement anomalies, we consider FDI attacks launched at time against the measurements in subregion :
(31) 
where denotes the injected false data at time . We assume that subregions 1 and 2 are under FDI attack after time with , , , , where denotes a uniform random variable in the range of .
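The FDI model in (31), with uniformly distributed injected data, can be sketched as follows (a Python sketch; the attacked indices and ranges are placeholders, since the paper's specific values are given symbolically above):

```python
import random

def inject_fdi(z, attacked_idx, low, high, rng=random):
    """Add uniform random false data to the attacked measurement entries,
    following the additive FDI model z <- z + a with a ~ Uniform(low, high).
    Indices and ranges are illustrative placeholders."""
    z = list(z)
    for i in attacked_idx:
        z[i] += rng.uniform(low, high)
    return z
```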
First, assuming and , we present in Fig. 6 the sum of the mean squared state estimation errors over all local centers for both the pre-attack period, i.e., , and the attack period . We present the performance of the proposed distributed secure state estimation mechanism, the centralized Kalman filter, and a robust centralized Kalman filter that rejects gross outliers and replaces the corresponding measurements with pseudo measurements, similar to [31]. In particular, the robust Kalman filter computes a chi-squared statistic at each time using all the measurements , similar to (18), and computes the corresponding p-value (the right tail probability) based on the chi-squared distribution with degrees of freedom. The significance level for outliers is chosen as . Then, if the p-value is less than , the corresponding measurements are replaced with the pseudo measurements