Power Systems Topology and State Estimation by Graph Blind Source Separation

09/17/2018
by Sivan Grotas, et al.
Ben-Gurion University of the Negev

In this paper, we consider the problem of blind estimation of states and topology (BEST) in power systems. We use the linearized DC model of real power measurements with unknown voltage phases (i.e. states) and an unknown admittance matrix (i.e. topology) and show that the BEST problem can be formulated as a blind source separation (BSS) problem with a weighted Laplacian mixing matrix. We develop the constrained maximum likelihood (ML) estimator of the Laplacian matrix for this graph BSS (GBSS) problem with Gaussian-distributed states. The ML-BEST is shown to be a function of the states' second-order statistics alone. Since the topology recovery stage of the ML-BEST approach results in a high-complexity optimization problem, we propose two low-complexity methods to implement it: (1) two-phase topology recovery, which is based on solving the relaxed convex optimization and then finding the closest Laplacian matrix, and (2) augmented Lagrangian topology recovery. We derive a closed-form expression for the associated Cramér-Rao bound (CRB) on the topology matrix estimation. The performance of the proposed methods is evaluated for the IEEE-14 bus test-case system and for a random network. It is shown that, asymptotically, the state estimation performance of the proposed ML-BEST methods coincides with that of the oracle minimum mean-squared-error (MSE) state estimator, and the MSE of the topology estimation achieves the proposed CRB.


I Introduction

State estimation is a critical component of modern energy management systems (EMSs) for multiple monitoring purposes, including analysis, security, control, situational awareness, stability assessment, power market design, and optimization of electricity dispatch [1, 2]. In the DC model, the states are the bus voltage angles, while the grid topology includes the arrangement of loads or generators, transmission lines, transformers, and the statuses of system devices. It should be noted that this definition generalizes the computer science graph theory definition, which refers to the connectivity of the graph, since here the topology also includes the weights. In currently applied systems, it is assumed that the EMS has precise knowledge of the grid topology [1], which is used for obtaining accurate state estimation. However, knowledge of grid topology may not be available and it may change over time due to failure, opening and closing of switches on power lines, and the presence of new loads and generators. For example, large-scale penetration of distributed generation results in regular topology changes due to ad-hoc connection of many plug-and-play components. Even worse, a distribution system operator usually lacks topology information, as many of the distributed energy resources do not belong to the utility [3, 4]. The topology data may also be incorrect due to malicious attacks [5, 6, 7]. Thus, methods for state estimation that are not based on a known topology are crucial for obtaining a reliable system model and high power quality. An additional use for topology identification is event detection, such as identifying faults, line outages, and system imbalances [8, 9]. Moreover, it can be used to secure the system from potential cyberattacks on the topology information and to identify the potential vulnerabilities of a power system.

Several approaches to topology identification have been proposed in the literature. Detecting topological changes has been studied in [10, 11], and the conditions for the detectability of topology errors are studied in [12]. Recently, a few papers have addressed blind estimation of the grid topology by observing multiple power injection supervisory control and data acquisition (SCADA) measurements [13, 14], voltage and power data obtained by phasor measurement units (PMUs) [15, 16], voltage measurements and their associated correlations [17], and electricity-price-based market data [18]. In [19], an unobservable attack is designed based on incomplete knowledge of the system matrix, which is learned via a blind identification approach. The methods proposed in [13, 14, 18, 19] can reveal part of the grid topology, such as the grid connectivity and the eigenvectors of the topology matrix, but they cannot reconstruct the full topology matrix with exact scaling and true eigenvalues. Thus, incorporating blind source separation (BSS) techniques with the specific characteristics of a graph seems promising.

BSS methods aim at restoring a set of unknown source signals from a set of observed linear mixtures of these source signals (see, e.g. [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]), without prior knowledge of the sources and the mixing system. The problem of BSS has been extensively investigated in the literature over the last two decades. Prior works on maximum likelihood (ML) separation in BSS deal with general stationary sources [21, 30], autoregressive (AR) sources, and AR Gaussian mixture model distributed sources [28, 29]. ML BSS for nonstationary structures with varying variance profiles was considered in [31]. However, classical BSS solutions are ambiguous in the sense that the order, signs, and scales of the original signals cannot be retrieved. These ambiguities cannot be tolerated in the considered power system problem. In addition, the distributions of the states are usually assumed to be Gaussian due to the central limit theorem, while most BSS methods cannot handle Gaussian sources. Therefore, new methods are required for the semi-blind scenario of a Laplacian mixing matrix with Gaussian sources, without permutation and scaling ambiguities.

In addition to state estimation in power systems, the recent field of graph signal processing (GSP) [32] has many applications [33, 34, 35]. A major challenge in GSP is learning the graph structure from data under Laplacian matrix constraints (see, e.g. [36, 37, 38]); a related problem, blind deconvolution of signals on graphs [39], aims to jointly identify the filter coefficients and the input signal. In future work, the approach presented here has the potential to be extended to general GSP applications.

In this paper, we consider the problem of state estimation and topology identification in power systems based on active power measurements. First, we show that this problem is equivalent to the problem of BSS with a weighted Laplacian mixing matrix, where the weights are determined by the branch susceptances. Then, we derive the ML blind estimation of states and topology (ML-BEST) method for Gaussian-distributed states, which incorporates the constraints of a Laplacian mixing matrix and is shown to be a second-order statistics (SOS) method. Since the topology recovery stage of the ML-BEST estimator is shown to be an NP-hard optimization problem, we suggest two practical low-complexity methods to implement this stage: (1) two-phase topology recovery, which is based on solving the relaxed convex optimization and then finding the closest Laplacian matrix, and (2) augmented Lagrangian topology recovery. Preliminary results can be found in [40]. We also derive a closed-form expression for the Cramér-Rao bound (CRB). Finally, simulations demonstrate that the proposed ML-BEST methods are applicable for different network topologies and asymptotically achieve the CRB.

The remainder of the paper is organized as follows. In Section II we introduce the system model and the graph BSS (GBSS) problem for state and topology estimation in power systems. The ML-BEST solution is defined and two different practical methods for its topology recovery stage are suggested in Sections III and IV, respectively. Section V offers some remarks, including a parameter identifiability analysis, complexity discussion, and possible extensions of the proposed model and methods. A closed-form expression for the CRB of the topology matrix is derived in Section VI. The proposed methods are evaluated via simulations in Section VII. Conclusions appear in Section VIII.

II Problem formulation

In this section, we formulate the problem of estimating the state and topology/admittance matrix in power systems under the linear DC power model. We show that this problem is equivalent to BSS with a Laplacian mixing matrix.

II-A Notation

In the rest of this paper vectors are denoted by boldface lowercase letters and matrices by boldface uppercase letters. The

identity matrix is denoted by , and denotes the constant -length one vector. The vectors and are zero vectors (with appropriate dimension), except for the th element of , which is . Additionally, denotes Kronecker’s delta, which equals if and otherwise. The notations , , and denote the determinant operator, the trace operator, and the Kronecker product, respectively. For a full-rank matrix , is the Moore-Penrose pseudo-inverse. The th element of the vector , the th element of the matrix , and the submatrix of are denoted by , , and , respectively. If is a positive semidefinite matrix we denote it by and its square root, , satisfies , where denotes the inverse of this square root. For any matrix , and denote its Frobenius and -(pseudo)norm (counting its non-zero entries), respectively, is the nonnegative part of , and is a vector obtained by stacking its columns. Similarly, for any symmetric matrix , is a vector obtained by stacking the columns of the lower triangular part of , including the diagonal, into a single column. Finally, we denote the cone of real symmetric matrices of size by .

II-B Graph representation of power systems

A power system can be represented as an undirected connected weighted graph, , where the set of vertices, , is the set of buses (which represent interconnections, generators, or loads) and the edge set, , is the set of transmission lines connecting the buses. An arbitrary orientation is assigned to each edge; the edges are ordered lexicographically, and each edge connects a pair of vertices. The cardinality of the edge set, , is the number of connections in the graph. According to the -model of transmission lines [1], each line is characterized by its line admittance, .

The incidence matrix of a graph is [35], where the element of is given by

(1)

and . In addition, let be a diagonal matrix where if , . For connections that do not exist we use . Then, we can define the graph Laplacian matrix, , as

(2)

The matrix is a real, symmetric, positive semidefinite matrix (positive semidefiniteness holds under the assumption of strictly positive susceptances [41]), which satisfies the null space property, , and has nonpositive off-diagonal elements.
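The construction in (1)-(2) can be sketched numerically. The following minimal numpy example builds the weighted Laplacian from an incidence matrix and a diagonal susceptance matrix; the 3-bus triangle, the function name, and the unit susceptances are illustrative choices, not values from the paper.

```python
import numpy as np

def weighted_laplacian(n_buses, edges, susceptances):
    """Build the graph Laplacian from an incidence matrix, as in (1)-(2).

    edges: (i, j) bus-index pairs with an arbitrary orientation, ordered
    lexicographically; susceptances: positive branch susceptances, one per edge.
    """
    A = np.zeros((n_buses, len(edges)))        # incidence matrix, eq. (1)
    for k, (i, j) in enumerate(edges):
        A[i, k], A[j, k] = 1.0, -1.0           # edge k leaves bus i, enters bus j
    return A @ np.diag(susceptances) @ A.T     # eq. (2)

# Illustrative 3-bus triangle with unit susceptances
L = weighted_laplacian(3, [(0, 1), (0, 2), (1, 2)], [1.0, 1.0, 1.0])
```

The resulting matrix exhibits exactly the properties listed above: it is symmetric, positive semidefinite, has nonpositive off-diagonal elements, and annihilates the all-ones vector.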

II-C DC model and problem formulation

We consider the DC power flow model [1], which is based on the following assumptions on the network:

A.1

Branches are considered lossless, which results in , where is the susceptance of the branch.

A.2

The bus voltage magnitudes, , , are approximated by 1 per unit (p.u.).

A.3

Voltage angle differences across branches are small, such that , where , , are the bus voltage angles.

Under Assumptions A.1-A.3, the active power injected at bus satisfies

(3)

Now, let be the vector of active power injected and the vector of voltage phase angles at time , . Then, based on the model from (II-C), the noisy linearized DC model of the network can be written as

(4)

where the topology matrix , defined in (2), is a deterministic unknown Laplacian matrix, which is considered static for a short period of time and under normal operating conditions. The noise is a stationary Gaussian sequence with zero mean and a covariance matrix , i.e. , and it is assumed that the additive noises are independent of the state vectors. The vectors , , are assumed to be unknown random states with a joint probability density function (pdf) and marginal pdfs of , , . By subtracting the mean from the data, we can assume, without loss of generality, that has zero mean. The resulting mean-centered measurements are given by , where is the sample mean. For the rest of this paper, will denote the mean-centered active power data.
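A hedged numerical sketch of the measurement model (4) and the mean-centering step: the Laplacian, noise level, and number of samples below are illustrative, and the states are drawn i.i.d. standard normal only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative 3-bus Laplacian (any matrix satisfying (2) would do)
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
T, sigma = 1000, 0.05
theta = rng.normal(size=(T, 3))        # latent states (voltage angles), rows = time
w = sigma * rng.normal(size=(T, 3))    # zero-mean i.i.d. Gaussian measurement noise
p = theta @ L.T + w                    # model (4), one measurement vector per row
p_centered = p - p.mean(axis=0)        # subtract the sample mean, as in the text
```

After centering, the sample mean of the data is zero by construction, which is the working assumption for the rest of the derivation.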

Now, in order to reformulate the model with a full-rank mixing matrix, we use the relation

(5)

where

(6)

and is the reduced Laplacian matrix, which is obtained by removing the first row and first column of . By substituting (5) in (4), one obtains

(7)

where . By multiplying both sides of (7) with , it can be verified that the model in (7) is equivalent to

(8)

where and , . In addition, it can be shown (see, e.g. pp. 134-144 [42]) that the modified noise sequence satisfies , .

We assume here that all sources are time-independent Gaussian distributed, i.e. , . Thus, , , where . Under the assumption that is known, the observation vectors are also independent Gaussian-distributed vectors, i.e. and , , where

(9)

and, assuming nonsingular matrices,

(10)
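The covariance relations (9) and (10) can be made concrete with a small numerical sketch. The reduced Laplacian below is obtained by deleting the first row and column of an illustrative 3-bus Laplacian; the state covariance and noise variance are assumed values for demonstration only.

```python
import numpy as np

L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
B = L[1:, 1:]                     # reduced Laplacian: drop first row and column
K = np.diag([1.0, 2.0])           # state covariance (assumed known and PD)
sigma2 = 0.01                     # noise variance
R = B @ K @ B.T + sigma2 * np.eye(2)   # observation covariance, eq. (9)
R_inv = np.linalg.inv(R)               # eq. (10), nonsingular case
```

Since the reduced Laplacian is symmetric positive definite (Properties P.1-P.2 below) and the noise variance is positive, the observation covariance is symmetric positive definite and hence invertible.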

The reduced topology matrix, , has the following properties [35, 37]:

P.1

Full rank - Under the assumption of a connected graph, is a nonsingular matrix of rank and, thus, can be identified in general. In power system terminology, we assume that there are no unobservable islands in the grid.

P.2

Positive semidefinite - Since is a symmetric, positive semidefinite matrix, is also a symmetric, positive semidefinite matrix.

P.3

Nonpositive off-diagonal elements - , , .

P.4

Diagonally dominant - Since is a Laplacian matrix, is a diagonally dominant matrix, i.e. , .

P.5

Sparsity (optional) - It has been shown in previous works that power systems are sparse [43], i.e. the zero pseudonorm of the off-diagonal entries of , , is much smaller than .

III ML-BEST

In this section, we develop the basic ML-BEST approach that jointly reconstructs the matrix and the states , , for the model from Section II. This problem can be interpreted as a BSS problem with a Laplacian mixing matrix, or graph BSS (GBSS). First, in Subsection III-A the minimum mean-squared-error (MMSE) estimator of the random states, , , is developed. Then, in Subsections III-B and III-C, we develop the ML estimator of the noise variance, , and formulate the optimization problem describing the ML estimator of the mixing system.

III-A MMSE state estimation

For given and , the sequences , , , , are jointly Gaussian. Thus, in this case the MMSE estimator of the state vector is a linear estimator given by (see, e.g. Chapter 20 in [44], [45])

(11)

. We refer to the estimator in (11) as the oracle MMSE state estimator, i.e. an ideal estimator which has perfect knowledge of the noise variance and the system topology.

The practical state estimator for the considered GBSS problem is obtained by plugging in the ML estimators of the noise variance and the reduced-Laplacian matrix, and , respectively, that are developed in the following in Subsections III-B and III-C, into (11), which results in

(12)

.

For high signal-to-noise ratio (SNR) values, i.e. when , the matrix is a singular matrix and, thus, the covariance matrix of the data from (9) is also singular. In this case, instead of using the pseudo-inverse as in (11) and (12), the unknown parameters can also be treated by removing the linearly dependent random variable (see, e.g. Chapters 3 and 10 in [46]). In power system state estimation this is usually done by setting one bus as a reference bus and setting its angle to zero (see, e.g. [1]), and then estimating only . Here, we prefer instead to use the state estimation methods in (11) and (12) for estimation of .
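The oracle MMSE estimator in (11) is linear in the observations. The sketch below implements its nonsingular-covariance form for the reduced model (8), theta_hat = K B^T (B K B^T + sigma2*I)^{-1} x; the pseudo-inverse variant of (11) would replace `inv` by `pinv`. The matrices B and K, and the near-noiseless check, are illustrative assumptions.

```python
import numpy as np

def mmse_states(X, B, K, sigma2):
    """Oracle linear MMSE state estimate for the reduced model x = B*theta + w:
    theta_hat = K B^T (B K B^T + sigma2*I)^{-1} x, applied row-wise to X.
    Assumes the observation covariance is nonsingular (cf. the pseudo-inverse
    form in (11) for the singular, high-SNR case)."""
    R = B @ K @ B.T + sigma2 * np.eye(B.shape[0])
    G = K @ B.T @ np.linalg.inv(R)       # MMSE gain matrix
    return X @ G.T

rng = np.random.default_rng(1)
B = np.array([[2.0, -1.0], [-1.0, 2.0]])   # reduced Laplacian (illustrative)
K = np.eye(2)
theta = rng.normal(size=(500, 2))
X = theta @ B.T                            # near-noiseless observations
theta_hat = mmse_states(X, B, K, 1e-10)
```

In the near-noiseless regime the MMSE estimate essentially inverts the mixing, so the recovered states match the true ones, which is the behavior the oracle bound in Section VII relies on.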

III-B ML estimation of the noise variance

It is shown in [47, 48, 49] that for Gaussian measurements with the aforementioned structure, the ML estimator of the noise variance is given by

(13)

where are the eigenvalues of the sample covariance matrix,

(14)
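A hedged numerical sketch of this stage, in the smallest-eigenvalue form used in Step 4 of Algorithm 1 below: because the Laplacian annihilates the all-ones vector, only noise survives along that direction, so the smallest eigenvalue of the sample covariance (14) exposes the noise floor. The Laplacian, noise level, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])         # rank-deficient Laplacian, L @ 1 = 0
T, sigma2 = 50000, 0.1
theta = rng.normal(size=(T, 3))
p = theta @ L.T + np.sqrt(sigma2) * rng.normal(size=(T, 3))
p -= p.mean(axis=0)
C_hat = p.T @ p / T                        # sample covariance, eq. (14)
# Along the null space of L (the all-ones direction) the signal vanishes,
# so the smallest eigenvalue of C_hat estimates the noise variance
sigma2_hat = np.linalg.eigvalsh(C_hat)[0]  # eigvalsh returns ascending order
```

With a large sample size the estimate concentrates tightly around the true noise variance.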

III-C System identification: ML estimation of the mixing matrix

By using the invariance property of the ML estimator [50] and the relation in (5), the ML estimator of the full Laplacian matrix can be obtained from the ML estimator of the reduced-Laplacian matrix, , as follows:

(15)

In the following, the ML estimation of the reduced topology matrix, , is formulated and is shown to be NP-hard. Practical methods to approximate the ML estimator of , , are developed in the next section. Under the model from (8) and the Gaussian-distributed source assumption, the normalized log-likelihood of , , after removing constant terms and substituting the ML estimator of the noise variance from (13), satisfies

(16)

where

(17)

is the modified sample covariance matrix and the last equality is obtained by substituting (14). That is, the log-likelihood from (16) depends on the data only through the sample covariance matrix, , which is the sufficient statistic for estimating .

Since the reduced-Laplacian matrix satisfies Properties P.1-P.4, we are interested in minimizing over the domain of symmetric matrices and under the associated constraints as follows:

(18)

The Gaussian log-likelihood function, , is a concave function of the inverse covariance matrix, . However, even without the sparsity constraint, the constraints in (18) cannot be rewritten as convex constraints on . Therefore, the resulting problem is not a convex optimization and, in addition, a direct solution of this constrained minimization via the Karush-Kuhn-Tucker (KKT) conditions [51] is intractable. Two low-complexity implementation methods are described in the next section.

Imposing the sparsity constraint in P.5 directly usually results in complex combinatorial searches; following advances in compressive sensing [52, 53], the sparsity constraint can instead be approximated by restricting the off-diagonal -norm. Our simulations suggest that simple elementwise thresholding of the estimated Laplacian matrix is competitive with such norm-based methods. Thus, at the end of the ML-BEST approach, we threshold the off-diagonal elements of the estimator of the topology matrix, , from (15), with a threshold, , such that the th element of the final estimate is given by

(19)

, . The threshold should be tuned until the desired level of sparsity is achieved, while preserving connectivity. The diagonal elements of a Laplacian matrix are known to be positive, so the matrix has a partially known support. Thus, the threshold is set to be smaller than the magnitude of the smallest estimated element of the diagonal:

(20)

where . The value of can be set to the inverse of the number of buses, , or of the average nodal degree [35].
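The thresholding scheme in (19)-(20) can be sketched as follows; the function name, the test matrix, and the tuning constant s (taken here as the inverse of the number of buses) are illustrative.

```python
import numpy as np

def threshold_topology(L_hat, s):
    """Sparsify an estimated topology matrix as in (19)-(20): zero every
    off-diagonal entry whose magnitude falls below
    tau = s * min_n |L_hat[n, n]|, keeping the diagonal intact.
    s is a tuning constant, e.g. the inverse of the number of buses."""
    tau = s * np.min(np.abs(np.diag(L_hat)))      # eq. (20)
    out = np.where(np.abs(L_hat) >= tau, L_hat, 0.0)   # eq. (19)
    np.fill_diagonal(out, np.diag(L_hat))         # diagonal support is known
    return out

# Illustrative estimate with one spurious weak branch (the -0.01 entries)
L_hat = np.array([[ 2.0,  -1.0, -0.01],
                  [-1.0,   2.0, -1.0 ],
                  [-0.01, -1.0,  2.0 ]])
L_sparse = threshold_topology(L_hat, s=1/3)
```

The weak spurious entries are zeroed while the genuine branches and the diagonal survive, which is the intended effect of the sparsification step.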

The basic ML-BEST algorithm is summarized in Algorithm 1, for any method of estimation of the reduced-Laplacian matrix, . Two such methods are described in Section IV.

Input: Observations , . State covariance matrix, . Output: Estimators and , .
Algorithm Steps:
  1. (Optional) Remove the sample mean, , from the observations , .

  2. Obtain the sample covariance matrix, , by (14).

  3. Perform eigendecomposition operation for the sample covariance matrix to find its eigenvalues .

  4. Estimate the noise variance by the smallest eigenvalue, .

  5. Estimate the reduced-Laplacian matrix and obtain approximation to , for example, by the two-phase/augmented ML-BEST from Section IV.

  6. Reconstruct the full topology matrix according to (15):

  7. Impose sparsity by setting the threshold according to (20):

    and thresholding such that the th element of the final estimation is given by (19):

    , .

  8. Evaluate the sources according to (12):

    .

Algorithm 1 Basic ML-BEST Algorithm

IV Practical implementations of the ML-BEST

In this section, two low-complexity estimation methods for the reduced topology are derived: 1) two-phase topology recovery in Subsection IV-A; and 2) augmented Lagrangian topology recovery in Subsection IV-B.

IV-A Two-phase topology recovery

In this subsection, we propose a low-complexity method for solving (18) in two phases. First, we relax the original optimization problem from (18) by removing some of its constraints, which yields

(21)

It is well known that the relaxed optimization problem from (21) is a convex optimization w.r.t. and the optimal solution is the sample covariance matrix inverse, , under the assumption of nonsingular matrices (see, e.g. p. 466 in [54], [55]). Then, by using the invariance property of the ML estimator [50], the one-to-one mapping in (10), and the symmetry , one obtains that the unique minimum of (21) w.r.t. , which is the ML estimator of a symmetric positive definite mixing matrix, , satisfies

(22)

which implies that

(23)

In the second phase, we find the closest graph Laplacian matrix to the matrix in the sense of Frobenius norm. Thus, we solve the following optimization problem:

(24)

The problem in (24) is a convex optimization problem and can be efficiently computed by standard semidefinite program solvers, such as CVX [56]. This two-phase topology recovery algorithm is summarized in Algorithm 2. The ML-BEST approach with two-phase topology recovery is implemented by Algorithm 1, where Step 5 is implemented by Algorithm 2.

Input: , , .
Output: Estimator .
Algorithm Steps:
  1. Evaluate the reduced sample covariance matrix from (17) by .

  2. Evaluate the optimal solution of the optimization in (21) by (23):

  3. Find the closest Laplacian matrix, , to , by solving the convex optimization problem in (24) (by solvers such as CVX [56]).

Algorithm 2 Two-phase Topology Recovery Algorithm
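The two phases above can be sketched numerically. Phase 1 below implements the unique symmetric positive definite solution in the spirit of (22)-(23), B = K^{-1/2} (K^{1/2} C_bar K^{1/2})^{1/2} K^{-1/2}; phase 2 is only a crude heuristic stand-in for the Frobenius projection (24) (symmetrize and clip positive off-diagonal entries), since the exact projection requires a semidefinite solver such as CVX [56]. All matrices and the noiseless sanity check are illustrative assumptions.

```python
import numpy as np
from numpy.linalg import eigh, inv

def sqrtm_pd(M):
    """Symmetric positive semidefinite matrix square root via eigendecomposition."""
    w, V = eigh(M)
    return V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.T

def phase1(C_bar, K):
    """Unique symmetric PD solution of B K B = C_bar (cf. (22)-(23)):
    B = K^{-1/2} (K^{1/2} C_bar K^{1/2})^{1/2} K^{-1/2}."""
    Kh = sqrtm_pd(K)
    Kih = inv(Kh)
    return Kih @ sqrtm_pd(Kh @ C_bar @ Kh) @ Kih

def phase2(B_hat):
    """Crude heuristic stand-in for the projection (24): symmetrize and clip
    positive off-diagonal entries to zero, keeping the diagonal. The exact
    Frobenius-nearest Laplacian requires solving (24) with a solver like CVX."""
    B = 0.5 * (B_hat + B_hat.T)
    off = np.where(np.eye(B.shape[0], dtype=bool), 0.0, np.minimum(B, 0.0))
    return np.diag(np.diag(B)) + off

# Noiseless sanity check: recover B exactly from its model covariance
B_true = np.array([[2.0, -1.0], [-1.0, 2.0]])
K = np.diag([1.0, 2.0])
C_bar = B_true @ K @ B_true          # noise-free C_bar, cf. (17) with sigma2 removed
B_rec = phase2(phase1(C_bar, K))
```

In the noiseless case phase 1 already returns a valid reduced Laplacian, so phase 2 leaves it unchanged; with finite samples, phase 2 repairs constraint violations.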

IV-B Augmented Lagrangian topology recovery

In this subsection we develop a constrained independent component analysis (cICA) method [57] to solve (18). This approach is based on sequentially estimating the demixing matrix, , under constraints, where the inequality constraints (Constraints and from (18)) are transformed into equality constraints in the augmented Lagrangian [58, 59]. Constraint 1) implies the symmetry of , i.e. the equality constraint . Thus, in this case the objective function for the cICA, which is based on Equation (3) in [57], is given by

(25)

where , , and are the nonnegative vector, positive semidefinite matrix, and symmetric matrix, respectively, of Lagrange multipliers, and is the penalty parameter. The minimization of (IV-B) w.r.t. results in the following natural gradient descent learning rule [60] for :

(26)

where is the iteration index,

(27)

and is the learning rate that determines the step size. By substituting (10) and in (16) and then taking the derivative of the result w.r.t. , we obtain

(28)

By substituting (IV-B) in (27), we obtain

(29)

Finally, the Lagrange multipliers, , , and , are updated according to the gradient ascent method as follows:

(30)
(31)
(32)

The matrix is initialized as a symmetric matrix with nonnegative elements and zero diagonal, and is then updated according to (26)-(32) until convergence.

The augmented Lagrangian topology recovery is summarized in Algorithm 3. The ML-BEST approach with augmented Lagrangian topology recovery is implemented by Algorithm 1, where Step 5 is implemented by Algorithm 3.

Input: , , .
Output: Estimator .
Algorithm Steps:
  1. Evaluate the reduced sample covariance matrix from (17) by .

  2. Initialize , for example, by the estimator from (23):

  3. Set , , , , and to small positive scalar values.

  4. Repeat

    1. Update

      where is given in (IV-B).

    2. Update the Lagrange multipliers, , , and , according to (30), (31), and (32), respectively.

    Until criterion .

  5. Evaluate the reduced topology matrix .

Algorithm 3 Augmented Lagrangian Topology Recovery Algorithm

V Remarks

In this section, we discuss the identifiability conditions and the complexity in Subsections V-A and V-B, respectively, and describe a few extensions of the proposed model and methods in Subsection V-C.

V-A Identifiability conditions

In this subsection, we discuss the GBSS identifiability conditions under which the topology matrix and the state vectors can be recovered [48] for the model from Section II with zero-mean measurements. It is well known that Gaussian sources with i.i.d. time structures cannot be separated [20, 21, 23]. Nevertheless, the following theorem states that when the mixing matrix is symmetric, consistent separation can rely exclusively on SOS, even for Gaussian sources.

Theorem 1

Given the model in (4) and the relation in (5), and assuming the following conditions:

  • is a symmetric positive definite matrix

  • The covariance of the states, , is known and is a positive definite matrix

  • The matrix , where and are defined in (17) and (13), respectively, is a positive semidefinite matrix.

Then, the Laplacian mixing matrix, , can be uniquely identified, without scaling and permutation ambiguities, from the sample covariance matrix of the observations, , defined in (14).

Proof:

First we will show that is identifiable. Then, can be uniquely recovered by using the relationship in (5). Similar to the derivation of (10), it can be shown that for any state distribution and independent noise with known noise covariance, , the covariance of the observations, , , satisfies

(33)

where the last equality is obtained by substituting the symmetry property, . It is known that for any positive definite matrix there exists a unique positive definite square root, , such that (see, e.g. p. 448 in [54]). Thus, under the assumption that and are positive definite matrices, the solution of (33) is unique and is given by

(34)

Now, if we use the estimators and in (34) instead of the true unknown values of , , then the existence of a positive definite solution is not guaranteed. Under the Theorem’s assumption that is a positive definite matrix, the uniqueness holds for the solution in (23).

A necessary condition for the existence of the inverse of , as required in Theorem 1, is that the sample covariance matrix has a full rank, i.e. . To ensure numerical stability, we require stricter conditions than the condition