
Distributed Learning of Neural Lyapunov Functions for Large-Scale Networked Dissipative Systems

07/15/2022
by Amit Jena, et al.

This paper considers the problem of characterizing the stability region of a large-scale networked system comprised of dissipative nonlinear subsystems, in a distributed and computationally tractable way. One standard approach to estimate the stability region of a general nonlinear system is to first find a Lyapunov function for the system and characterize its region of attraction as the stability region. However, classical approaches for finding a Lyapunov function, such as sum-of-squares methods and quadratic approximation, either do not scale to large systems or give very conservative estimates for the stability region. In this context, we propose a new distributed learning-based approach by exploiting the dissipativity structure of the subsystems. Our approach has two parts: the first part is a distributed approach to learn the storage functions (similar to the Lyapunov functions) for all the subsystems, and the second part is a distributed optimization approach to find the Lyapunov function for the networked system using the learned storage functions of the subsystems. We demonstrate the superior performance of our proposed approach through extensive case studies in microgrid networks.



I Introduction

Large-scale networked systems formed by the interconnection of several nonlinear subsystems are encountered in many practical applications, such as infrastructure networks including power networks, transportation networks, and communication networks. For example, in power networks, a collection of renewable generators, storage, and loads in a small area can be modeled as a single microgrid with nonlinear dynamics, and a number of such microgrids form a network which balances the supply and demand of electric power in a larger geographical area. Analyzing the stability margins of the dynamics of such infrastructure systems is crucial in ensuring their safe and reliable operation. One standard approach to estimate the stability region of a general nonlinear system is to first find a Lyapunov function for the system and characterize its region of attraction as the stability region. The sum-of-squares approach is one popular method for finding a Lyapunov function for a dynamical system [22, 11, 14, 28, 29]. However, sum-of-squares approaches typically do not scale well to large systems, since a large number of semidefinite programs need to be solved for the sum-of-squares decomposition of polynomial systems even with a few states [22]. Another approach is to employ local linearizations and use quadratic approximations to find Lyapunov functions. However, this approach is typically conservative in the sense that stability can only be certified in a small vicinity of an equilibrium point of a nonlinear system, which may be insufficient to cover the normal range of operation in several practical applications such as microgrids in power systems [13].

Due to the large size and continually expanding scale of many real-world networked systems, it is especially important to develop distributed approaches to assess the stability of such systems. Analyzing the stability of a general networked system in a distributed manner from the stability assessment of its subsystems is a complex open problem. Such a distributed stability assessment is, however, possible for a class of networked systems where each subsystem satisfies passivity or dissipativity properties [30, 19, 2, 1]. Even so, these results are typically confined to local stability assessments based on linearized system dynamics. Thus, how to find a Lyapunov function (and the corresponding stability region) for a large-scale networked system in a distributed and computationally tractable way remains an important open problem.

In this paper, we consider the problem of finding a Lyapunov function and estimating its associated region of attraction in a distributed manner for a large-scale networked system comprised of a number of subsystems. We assume that each subsystem is nonlinear and satisfies the dissipativity property [2]. Our approach has two stages: in the first stage, we develop a distributed learning approach to learn the storage functions (similar to the Lyapunov functions) for all the subsystems; in the second stage, we develop a distributed optimization approach to find the Lyapunov function for the networked system using the learned storage functions of the subsystems. Our main algorithm is essentially an iteration over these two stages until some convergence criterion is satisfied.

Specifically, in the first stage, we use a neural network approximation to learn a storage function for each subsystem such that the subsystem satisfies the dissipativity property. Our approach is inspired by recent works which use the data-based function approximation capability of neural networks to learn a Lyapunov function for nonlinear systems [18, 5, 13, 12] in a centralized way. In addition to this empirical learning, we also use a satisfiability modulo theories (SMT) solver based falsifier that finds counterexamples to verify this local dissipativity property using the storage function; when no counterexample is found by the falsifier, the subsystem is provably dissipative [5]. In the second stage, we use the fact that the stability of the networked system can be guaranteed by verifying some conditions on the dissipativity of the subsystems and the coupling between dissipative subsystems [2]. For efficient implementation, we formulate and solve this as a distributed optimization problem using an alternating direction method of multipliers (ADMM) procedure which iteratively updates the storage function for each subsystem until the network-level stability conditions are satisfied. This step also allows us to compute the Lyapunov function for the networked system and estimate its region of attraction. Additionally, we learn local controllers at the subsystem level that enhance the stability region of the networked system.

We demonstrate the performance of our proposed distributed learning approach for stability assessment through extensive case studies in microgrid networks. Our choice of microgrid networks is motivated by their importance in real-world power systems and their natural structure as a network of smaller nonlinear subsystems [21, 16, 32]. In recent years, the stability assessment problem for a network of microgrids has also attracted a lot of attention from the power systems and controls community [23, 10, 26].

A preliminary version of this work was published as a conference paper [15]. In this work, we significantly expand and extend the results in [15]. In particular, we learn a local controller for each subsystem that improves the stability of the networked system and makes it robust to bounded disturbances. We also consider a larger network-based case study which clearly illustrates the superior performance of our distributed learning approach, both in terms of finding a larger stability region and in reduced training/computation time. In summary, the key contributions of this paper are as follows:

  1. We develop a distributed approach to learn the Lyapunov function and the stability region of a large-scale networked system by exploiting the dissipativity property of its subsystems and leveraging recent advances in neural network-based nonlinear function approximation/learning using data.

  2. We learn a local controller for each subsystem that further improves the stability of the entire networked system.

This paper is organized as follows. Section II gives the preliminaries and basic problem formulation, Section III describes our distributed learning based algorithm, Section IV presents the case studies, and Section V concludes with future research directions.

II Problem Formulation

We consider a large-scale nonlinear networked system formed by the interconnection of $N$ nonlinear subsystems. The dynamics of the $i$th subsystem is given by

$\dot{x}_i = f_i(x_i, u_i, w_i), \qquad y_i = h_i(x_i)$   (1)

where $x_i \in \mathbb{R}^{n_i}$, $u_i \in \mathbb{R}^{p_i}$, $w_i \in \mathbb{R}^{q_i}$, and $y_i \in \mathbb{R}^{m_i}$ are the state, primary input, secondary (local) input, and output of subsystem $i$, respectively. The system dynamics and system output are specified by the continuously differentiable functions $f_i$ and $h_i$. The state, primary input, secondary input, and output of the networked system are then defined as $x = [x_1^{\top}, \ldots, x_N^{\top}]^{\top}$, $u = [u_1^{\top}, \ldots, u_N^{\top}]^{\top}$, $w = [w_1^{\top}, \ldots, w_N^{\top}]^{\top}$, and $y = [y_1^{\top}, \ldots, y_N^{\top}]^{\top}$, respectively. The overall dynamics of the networked system is specified as

$\dot{x} = f(x, u, w), \qquad y = h(x)$   (2)

where $f = [f_1^{\top}, \ldots, f_N^{\top}]^{\top}$ and $h = [h_1^{\top}, \ldots, h_N^{\top}]^{\top}$.

We assume that the subsystems are coupled through the primary and secondary inputs as

$u = \phi(y), \qquad w_i = \pi_i(y_i), \quad i = 1, \ldots, N$   (3)

where $\phi$ and $\pi_i$ specify the primary control law and secondary (local) control law, respectively. We adopt this two-level control structure for the following reason. Many large-scale real-world networked systems, such as power systems, use a hierarchical control architecture with a dedicated secondary controller at each subsystem that maps its local output to a local control input [25, 20]. Such secondary controllers are known to critically improve the stability of the networked system using only local information [20]. Our separate modeling of the primary ($u$) and secondary ($w$) control inputs is motivated by such real-world systems. Later, we will also show experimental results which demonstrate the role of the secondary controller in improving the stability region of the networked system.

Without loss of generality, assume that the equilibrium point of (2)-(3) is given by $x^{e} = 0$, where $0$ is the origin, and denote the corresponding equilibrium output as $y^{e} = 0$. For simplicity, we further restrict our consideration to a linear approximation of the coupling (3) between subsystems, given by

$u = M y, \qquad w_i = K_i y_i, \quad i = 1, \ldots, N$   (4)

where $M$ and $K_i$ are the Jacobian matrices of $\phi$ and $\pi_i$ evaluated at $y = 0$ and $y_i = 0$, respectively. The overall dynamics of the networked system constituted by the interconnection of these subsystems is now completely specified by (2) and (4) for all $x \in \mathcal{D}$, where $\mathcal{D}$ is a neighborhood around the origin $x = 0$.

We now define the dissipativity property for the subsystems.

Definition 1 (Dissipativity [2]).

The subsystem $i$ described by (1) is said to be dissipative with respect to the supply rate function $s_i(u_i, y_i)$ if there exists a continuously differentiable storage function $V_i : \mathbb{R}^{n_i} \to \mathbb{R}$ such that $V_i(0) = 0$, $V_i(x_i) > 0$ for all $x_i \neq 0$, and

$\dot{V}_i(x_i) = \nabla V_i(x_i)^{\top} f_i(x_i, u_i, w_i) \leq s_i(u_i, y_i)$   (5)

for all $x_i$, $u_i$, and $w_i$ in their respective domains.

In this work, we assume that each subsystem satisfies the dissipativity property with respect to a quadratic supply rate function, as stated below.

Assumption 1 (Dissipative subsystem).

Every subsystem $i$, described by (1), is dissipative with respect to the quadratic supply rate function given by

$s_i(u_i, y_i) = \begin{bmatrix} u_i \\ y_i \end{bmatrix}^{\top} X_i \begin{bmatrix} u_i \\ y_i \end{bmatrix}$   (6)

where $X_i = X_i^{\top} \in \mathbb{R}^{(p_i + m_i) \times (p_i + m_i)}$.

Next, we state the definitions of Lyapunov function and region of attraction (RoA). Note that we will use the terms RoA and stability region interchangeably in the following.

Definition 2 (Lyapunov function and region of attraction [17]).

For the system (2), a continuously differentiable scalar function $V : \mathbb{R}^{n} \to \mathbb{R}$ is a strict Lyapunov function valid in a region $\mathcal{D}$ containing the origin if the following conditions are satisfied:
$V(0) = 0$ and $V(x) > 0$, for all $x \in \mathcal{D} \setminus \{0\}$, and $\dot{V}(x) < 0$, for all $x \in \mathcal{D} \setminus \{0\}$.
Further, the region $\mathcal{R} \subseteq \mathcal{D}$ is defined as the region of attraction (RoA) of the equilibrium point $x = 0$, that is, $\mathcal{R} = \{x(0) \in \mathcal{D} : \lim_{t \to \infty} x(t) = 0\}$.
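For a simple illustration of these definitions, consider the scalar system $\dot{x} = -x + x^{3}$ with the candidate $V(x) = \frac{1}{2} x^{2}$. Then $V(0) = 0$, $V(x) > 0$ for $x \neq 0$, and $\dot{V}(x) = x \dot{x} = -x^{2}(1 - x^{2}) < 0$ for all $x \in \mathcal{D} \setminus \{0\}$ with $\mathcal{D} = (-1, 1)$, so $V$ is a strict Lyapunov function valid in $\mathcal{D}$. Here $\mathcal{R} = (-1, 1)$ is in fact the exact RoA of the origin, since trajectories starting from $|x(0)| > 1$ diverge.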

The stability of the networked system described by (2) and (4) can be assessed by identifying a Lyapunov function satisfying Definition 2 and by characterizing the RoA. In general, sum-of-squares approaches [22, 11, 14, 28, 29] can be used to assess the stability of nonlinear systems. Alternatively, local linearizations of the dynamics may be used to compute a quadratic Lyapunov function [6]. However, these approaches often fail to find a meaningful Lyapunov function and RoA for large-scale, high-dimensional networked systems because of the following challenges. Firstly, sum-of-squares approaches do not scale well computationally and will quickly become intractable when the system dimension increases [22, 11, 14, 28, 29]. Secondly, sum-of-squares approaches and quadratic approximation based approaches are typically very conservative in their estimation of the RoA even for small systems [5, 13]. This may lead to the design of conservative controllers and sub-optimal system operation.

In order to overcome these challenges, some recent works have exploited the data-based function approximation capability of neural networks to learn a Lyapunov function for nonlinear systems [18, 5, 13], with impressive empirical performance in estimating the stability region for nontrivial nonlinear systems. However, the training time, number of samples needed, and computational complexity of these approaches also increase exponentially in the system dimension, and can become prohibitive for a networked system constituted by a large number of smaller subsystems. In this work, we propose a distributed learning approach that exploits the dissipativity property of the subsystems to overcome these challenges and efficiently learn the Lyapunov function and stability region of the large-scale networked system.

III Distributed Learning of Neural Lyapunov Function for the Networked System

In this section, we propose a two-stage distributed approach to learn a Lyapunov function for the networked system and characterize its RoA.

III-A Lyapunov Function for the Networked System from the Storage Functions of the Subsystems

Our key idea is to reduce the dimension of the problem by leveraging the fact that the Lyapunov function for the networked system can be constructed from the storage functions of the subsystems under the dissipativity assumption (Assumption 1) [2]. We formally state this result below.

Proposition 1.

Consider the networked system specified by (2) and (4). Let Assumption 1 hold, let $s_i$ as specified in (6) be the supply rate function, and let $V_i$ be the corresponding storage function of subsystem $i$, for $i = 1, \ldots, N$. If

$\begin{bmatrix} M \\ I \end{bmatrix}^{\top} X \begin{bmatrix} M \\ I \end{bmatrix} \preceq 0$   (7)

where $X = \mathrm{diag}(X_1, \ldots, X_N)$ is a block diagonal matrix (with rows and columns arranged to act on the stacked vector $[u^{\top}, y^{\top}]^{\top}$) and $I$ is an identity matrix of appropriate order, then

(i) the networked system has a stable equilibrium at the origin $x = 0$, and
(ii) $V(x) = \sum_{i=1}^{N} V_i(x_i)$ is a Lyapunov function for the networked system.

Proof.

The proof follows as a direct extension of the result given in [2, Chapter 2]. Consider the networked system specified by (1)-(4). Let $V_i$ be a valid storage function for subsystem $i$ with respect to the supply rate function $s_i$, so that it satisfies the dissipativity condition (5), for all $i = 1, \ldots, N$. Now, considering the function $V(x) = \sum_{i=1}^{N} V_i(x_i)$, we have

$\dot{V}(x) = \sum_{i=1}^{N} \dot{V}_i(x_i) \leq \sum_{i=1}^{N} s_i(u_i, y_i)$   (8)

where the inequality follows from the dissipativity condition (5). Using the specific form of $s_i$ in (6), we have

$\sum_{i=1}^{N} s_i(u_i, y_i) = \begin{bmatrix} u \\ y \end{bmatrix}^{\top} X \begin{bmatrix} u \\ y \end{bmatrix} = y^{\top} \begin{bmatrix} M \\ I \end{bmatrix}^{\top} X \begin{bmatrix} M \\ I \end{bmatrix} y$   (9)

where the last equality follows from the definition of $X$ and the coupling $u = M y$ with $M$ as specified in the proposition statement.

Now, if (7) is satisfied, then (8)-(9) immediately imply that $\dot{V}(x) \leq 0$. The positivity of $V$ also directly follows from that of the $V_i$'s. Thus, $V$ satisfies the conditions to be a valid Lyapunov function for the networked system. ∎

Remark 1.

Proposition 1 will enable us to reduce the high-dimensional problem of finding a Lyapunov function for the networked system to $N$ smaller problems of finding the storage functions of the subsystems. For example, assuming $x_i \in \mathbb{R}^{n}$ for all $i$, instead of an $nN$-dimensional problem of finding the Lyapunov function for the networked system, we only need to solve $N$ different $n$-dimensional problems of finding the storage functions of the subsystems. Since the computational complexity typically grows exponentially in the dimension of the system, this will reduce the exponential dependence on $nN$ to a linear dependence on $N$. This reduction in complexity is especially significant for large-scale systems formed by the interconnection of a large number of small subsystems.
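To make the condition (7) concrete, the sketch below checks it numerically for a hypothetical two-subsystem interconnection; the supply-rate matrices, the coupling matrix, and the dimensions are illustrative placeholders rather than values from the paper. The inequality holds when the largest eigenvalue of the assembled matrix is nonpositive.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical supply-rate matrices X_i (each acting on [u_i; y_i], scalar
# input and output per subsystem) and a weak symmetric coupling u = M y.
X1 = np.array([[-1.0, 0.5], [0.5, -2.0]])
X2 = np.array([[-1.5, 0.3], [0.3, -1.0]])
M = np.array([[0.0, 0.4], [0.4, 0.0]])

# block_diag(X1, X2) acts on [u1; y1; u2; y2], while [M; I] y produces the
# stack [u1; u2; y1; y2]; P permutes the latter ordering into the former.
P = np.zeros((4, 4))
P[0, 0] = P[1, 2] = P[2, 1] = P[3, 3] = 1.0

E = np.vstack([M, np.eye(2)])            # the matrix [M; I] from (7)
Q = E.T @ P.T @ block_diag(X1, X2) @ P @ E

lam_max = np.max(np.linalg.eigvalsh(Q))
print("condition (7) satisfied:", lam_max <= 1e-9)   # True for these numbers
```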

III-B Learning Neural Storage Functions and Secondary Controllers for Subsystems

Neural networks can efficiently learn/approximate a broad class of nonlinear functions using data, leading to their success in many engineering applications. Recent works have exploited this property of neural networks to learn Lyapunov functions for nonlinear systems [18, 5, 13]. We follow a similar approach for learning the storage function of individual subsystems. In addition to learning the storage function, we also learn the secondary controller $w_i = K_i y_i$, in order to maximize the stability region of the networked system. We achieve these goals by designing a loss function for the neural network learning that penalizes any violation of the dissipativity condition given in (5), and finding the parameters which minimize this loss.

For each subsystem $i$, we create a training dataset $\mathcal{D}_i$ of state and input samples $(x_i, u_i)$ drawn from their domains according to some distribution. We represent the storage function as $V_i(x_i; \theta_i)$, where $\theta_i$ is the parameter vector of the corresponding neural network. We then define the empirical loss function for subsystem $i$ as

$L_i(\theta_i, K_i) = \frac{1}{|\mathcal{D}_i|} \sum_{(x_i, u_i) \in \mathcal{D}_i} \Big[ \max\big(0, -V_i(x_i; \theta_i)\big) + \max\big(0, \dot{V}_i(x_i; \theta_i) - s_i(u_i, y_i)\big) \Big] + V_i(0; \theta_i)^{2}$   (10)

where the secondary control input is specified as $w_i = K_i y_i$. We will train a neural network for each subsystem using the above loss function until convergence.
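A minimal PyTorch sketch of this subsystem-level training step is given below. The dynamics f_i, the output map $y_i = x_i$, the supply-rate matrix $X_i$, and all dimensions and hyperparameters are illustrative assumptions, and the hinge penalties are one plausible instantiation of the loss in (10), not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class StorageNet(nn.Module):
    """Two-layer network representing the storage function V_i(x_i; theta_i)."""
    def __init__(self, n_state, n_hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_state, n_hidden), nn.Tanh(),
                                 nn.Linear(n_hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def loss_i(V, K, f_i, X_i, x, u):
    """Hinge penalties on V_i <= 0 and on violations of the dissipativity
    condition (5), plus a term pinning V_i(0) to zero; assumes y_i = x_i
    and a linear secondary controller w_i = K y_i."""
    x = x.clone().requires_grad_(True)
    y = x
    w = y @ K.T                                        # learned secondary input
    v = V(x)
    gradV, = torch.autograd.grad(v.sum(), x, create_graph=True)
    vdot = (gradV * f_i(x, u, w)).sum(dim=-1)          # dV_i/dt along (1)
    uy = torch.cat([u, y], dim=-1)
    supply = torch.einsum('bi,ij,bj->b', uy, X_i, uy)  # s_i(u_i, y_i) from (6)
    at_zero = V(torch.zeros_like(x[:1])).pow(2).sum()
    return (torch.relu(-v) + torch.relu(vdot - supply)).mean() + at_zero

# Hypothetical 2-state subsystem with placeholder dynamics and supply rate.
n = 2
f_i = lambda x, u, w: -x + 0.1 * torch.tanh(u) + w
X_i = -0.5 * torch.eye(2 * n)
V, K = StorageNet(n), torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam(list(V.parameters()) + [K], lr=1e-2)
for step in range(2000):
    x, u = 2 * torch.rand(256, n) - 1, 2 * torch.rand(256, n) - 1
    opt.zero_grad()
    loss_i(V, K, f_i, X_i, x, u).backward()
    opt.step()
```

On convergence, the learned storage function is only empirically valid on the sampled data, which is why the SMT-based falsification described next is still required.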

A neural network based learning algorithm can give only an empirical guarantee with respect to its loss function and training data. So, the parameters of the neural network obtained by minimizing the above empirical loss function may not provably yield a storage function that satisfies the condition for dissipativity (5). We overcome this issue as described below.

Satisfiability modulo theories (SMT) solvers are widely used to verify the correctness of symbolic arithmetic expressions over a specified region of the space of real numbers [3]. In particular, given a symbolic expression (also known as a logic formula) $\Phi$ over a region $\mathcal{B}$, an SMT solver uses a falsification technique to identify counterexamples where $\Phi$ is violated. When no such counterexample exists, $\Phi$ is said to be provably valid in $\mathcal{B}$. Recent works have implemented SMT solvers to validate Lyapunov functions [13, 5]. We follow a similar procedure to validate the storage functions learned by our neural network. Specifically, by using an SMT solver at every neural network training step, we validate $V_i(\cdot\,; \theta_i)$ by checking the following first-order logic formula over the real numbers:

$\Phi_i : \quad \big(\epsilon \leq \|x_i\| \leq \beta\big) \implies \Big( V_i(x_i; \theta_i) > 0 \;\wedge\; \dot{V}_i(x_i; \theta_i) \leq s_i(u_i, y_i) \Big)$   (11)

where $\epsilon$ is a small lower bound to avoid arithmetic underflow, and $\beta$ is an upper bound that defines the verification region $\mathcal{B}_i = \{x_i : \epsilon \leq \|x_i\| \leq \beta\}$. We use a specific SMT solver called dReal [8] to verify the above logic formula.

Under the conditions described in [5, 8, 7], an SMT solver always returns a counterexample if one exists. We add these counterexamples to our training dataset $\mathcal{D}_i$, and retrain the neural network to improve the validity of $V_i(\cdot\,; \theta_i)$ over $\mathcal{B}_i$. When no counterexample is returned, we complete the neural network training and declare $V_i(\cdot\,; \theta_i)$ a valid storage function. Performing this SMT verification based neural network training for all subsystems, we get storage functions that provably certify the dissipativity of each subsystem $i$ with respect to the supply rate function $s_i$, for all $i = 1, \ldots, N$.
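The sketch below shows such a falsification query for a toy scalar subsystem using the dreal Python bindings; the candidate storage function, dynamics, and supply rate are hand-picked stand-ins, whereas in the actual procedure the query encodes the trained network $V_i(\cdot\,;\theta_i)$ symbolically through its weights.

```python
from dreal import And, CheckSatisfiability, Or, Variable

x = Variable("x")              # scalar subsystem state (y_i = x assumed)
u = Variable("u")              # primary input
eps, beta = 1e-3, 1.0          # verification region: eps <= |x| <= beta

V = 0.8 * x * x                # hand-picked candidate storage function
f = -x + 0.5 * u               # toy dissipative dynamics (w_i = 0)
Vdot = 1.6 * x * f             # (dV/dx) * f
s = u * u + 2 * u * x          # quadratic supply rate with X_i = [[1, 1], [1, 0]]

# A counterexample is any point in the region where the positivity of V or
# the dissipativity inequality (5) fails, i.e., where (11) is violated.
query = And(x * x >= eps * eps, x * x <= beta * beta,
            u * u <= beta * beta,
            Or(V <= 0, Vdot - s >= 0))

box = CheckSatisfiability(query, 0.001)  # delta-complete check, precision 0.001
if box is None:
    print("no counterexample: V is certified on the region")
else:
    print("counterexample found; add it to the training set:\n", box)
```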

III-C Distributed Learning of Neural Lyapunov Function for the Networked System

After learning the neural storage functions for all subsystems, we can form a candidate neural Lyapunov function for the networked system as $V(x) = \sum_{i=1}^{N} V_i(x_i; \theta_i)$. However, the fact that $V_i$ is a valid storage function for subsystem $i$ with respect to the supply rate function $s_i$, for all $i$, does not immediately imply that $V$ is a valid Lyapunov function for the networked system. The linear matrix inequality (LMI) (7) involving $M$ and the $X_i$'s (which represent the $s_i$'s) in Proposition 1 gives a sufficient condition for such a $V$ to be a valid Lyapunov function for the networked system. Therefore, if (7) is not satisfied, we change the $X_i$'s (equivalently, the $s_i$'s) and relearn the storage functions for the subsystems with respect to these new supply rate functions. We repeat this procedure until convergence. We employ the alternating direction method of multipliers (ADMM) approach for the efficient implementation and guaranteed convergence of this procedure [2], as described below.

To bring the condition (7) into the canonical optimization form, we define the indicator function

$g(Z) = \begin{cases} 0, & \text{if } \begin{bmatrix} M \\ I \end{bmatrix}^{\top} Z \begin{bmatrix} M \\ I \end{bmatrix} \preceq 0 \\ +\infty, & \text{otherwise} \end{cases}$   (12)

where $Z = \mathrm{diag}(Z_1, \ldots, Z_N)$. Similarly, to bring the dissipativity condition (5) of each subsystem into the optimization form, we define the indicator functions

$g_i(X_i) = \begin{cases} 0, & \text{if subsystem } i \text{ is dissipative with respect to the supply rate defined by } X_i \\ +\infty, & \text{otherwise.} \end{cases}$   (13)

The problem of finding a Lyapunov function for the networked system can now be written as the following optimization problem:

$\min_{\{X_i\}, \{Z_i\}} \; \sum_{i=1}^{N} g_i(X_i) + g(Z) \qquad \text{s.t.} \quad X_i = Z_i, \quad i = 1, \ldots, N$   (14)

We construct the optimization problem in the above way because, through the auxiliary variables $Z_i$, this network-level optimization problem can be divided into smaller sub-problems and solved in a distributed manner. The iterative steps of the algorithm are as follows:

$X_i^{k+1} = \arg\min_{X_i} \; g_i(X_i) + \frac{\rho}{2} \big\| X_i - Z_i^{k} + \Lambda_i^{k} \big\|_F^{2}$   (15a)

$\{Z_i^{k+1}\} = \arg\min_{\{Z_i\}} \; g\big(\mathrm{diag}(Z_1, \ldots, Z_N)\big) + \frac{\rho}{2} \sum_{i=1}^{N} \big\| X_i^{k+1} - Z_i + \Lambda_i^{k} \big\|_F^{2}$   (15b)

$\Lambda_i^{k+1} = \Lambda_i^{k} + X_i^{k+1} - Z_i^{k+1}$   (15c)

where the $\Lambda_i$ are matrices that enforce consensus between $X_i$ and $Z_i$, $\rho > 0$ is a penalty parameter, and $\|\cdot\|_F$ is the Frobenius norm.

The first step (15a) finds a new supply rate $X_i^{k+1}$ close to the auxiliary variable $Z_i^{k}$ given the consensus variable $\Lambda_i^{k}$. We then obtain the storage function and secondary controller with respect to $X_i^{k+1}$ through the neural network training described in Section III-B. We emphasize that this step can be implemented in parallel across the subsystems. Since parallel neural network training can be performed at scale on modern GPU architectures, this approach offers a significant reduction in training time. The second step (15b) imposes the Lyapunov condition for the networked system: it finds (auxiliary) supply rate functions $Z_i^{k+1}$ close to $X_i^{k+1}$ given the consensus variables $\Lambda_i^{k}$ such that they satisfy the sufficient condition (7) for a Lyapunov function of the networked system. While this update step is centralized, we note that it is a simple convex optimization problem and does not involve any neural network training. Finally, step (15c) updates the consensus variables, and the iteration continues until convergence.
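For illustration, the following sketch poses the centralized step (15b) as a small semidefinite program, assuming cvxpy with the SCS solver. Each subsystem is assumed to have p inputs and p outputs, the permutation aligns the global stack [u; y] with the per-subsystem blocks [u_i; y_i] on which each Z_i acts, and the constant factor rho/2 is dropped since it only rescales the projection objective. All names and dimensions are illustrative.

```python
import numpy as np
import cvxpy as cp

def perm_uy(N, p):
    """Permutation taking the global stack [u_1..u_N, y_1..y_N] to the
    per-subsystem ordering [u_1, y_1, ..., u_N, y_N]."""
    d = 2 * p
    P = np.zeros((N * d, N * d))
    for i in range(N):
        P[i*d:i*d+p, i*p:(i+1)*p] = np.eye(p)                # u_i block
        P[i*d+p:(i+1)*d, N*p+i*p:N*p+(i+1)*p] = np.eye(p)    # y_i block
    return P

def z_update(X_list, Lam_list, M, p=1):
    """ADMM step (15b): find auxiliary supply rates Z_i close to X_i + Lam_i
    subject to the network-level condition (7)."""
    N, d = len(X_list), 2 * p
    Z = [cp.Variable((d, d), symmetric=True) for _ in range(N)]
    E = perm_uy(N, p) @ np.vstack([M, np.eye(N * p)])        # P [M; I]
    blocks = [[Z[i] if i == j else np.zeros((d, d)) for j in range(N)]
              for i in range(N)]
    lhs = E.T @ cp.bmat(blocks) @ E                          # left side of (7)
    S = cp.Variable((N * p, N * p), symmetric=True)          # symmetric alias
    cost = sum(cp.sum_squares(X_list[i] - Z[i] + Lam_list[i]) for i in range(N))
    prob = cp.Problem(cp.Minimize(cost), [S == lhs, S << 0])
    prob.solve(solver=cp.SCS)
    return [Zi.value for Zi in Z]
```

The per-subsystem update (15a) has the same proximal form, with the indicator g_i enforced implicitly through the neural training and SMT verification of Section III-B.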

We summarize the stages in the above subsections as a concise algorithm in Algorithm 1.

1: Input: Initial supply rate matrices $\{X_i^{0}\}$, initial consensus matrices $\{\Lambda_i^{0}\}$, input-output coupling matrix $M$, tolerance level $\delta$.
2: repeat
3:    Learn neural storage functions $V_i(\cdot\,; \theta_i)$ and secondary controllers $K_i$ according to the procedure described in Section III-B with respect to the current supply rate functions
4:    Update auxiliary variables $\{Z_i\}$ according to (15b)
5:    Update consensus variables $\{\Lambda_i\}$ according to (15c)
6:    Update supply rate functions $\{X_i\}$ according to (15a)
7: until ($\max_i \|X_i - Z_i\|_F \leq \delta$)
8: Output: Lyapunov function $V(x) = \sum_{i=1}^{N} V_i(x_i; \theta_i)$ for the networked system
Algorithm 1: Distributed Neural Lyapunov Function Learning Algorithm

IV Case Studies

In this section, we demonstrate the performance of our algorithm by evaluating it on three test cases: a three microgrids network with conventional angle droop control, a four microgrids network with quadratic voltage droop control, and the IEEE 123-node test feeder network with conventional voltage and angle droop controls. We compare the performance of our algorithm with the following approaches for finding the Lyapunov functions.

  • Quadratic control Lyapunov (QCL) function, which computes the Lyapunov function by linearizing the dynamics around the origin [4]. Additionally, a linear quadratic regulator (LQR) based secondary controller is designed for the improved stability of the system.

  • Quadratic Lyapunov (QL) function, which finds the Lyapunov function in the same way as QCL but without a secondary controller.

  • Centralized neural control Lyapunov (CNCL) function finds a Lyapunov function through neural network based centralized learning [5]. It also learns a secondary controller through neural network based learning.

  • Centralized neural Lyapunov function (CNL) is the same as CNCL but without a secondary controller [13].

We evaluate two versions of our algorithm:

  • Distributed neural control Lyapunov (DNCL) function, which learns the Lyapunov function as described in Algorithm 1.

  • Distributed neural Lyapunov (DNL) function, which is the same as DNCL but without a secondary controller.

IV-A Dynamics of the Network of Microgrids

Since our case studies focus on networks of microgrids, we first give a brief description of their dynamics and the classical control schemes used in the literature.

Fig. 1: A distribution system consisting of networked microgrids [13]. The internal structure of a single microgrid is shown in the left box.

Fig. 1 shows a network of microgrids where each microgrid comprises several distributed generation units (DGUs), loads, and power electronics (PE) interfaces [32]. We assume that the microgrids function in a grid-connected mode and that they are networked with each other via their points of common coupling (PCCs) and distribution lines. The power injections at the PCCs are governed by the standard AC power flow equations as follows:

$P_i = \sum_{j=1}^{N} V_i V_j |Y_{ij}| \cos(\delta_i - \delta_j - \phi_{ij})$   (16a)

$Q_i = \sum_{j=1}^{N} V_i V_j |Y_{ij}| \sin(\delta_i - \delta_j - \phi_{ij})$   (16b)

where $V_i$ and $\delta_i$ are the voltage magnitude and phase angle of the $i$th microgrid; $P_i$ and $Q_i$ are the active and reactive power injections at the $i$th PCC; $Y_{ii}$ is the $i$th diagonal element of the admittance matrix; and $|Y_{ij}|$ and $\phi_{ij}$ are the magnitude and angle of the $(i,j)$th element of the admittance matrix, respectively.
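As a reference implementation of (16), the following NumPy sketch evaluates the injections at every PCC; the two-bus admittance matrix at the bottom is a hypothetical example.

```python
import numpy as np

def pcc_injections(V, delta, Y):
    """Active and reactive power injections (16a)-(16b) at each PCC.
    V: voltage magnitudes (p.u.), delta: phase angles (rad),
    Y: complex bus admittance matrix."""
    absY, phi = np.abs(Y), np.angle(Y)
    n = len(V)
    P, Q = np.zeros(n), np.zeros(n)
    for i in range(n):
        ang = delta[i] - delta - phi[i, :]     # delta_i - delta_j - phi_ij
        P[i] = V[i] * np.sum(V * absY[i, :] * np.cos(ang))
        Q[i] = V[i] * np.sum(V * absY[i, :] * np.sin(ang))
    return P, Q

# Hypothetical two-bus example: a single line with admittance 1 - 4j p.u.
Y = np.array([[1.0 - 4.0j, -1.0 + 4.0j],
              [-1.0 + 4.0j, 1.0 - 4.0j]])
P, Q = pcc_injections(np.array([1.0, 0.98]), np.deg2rad([0.0, -2.0]), Y)
```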

We consider two different control schemes that are widely used in the literature [9, 24].

IV-A1 Conventional droop control

In general, each microgrid employs a primary controller to stabilize the network and proportionally share the load among all the DGUs. This is achieved by setting up a control loop at a PE interface placed next to the microgrid. Droop controllers are the most extensively used control schemes, and they are based on active or reactive power decoupling [9]. The droop control based dynamics at the $i$th microgrid are given by

$\tau_{\delta,i} \, \dot{\delta}_i = -(\delta_i - \delta_i^{*}) - m_{\delta,i} (P_i - P_i^{*}) + w_{i,1}$   (17a)

$\tau_{V,i} \, \dot{V}_i = -(V_i - V_i^{*}) - m_{V,i} (Q_i - Q_i^{*}) + w_{i,2}$   (17b)

$w_i = G_i y_i$   (17c)

where $x_i = y_i = [\delta_i - \delta_i^{*}, \; V_i - V_i^{*}]^{\top}$; $w_i = [w_{i,1}, w_{i,2}]^{\top}$; $u_i = [P_i - P_i^{*}, \; Q_i - Q_i^{*}]^{\top}$; $V_i^{*}$, $\delta_i^{*}$, $P_i^{*}$, $Q_i^{*}$ are the nominal set point values of the voltage magnitude, phase angle, active power, and reactive power at the $i$th microgrid, respectively; $\tau_{\delta,i}$ and $\tau_{V,i}$ are tracking time constants; $m_{\delta,i}$ and $m_{V,i}$ are droop coefficients; and $G_i$ is an output-feedback gain matrix. It is easy to verify that the above dynamics conform to the general form given in (1)-(4), with the state, primary input, secondary input, and output identified as above. The functions $f_i$ can be derived from (17a)-(17b) and the coupling $\phi$ can be derived from (16a)-(16b).
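The sketch below writes out the subsystem vector field $f_i$ implied by (17a)-(17b) in the shifted coordinates above; the parameter values and the feedback gain $G_i$ are placeholders, and the primary input $u_i$ would in practice come from the power-flow coupling (16).

```python
def droop_rhs(x, u, w, p):
    """f_i from (17a)-(17b), with x_i = [delta_i - delta_i*, V_i - V_i*],
    u_i = [P_i - P_i*, Q_i - Q_i*], and secondary input w_i."""
    d_delta, d_volt = x
    d_P, d_Q = u
    return [(-d_delta - p["m_delta"] * d_P + w[0]) / p["tau_delta"],
            (-d_volt - p["m_V"] * d_Q + w[1]) / p["tau_V"]]

# Hypothetical parameters; the secondary input implements (17c): w_i = G_i y_i.
params = {"m_delta": 1.2, "m_V": 0.5, "tau_delta": 1.2, "tau_V": 2.0}
G = [[-0.1, 0.0], [0.0, -0.1]]
y = [0.05, -0.02]                                  # y_i = x_i
w = [sum(G[r][c] * y[c] for c in range(2)) for r in range(2)]
dxdt = droop_rhs(y, [0.3, -0.1], w, params)
```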

IV-A2 Quadratic droop control

Quadratic droop control [24] offers improvements both in theory and in practice compared to conventional droop control. The corresponding dynamics at the $i$th microgrid is given by

$\tau_{V,i} \, \dot{V}_i = -c_i V_i (V_i - V_i^{*}) - (Q_i - Q_i^{*}) + w_i$   (18)

such that (16b) holds true. Similar to the previous case, these dynamics also conform to the general form given in (1)-(4), with $x_i = y_i = V_i - V_i^{*}$, $u_i = Q_i - Q_i^{*}$, and $w_i$ the secondary (local) input. The function $f_i$ can be derived from (18) and the coupling $\phi$ can be derived from (16b).

In the remainder of this section, we consider specific test cases and validate the effectiveness of our algorithms for both conventional and quadratic droop control cases.

IV-B Test Case I: Three microgrids network with conventional angle droop control

As is standard in the power system literature, we assume that the dynamics of the voltage magnitudes are predominantly slow while the phase angles have fast dynamics, which is formalized by assuming $\tau_{V,i} \gg \tau_{\delta,i}$ in (17). This is referred to as time scale separation in the power systems literature [31]. With this assumption, we focus only on the behavior of the phase angle $\delta_i$ and active power $P_i$ for each subsystem $i$. The distribution line parameter values are selected as given in [13], and the control parameters and set-point values are given in Table I. For neural network training, we use a two-layer network, a training sample size of 2000, and a learning rate of 0.01. The bounds $\epsilon$ and $\beta$ in (11) are chosen to define the verification region for each subsystem, and the tolerance level $\delta$ for Algorithm 1 is set to a small value. The algorithm converged after only three iterations of the outer loop.

Parameter                 MG 1      MG 2      MG 3
$m_{\delta,i}$            1.2       1.0       0.8
$\tau_{\delta,i}$         1.2       1.2       1.2
$\delta_i^{*}$ (deg.)     0         0         0
$V_i^{*}$ (p.u.)          1         1.05      0.95
$P_i^{*}$ (p.u.)          0.1706    1.4578    -0.0013
TABLE I: Control parameters and reference setpoints of the three microgrids network [13]
Fig. 2: Visualization of the distributed neural Lyapunov functions ($V$) and their Lie derivatives ($\dot{V}$) for the three microgrids system. (a) Projection of the DNCL function onto a two-dimensional state plane; (b) projection of the DNL function onto the same plane; (c) projection of the Lie derivative of the DNCL function; (d) projection of the Lie derivative of the DNL function. The functions are positive and their Lie derivatives are negative, resulting in valid local Lyapunov functions.

Fig. 2(a) shows the DNCL function obtained using our algorithm, projected onto a two-dimensional plane of the state space. Similarly, Fig. 2(b) shows the DNL function obtained using our algorithm. Fig. 2(c) and Fig. 2(d) show the Lie derivatives ($\dot{V}$) of the DNCL function and the DNL function, respectively. We note that to compute the Lie derivatives, we considered the original nonlinear dynamics instead of the linear approximation of the coupling. It is clear from these figures that both the DNCL and DNL functions are positive and their Lie derivatives are negative over the specified domain; hence, they satisfy the requirements for a valid Lyapunov function given in Definition 2. The plots of these Lyapunov functions and their Lie derivatives on the rest of the projection planes follow the same pattern as in Fig. 2, and are omitted for brevity.

Fig. 3 shows the RoAs estimated using the Lyapunov functions obtained from our algorithm and the other approaches mentioned at the beginning of this section. It is clear from the figure that the RoAs of DNCL and DNL are significantly larger than those obtained through the classical approaches QCL and QL, and the RoA of DNCL is slightly larger than that of DNL because of the additional secondary controller in DNCL. We note that the effect of the secondary controller in enlarging the RoA becomes more prominent as the system dimension increases, as we show in the subsequent case studies. Thus, our approach is less conservative in stability assessment compared to traditional methods. Not surprisingly, the RoAs of DNCL and DNL are smaller than those of the centralized methods CNCL and CNL because of the performance limitations of distributed methods. However, our distributed method offers a significant reduction in training time, especially for large systems, as shown in Table IV.

Fig. 3: Comparison of the estimated region of attraction (RoA) from different approaches for the three microgrids network. A projection onto a two-dimensional plane of the state space (with the remaining states set to zero) is shown.

IV-C Test Case II: Four microgrids network with quadratic voltage droop control

Parameter                 MG 1     MG 2      MG 3     MG 4
$\tau_{V,i}$              1        1         1        1
$c_i$                     0.2      0.2       0.2      0.2
$\delta_i^{*}$ (deg.)     0        0         0        0
$V_i^{*}$ (p.u.)          1        1         1        1
$Q_i^{*}$ (p.u.)          0.013    -0.013    0.017    -0.009
TABLE II: Control parameters and reference setpoints of the four microgrids network [27]

We next consider a line network of four microgrids with quadratic voltage droop control (18). All the set-point values and control parameters are shown in Table II, and the distribution line parameters are taken from [27]. For neural network training, we use a two-layer network, a training sample size of 2000, and a learning rate of 0.01. The bounds $\epsilon$ and $\beta$ in (11) are chosen to define the verification region for each subsystem, and the tolerance level $\delta$ for Algorithm 1 is set to a small value. The algorithm converged after only four iterations of the outer loop.

Fig. 4: Comparison of the estimated region of attraction (RoA) from different approaches for the four microgrids network. Projections onto a two-dimensional plane of the state space (with the remaining states set to zero) are shown.

Fig. 4 shows the RoAs estimated using the Lyapunov functions obtained from our algorithm and the other approaches mentioned at the beginning of this section. We can make the following observations: DNCL has a larger RoA than QCL and QL; the RoA of DNL is larger than that of QL but comparable with that of QCL; and DNCL has a larger RoA than DNL. The RoAs of DNL and QCL are close to each other due to the LQR-based secondary controller associated with QCL. Thus, our methods outperform the traditional methods in estimating the RoA. Because of the complexity of the dynamics under quadratic droop control, the SMT verification associated with CNL and CNCL did not converge; hence, we omit the RoAs of CNL and CNCL from Fig. 4. Finally, we conclude that for such systems with complex nonlinear nodal dynamics, DNL and DNCL are better suited than centralized neural Lyapunov approaches.

IV-D Test Case III: IEEE 123-node test feeder network with conventional voltage and angle droop control

Fig. 5: IEEE 123-node test feeder converted into a five-microgrid system [25]
Parameter                 MG 1     MG 2     MG 3     MG 4     MG 5
$m_{\delta,i}$            8        12       12       9        10
$m_{V,i}$                 12.9     10.2     11.56    10.83    11.73
$\tau_{\delta,i}$         2.356    2.2      2.356    2.356    2.08
$\tau_{V,i}$              2.50     2.0      2.222    2.083    2.272
$\delta_i^{*}$ (deg.)     0        0        0        0        0
$V_i^{*}$ (p.u.)          1        1.003    1        1.003    0.999
$P_i^{*}$ (p.u.)          -0.13    0.57     -0.25    0.53     -0.72
$Q_i^{*}$ (p.u.)          0.88     -0.01    -0.1     0.08     -0.85
TABLE III: Control parameters and reference set-points of IEEE 123-node test feeder network [25]

Fig. 5 shows the IEEE 123-node test feeder system partitioned into five microgrids. We do not assume any time scale separation for this case, and we consider conventional voltage and angle droop controls as the primary control schemes. The set-point values and control parameters are given in Table III, and the distribution line parameters are selected as given in [25]. The system dynamics is as specified in (17). We linearize the input-output coupling relationship around the origin and convert the dynamics to the standard form given in (1)-(4). For neural network training, we use a two-layer network, a training sample size of 4000, and a learning rate of 0.01. The bounds $\epsilon$ and $\beta$ in (11) are chosen to define the verification region for each subsystem, and the tolerance level $\delta$ for Algorithm 1 is set to a small value. The algorithm converged after only four iterations of the outer loop.

Fig. 6: Comparison of the estimated region of attraction (RoA) from DNCL and DNL for the IEEE 123-node test feeder network. Projections onto a two-dimensional plane of the state space (with the remaining eight states set to zero) are shown.

Fig. 6 shows the RoAs estimated using our DNCL and DNL functions. Similar to the previous case studies, the RoA of DNCL is larger than that of DNL due to the secondary controller. We note that, due to the large-scale nature of the system, the RoAs estimated using the classical QCL and QL approaches are significantly smaller (and barely visible) compared to those of DNCL and DNL, and hence we have omitted them from the figure. Moreover, again due to the large-scale nature of the system, the SMT-based verification for CNCL and CNL failed to converge even after a long time. Since the CNCL and CNL function learning did not converge, we were also unable to estimate any RoAs based on these functions, and they are omitted from the figure. We note that this further emphasizes the computational benefit of our distributed approach, which converges quickly even with SMT-based verification. Table IV shows the training time comparison.

Network                   DNL (sec.)   DNCL (sec.)   CNL (sec.)         CNCL (sec.)
IEEE 123-node network     1679         1383.42       did not converge   did not converge
TABLE IV: Comparison of the training time for centralized (CNCL, CNL) and distributed (DNCL, DNL) learning approaches.

IV-E Advantages Due to the Secondary Controller

We observe that the learned secondary controller in DNCL provides faster stabilization and robustness to perturbations, in addition to the larger RoA estimate, when compared to DNL, which has no secondary controller. The larger RoA of DNCL is clear from Figs. 3, 4, and 6. The faster convergence is demonstrated through a time domain simulation in Fig. 7. The increased robustness due to the secondary controller is demonstrated in Fig. 8, where we add a small perturbation noise to the system state in the time domain simulation. From the figure, it is clear that the secondary controller is able to quickly suppress the disturbance.

Fig. 7: Time domain simulation comparison for the 123-node feeder from a fixed initial state. The secondary controller (DNCL) stabilizes the system faster than DNL.
Fig. 8: Time domain simulation comparison for the four microgrids network from a fixed initial state. The secondary controller (DNCL) is able to suppress the disturbance, resulting in a smoother trajectory.

V Conclusion

This paper proposes a novel distributed learning-based framework for assessing the Lyapunov stability of a class of networked nonlinear systems in which each subsystem is dissipative. The objective of the proposed framework is to construct a Lyapunov function in a distributed manner and to estimate the associated region of attraction for the networked system. We begin by leveraging neural network function approximation to learn a storage function for each subsystem such that a local dissipativity property is satisfied by each subsystem. We next use a satisfiability modulo theories (SMT) solver based falsifier that verifies the local dissipativity of each subsystem by establishing the absence of counterexamples to the dissipativity property certified by the neural network approximation. Finally, we verify network-level stability by using an alternating direction method of multipliers (ADMM) approach to update the storage function of each subsystem in a distributed manner until a global stability condition for the network of dissipative subsystems is satisfied. We demonstrate the performance of the proposed algorithm and its advantages on three different case studies in microgrid networks. We note that this paper focuses on the synthesis of linear secondary controllers. Future work will focus on learning reinforcement learning based nonlinear controllers that would further expand the RoA of the system and enhance its stability margins.

References

  • [1] E. Agarwal, S. Sivaranjani, V. Gupta, and P. J. Antsaklis (2020) Distributed synthesis of local controllers for networked systems with arbitrary interconnection topologies. IEEE Transactions on Automatic Control 66 (2), pp. 683–698.
  • [2] M. Arcak, C. Meissen, and A. Packard (2016) Networks of Dissipative Systems: Compositional Certification of Stability, Performance, and Safety. Springer.
  • [3] C. Barrett and C. Tinelli (2018) Satisfiability modulo theories. In Handbook of Model Checking, pp. 305–343.
  • [4] H. Chang, C. Chu, and G. Cauley (1995) Direct stability analysis of electric power systems using energy functions: theory, applications, and perspective. Proceedings of the IEEE 83 (11), pp. 1497–1529.
  • [5] Y. Chang, N. Roohi, and S. Gao (2019) Neural Lyapunov control. Advances in Neural Information Processing Systems.
  • [6] H. Chiang (1989) Study of the existence of energy functions for power systems with losses. IEEE Transactions on Circuits and Systems 36 (11), pp. 1423–1429.
  • [7] S. Gao, J. Avigad, and E. M. Clarke (2012) δ-Complete decision procedures for satisfiability over the reals. In International Joint Conference on Automated Reasoning, pp. 286–300.
  • [8] S. Gao, S. Kong, and E. M. Clarke (2013) dReal: an SMT solver for nonlinear theories over the reals. In International Conference on Automated Deduction, pp. 208–214.
  • [9] J. M. Guerrero, J. C. Vasquez, J. Matas, L. G. De Vicuña, and M. Castilla (2010) Hierarchical control of droop-controlled AC and DC microgrids: a general approach toward standardization. IEEE Transactions on Industrial Electronics 58 (1), pp. 158–172.
  • [10] J. He, X. Wu, X. Wu, Y. Xu, and J. M. Guerrero (2019) Small-signal stability analysis and optimal parameters design of microgrid clusters. IEEE Access 7, pp. 36896–36909.
  • [11] D. Henrion and A. Garulli (2005) Positive Polynomials in Control. Vol. 312, Springer Science & Business Media.
  • [12] T. Huang, S. Gao, X. Long, and L. Xie (2021) A neural Lyapunov approach to transient stability assessment in interconnected microgrids. In HICSS, pp. 1–10.
  • [13] T. Huang, S. Gao, and L. Xie (2021) A neural Lyapunov approach to transient stability assessment of power electronics-interfaced networked microgrids. IEEE Transactions on Smart Grid 13 (1), pp. 106–118.
  • [14] Z. Jarvis-Wloszek, R. Feeley, W. Tan, K. Sun, and A. Packard (2003) Some controls applications of sum of squares programming. In IEEE Conference on Decision and Control (CDC), pp. 4676–4681.
  • [15] A. Jena, T. Huang, S. Sivaranjani, D. Kalathil, and L. Xie (2021) Distributed learning-based stability assessment for large scale networks of dissipative systems. In 2021 60th IEEE Conference on Decision and Control (CDC), pp. 1509–1514.
  • [16] F. Katiraei, R. Iravani, N. Hatziargyriou, and A. Dimeas (2008) Microgrids management. IEEE Power and Energy Magazine 6 (3), pp. 54–65.
  • [17] H. K. Khalil and J. W. Grizzle (2002) Nonlinear Systems. Vol. 3, Prentice Hall, Upper Saddle River, NJ.
  • [18] G. Manek and J. Z. Kolter (2019) Learning stable deep dynamics models. Advances in Neural Information Processing Systems.
  • [19] P. Moylan and D. Hill (1979) Tests for stability and instability of interconnected systems. IEEE Transactions on Automatic Control 24 (4), pp. 574–575.
  • [20] F. Okou, L. Dessaint, and O. Akhrif (2005) Power systems stability enhancement using a wide-area signals based hierarchical controller. IEEE Transactions on Power Systems 20 (3), pp. 1465–1477.
  • [21] D. E. Olivares, A. Mehrizi-Sani, A. H. Etemadi, C. A. Cañizares, R. Iravani, M. Kazerani, A. H. Hajimiragha, O. Gomis-Bellmunt, M. Saeedifard, R. Palma-Behnke, et al. (2014) Trends in microgrid control. IEEE Transactions on Smart Grid 5 (4), pp. 1905–1919.
  • [22] P. A. Parrilo (2000) Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. Ph.D. Thesis, California Institute of Technology.
  • [23] P. Shamsi and B. Fahimi (2014) Stability assessment of a DC distribution network in a hybrid micro-grid application. IEEE Transactions on Smart Grid 5 (5), pp. 2527–2534.
  • [24] J. W. Simpson-Porco, F. Dörfler, and F. Bullo (2016) Voltage stabilization in microgrids via quadratic droop control. IEEE Transactions on Automatic Control 62 (3), pp. 1239–1253.
  • [25] S. Sivaranjani, E. Agarwal, V. Gupta, P. Antsaklis, and L. Xie (2020) Distributed mixed voltage angle and frequency droop control of microgrid interconnections with loss of distribution-PMU measurements. IEEE Open Access Journal of Power and Energy 8, pp. 45–56.
  • [26] Y. Song, D. J. Hill, T. Liu, and Y. Zheng (2017) A distributed framework for stability evaluation and enhancement of inverter-based microgrids. IEEE Transactions on Smart Grid 8 (6), pp. 3020–3034.
  • [27] A. Teixeira, K. Paridari, H. Sandberg, and K. H. Johansson (2015) Voltage control for interconnected microgrids under adversarial actions. In 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), pp. 1–8.
  • [28] U. Topcu, A. K. Packard, P. Seiler, and G. J. Balas (2009) Robust region-of-attraction estimation. IEEE Transactions on Automatic Control 55 (1), pp. 137–142.
  • [29] U. Topcu, A. Packard, and P. Seiler (2008) Local stability analysis using simulations and sum-of-squares programming. Automatica 44 (10), pp. 2669–2675.
  • [30] M. Vidyasagar (1979) New passivity-type criteria for large-scale interconnected systems. IEEE Transactions on Automatic Control 24 (4), pp. 575–579.
  • [31] J. R. Winkelman, J. H. Chow, J. J. Allemong, and P. V. Kokotovic (1980) Multi-time-scale analysis of a power system. Automatica 16 (1), pp. 35–43.
  • [32] R. Zamora and A. K. Srivastava (2016) Multi-layer architecture for voltage and frequency control in networked microgrids. IEEE Transactions on Smart Grid 9 (3), pp. 2076–2085.