Impact of Communication Delay on Asynchronous Distributed Optimal Power Flow Using ADMM

by Junyao Guo, et al.
ETH Zurich
Carnegie Mellon University

Distributed optimization has attracted significant attention in the operation of power systems in recent years, where a large area is decomposed into smaller control regions, each solving a local optimization problem with periodic information exchange with neighboring regions. However, most distributed optimization methods are iterative and require synchronization of all regions at each iteration, which is hard to achieve without a centralized coordinator and may lead to under-utilization of computation resources due to the heterogeneity of the regions. To address these limitations of synchronous schemes, this paper investigates the applicability of asynchronous distributed optimization methods to power system optimization. In particular, we focus on solving the AC Optimal Power Flow problem and propose an algorithmic framework based on the Alternating Direction Method of Multipliers (ADMM) that allows the regions to perform local updates with information received from a subset of, but not all, neighbors. Through experimental studies, we demonstrate that the convergence performance of the proposed asynchronous scheme depends on the communication delay of passing messages among the regions. Under mild communication delays, the proposed scheme achieves comparable or even faster convergence than its synchronous counterpart, making it a good alternative to centralized or synchronous distributed optimization approaches.





I Introduction

Optimizing the operation of electric power systems has become an increasingly challenging task due to the growing number of control variables associated with distributed generation and flexible loads, and the system's inherently non-linear physical characteristics. Conventionally, the optimal management of a large-scale power system is solved as a single optimization problem at a centralized location, which limits the size of the problem that can be solved in a given amount of time. Moreover, as all measurements need to be collected by a central controller, the communication load on the backbone network is high, which may result in large communication delays or failures of data delivery. To address these issues, there has been growing interest in distributed optimization, where the optimization problem associated with a large region is decomposed into subproblems, each associated with a smaller region. These regions are connected via transmission lines and therefore need to communicate periodically to achieve an overall optimal solution for operating the entire grid.

Many iterative distributed optimization methods have been proposed [1][2], and most of them require synchronization of the subproblems; i.e., at each iteration, all subproblems need to be solved before the next iteration starts. However, synchronization may not be easily achieved in a distributed system without a centralized coordinator, because a region does not know a priori how long to wait for other regions before proceeding to its next iteration. Furthermore, even if synchronization is achievable, it may lead to an inefficient implementation of distributed methods. The sizes and complexities of the control regions depend on the power system's physical configuration. Therefore, these regions are usually heterogeneous and require different amounts of computation time. Moreover, the communication delays among these regions are also heterogeneous, as they depend on the communication infrastructures and network topologies deployed. In a synchronous scheme, since all regions need to wait for the slowest region to finish its computation or data transmission, some regions remain idle most of the time, which results in under-utilization of both computation and communication resources.

The synchronization issue has been systematically studied in the field of distributed computing, with seminal works [2][3]. While the concept of asynchronous iterative computing is not new, it remains an open question whether those methods can be applied to complex systems such as power systems, which have non-convex physical characteristics. Most existing asynchronous distributed optimization methods can only tackle convex optimization problems [2, 3, 4], and therefore can only be applied to solve approximations of the non-convex problems arising in power systems [5][6][7]. Several asynchronous algorithms that tackle problems with some level of non-convexity have recently been proposed, with applications in wireless sensor networks [8] and machine learning [9]. However, the problem considered in those studies is the consensus problem, where the nodes are usually homogeneous and local updates are easy to compute. Many distributed optimization applications in power systems entail a different problem formulation: a geographical partitioning of the system is usually considered, and neighboring control regions are coupled by non-convex AC power flow constraints, which are not handled in [8][9]. Moreover, unlike the schemes in [5][9], which require a master node and are designed for a parallel computing environment with a cluster of computers, the computational entities for distributed optimization in power systems are placed at distant locations, and the existence of any centralized node might harm the scalability of the algorithm.

To provide insight into the applicability of asynchronous distributed methods to power system optimization, in this paper we propose an algorithmic framework that allows each region to perform local updates in an asynchronous fashion without any centralized coordination. The optimization problem studied is the non-convex AC Optimal Power Flow (AC OPF) problem. The proposed framework assumes a message-passing model where each region is allowed to solve its local OPF problem with partial, but not all, updated information received from its neighbors. Our proposed algorithm is based on the ADMM method used in [10], which we extend to fit the asynchronous framework, with convergence analysis provided. In particular, we study the impact of communication delay on the convergence performance of the proposed method, whereas the aforementioned studies have not investigated the role of communications in their proposed asynchronous methods. Through experimental studies, we show that under mild communication delays, the proposed asynchronous ADMM approach can reduce the execution time compared with its synchronous counterpart by reducing the waiting time for slow regions. However, the performance of asynchronous ADMM deteriorates under large communication delays. These findings indicate that asynchronous distributed computing schemes can be beneficial for the operation of power systems, with the premise that the communication infrastructure or schemes are carefully chosen to support relatively fast communications.

The rest of the paper is organized as follows: Section II formulates the AC OPF problem. Section III presents the synchronous distributed ADMM and its application to solving the AC OPF problem. Section IV proposes an asynchronous distributed ADMM approach based on a message-passing model that is well-suited for the considered partial mesh network. Simulation results are given in Section V where we demonstrate the impact of communication delays on the level of asynchronism of the system, the choice of algorithm parameter, and the convergence speed and solution quality of the proposed asynchronous approach. Finally, Section VI concludes the paper and proposes possible future directions.

II Problem Formulation

We consider the standard AC OPF problem, where the objective is to minimize the total generation cost. The OPF problem is formulated as follows:

$$\min_{V, P_G, Q_G} \; \sum_{i=1}^{N} \left( a_i P_{Gi}^2 + b_i P_{Gi} + c_i \right) \tag{1a}$$

subject to

$$P_{Gi} - P_{Di} = \mathrm{Re}\Big\{ V_i \sum_{j \in \Omega_i \cup \{i\}} Y_{ij}^* V_j^* \Big\}, \tag{1b}$$
$$Q_{Gi} - Q_{Di} = \mathrm{Im}\Big\{ V_i \sum_{j \in \Omega_i \cup \{i\}} Y_{ij}^* V_j^* \Big\}, \tag{1c}$$
$$P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max}, \quad Q_{Gi}^{\min} \le Q_{Gi} \le Q_{Gi}^{\max}, \tag{1d}$$
$$V_i^{\min} \le |V_i| \le V_i^{\max}, \tag{1e}$$

for $i = 1, \ldots, N$, where $N$ is the number of buses. Here, $V_i$, $P_{Gi}$ and $Q_{Gi}$ denote the complex voltage, the active power generation and the reactive power generation at bus $i$, and $P_{Di}$ and $Q_{Di}$ denote the active and reactive power demand. $a_i$, $b_i$ and $c_i$ are the cost parameters of the generator at bus $i$. $Y_{ij}$ is the $(i,j)$-th entry of the bus admittance matrix, and $\Omega_i$ is the set of buses connected to bus $i$. This problem is non-convex due to the non-convexity of the AC power flow equations (1b)-(1c). We omit the line thermal limits in this paper to keep the presentation simple; in [11] the inclusion of this constraint is discussed and it is shown that the resulting problem can still be solved by the distributed ADMM approach.
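As a concrete illustration, the power-balance constraints (1b)-(1c) can be evaluated numerically from the complex bus voltages and the bus admittance matrix. The sketch below is ours, not from the paper; the function name and the two-bus example are illustrative assumptions:

```python
import numpy as np

def power_balance_residuals(V, Y, P_G, Q_G, P_D, Q_D):
    """Residuals of the AC power-balance constraints at every bus.

    V: complex bus voltages; Y: bus admittance matrix;
    P_G, Q_G: active/reactive generation; P_D, Q_D: demand.
    Both residual vectors are zero at a feasible power-flow point.
    """
    S_inj = V * np.conj(Y @ V)       # net complex power injected at each bus
    dP = P_G - P_D - S_inj.real      # active-power balance, cf. (1b)
    dQ = Q_G - Q_D - S_inj.imag      # reactive-power balance, cf. (1c)
    return dP, dQ

# Hypothetical two-bus example: one line with series admittance y.
y = 1.0 / (0.01 + 0.1j)
Y = np.array([[y, -y], [-y, y]])
V = np.array([1.0 + 0.0j, 0.98 * np.exp(-0.02j)])
S = V * np.conj(Y @ V)               # dispatch that exactly balances the flows
dP, dQ = power_balance_residuals(V, Y, S.real, S.imag,
                                 np.zeros(2), np.zeros(2))
```

Here the generation is chosen to match the injections exactly, so both residual vectors vanish; in the OPF, (1b)-(1c) are enforced as constraints while the generation cost is minimized.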

III Synchronous Distributed ADMM

Geographical decomposition of the system is considered in this paper, where a power grid is partitioned into a number of smaller regions, each solving a local OPF problem. In the following analysis, we use $M$, $\mathcal{E}$ and $\mathcal{N}_k$ to denote the total number of regions, the set of inter-region tie lines and the set of neighboring regions that connect to region $k$ via transmission lines, respectively. We also introduce $\mathcal{R}_k$ to denote the set of buses included in region $k$, with $\bigcup_{k=1}^{M} \mathcal{R}_k = \{1, \ldots, N\}$.

In Problem (1), the power flow balance constraints (1b)-(1c) at the boundary buses couple the neighboring regions, which prevents them from solving local OPF problems independently. To remove such coupling, the voltages at the boundary buses of each region are duplicated. Assume region $k$ and region $l$ are connected via tie line $(i,j)$ with $i \in \mathcal{R}_k$ and $j \in \mathcal{R}_l$. The voltages at bus $i$ and bus $j$ are duplicated, and the copies assigned to region $k$ are denoted by $V_i^k$ and $V_j^k$. Similarly, region $l$ is assigned the copies $V_i^l$ and $V_j^l$. To ensure equivalence with the original problem, the constraints $V_i^k = V_i^l$ and $V_j^k = V_j^l$ are added to the problem. The set $\mathcal{R}_k^+$ is also introduced to denote the joint set of $\mathcal{R}_k$ and the duplicates of buses in neighboring regions that are directly connected to buses in $\mathcal{R}_k$.

To apply the distributed ADMM approach used in [10][11], for each tie line $(i,j) \in \mathcal{E}$ we introduce two auxiliary variables $z_{ij}^+$ and $z_{ij}^-$ and transform the aforementioned additional constraints into the following equivalent form:

$$c^+ \frac{V_i^k + V_j^k}{2} = z_{ij}^+ = c^+ \frac{V_i^l + V_j^l}{2}, \qquad c^- \frac{V_i^k - V_j^k}{2} = z_{ij}^- = c^- \frac{V_i^l - V_j^l}{2}. \tag{2}$$

Here $c^+$ and $c^-$ are scaling factors, where $c^-$ is set to be larger than $c^+$ to put more emphasis on the voltage difference, which is strongly related to the line flow through tie line $(i,j)$ [10].

By introducing $x_k$ and $z_k$ to denote all the primal variables and the auxiliary variables in region $k$, respectively, Problem (1) can now be expressed in terms of local OPF problems:

$$\min_{x, z} \; \sum_{k=1}^{M} f_k(x_k) \tag{3a}$$

subject to

$$A_k x_k = z_k, \quad k = 1, \ldots, M, \tag{3b}$$
$$x_k \in \mathcal{X}_k, \quad k = 1, \ldots, M, \tag{3c}$$

where $f_k$ is the local objective function of region $k$. Constraint (3b) is acquired by expressing (2) in terms of $x_k$ and $z_k$. Constraints (3c) collect the local feasibility constraints (1b)-(1d) for the buses in $\mathcal{R}_k$ and constraint (1e) for the buses in $\mathcal{R}_k^+$. Let $\lambda_k$ denote the Lagrange multiplier associated with constraint (3b) and define the following Augmented Lagrangian function:

$$L_{\rho} = \sum_{k=1}^{M} \Big( f_k(x_k) + \lambda_k^{\top} (A_k x_k - z_k) + \frac{\rho_k}{2} \| A_k x_k - z_k \|_2^2 \Big), \tag{4}$$
where $\rho_k$ is a penalty parameter which can be different for different regions. The ADMM method minimizes (4) by iteratively carrying out the following updating steps [12]:

$$z^{\nu+1} = \arg\min_{z} \; L_{\rho}(x^{\nu}, z, \lambda^{\nu}), \tag{5a}$$
$$x_k^{\nu+1} = \arg\min_{x_k \in \mathcal{X}_k} \; L_{\rho}(x_k, z^{\nu+1}, \lambda_k^{\nu}), \quad k = 1, \ldots, M, \tag{5b}$$
$$\lambda_k^{\nu+1} = \lambda_k^{\nu} + \rho_k \left( A_k x_k^{\nu+1} - z_k^{\nu+1} \right), \quad k = 1, \ldots, M, \tag{5c}$$

where $\nu$ denotes the iteration counter. With $z$ fixed, each subproblem in the $x$-update only contains the local variables $x_k$ and can be solved independently of the others. The $\lambda$-update can also be performed locally. The $z$-update, in contrast, requires information from the two neighboring regions that share a tie line. By minimizing (4), $z_{ij}$ for any tie line $(i,j)$ connecting regions $k$ and $l$ can be calculated by

$$z_{ij} = \frac{\rho_k (A_k x_k)_{ij} + \lambda_{k,ij} + \rho_l (A_l x_l)_{ij} + \lambda_{l,ij}}{\rho_k + \rho_l}. \tag{6}$$

Here, the subscript $(\cdot)_{ij}$ selects the entries in $z$, $\lambda_k$ and the rows in $A_k$ that correspond to tie line $(i,j)$. Thereby, region $k$ can update $z_{ij}$ locally once it receives $\rho_l$, $(A_l x_l)_{ij}$ and $\lambda_{l,ij}$ from region $l$.
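For a single scalar tie-line entry, the $z$-update (6) is just the penalty-weighted average of the two regions' local copies, shifted by the multipliers. A minimal sketch (the variable names are ours):

```python
def z_update(Ax_k, lam_k, rho_k, Ax_l, lam_l, rho_l):
    """Minimizer of the augmented Lagrangian (4) over one shared entry z:
    setting the derivative of
        lam_k*(Ax_k - z) + rho_k/2*(Ax_k - z)**2 + (same terms for l)
    to zero gives a penalty-weighted average of the two local copies."""
    return (rho_k * Ax_k + lam_k + rho_l * Ax_l + lam_l) / (rho_k + rho_l)

z = z_update(1.0, 0.5, 2.0, 3.0, -0.2, 4.0)
# Stationarity check: the derivative of the z-terms of (4) vanishes at z.
grad = -0.5 - 2.0 * (1.0 - z) - (-0.2) - 4.0 * (3.0 - z)
```

With equal penalties this reduces to the plain average of the multiplier-shifted copies, which is the familiar consensus-ADMM $z$-step [12].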

To ensure the convergence of the primal variables and multipliers to finite values for this non-convex problem, the penalty is usually increased over the iterative process. Similar to the method used in [10], we apply the following updating rule:

$$\rho_k \leftarrow \begin{cases} \tau \rho_k & \text{if } \| r_k^{\nu+1} \| > \theta \| r_k^{\nu} \| \\ \rho_k & \text{otherwise} \end{cases} \tag{7a}$$
$$\rho_k \leftarrow \max_{l \in \mathcal{N}_k \cup \{k\}} \rho_l \tag{7b}$$

with constants $\tau > 1$ and $0 < \theta < 1$. Here, $r_k = A_k x_k - z_k$ is the primal residue [12]. The penalty is first updated for each region via (7a) after the $\lambda$-update, and the updated penalty is exchanged between neighboring regions. Then $\rho_k$ is adjusted via (7b) by using the maximum penalty in the neighborhood of region $k$.
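The two-stage penalty update can be sketched as follows; the concrete values of $\tau$ and $\theta$ below are illustrative placeholders, not the ones used in the experiments:

```python
def penalty_local(rho, res_norm, prev_res_norm, tau=1.2, theta=0.9):
    """Rule (7a): grow the penalty by a factor tau whenever the primal
    residue fails to shrink by at least a factor theta."""
    return tau * rho if res_norm > theta * prev_res_norm else rho

def penalty_neighborhood(rho_k, neighbor_rhos):
    """Rule (7b): adopt the largest penalty in the closed neighborhood
    of region k, keeping neighboring penalties from drifting apart."""
    return max([rho_k] + list(neighbor_rhos))
```

Rule (7b) is what makes the exchanged penalties consistent across a neighborhood before the next local solve.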

The synchronous distributed ADMM algorithm is presented in Algorithm 1. The stopping criterion is that both the primal residue and the maximum constraint mismatch are smaller than some tolerance $\epsilon$ [12][11]. Under the considered non-convex setting, the convergence of this ADMM approach to a feasible solution is proved in [10] under the assumptions that both the primal and dual variables remain bounded and that a local minimum can be identified when solving the local OPF problems. Note that with an increasing penalty, Algorithm 1 generally converges to a suboptimal solution around a local optimal point, because it becomes difficult to find a local optimum in the $x$-update once the penalty is large. However, a suboptimal solution that is close to the optimal point satisfies the requirements of many practical applications.

1: Initialization: given $x_k^0$, set $\nu = 0$, $\lambda_k^0 = 0$, $\rho_k = \rho^0$ for all $k$
2: Repeat
3:     Set $\nu \leftarrow \nu + 1$
4:     Each region $k$:
           Update $z^{\nu}$ using (6)
           Update $x_k^{\nu}$ by solving the local OPF problem
           Update $\lambda_k^{\nu}$ via the multiplier update
           Update $\rho_k$ according to (7a)
5:     All regions exchange the updated $A_k x_k^{\nu}$, $\lambda_k^{\nu}$ and $\rho_k$
6:     Update $\rho_k$ using (7b)
7: Until a predefined stopping criterion is satisfied
Algorithm 1 Synchronous ADMM
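To make the update order of Algorithm 1 concrete, the sketch below runs the same $z$ / $x$ / $\lambda$ sequence on a toy convex consensus problem, $\min \sum_k (x_k - a_k)^2$ s.t. $x_k = z$, standing in for the per-region OPF subproblems. It illustrates the loop structure only, not the OPF itself:

```python
import numpy as np

def sync_admm_toy(a, rho=1.0, iters=200):
    """Synchronous ADMM with Algorithm 1's update order on
    min sum_k (x_k - a_k)^2  s.t.  x_k = z  for all regions k."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)
    lam = np.zeros_like(a)
    z = 0.0
    for _ in range(iters):
        z = np.mean(x + lam / rho)                   # z-update (shared variable)
        x = (2.0 * a - lam + rho * z) / (2.0 + rho)  # local solves, in parallel
        lam = lam + rho * (x - z)                    # multiplier update
    return z, x

z, x = sync_admm_toy([1.0, 2.0, 6.0])
```

The iterates converge to the consensus optimum $z = \mathrm{mean}(a) = 3$; in Algorithm 1 the closed-form local solve is replaced by a non-convex OPF and the fixed penalty by the update (7).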

IV Asynchronous Distributed ADMM

In this section, we propose an algorithmic framework that extends Algorithm 1 to an asynchronous setting. The proposed framework utilizes the message-passing model [3], where each region determines when to carry out its next iteration of local computation based on the messages it receives from its neighbors. We say that a neighbor $l$ has 'arrived' at region $k$ if the updated information of $l$ has been received by $k$. We assume that each region always sends out its updated information to its neighbors and that this information eventually arrives at its destination; i.e., the delay is bounded. This assumption guarantees that each region receives information from all of its neighbors once in a while, which is in line with the assumption of partial asynchronism used in related studies [2]. Region $k$ is allowed to update its local variables after it receives new information from at least $\bar{A}_k$ neighbors, with $1 \le \bar{A}_k \le |\mathcal{N}_k|$ and $|\mathcal{N}_k|$ denoting the total number of neighbors of region $k$. At a minimum, any region should wait for at least one neighbor, because otherwise its local update makes no progress without any new information. Figure 1 illustrates the proposed asynchronous scheme, assuming three regions each connected to the other two. The blue bars denote local computation and the grey lines denote message passing. As shown in Fig. 1(a), setting $\bar{A}_k = |\mathcal{N}_k|$ recovers the synchronous ADMM algorithm, i.e., Algorithm 1, where each region performs local computation only after all neighbors have arrived. Figure 1(b) shows an asynchronous case where each region may perform a local update with only one neighbor arrived, which reduces the waiting/idle time for some regions.

Fig. 1: Illustration of (a) synchronous and (b) asynchronous distributed ADMM.

Algorithm 2 presents the asynchronous ADMM approach from each region's perspective, with $\nu_k$ denoting the local iteration counter. The proposed approach does not require any centralized coordination and is applicable to a partial mesh network where each region only communicates with its neighbors. During the initialization, each region solves its local OPF problem without considering its coupling with neighboring regions. In the $z$-update, only the entries of $z$ associated with the arrived neighbors are updated, while the entries associated with the unarrived neighbors retain their last updated values. Similarly, $\rho_k$ is updated using the most recent penalties received from the neighbors.

1: Initialization: given $x_k^0$, set $\nu_k = 0$, $\lambda_k^0 = 0$, $\rho_k = \rho^0$
2: Update $x_k^0$ by solving the local OPF
       Broadcast the updated information to region $l$, $\forall l \in \mathcal{N}_k$
3: Repeat
4:     Wait until at least $\bar{A}_k$ neighbors arrive
5:     Set $\nu_k \leftarrow \nu_k + 1$
6:     Update:
           the entries of $z$ associated with arrived neighbors using (6)
           $\rho_k$ using (7b)
           $x_k^{\nu_k}$ by solving the local OPF problem
           $\lambda_k^{\nu_k}$ via the multiplier update
           $\rho_k$ according to (7a)
7:     Broadcast the updated information to region $l$, $\forall l \in \mathcal{N}_k$
8: Until a predefined stopping criterion is satisfied
Algorithm 2 Asynchronous ADMM in Region $k$

Under the assumption of bounded delay and some other mild conditions on the considered non-convex problem formulation, the sequence generated by Algorithm 2 asymptotically converges to a KKT stationary point of Problem (3) with local optimality if a fixed penalty is used. However, as in Algorithm 1, a suboptimal solution might be obtained due to the increasing penalty. A global iteration counter, increased by 1 whenever any region carries out a local update, is used purely for the purpose of analysis and is not needed for the implementation of Algorithm 2. A rigorous proof of the convergence property of Algorithm 2 will appear in a future publication; we state some of the sufficient conditions here. For asymptotic convergence, the local feasible set $\mathcal{X}_k$ should be a compact smooth manifold, which is indeed the case for the OPF problem, and the multipliers should be kept bounded by projecting them onto a compact box. Furthermore, a local minimum should be identified in the local $x$-update, which is usually observed in our empirical studies and is particularly the case when a good partition of the problem is derived such that the coupling among the regions is small [11].

An important parameter in Algorithm 2 is the number of neighbors that must arrive before the next local update. The threshold $\bar{A}_k$ only sets a lower bound on the number of neighbors to wait for; in practice, the number of actually arrived neighbors is highly dependent on the communication delays of passing messages. If the communication delay is small compared to the local computation time, it is highly likely that the messages from many neighbors arrive while region $k$ is still solving its local problem. Thereby, the number of actually arrived neighbors can be much larger than the predefined lower bound, and region $k$ can immediately start its next iteration without waiting. On the other hand, if the communication delay is large, then region $k$ indeed has to wait even to receive information from a single neighbor. In summary, the communication delays determine the severity of asynchronism among the regions: the larger the delay, the more severe the asynchronism.
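The relationship between delay and arrived neighbors can be reproduced with a small discrete-event simulation. This is our own sketch; the compute times and delays below are arbitrary, not those of Section V. Each region solves for a fixed duration, broadcasts, and then waits until at least `a_min` fresh neighbor messages are in its inbox:

```python
def simulate_arrivals(compute, delay, adjacency, a_min=1, rounds=30):
    """Average number of arrived neighbors per local update, per region.

    compute[k]: local solve time of region k; delay: one-way message
    delay (same for all links here); adjacency[k]: neighbors of k.
    """
    K = len(compute)
    inbox = [[] for _ in range(K)]          # unconsumed (arrival_time, sender)
    done = [compute[k] for k in range(K)]   # end of the initial local solve
    counts = [[] for _ in range(K)]
    updates = [0] * K
    for k in range(K):                      # initial broadcast (Alg. 2, step 2)
        for l in adjacency[k]:
            inbox[l].append((done[k] + delay, k))

    def start_time(k):
        # earliest time k has finished computing AND holds a_min messages
        times = sorted(t for t, _ in inbox[k])
        if len(times) < a_min:
            return float('inf')
        return max(done[k], times[a_min - 1])

    while min(updates) < rounds:
        k = min(range(K), key=lambda r: (start_time(r), r))  # next event
        t0 = start_time(k)
        arrived = {s for t, s in inbox[k] if t <= t0}
        counts[k].append(len(arrived))
        inbox[k] = [(t, s) for t, s in inbox[k] if t > t0]   # consume
        done[k] = t0 + compute[k]
        updates[k] += 1
        for l in adjacency[k]:                               # re-broadcast
            inbox[l].append((done[k] + delay, k))
    return [sum(c) / len(c) for c in counts]

# Three fully connected regions with heterogeneous solve times.
adj = [[1, 2], [0, 2], [0, 1]]
fast = simulate_arrivals([0.05, 0.1, 0.2], delay=0.001, adjacency=adj)
slow = simulate_arrivals([0.05, 0.1, 0.2], delay=1.0, adjacency=adj)
```

With a near-zero delay the slower regions find most neighbor messages already waiting, whereas with a delay dominating the solve times every region proceeds with roughly one fresh neighbor per update, mirroring the trend reported in Table II.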

Due to the varying severity of asynchronism, the penalty needs to be carefully chosen to ensure convergence to a solution of good quality. The penalty plays a critical role even in Algorithm 1: the larger the increasing rate $\tau$ is, the faster the penalty grows and, consequently, the faster ADMM converges. This, however, generally leads to solutions of worse quality, since the algorithm proceeds more aggressively toward any feasible point regardless of its optimality. In the asynchronous case, the penalty needs to be increased at a much slower pace, especially under circumstances where the number of arrived neighbors is small. This is because, with partial and delayed information from its neighbors, a region tends to make biased decisions and therefore needs to proceed with its local iterations with additional caution.

V Simulation Results

It is generally hard to quantify the convergence performance, such as convergence speed and solution quality, of any asynchronous distributed approach. In this section, we conduct experimental studies to demonstrate the impact of communication delays on the number of arrived neighbors, the choice of the penalty parameter, and the convergence performance of Algorithm 2.

V-A Experiment Setup

The simulations are conducted mainly using the IEEE 118-bus test system. This system is partitioned into eight regions using the partitioning approach proposed in [13] that has been shown to improve the performance of the ADMM method [11]. For each region determined by this partition, the number of neighbors and the average computation time of solving the local OPF problem at one iteration are shown in Table I.

Region                 1     2     3     4     5     6     7     8
Number of neighbors    3     2     3     1     4     3     3     1
Computation time (s)   0.31  0.13  0.09  0.12  0.15  0.14  0.13  0.11
TABLE I: Number of neighbors and local computation time of each region

Algorithm 2 is emulated in Matlab R2016a on a personal computer, following the process illustrated in Fig. 1. The initialization of the voltages uses a flat start, and the stopping criterion is that the maximum primal residue and the maximum constraint mismatch are both smaller than a predefined tolerance (in p.u.). The initial penalty is set to 85000, which works well empirically. We use the number of local iterations, the execution time and the gap of the objective function to measure the performance of Algorithm 2. The execution time records the total time Algorithm 2 takes until convergence, including the computation time, the communication delay, and the waiting time for neighbors. For non-convex problems, there is generally a gap between the objective values achieved by the distributed and the centralized method. The gap in the objective value measures the relative error (in %) of the objective value achieved by Algorithm 2 with respect to the one obtained by a centralized method; a solution can be considered of good quality if this gap is small.
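The gap metric used throughout this section is simply the relative objective error; a one-line sketch for clarity:

```python
def objective_gap_percent(f_dist, f_central):
    """Relative error (in %) of the distributed objective value with
    respect to the centralized one (assumes f_central != 0)."""
    return abs(f_dist - f_central) / abs(f_central) * 100.0
```

For example, a distributed objective of 101.0 against a centralized 100.0 gives a 1% gap.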

V-B Impact of Communication Delay

Case        I            II          III       IV       V
Delay (s)   0.003-0.005  0.03-0.05   0.3-0.5   0.6-1    1.2-2
Region 1    3.0          3.0         2.6       2.4      1.8
Region 2    1.6          1.5         1.3       1.2      1.0
Region 3    1.5          1.4         1.4       1.2      1.0
Region 4    1.0          1.0         1.0       1.0      1.0
Region 5    3.3          3.2         2.9       2.0      1.1
Region 6    2.7          2.6         1.7       1.5      1.2
Region 7    2.8          2.7         2.4       1.8      1.1
Region 8    1.0          1.0         1.0       1.0      1.0
TABLE II: Communication delay and average number of arrived neighbors per region

To investigate how communication delay affects the actual number of arrived neighbors in Algorithm 2, in the subsequent simulations we set the lower bound $\bar{A}_k$ to its smallest value, such that each region can perform its local update as soon as one neighbor has arrived. We consider a wide range of communication delays; the delay here refers to the time from when a message is sent by the source region until it arrives at the destination region. For each pair of neighboring regions, the associated communication delay is randomly generated within the range listed in the second row of Table II. In power system applications, communication delay can range from a couple of milliseconds to several seconds depending on the infrastructure and technology used. For example, passing a message between two regions may take just a few milliseconds over a direct fiber-optic link, but may take a few seconds if no direct link is available and the message has to be routed through regional or even central control centers.

Table II shows the average number of arrived neighbors of each region before each of its local updates. We calculate this statistic over the first 20 local iterations of each region, because the first few iterations of ADMM are critical in determining the final point it converges to. Table II shows that, as the communication delay increases, the number of arrived neighbors decreases, as expected. With small delays, as in Cases I and II, all regions receive updated information from the majority of their neighbors, while with large delays, as in Case V, most regions wait until exactly one neighbor arrives. This indicates that larger communication delays lead to more severe asynchronism.

The severity of asynchronism has a strong impact on the choice of the penalty parameter needed to achieve a solution of good quality. Figure 2 shows the gap of the objective function between Algorithm 2 and the centralized solution with respect to the increasing rate $\tau$ of the penalty. For comparison, we also simulated the synchronous counterpart of Algorithm 2, where each region waits for all of its neighbors. In the synchronous case, the faster the penalty increases, the larger the gap is; but since all regions wait for all neighbors regardless of the communication delay, similar trends are observed for all cases with various communication delays. In contrast, in the asynchronous case, the sensitivity of Algorithm 2 to the increasing rate of the penalty depends highly on the communication delays. For example, with large communication delays, as in Case V, one needs to increase the penalty at a very slow pace to reach a solution with a small gap.

Fig. 2: Solution quality versus the penalty increasing rate for (a) synchronous and (b) asynchronous ADMM.

Table III compares the performance of Algorithm 2 and its synchronous counterpart, with $\nu_{\max}$, $\nu_{\min}$ and $\nu_{\mathrm{avg}}$ denoting the maximum, minimum and average numbers of local iterations among all regions. The following behaviors of Algorithm 2 can be observed: 1) With the same penalty increasing rate, asynchronous ADMM on average takes more iterations than synchronous ADMM but spends less time per iteration; it therefore generally takes less time to converge, but the solution quality is worse. 2) To achieve a similar level of solution quality, asynchronous ADMM needs to adopt a smaller penalty increasing rate, which leads to more iterations and possibly a longer execution time. 3) As the communication delay increases, the number of iterations of asynchronous ADMM increases even for a fixed penalty increasing rate, due to the more severe asynchronism among the regions. 4) On this test system, asynchronous ADMM achieves performance comparable to its synchronous counterpart in Cases I to IV, where the delay is smaller than or comparable with the computation time, but slows down substantially when the delay is dominant, as in Case V.

Case  Method  $\tau$  $\nu_{\max}$  $\nu_{\min}$  $\nu_{\mathrm{avg}}$  Gap(%)  Time(s)
I     sync    1.18    36            34            35                    0.40    9.7
      async   1.18    92            26            56                    0.60    7.8
      async   1.10    150           45            91                    0.40    11.5
II    sync    1.20    31            30            31                    0.38    12.3
      async   1.20    107           28            61                    0.70    10.6
      async   1.10    157           48            96                    0.40    16.0
III   sync    1.20    48            48            48                    0.39    28.8
      async   1.20    150           38            89                    1.02    14.0
      async   1.06    368           119           233                   0.40    33.6
IV    sync    1.20    48            48            48                    0.39    52.1
      async   1.20    222           55            126                   1.39    20.9
      async   1.04    618           172           377                   0.40    57.1
V     sync    1.16    57            56            57                    0.32    118.0
      async   1.16    390           99            238                   1.52    40.0
      async   1.02    1735          475           1119                  0.35    169.4
TABLE III: Performance of synchronous and asynchronous ADMM under various communication delays

V-C Application to a Large-Scale Power System

To demonstrate the capability of the proposed asynchronous ADMM method on large-scale systems, we apply it to the AC OPF problem on the Polish 2383-bus system, partitioned into 40 regions. The local computation time per region ranges from 0.05 to 1.2 seconds, and the communication delay of each link lies within the range of 0.3 to 1 second. As shown in Table IV, under the considered communication delays, asynchronous ADMM outperforms its synchronous counterpart. The asynchronous scheme is more beneficial for this large system because its control regions are more unbalanced, so the synchronous scheme wastes substantial time waiting for slow regions, which is generally the case in large-scale systems.

Method  $\tau$  $\nu_{\max}$  $\nu_{\min}$  $\nu_{\mathrm{avg}}$  Gap(%)  Time(s)
sync    1.10    49            46            47                    0.54    104
async   1.01    2284          37            513                   0.43    51
TABLE IV: Performance of synchronous and asynchronous ADMM on the Polish system

VI Concluding Remarks

This paper proposes an asynchronous distributed optimization method based on ADMM that allows the control regions in a power system to perform local updates with information received from a subset of their directly connected neighbors. Experimental results show that communication delay significantly affects the number of arrived neighbors and thereby the performance of the proposed asynchronous scheme. In settings where the communication delay is smaller than or comparable with the local computation time, the proposed asynchronous ADMM achieves comparable or even better performance than its synchronous counterpart. These findings indicate that asynchronous distributed methods can be beneficial for large-scale power system optimization, but have to be deployed with a careful design of the communication infrastructure and schemes. In the future, we plan to investigate other factors that affect the performance of the asynchronous scheme and its applicability to large-scale real-world systems.


The authors would like to thank ABB for the financial support and Dr. Xiaoming Feng for his invaluable inputs.


  • [1] A. J. Conejo, E. Castillo, R. García-Bertrand, and R. Mínguez, Decomposition techniques in mathematical programming: engineering and science applications.   Springer Berlin, 2006.
  • [2] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and distributed computation: numerical methods.   Prentice Hall, Englewood Cliffs, NJ, 1989, vol. 23.
  • [3] C. Dwork, N. Lynch, and L. Stockmeyer, “Consensus in the presence of partial synchrony,” Journal of the ACM (JACM), vol. 35, no. 2, pp. 288–323, 1988.
  • [4] Z. Peng, Y. Xu, M. Yan, and W. Yin, “Arock: an algorithmic framework for asynchronous parallel coordinate updates,” SIAM Journal on Scientific Computing, vol. 38, no. 5, pp. A2851–A2879, 2016.
  • [5] I. Aravena and A. Papavasiliou, “A distributed asynchronous algorithm for the two-stage stochastic unit commitment problem,” in 2015 IEEE Power Energy Society General Meeting, July 2015, pp. 1–5.
  • [6] A. Abboud, R. Couillet, M. Debbah, and H. Siguerdidjane, “Asynchronous alternating direction method of multipliers applied to the direct-current optimal power flow problem,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 7764–7768.
  • [7] H. K. Nguyen, A. Khodaei, and Z. Han, “Distributed algorithms for peak ramp minimization problem in smart grid,” in IEEE International Conference on Smart Grid Communications (SmartGridComm), 2016, pp. 174–179.
  • [8] S. Kumar, R. Jain, and K. Rajawat, “Asynchronous optimization over heterogeneous networks via consensus admm,” IEEE Transactions on Signal and Information Processing over Networks, vol. 3, no. 1, pp. 114–129, 2017.
  • [9] T. H. Chang, M. Hong, W. C. Liao, and X. Wang, “Asynchronous distributed admm for large-scale optimization – part i: Algorithm and convergence analysis,” IEEE Transactions on Signal Processing, vol. 64, no. 12, pp. 3118–3130, June 2016.
  • [10] T. Erseghe, “A distributed approach to the OPF problem,” EURASIP Journal on Advances in Signal Processing, vol. 2015, no. 1, pp. 1–13, 2015.
  • [11] J. Guo, G. Hug, and O. K. Tonguz, “A case for non-convex distributed optimization in large-scale power systems,” IEEE Transactions on Power Systems, vol. PP, no. 99, pp. 1–1, 2016.
  • [12] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
  • [13] J. Guo, G. Hug, and O. K. Tonguz, “Intelligent partitioning in distributed optimization of electric power systems,” IEEE Transactions on Smart Grid, vol. 7, no. 3, pp. 1249–1258, 2016.