Machine Learning for AC Optimal Power Flow

10/19/2019 ∙ by Neel Guha, et al.

We explore machine learning methods for AC Optimal Power Flow (ACOPF) - the task of optimizing power generation in a transmission network while respecting physical and engineering constraints. We present two formulations of ACOPF as a machine learning problem: 1) an end-to-end prediction task where we directly predict the optimal generator settings, and 2) a constraint prediction task where we predict the set of active constraints in the optimal solution. We validate these approaches on two benchmark grids.







1 Introduction

The Optimal Power Flow problem (OPF) consists of determining the optimal operating levels for the generators within a transmission network in order to meet demand that varies over space and time. An established area of research in both power systems and operations, OPF is applied every day in the management and regulation of power grids around the world. In this work, we aim to obtain real-time approximate solutions to the OPF problem using machine learning.

The classical formulation of ACOPF (presented in Section 3) is a challenging non-convex, NP-hard problem (bienstock2015strong). In addition to minimizing generator costs, solutions must adhere to the physical laws governing power flow (i.e. Kirchhoff's voltage law) and respect the engineering limits of the grid. As a result, ACOPF is computationally intractable under the demands of daily grid management. In order to account for rapid fluctuations in power demand and supply, grid operators must solve ACOPF over the entire grid (comprising tens of thousands of nodes) every five minutes.¹ Most traditional approaches (genetic algorithms, convex relaxations, etc.) either fail to converge within this time frame or produce suboptimal solutions. In order to practically manage the grid, operators instead solve a linearized version of ACOPF known as DC Optimal Power Flow (DCOPF). However, DCOPF presents a number of issues. True grid conditions can deviate from the linear assumptions imposed by DCOPF, increasing the likelihood of instability and grid failure. Relying on DCOPF also has significant implications for climate change: a 2012 report from the Federal Energy Regulatory Commission estimated that the inefficiencies induced by approximate-solution techniques may cost billions of dollars and release unnecessary emissions (cain2012history). An efficient solution method for ACOPF could also be adapted to combined economic emission dispatch (CEED) - a variant of OPF which incorporates a per-generator emissions cost into the classic objective function (venkatesh2003comparison).

¹The addition of renewable energy sources (wind, solar, etc.) adds further unpredictability and is a motivation for improved ACOPF techniques (cain2012history).

In this paper, we observe that it should be possible to learn a model that can predict an accurate solution over a fixed grid topology/constraint set. Intuitively, we expect some measure of consistency in the solution space - similar load distributions should correspond to similar generator settings. This suggests an underlying structure to the ACOPF problem, which a machine learning model can exploit.

Machine learning presents several advantages. Neural networks have demonstrated the ability to model extremely complicated non-convex functions, making them highly attractive for this setting. A model could be trained offline on historic data and used in real time to predict optimal power settings. In this work, we explore two applications of machine learning for OPF:

1. End-to-end: Train a model to directly predict the optimal generator setting for a given load distribution. This is challenging, as the model's output must adhere to physical laws and engineering limits.

2. Constraint prediction: Train a model to predict which constraints are active (i.e., at equality) in the optimal solution. This active set can be used to warm start existing approaches (i.e. interior point methods) and reduce solution time.

2 Related Work

Prior work has explored different applications of machine learning on the grid. This includes work on estimating active constraints for DCOPF (ng2018statistical; misra2018learning), predicting grid failures (rudin2012machine), and choosing between traditional solvers (king2015network). Machine learning has also been applied to related variants of the OPF problem, including automated grid protection (donnot2017introducing), price proxy prediction (DBLP:journals/corr/CanyasseDM16), and private information recovery (dontiinverse). To the best of our knowledge, there has been limited work on direct applications of deep learning to ACOPF.

3 Method

We now present the traditional ACOPF problem and describe how to formalize it as a machine learning task (frank2012primer). For a fixed grid topology, let $N$ denote the set of buses (nodes), $E$ denote the set of branches (edges), and $G$ denote the set of controllable generators. For bus $i$, we enumerate $P_i^G$ (real power injection), $Q_i^G$ (reactive power injection), $P_i^L$ (real power demand), $Q_i^L$ (reactive power demand), $V_i$ (voltage magnitude), and $\delta_i$ (voltage angle). ACOPF can then be framed as:

\begin{align}
\min_{P^G} \quad & \sum_{i \in G} C_i(P_i^G) && \text{(1a)}\\
\text{s.t.} \quad & P_i(V, \delta) = P_i^G - P_i^L, & \forall i \in N \quad & \text{(1b)}\\
& Q_i(V, \delta) = Q_i^G - Q_i^L, & \forall i \in N \quad & \text{(1c)}\\
& P_i^{G,min} \le P_i^G \le P_i^{G,max}, & \forall i \in G \quad & \text{(1d)}\\
& Q_i^{G,min} \le Q_i^G \le Q_i^{G,max}, & \forall i \in G \quad & \text{(1e)}\\
& V_i^{min} \le V_i \le V_i^{max}, & \forall i \in N \quad & \text{(1f)}\\
& \delta_i^{min} \le \delta_i \le \delta_i^{max}, & \forall i \in N \quad & \text{(1g)}
\end{align}

Where (1a) typically represents a polynomial cost function, (1b)-(1c) correspond to the power flow equations, and (1d)-(1g) represent operational limits on real/reactive power injections, nodal voltage magnitudes, and nodal voltage angles, respectively.² More recent settings of OPF - including ours - also include limits on branch currents. These are outlined in more detail by frank2012primer. We now present two formalizations of ACOPF as a machine learning problem. In our setting, we assume that $P^L$ and $Q^L$ (real and reactive demand) are known across all buses.

²A single reference bus ("slack" bus) has its voltage angle fixed.
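As a small illustrative sketch of the box constraints (1d)-(1f), the following checks whether a candidate operating point respects its limits; the two-generator limit values here are made-up examples, not taken from the IEEE test cases.

```python
# Hypothetical sketch: checking box constraints (1d)-(1f) for a candidate
# operating point. All limit values are illustrative.

def within_limits(values, lower, upper):
    """Return True if every entry lies inside its [lower, upper] bound."""
    return all(lo <= v <= hi for v, lo, hi in zip(values, lower, upper))

def check_operating_point(p_g, q_g, v, limits):
    """Check real injection (1d), reactive injection (1e), and voltage (1f)."""
    return (within_limits(p_g, limits["p_min"], limits["p_max"])
            and within_limits(q_g, limits["q_min"], limits["q_max"])
            and within_limits(v, limits["v_min"], limits["v_max"]))

limits = {
    "p_min": [0.0, 0.0], "p_max": [2.0, 1.5],
    "q_min": [-0.5, -0.5], "q_max": [0.5, 0.5],
    "v_min": [0.94, 0.94], "v_max": [1.06, 1.06],
}
print(check_operating_point([1.0, 0.8], [0.1, -0.2], [1.0, 1.01], limits))  # True
```

A real feasibility check would also verify the power flow equations (1b)-(1c) and the angle limits (1g); this fragment only covers the simple bound constraints.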

3.1 End-to-end Prediction

In this setting, we pose the ACOPF problem as a regression task, where we predict the grid control variables ($P^G$ and $V$) from the grid demand ($P^L$ and $Q^L$). These fix a set of equations with an equal number of unknowns, which can be solved to identify the remaining state values for the grid. Formally, given a dataset of solved grids with load distributions and corresponding optimal generator settings, our goal is to learn a model which minimizes the mean-squared error between the optimal generator settings and the predicted generator settings. Solving for the remaining state variables can be posed as a power flow problem, and reduces to finding the remaining quantities ($Q^G$ and $\delta$) such that (1b)-(1g) are satisfied.

The central challenge in this setting is ensuring that the neural network's solution respects physical laws and engineering limits. Though provable guarantees may be difficult to make, we experiment with incorporating soft penalties into our loss function that encourage predictions to fall within legal limits. These correspond to linear penalties that activate when (1d) and (1f) are violated. In future work we hope to explore more sophisticated (and robust) techniques for enforcing legality.
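A minimal sketch of such a penalty-augmented loss, assuming NumPy arrays for predictions and limits; the penalty weight and all numbers are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

# Sketch of an MSE loss plus a linear penalty that activates when a
# prediction leaves its [lower, upper] operating range (cf. (1d), (1f)).
# The weight and example values are illustrative.

def penalized_mse(pred, target, lower, upper, weight=1.0):
    mse = np.mean((pred - target) ** 2)
    below = np.maximum(lower - pred, 0.0)   # violation of the lower limit
    above = np.maximum(pred - upper, 0.0)   # violation of the upper limit
    return mse + weight * np.sum(below + above)

pred   = np.array([1.2, 0.9])
target = np.array([1.0, 1.0])
# No limit violated here, so the loss equals the plain MSE of 0.025.
print(penalized_mse(pred, target,
                    lower=np.array([0.0, 0.0]),
                    upper=np.array([1.5, 1.5])))  # 0.025
```

Because the penalty is piecewise-linear, its gradient is constant outside the limits, nudging violating predictions back toward the feasible box during training.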

3.2 Optimal Constraint Prediction

Given that neural networks may learn solutions that violate physical constraints, and are thus untrustworthy in practical settings, we explore optimal constraint prediction as formulated by misra2018learning. In this setting, our model is trained to predict the set of constraints that are active in the optimal solution for some load distribution. A constraint is active if the corresponding state/control variable is at the maximum or minimum allowed value. As misra2018learning describe, knowing the active set of constraints can be used to warm start a more traditional optimization method, and reduce time to convergence.

Formally, for each grid we define a binary constraint vector corresponding to an enumeration of constraints (1d)-(1e), where entry $j$ is 1 if the $j$-th constraint is active in the optimal solution, and 0 otherwise. We learn a model which maps the load distribution to this constraint vector. This corresponds to a multi-label classification problem.
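As a hypothetical sketch of how such a label vector could be built from a solved grid: each bounded variable contributes two entries (at its lower bound, at its upper bound), with activity detected up to a numerical tolerance. Values and limits here are illustrative.

```python
# Sketch: building the binary active-constraint vector from a solved grid.
# A constraint counts as "active" when the variable sits at its bound,
# within a tolerance. All numbers are illustrative.

def active_constraints(values, lower, upper, tol=1e-6):
    """Return a 0/1 vector with two entries per variable:
    [at lower bound?, at upper bound?], flattened over all variables."""
    flags = []
    for v, lo, hi in zip(values, lower, upper):
        flags.append(1 if abs(v - lo) <= tol else 0)
        flags.append(1 if abs(v - hi) <= tol else 0)
    return flags

# Generator 0 sits at its upper limit; generator 1 is strictly interior.
print(active_constraints([2.0, 0.7], lower=[0.0, 0.0], upper=[2.0, 1.5]))
# [0, 1, 0, 0]
```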

Optimal constraint prediction presents several advantages over end-to-end prediction.

  1. Solver Speedup: From an optimization perspective, knowing the set of active constraints equates to warm-starting, and can significantly speed up more traditional algorithms like interior point methods, active set methods, simplex methods, and others. Quantifying this speedup is the focus of ongoing work.

  2. Reliability: This setting reduces the risk of a neural network producing a solution which violates physical laws/engineering limits. Because the physical and engineering constraints are enforced by the solver, an incorrect prediction will at worst increase solution time or lead to a suboptimal solution. In the end-to-end setting described in Section 3.1, incorrect predictions could destabilize the grid.

  3. Task complexity: Classifying the set of active constraints is significantly easier than predicting a set of real-valued targets.
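To illustrate the warm-start idea concretely, one simple (hypothetical) scheme seeds the solver's initial point by pinning variables whose bound constraints are predicted active to those bounds; how a real interior point or active set solver consumes this seed varies by implementation.

```python
# Sketch: turning a predicted active set into a solver starting point.
# Variables predicted to be at a bound start there; the rest start at
# the midpoint of their range. The scheme is a simplifying assumption.

def warm_start_point(active, lower, upper):
    """active: per-variable (at_lower, at_upper) flag pairs, as produced
    by the constraint-prediction model; returns an initial guess."""
    point = []
    for (at_lo, at_hi), lo, hi in zip(active, lower, upper):
        if at_lo:
            point.append(lo)
        elif at_hi:
            point.append(hi)
        else:
            point.append(0.5 * (lo + hi))  # interior default
    return point

print(warm_start_point([(0, 1), (0, 0)], lower=[0.0, 0.0], upper=[2.0, 1.5]))
# [2.0, 0.75]
```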

4 Results

We validated our approaches for end-to-end prediction and constraint prediction on the IEEE 30-bus and 118-bus test cases. These test cases include predetermined constraints.

4.1 Dataset Generation

The IEEE test cases include a pre-calculated base load distribution. In order to construct a dataset for each case, we repeatedly sample candidate load distributions by perturbing the base load within a fixed range. We identify the optimal generator settings for each candidate by solving the OPF problem via Matpower (zimmerman2011matpower). In some cases, the solver fails to converge, suggesting that the sampled load distribution has no solution given the grid constraints. In this case, we discard the sample.
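The generation loop above can be sketched as follows; `solve_opf` is a hypothetical placeholder for the Matpower call, and the perturbation scheme (uniform per-bus scaling) is an assumption for illustration.

```python
import random

# Sketch of the dataset-generation loop: perturb the base load, attempt a
# solve, keep only samples that converge. `solve_opf` stands in for the
# Matpower solver and is a hypothetical placeholder.

def perturb(base_load, delta, rng):
    """Scale each bus load by a factor drawn from [1 - delta, 1 + delta]."""
    return [x * rng.uniform(1 - delta, 1 + delta) for x in base_load]

def generate_dataset(base_load, n_samples, delta, solve_opf, seed=0):
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_samples):
        load = perturb(base_load, delta, rng)
        solution = solve_opf(load)      # returns None on non-convergence
        if solution is not None:        # discard failed samples
            dataset.append((load, solution))
    return dataset

# Toy stand-in solver that "fails" whenever total load is too heavy.
toy_solver = lambda load: load if sum(load) < 3.0 else None
data = generate_dataset([1.0, 1.0], n_samples=5, delta=0.1, solve_opf=toy_solver)
print(len(data))  # 5 (every toy sample converges at this perturbation level)
```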

We generated 95,000 solved grids for case118 and 812,888 solved grids for case30, with loads perturbed around the IEEE base demand. Interestingly, we observe that while 100% of the samples generated for case118 were successfully solved, only a fraction of the samples for case30 were. For all prediction tasks, we used a 90/10 train-test split and report results on the test set.

4.2 End to end prediction

We evaluate task performance along two metrics:

  • Legality Rate: The proportion of predicted grids which satisfy all engineering and physical constraints.

  • Avg. Cost Deviation: The average fractional difference between the cost of the predicted grid and the cost of the true grid, computed over legal grids.
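The two metrics above can be computed as in the sketch below; `is_legal` and the cost values are illustrative stand-ins for the outputs of a constraint check against (1b)-(1g) and the generator cost function.

```python
# Sketch of the two evaluation metrics; inputs are illustrative.

def legality_rate(is_legal):
    """Fraction of predicted grids satisfying all constraints."""
    return sum(is_legal) / len(is_legal)

def avg_cost_deviation(pred_costs, true_costs, is_legal):
    """Mean fractional cost gap |c_pred - c_true| / c_true over legal grids."""
    devs = [abs(p - t) / t
            for p, t, ok in zip(pred_costs, true_costs, is_legal) if ok]
    return sum(devs) / len(devs)

legal = [True, True, False, True]
print(legality_rate(legal))  # 0.75
print(avg_cost_deviation([10.2, 9.9, 50.0, 10.0],
                         [10.0, 10.0, 10.0, 10.0], legal))
```

Note that the cost deviation deliberately ignores illegal grids, since their cost is not meaningful; this is why the paper reports it only over legal predictions.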

Roughly, these capture the reliability and optimality of a particular model. We examined a range of different architectures and training strategies, performing a grid search over models with 1-2 hidden layers, 128/256/512 hidden neurons, and ReLU/Tanh activations. We also experimented with a vanilla MSE loss and a variant with linear penalties for constraint violations (described in Section 3.1). Each model was trained with Adam until loss convergence, for a maximum of 2000 epochs.

Grid Legality Rate Avg. Cost Deviation
case30 0.51 0.002
case118 0.70 0.002
Table 1: End-to-end prediction performance. Average cost deviation is only reported for legal grids.

Table 1 reports the best performance for each grid type. For case30, the optimal model was a two layer neural network with tanh activations and no loss penalty. For case118, the optimal model was a three layer network with 512 hidden neurons, ReLU activations, and a constraint loss penalty. Interestingly, we observe better performance on case118 than case30. Though we would intuitively expect task difficulty to scale with grid size, this result suggests that other factors could affect a model's generalization ability. In particular, smaller grids could be less stable, and thus more likely to produce a wide range of (less predictable) behavior under varying demand distributions. We also observe that the costs of the optimal models' predictions were within 0.2% of the optimal cost (Table 1).

4.3 Constraint Prediction

For constraint prediction, we evaluate performance in terms of accuracy (i.e. the proportion of constraints classified successfully). We perform a similar hyperparameter grid search and report the best results in Table 2.
Grid % Accuracy
case30 0.99
case118 0.81
Table 2: Constraint prediction performance

In general, we find neural networks to be highly successful at determining the active constraint set.

5 Conclusion

In this work, we presented two approaches that leverage machine learning for solving ACOPF. Preliminary experiments present promising results in both settings. In next steps, we hope to evaluate our methods on more complex grid architectures, and explore different approaches for incorporating grid constraints into our models.