
Asynchronous ADMM for Distributed Non-Convex Optimization in Power Systems

by Junyao Guo, et al. ∙ ETH Zurich ∙ Carnegie Mellon University

Large-scale, non-convex optimization problems arising in complex networks such as power systems call for efficient and scalable distributed optimization algorithms. Existing distributed methods are usually iterative and require synchronization of all workers at each iteration, which limits scalability and can leave computation resources under-utilized due to the heterogeneity of the subproblems. To address these limitations of synchronous schemes, this paper proposes an asynchronous distributed optimization method based on the Alternating Direction Method of Multipliers (ADMM) for non-convex optimization. The proposed method requires only local communications and allows each worker to perform local updates with information from a subset of, but not all, neighbors. We provide sufficient conditions on the problem formulation, the choice of algorithm parameters, and the network delay, and show that under these mild conditions the proposed asynchronous ADMM method asymptotically converges to a KKT point of the non-convex problem. We validate the effectiveness of asynchronous ADMM by applying it to the Optimal Power Flow problem in multiple power systems, and show that the convergence of the proposed asynchronous scheme can be faster than its synchronous counterpart in large-scale applications.
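To make the update pattern concrete, the following is a minimal sketch of asynchronous consensus ADMM. It is not the paper's OPF formulation: the objective here is a hypothetical convex quadratic f_i(x) = ½(x − a_i)², there is a single shared variable z, and asynchrony is modeled by a simple bounded-delay rule in which each worker updates only on alternate ticks using its last-received (possibly stale) copy of z.

```python
# Illustrative sketch of asynchronous consensus ADMM (scaled form).
# Assumptions (not from the paper): convex quadratic local objectives
# f_i(x) = 0.5 * (x - a_i)^2, one shared variable z, and a bounded-delay
# model where each worker is active only on alternate ticks and uses the
# stale z it last received.

def async_consensus_admm(a, rho=1.0, ticks=500):
    n = len(a)
    x = [0.0] * n            # local primal variables
    u = [0.0] * n            # scaled dual variables
    z = 0.0                  # shared consensus variable
    z_seen = [0.0] * n       # possibly stale copy of z held by each worker
    for t in range(ticks):
        for i in range(n):
            if t % 2 == i % 2:       # only half the workers act each tick
                z_seen[i] = z        # message arrival: refresh local copy of z
                # x-update: argmin_x f_i(x) + (rho/2)(x - z_seen + u_i)^2
                x[i] = (a[i] + rho * (z_seen[i] - u[i])) / (1.0 + rho)
                # dual update against the (stale) local copy of z
                u[i] += x[i] - z_seen[i]
        # coordinator averages the latest information it has from all workers
        z = sum(x[i] + u[i] for i in range(n)) / n
    return x, z
```

For this quadratic example the fixed point is the average of the a_i, so all local variables and z converge to mean(a) despite each worker acting on a stale copy of z; this mirrors, in a toy setting, the partial-neighbor updates the abstract describes.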



