Matrix Exponential Learning for Resource Allocation with Low Informational Exchange

02/19/2018
by   Wenjie Li, et al.

We consider a distributed resource allocation problem in a multi-carrier multi-user MIMO network in which multiple transmitter-receiver links interfere with each other. Each user aims to maximize its own energy efficiency by adjusting its signal covariance matrix under a predefined power constraint. This problem has recently been addressed by applying a matrix exponential learning (MXL) algorithm, which has a very appealing convergence rate. In this learning algorithm, however, each transmitter must know an estimate of the gradient matrix of its utility. Conveying the gradient matrix to the transmitters incurs a high signaling overhead, especially as the size of this matrix grows with the number of antennas and subcarriers. In this paper, we therefore investigate two strategies to reduce the informational exchange per iteration of the algorithm. In the first strategy, at each iteration each user sends only part of the entries of the gradient matrix, each selected with a certain probability. In the second strategy, each user sporadically feeds back the whole gradient matrix. We focus on analyzing the convergence of the MXL algorithm to a Nash equilibrium (NE) under these two strategies. Upper bounds on the average convergence rate are obtained in both cases for a general step-size setting, from which the impact of the incomplete feedback information can be clearly seen. We prove that the algorithm still converges to an NE and that the convergence rate is not seriously affected. Simulation results further corroborate our claims and show that, in terms of convergence rate, MXL performs better under the second proposed strategy.
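The MXL update and the first feedback-reduction strategy (probabilistic entry-wise feedback) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the step size `gamma`, the power budget `P`, and the unbiased `1/p` rescaling of the sparsified gradient are modeling choices made here for the sketch.

```python
import numpy as np

def mxl_step(Y, V_hat, gamma, P=1.0):
    """One matrix exponential learning step.

    Y      : accumulated (Hermitian) score matrix
    V_hat  : current (possibly incomplete) gradient-matrix estimate
    gamma  : step size
    P      : transmit power budget, so that tr(Q) = P
    Returns the updated score matrix and the covariance matrix Q.
    """
    Y = Y + gamma * V_hat
    # Matrix exponential of a Hermitian matrix via eigendecomposition.
    w, U = np.linalg.eigh(Y)
    expY = (U * np.exp(w)) @ U.conj().T
    Q = P * expY / np.trace(expY).real  # trace-normalized: Q >= 0, tr(Q) = P
    return Y, Q

def sparsify(V, p, rng):
    """Strategy 1 (sketch): feed back each entry with probability p.

    The upper-triangular mask is mirrored so the estimate stays Hermitian,
    and kept entries are scaled by 1/p so the estimate is unbiased.
    """
    mask = rng.random(V.shape) < p
    mask = np.triu(mask)
    mask = mask | mask.T
    return np.where(mask, V, 0) / p
```

The trace normalization is what keeps the iterate on the feasible set (positive semidefinite with trace equal to the power budget), so no projection step is needed; the second strategy would simply call `mxl_step` with the last fully fed-back gradient matrix, refreshing it only at sporadic iterations.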

