Fundamental limits of over-the-air optimization: Are analog schemes optimal?

09/11/2021
by Shubham K. Jha, et al.

We consider over-the-air convex optimization on a d-dimensional space where coded gradients are sent over an additive Gaussian noise channel with variance σ^2. The codewords satisfy an average power constraint P, resulting in a signal-to-noise ratio (SNR) of P/σ^2. We derive bounds on the convergence rates for over-the-air optimization. Our first result is a lower bound showing that any coding scheme must slow down the convergence rate by a factor of roughly √(d/log(1+SNR)). Next, we consider a popular class of schemes called analog coding, in which a linear function of the gradient is transmitted. We show that a simple scaled-transmission analog coding scheme slows down the convergence rate by a factor of √(d(1+1/SNR)). This matches the lower bound above up to constant factors at low SNR, making scaled transmission optimal in that regime. However, we show that this slowdown is unavoidable for any analog coding scheme: a slowdown in convergence by a factor of √d persists even as the SNR tends to infinity. Remarkably, we present a simple quantize-and-modulate scheme based on Amplitude Shift Keying that almost attains the optimal convergence rate at all SNRs.
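As a rough illustration of the scaled-transmission analog coding scheme discussed above, the Python sketch below scales a norm-bounded gradient to meet the power constraint, passes it through an additive Gaussian noise channel, and rescales it at the receiver before taking a gradient step. This is not the authors' implementation; the gradient-norm bound B, the step size, and the toy objective are illustrative assumptions.

```python
import numpy as np

def analog_channel_round(g, B, P, sigma2, rng):
    """Scaled transmission of a gradient g over a d-dimensional AWGN channel."""
    d = g.shape[0]
    c = np.sqrt(d * P) / B                 # with ||g|| <= B, codeword power ||c*g||^2 <= d*P
    x = c * g                              # analog codeword: a linear function of the gradient
    y = x + rng.normal(0.0, np.sqrt(sigma2), size=d)   # additive Gaussian noise channel
    return y / c                           # rescaled estimate: g plus noise of variance sigma2 / c^2

def over_the_air_gd(grad_fn, w0, B, P, sigma2, step, iters, seed=0):
    """Gradient descent where every gradient is received over the noisy channel."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(iters):
        g = grad_fn(w)
        norm = np.linalg.norm(g)
        if norm > B:
            g = g * (B / norm)             # keep ||g|| <= B so the power constraint holds
        g_hat = analog_channel_round(g, B, P, sigma2, rng)
        w = w - step * g_hat
    return w

# Example (illustrative): minimize ||w||^2 / 2 in d = 100 dimensions at SNR = P / sigma2 = 1.
if __name__ == "__main__":
    d, P, sigma2 = 100, 1.0, 1.0
    w_final = over_the_air_gd(lambda w: w, np.ones(d), B=10.0, P=P,
                              sigma2=sigma2, step=0.1, iters=500)
    print(np.linalg.norm(w_final))
```

In this sketch the receiver's estimate is the true gradient plus Gaussian noise of per-coordinate variance σ^2/c^2 = B^2/(d·SNR), which is the kind of extra gradient noise behind the √(d(1+1/SNR)) slowdown described above.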
