End-to-End Learning of Geometrical Shaping Maximizing Generalized Mutual Information

12/11/2019
by Kadir Gümüs, et al.

GMI-based end-to-end learning is shown to be a highly nonconvex problem. We apply gradient descent directly to the constellation coordinates, initialized with Gray-labeled APSK constellations. State-of-the-art constellations in 2D and 4D are found, providing reach increases of up to 26% with respect to QAM.
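The abstract describes gradient-based optimization of constellation coordinates under a GMI objective. Below is a minimal sketch of that idea, assuming an AWGN channel, a Monte Carlo GMI estimate with a Gaussian bit-metric decoder, and a Gray-labeled square 16-QAM starting point (the paper initializes from Gray-labeled APSK and also treats 4D formats); all names, parameters, and settings are illustrative, not taken from the paper.

import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

# ---- illustrative settings (not from the paper) ----
M, m = 16, 4           # constellation size and bits per symbol
SNR_DB = 12.0          # Es/N0 in dB for a unit-energy constellation
N_SYMBOLS = 4096       # Monte Carlo symbols per gradient step

def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

# Gray-labeled square 16-QAM as the starting point; the paper starts from
# Gray-labeled APSK, a QAM grid is used here only to keep the sketch short.
levels = jnp.array([-3.0, -1.0, 1.0, 3.0])
init_points = jnp.stack(jnp.meshgrid(levels, levels), axis=-1).reshape(-1, 2)
labels = jnp.array(
    [[(gray(i) >> 1) & 1, gray(i) & 1, (gray(q) >> 1) & 1, gray(q) & 1]
     for q in range(4) for i in range(4)], dtype=jnp.int32)  # (M, m) bit labels

def gmi_estimate(points, key, snr_db=SNR_DB):
    """Monte Carlo GMI (bits/2D symbol) of a labeled constellation on AWGN."""
    points = points / jnp.sqrt(jnp.mean(jnp.sum(points ** 2, axis=1)))  # Es = 1
    sigma2 = 10.0 ** (-snr_db / 10.0)            # total noise variance per 2D
    k_sym, k_noise = jax.random.split(key)
    idx = jax.random.randint(k_sym, (N_SYMBOLS,), 0, M)
    x = points[idx]
    y = x + jnp.sqrt(sigma2 / 2.0) * jax.random.normal(k_noise, x.shape)
    # log Gaussian decoding metric for every (received sample, candidate point)
    d2 = jnp.sum((y[:, None, :] - points[None, :, :]) ** 2, axis=-1)    # (N, M)
    logq = -d2 / sigma2
    num = logsumexp(logq, axis=1)                # log-sum over all points
    loss = 0.0
    for k in range(m):                           # per-bit mismatched metric
        same_bit = labels[None, :, k] == labels[idx, k][:, None]        # (N, M)
        den = logsumexp(jnp.where(same_bit, logq, -1e9), axis=1)
        loss = loss + jnp.mean(num - den)
    return m - loss / jnp.log(2.0)

@jax.jit
def ascent_step(points, key, lr=1e-2):
    """One gradient-ascent step on the GMI w.r.t. the coordinates."""
    val, grad = jax.value_and_grad(gmi_estimate)(points, key)
    return points + lr * grad, val

key = jax.random.PRNGKey(0)
points = init_points
for it in range(2001):
    key, sub = jax.random.split(key)
    points, gmi_val = ascent_step(points, sub)
    if it % 500 == 0:
        print(f"iter {it:4d}  GMI ~ {gmi_val:.3f} bit/2D")

The power constraint is enforced by normalizing the coordinates inside the objective, so the ascent step itself stays unconstrained; the GMI estimate is stochastic, so the reported value fluctuates from step to step.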
