Learning Maximum Margin Channel Decoders

03/13/2022
by Amit Tsvieli, et al.

The problem of learning a channel decoder is considered for two channel models. The first model is an additive noise channel whose noise distribution is unknown and nonparametric. The learner is provided with a fixed codebook and a dataset consisting of independent samples of the noise, and is required to select a precision matrix for a nearest neighbor decoder in terms of the Mahalanobis distance. The second model is a non-linear channel with additive white Gaussian noise and an unknown channel transformation. The learner is provided with a fixed codebook and a dataset consisting of independent input-output samples of the channel, and is required to select a matrix for a nearest neighbor decoder with a linear kernel. For both models, the objective of maximizing the margin of the decoder is addressed. Accordingly, for each channel model, a regularized loss minimization problem with a codebook-related regularization term and a hinge-like loss function is developed, inspired by the support vector machine paradigm for classification problems. Expected generalization error bounds for the error probability loss function are provided for both models, under an optimal choice of the regularization parameter. For the additive noise channel, theoretical guidance for choosing the training signal-to-noise ratio is proposed based on this bound. In addition, for the non-linear channel, a high-probability uniform generalization error bound is provided for the hypothesis class. For each channel, a stochastic sub-gradient descent algorithm for solving the regularized loss minimization problem is proposed, and an optimization error bound is stated. The performance of the proposed algorithms is demonstrated through several examples.
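To make the first model concrete, the following is a minimal sketch (not the authors' exact formulation) of learning a precision matrix for a Mahalanobis nearest-neighbor decoder by stochastic sub-gradient descent on a hinge-like regularized surrogate. The Frobenius-norm regularizer, the margin and step-size constants, and the toy QPSK-style codebook are illustrative assumptions standing in for the codebook-related regularization term and parameter choices analyzed in the paper.

```python
import numpy as np

def mahalanobis_nn_decode(y, codebook, P):
    """Index of the codeword closest to y in the Mahalanobis distance induced by P."""
    diffs = codebook - y
    dists = np.einsum('md,dk,mk->m', diffs, P, diffs)
    return int(np.argmin(dists))

def sgd_train(codebook, noise_samples, lam=0.1, margin=1.0,
              steps=2000, lr=0.05, seed=0):
    """Stochastic sub-gradient descent on a hinge-like regularized surrogate.
    Each step draws one noise sample and one codeword, penalizes the most
    competitive wrong codeword if it violates the margin, and projects P
    back onto the positive semidefinite cone."""
    rng = np.random.default_rng(seed)
    d = codebook.shape[1]
    P = np.eye(d)
    for _ in range(steps):
        z = noise_samples[rng.integers(len(noise_samples))]
        i = rng.integers(len(codebook))
        x_true = codebook[i]
        y = x_true + z
        others = np.delete(codebook, i, axis=0)
        diffs = others - y
        d_true = z @ P @ z
        d_others = np.einsum('md,dk,mk->m', diffs, P, diffs)
        j = int(np.argmin(d_others))          # worst competing codeword
        grad = 2.0 * lam * P                  # sub-gradient of lam * ||P||_F^2
        if margin + d_true - d_others[j] > 0: # hinge term is active
            grad = grad + np.outer(z, z) - np.outer(diffs[j], diffs[j])
        P = P - lr * grad
        # Project onto symmetric positive semidefinite matrices.
        P = 0.5 * (P + P.T)
        w, V = np.linalg.eigh(P)
        P = (V * np.clip(w, 0.0, None)) @ V.T
    return P

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    codebook = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
    # Correlated (non-white) noise, unknown to the learner.
    L = np.array([[0.6, 0.0], [0.5, 0.3]])
    noise = rng.standard_normal((500, 2)) @ L.T
    P = sgd_train(codebook, noise)
    errors = sum(mahalanobis_nn_decode(x + z, codebook, P) != i
                 for i, x in enumerate(codebook) for z in noise[:100])
    print("empirical decoding errors:", errors)
```

The learned precision matrix adapts the nearest-neighbor metric to the noise correlation, which is the effect the maximum-margin objective is designed to capture; the paper's algorithm and regularizer differ in their exact form and come with the stated generalization and optimization error bounds.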
