Learning to Learn to Demodulate with Uncertainty Quantification via Bayesian Meta-Learning

08/02/2021
by   Kfir M. Cohen, et al.

Meta-learning, or learning to learn, offers a principled framework for few-shot learning. It leverages data from multiple related learning tasks to infer an inductive bias that enables fast adaptation on a new task. The application of meta-learning was recently proposed for learning how to demodulate from few pilots. The idea is to use pilots received from multiple devices and stored for offline use in order to meta-learn an adaptation procedure, with the aim of speeding up online training on new devices. Standard frequentist learning, which can yield relatively accurate "hard" classification decisions, is known to be poorly calibrated, particularly in the small-data regime. Poor calibration implies that the soft scores output by the demodulator are inaccurate estimates of the true probability of correct demodulation. In this work, we introduce the use of Bayesian meta-learning via variational inference for the purpose of obtaining well-calibrated few-pilot demodulators. In a Bayesian framework, each neural network weight is represented by a distribution, capturing epistemic uncertainty. Bayesian meta-learning optimizes over the prior distribution of the weights. The resulting Bayesian ensembles offer better-calibrated soft decisions, at the computational cost of running multiple instances of the neural network for demodulation. Numerical results for single-input single-output Rayleigh fading channels with transmitter non-linearities are provided that compare symbol error rate and expected calibration error for both frequentist and Bayesian meta-learning, illustrating how the latter is both more accurate and better calibrated.
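To make the "Bayesian ensemble" idea concrete, the following is a minimal, illustrative sketch (not the paper's actual model or training procedure) of how a demodulator with a Gaussian variational posterior over its weights produces soft decisions: each prediction draws several weight samples and averages the resulting softmax outputs, so the extra cost is one network forward pass per sample. The toy setup, the `BayesianLinearDemodulator` class, and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: classify 4 QPSK symbols from 2-D received samples
# (real/imaginary parts). Weight uncertainty is modeled by a per-weight Gaussian
# variational posterior q(w) = N(mu, sigma^2).
SYMBOLS = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float) / np.sqrt(2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class BayesianLinearDemodulator:
    """Single linear layer whose weights carry a Gaussian posterior."""

    def __init__(self, in_dim=2, n_classes=4):
        self.mu = rng.normal(scale=0.1, size=(in_dim, n_classes))
        self.log_sigma = np.full((in_dim, n_classes), -2.0)

    def sample_weights(self):
        # Reparameterization: w = mu + sigma * eps, eps ~ N(0, I).
        eps = rng.standard_normal(self.mu.shape)
        return self.mu + np.exp(self.log_sigma) * eps

    def predict(self, x, n_samples=20):
        # Bayesian ensemble: average the softmax output over weight samples,
        # i.e. run n_samples instances of the network per prediction.
        return np.mean(
            [softmax(x @ self.sample_weights()) for _ in range(n_samples)],
            axis=0,
        )

# Demodulate a few noisy pilot observations.
labels = rng.integers(0, 4, size=8)
x = SYMBOLS[labels] + 0.1 * rng.standard_normal((8, 2))
probs = BayesianLinearDemodulator().predict(x)
print(probs.shape)                          # (8, 4)
print(np.allclose(probs.sum(axis=1), 1.0))  # soft decisions sum to 1
```

In the paper's setting, the posterior parameters (here `mu`, `log_sigma`) would be adapted from few pilots starting from a meta-learned prior; the sketch only shows why the ensemble's averaged soft scores can be better calibrated than a single deterministic network's output.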

