Bayesian Attention Networks for Data Compression

03/29/2021
by   Michael Tetelman, et al.
A lossless data compression algorithm based on Bayesian Attention Networks is derived from first principles. Bayesian Attention Networks are defined by introducing an attention factor per training-sample loss, a function of two sample inputs: one from the training sample and one from the prediction sample. Using a sharpened Jensen's inequality, we show that the attention factor is completely determined by a correlation function of the two samples with respect to the model weights. Because of the attention factor, the solution for a prediction sample is mostly determined by the few training samples that are correlated with that prediction sample. Finding a specific solution per prediction sample couples the training and the prediction. To make the approach practical, we introduce a latent space: each prediction sample is mapped into the latent space, and all possible solutions are learned as a function of the latent space, along with attention learned as a function of the latent space and a training sample. The latent space plays the role of a context representation, with a prediction sample defining a context and the learned context-dependent solution used for the prediction.
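The core idea of an attention factor weighting each training sample's loss according to its correlation with the prediction sample can be sketched in a few lines. The sketch below is purely illustrative and not the paper's method: `attention_factor` stands in for the correlation function of the two samples with respect to the model weights (here replaced by a hypothetical RBF kernel on the inputs), and `attention_weighted_loss` shows how per-sample losses would be combined for one prediction context.

```python
import numpy as np

def attention_factor(x_train, x_pred, scale=1.0):
    """Toy attention factor between a training input and a prediction input.

    In the paper the factor is defined by a correlation function of the two
    samples w.r.t. the model weights; here an RBF kernel on the raw inputs
    is substituted purely for illustration (hypothetical choice).
    """
    d = np.linalg.norm(np.asarray(x_train, float) - np.asarray(x_pred, float))
    return float(np.exp(-scale * d ** 2))

def attention_weighted_loss(train_losses, attention):
    """Combine per-sample training losses using normalized attention factors.

    Samples weakly correlated with the prediction sample get attention near
    zero, so the effective loss is dominated by the few correlated samples.
    """
    attention = np.asarray(attention, dtype=float)
    train_losses = np.asarray(train_losses, dtype=float)
    weights = attention / attention.sum()   # normalize the attention factors
    return float(np.dot(weights, train_losses))
```

A usage example: for a prediction sample near one training sample and far from another, the weighted loss is dominated by the nearby (correlated) sample, which is the qualitative behavior the abstract describes.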

