Critical Slowing Down Near Topological Transitions in Rate-Distortion Problems

03/03/2021
by Shlomi Agmon, et al.

In Rate Distortion (RD) problems one seeks reduced representations of a source that meet a target distortion constraint. Such optimal representations undergo topological transitions at certain critical rate values, where their cardinality or dimensionality changes. We study the convergence time of the Arimoto-Blahut alternating projection algorithms used to solve such problems, near those critical points, in both the Rate Distortion and Information Bottleneck settings. We argue that the algorithms suffer from Critical Slowing Down, a diverging number of iterations required for convergence, near the critical points. This phenomenon can have theoretical and practical implications for both Machine Learning and Data Compression problems.
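For concreteness, here is a minimal Python sketch of the Blahut-Arimoto iteration at a fixed trade-off slope beta, with a sweep that counts iterations until convergence. It illustrates the phenomenon rather than reproducing the authors' implementation; the source distribution, distortion matrix, tolerance, and the helper name ba_iterations are all chosen here for the example. For a biased binary source under Hamming distortion, the optimal reproduction alphabet shrinks from two symbols to one at the critical slope beta_c = ln(p/(1-p)), so sweeping beta through that value probes the transition.

    import numpy as np

    def ba_iterations(p_x, d, beta, tol=1e-12, max_iter=10**6):
        """Run Blahut-Arimoto at slope beta; return iterations until convergence."""
        m = d.shape[1]
        q = np.full(m, 1.0 / m)                      # reproduction marginal q(x_hat)
        for it in range(1, max_iter + 1):
            w = q * np.exp(-beta * d)                # q(x_hat) * exp(-beta * d(x, x_hat))
            cond = w / w.sum(axis=1, keepdims=True)  # projection 1: p(x_hat | x)
            q_new = p_x @ cond                       # projection 2: updated marginal
            if np.abs(q_new - q).max() < tol:        # fixed point reached
                return it
            q = q_new
        return max_iter

    p_x = np.array([0.7, 0.3])      # biased binary source
    d = 1.0 - np.eye(2)             # Hamming distortion
    beta_c = np.log(0.7 / 0.3)      # critical slope, about 0.847 nats
    for offset in (-0.2, -0.05, -0.01, 0.01, 0.05, 0.2):
        beta = beta_c + offset
        print(f"beta = {beta:.3f}: {ba_iterations(p_x, d, beta)} iterations")

The closer beta is to beta_c, the more iterations the loop needs: the contraction rate of the update approaches one at the critical point, so the number of iterations required to reach the tolerance grows without bound. This is the Critical Slowing Down the paper analyzes.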
