Learning Stationary Markov Processes with Contrastive Adjustment

03/09/2023
by Ludvig Bergenstråhle, et al.

We introduce a new optimization algorithm, termed contrastive adjustment, for learning Markov transition kernels whose stationary distribution matches the data distribution. Contrastive adjustment is not restricted to a particular family of transition distributions and can be used to model data in both continuous and discrete state spaces. Inspired by recent work on noise-annealed sampling, we propose a particular transition operator, the noise kernel, that can trade mixing speed for sample fidelity. We show that contrastive adjustment is highly valuable in human-computer design processes, as the stationarity of the learned Markov chain enables local exploration of the data manifold and makes it possible to iteratively refine outputs by human feedback. We compare the performance of noise kernels trained with contrastive adjustment to current state-of-the-art generative models and demonstrate promising results on a variety of image synthesis tasks.
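The abstract does not spell out the update rule, so the sketch below is an illustration rather than the paper's algorithm: it demonstrates the stated objective, learning a Markov transition kernel whose stationary distribution matches the data distribution, on a toy discrete state space. The multiplicative positive/negative-phase update, the step size `lr`, and all variable names here are our own assumptions.

```python
import numpy as np

# Toy illustration of the stated objective: learn a row-stochastic transition
# kernel K[s, s'] = P(s' | s) whose stationary distribution matches p_data.
# The multiplicative update below is a hypothetical stand-in, not the paper's
# contrastive adjustment rule.

rng = np.random.default_rng(0)
n_states = 5
p_data = rng.dirichlet(np.ones(n_states))            # target data distribution
K = rng.dirichlet(np.ones(n_states), size=n_states)  # random initial kernel

state, lr = 0, 0.05
for _ in range(20000):
    pos = rng.choice(n_states, p=p_data)       # positive phase: a data state
    state = rng.choice(n_states, p=K[state])   # negative phase: one chain step
    # Raise kernel mass flowing into data states, lower mass flowing into the
    # chain's own states, then renormalize the rows. Roughly, mass into state s
    # keeps growing only while the chain visits s less often than the data does.
    K[:, pos] *= 1.0 + lr
    K[:, state] *= 1.0 - lr
    K /= K.sum(axis=1, keepdims=True)

# The stationary distribution is the left eigenvector of K with eigenvalue 1.
evals, evecs = np.linalg.eig(K.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print("data distribution :", np.round(p_data, 3))
print("chain stationary  :", np.round(pi, 3))
```

Because the state space here is tiny, the chain mixes almost immediately; the noise kernel proposed in the paper concerns the analogous trade-off between mixing speed and sample fidelity in high-dimensional spaces.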
