
One-element Batch Training by Moving Window

by Przemysław Spurek et al.
Jagiellonian University

Several deep models, especially generative ones, compare samples from two distributions in their cost functions (e.g. WAE-like autoencoder models, set-processing deep networks, etc.). With such methods one cannot train the model directly on small (in the extreme, one-element) batches, because the loss requires comparing samples. We propose a generic approach to training such models with one-element mini-batches. The idea is to split the batch in the latent space into two parts: previous, i.e. historical, elements, used for latent-space distribution matching, and the current ones, used both for the latent distribution computation and for the minimization process. Due to the smaller memory requirements, this allows training networks on higher-resolution images than in the classical approach.
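The mechanism described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration (not the authors' implementation): a buffer of historical latent codes is combined with the latent code of the current one-element batch, and the pooled sample is compared against draws from the prior with a distribution-matching term (here a biased squared-MMD estimate with an RBF kernel, standing in for the WAE-style divergence). The `LatentWindow` class and `rbf_mmd2` function are illustrative names, not from the paper.

```python
import numpy as np

def rbf_mmd2(x, y, gamma=1.0):
    """Biased squared-MMD estimate between samples x and y (RBF kernel)."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class LatentWindow:
    """Moving window of historical latent codes (hypothetical helper)."""
    def __init__(self, size):
        self.size = size
        self.buf = []

    def combined(self, current):
        # Pool the stored history with the current one-element batch so a
        # sample-based divergence can be estimated at all.
        return np.concatenate(self.buf + [current], axis=0) if self.buf else current

    def push(self, current):
        self.buf.append(current)
        self.buf = self.buf[-self.size:]  # keep only the most recent codes

rng = np.random.default_rng(0)
window = LatentWindow(size=64)
for step in range(64):
    z = rng.normal(size=(1, 8))          # latent code of a one-element batch
    sample = window.combined(z)          # history + current element
    prior = rng.normal(size=(sample.shape[0], 8))
    loss = rbf_mmd2(sample, prior)       # distribution-matching term
    window.push(z)
```

In an actual training loop, gradients would flow only through the current element's latent code, with the historical codes treated as constants; the sketch omits autograd to keep the windowing idea in focus.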



