The Utility of Decorrelating Colour Spaces in Vector Quantised Variational Autoencoders

09/30/2020
by Arash Akbarinia, et al.

Vector quantised variational autoencoders (VQ-VAE) are characterised by three main components: 1) an encoder that maps visual data to latent features, 2) a quantiser that assigns each feature to one of k vectors in the so-called embedding space, and 3) a decoder that reconstructs the image from the quantised features. While images are often represented in the RGB colour space, the organisation of colours in other spaces offers interesting properties; e.g., CIE L*a*b* decorrelates chromaticity into opponent axes. In this article, we propose colour space conversion, a simple quasi-unsupervised task, to force a network to learn structured representations. To this end, we trained several instances of VQ-VAE whose input is an image in one colour space and whose output is in another, e.g. from RGB to CIE L*a*b* (in total, five colour spaces were considered). We examined the finite embedding space of the trained networks in order to disentangle the colour representation in VQ-VAE models. Our analysis suggests that certain vectors encode hue and others luminance information. We further evaluated the quality of the reconstructed images at low level, using pixel-wise colour metrics, and at high level, by feeding them to image classification and scene segmentation networks. We conducted experiments on three benchmark datasets: ImageNet, COCO and CelebA. Our results show that, with respect to the baseline network (whose input and output are both RGB), colour conversion to decorrelated spaces obtains a 1-2 Delta-E lower colour difference and 5-10 units higher accuracy in classification and segmentation; moreover, the learnt embedding space is easier to interpret in colour-opponent models.
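The decorrelated target space mentioned above, CIE L*a*b*, can be computed from sRGB in closed form (gamma expansion, a linear map to XYZ, then the CIE nonlinearity). The sketch below is a minimal, self-contained NumPy implementation of the standard sRGB (D65) to L*a*b* conversion; it is illustrative only, and is not the authors' training pipeline.

```python
import numpy as np

def srgb_to_linear(rgb):
    """Undo the sRGB gamma; rgb is float in [0, 1]."""
    return np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)

def rgb_to_lab(rgb_u8):
    """Convert an sRGB image (H x W x 3, uint8) to CIE L*a*b* (D65 white)."""
    rgb = srgb_to_linear(rgb_u8.astype(np.float64) / 255.0)
    # Linear sRGB -> CIE XYZ (standard D65 matrix).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = rgb @ m.T
    # Normalise by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # CIE nonlinear compression (cube root with a linear toe).
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116.0 * f[..., 1] - 16.0          # lightness
    a = 500.0 * (f[..., 0] - f[..., 1])   # green-red opponent axis
    b = 200.0 * (f[..., 1] - f[..., 2])   # blue-yellow opponent axis
    return np.stack([L, a, b], axis=-1)
```

Note how luminance (L*) and the two chromatic opponent axes (a*, b*) end up in separate channels, which is the decorrelation property the paper exploits.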
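The quantisation step of a VQ-VAE (component 2 above) replaces each encoder output vector with its nearest entry in a finite codebook of k embeddings. A minimal sketch of that nearest-neighbour assignment, assuming flattened (N, D) latents and a (K, D) codebook:

```python
import numpy as np

def quantise(z_e, codebook):
    """Assign each encoder output to its nearest codebook vector.

    z_e: (N, D) encoder outputs; codebook: (K, D) embedding vectors.
    Returns the quantised latents (N, D) and the chosen indices (N,).
    """
    # Squared Euclidean distance between every latent and every code.
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)      # index of the nearest embedding per latent
    return codebook[idx], idx
```

It is these discrete indices (and the vectors they select) that the paper inspects to ask which codebook entries encode hue and which encode luminance.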

