Color Constancy Convolutional Autoencoder

06/04/2019
by Firas Laakom et al.

In this paper, we study the importance of pre-training for generalization in the color constancy problem. We propose two novel approaches based on convolutional autoencoders: an unsupervised pre-training algorithm that fine-tunes the encoder, and a semi-supervised pre-training algorithm that uses a novel composite loss function. These approaches mitigate the data scarcity problem and achieve results competitive with the state of the art on the ColorChecker RECommended dataset while requiring far fewer parameters. We further study the over-fitting phenomenon on the recently introduced version of the INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which contains both field and non-field scenes acquired by three different camera models.
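The abstract does not spell out how the composite loss combines its terms, but a common formulation in color constancy work pairs an unsupervised reconstruction term with the standard angular-error metric between the estimated and ground-truth illuminants. The sketch below illustrates that idea; the weighting `alpha` and the exact combination are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def angular_error(est, gt):
    # Standard color-constancy metric: the angle (in degrees) between the
    # estimated and ground-truth illuminant vectors in RGB space.
    cos_sim = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

def composite_loss(recon, img, est_illum, gt_illum, alpha=0.5):
    # Hypothetical semi-supervised objective: a weighted sum of the
    # autoencoder's reconstruction MSE (unsupervised term) and the
    # illuminant angular error (supervised term).
    reconstruction = np.mean((recon - img) ** 2)
    angular = angular_error(est_illum, gt_illum)
    return alpha * reconstruction + (1 - alpha) * angular
```

With a perfect reconstruction and a correctly estimated illuminant, both terms vanish and the loss is zero; scaling either illuminant vector leaves the angular term unchanged, which is why angular error is preferred over Euclidean distance in this setting.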

Related research

06/21/2021 · Pre-training also Transfers Non-Robustness
Pre-training has enabled many state-of-the-art results on many tasks. In...

09/18/2023 · Image-Text Pre-Training for Logo Recognition
Open-set logo recognition is commonly solved by first detecting possible...

03/21/2017 · INTEL-TUT Dataset for Camera Invariant Color Constancy Research
In this paper, we provide a novel dataset designed for camera invariant ...

05/21/2020 · Text-to-Text Pre-Training for Data-to-Text Tasks
We study the pre-train + fine-tune strategy for data-to-text tasks. Fine...

12/20/2014 · An Analysis of Unsupervised Pre-training in Light of Recent Advances
Convolutional neural networks perform well on object recognition because...

06/11/2019 · Bag of Color Features For Color Constancy
In this paper, we propose a novel color constancy approach, called Bag o...

12/24/2019 · Cascading Convolutional Color Constancy
Regressing the illumination of a scene from the representations of objec...
