Rehabilitating the Color Checker Dataset for Illuminant Estimation

05/30/2018
by Ghalia Hemrit, et al.

In a previous work, it was shown that there is a curious problem with the benchmark Color Checker dataset for illuminant estimation. To wit, this dataset has at least three different sets of ground-truths. Typically, a single ground-truth is used to evaluate a single algorithm. But then different algorithms, whose performance is measured with respect to different ground-truths, are compared against each other and ranked. This makes no sense. In fact, it is nonsense. We show in this paper that there are also errors in how each ground-truth set was calculated. As a result, all performance rankings based on the Color Checker dataset - and there are scores of these - are ill-founded. In this paper, we re-generate a new 'recommended' set of ground-truths based on the calculation methodology described by Shi and Funt. We then review the performance evaluation of a range of illuminant estimation algorithms. Compared with the legacy ground-truths, we find that the differences in how algorithms perform can be large, with many local rankings of algorithms being reversed. Finally, we draw the reader's attention to our new 'open' data repository which, we hope, will allow the Color Checker dataset to be rehabilitated and, once again, to become a useful benchmark for illuminant estimation algorithms.
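The abstract does not spell out the evaluation metric, but rankings on the Color Checker dataset are conventionally computed from the angular ('recovery') error between an algorithm's estimated illuminant and the ground-truth illuminant, summarized per dataset by statistics such as the mean or median. Because this error depends directly on the ground-truth vector, swapping ground-truth sets shifts every algorithm's error statistics, which is why comparisons across ground-truths are unsound. A minimal sketch of the metric (the function name and NumPy usage are our own, not from the paper):

```python
import numpy as np

def angular_error(estimate, ground_truth):
    """Angular ('recovery') error in degrees between an estimated
    illuminant RGB vector and the ground-truth illuminant RGB vector.
    Scale-invariant: only the direction (chromaticity) matters."""
    e = np.asarray(estimate, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    cos_theta = np.dot(e, g) / (np.linalg.norm(e) * np.linalg.norm(g))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Same chromaticity at a different intensity gives zero error
assert abs(angular_error([1.0, 1.0, 1.0], [2.0, 2.0, 2.0])) < 1e-9
```

Note the scale invariance: an illuminant estimate is only defined up to brightness, so two ground-truth sets that differ merely in intensity scaling would agree under this metric, whereas the discrepancies the paper describes change the chromaticity and hence the reported errors.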
