Inferring Human Observer Spectral Sensitivities from Video Game Data

07/01/2020 · by Chatura Samarakoon, et al. · University of Cambridge

As modern displays adopt primaries with increasingly narrow bandwidths, observer metameric breakdown is becoming a significant factor. This can lead to discrepancies in the perceived color between different observers. If the spectral sensitivities of a user's eyes could be easily measured, next generation displays would be able to adjust the displayed content to ensure that colors are perceived as intended by a given observer. We present a mathematical framework for calculating the spectral sensitivities of a given human observer using a color matching experiment that could be carried out on a mobile phone display. This forgoes the need for expensive in-person experiments and allows system designers to easily calibrate displays to match the user's vision, in the wild. We show how to use sRGB pixel values along with a simple display model to calculate plausible color matching functions (CMFs) for the users of a given display device (e.g., a mobile phone). We evaluate the effect of different regularization functions on the shape of the calculated CMFs and show that a sum of squares regularizer predicts smooth and qualitatively realistic CMFs.


1 Introduction

Figure captions: (1) In Specimen, the players pick which colored blob (‘specimen’) matches the background color. (2) Traditional color matching experiments involve changing the intensities of long, medium, and short wavelength primaries until the mixture (right) is perceptually identical to the monochromatic light (left). (3) Due to the narrow emission bandwidths, there is limited information at some wavelengths (e.g., around 500 nm and 580 nm).

Color matching experiments have been the foundation for modern colorimetry since the pioneering work by Maxwell, Young, and Helmholtz [7]. These experiments measure an individual's cone spectral sensitivities (also referred to as cone fundamentals). They involve presenting a human participant with two light sources and asking them to modulate the mixture of primaries (e.g., red, green, and blue) in one light source until it visually matches a second light source comprising a single wavelength (see Figure 1) [4, 16]. From this, the required proportions of each of the red, green, and blue primaries to match each target wavelength can be obtained. The three functions for red, green, and blue comprise the individual's color matching functions (CMFs). They are usually given in the space of imaginary primaries X, Y, and Z to make the CMFs strictly positive. The CMFs can be used to numerically describe perceptual color equivalence. They can also be converted to the cone fundamentals through a linear transformation. As might be expected, these in-person user studies are expensive and time consuming to conduct.

Modern displays use increasingly narrow primaries and have been shown to be more prone to observer metameric breakdown than displays with wideband primaries [6]. Observer metameric breakdown is the phenomenon where two different observers disagree on what ‘color’ a particular spectrum is. Thus far, standardization of color spaces and associated primaries has been used as a way to ensure color constancy across different media (e.g., displays, print media, etc.). However, observer metameric breakdown cannot be as easily circumvented. One solution is to perform a secondary display calibration to suit the sensitivities of the user's eyes. This could work for devices that primarily have a single user (e.g., mobile phones, laptops).

In this work, we explore a framework for characterizing a user’s spectral sensitivities without using an in-person experiment and evaluate the results using the Specimen dataset [12].

1.1 The Specimen dataset

In 2015, PepRally released Specimen, a color matching game for iOS [3]. The game involves the players picking which color blob (‘specimen’) matches the background color (see Figure 1). If the player chooses the correct specimen, they are presented with a new background color. Incorrect choices reset the bonus streak. This repeats until all the specimens have been matched correctly. The game keeps a log of every choice a player makes through in-game analytics, recording the chosen color and the correct (background) color as sRGB values, along with additional metadata such as the device model, time, and an anonymized user ID. The game sends the anonymized data to an analytics facility using Yahoo Flurry. The aggregate analytics dataset contains data from 41,000 players, totaling 489,000 play sessions and 28.6 million color matches.
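Conceptually, each logged selection event can be represented by a small record like the following sketch; the field names are our own illustrative choices, not the actual schema of the Specimen analytics logs.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MatchEvent:
    """One color-selection event from the in-game analytics log
    (field names are hypothetical)."""
    user_id: str                              # anonymized player identifier
    device_model: str                         # e.g., "iPhone X"
    timestamp: float                          # time of the selection
    chosen_rgb: Tuple[float, float, float]    # sRGB of the tapped specimen, in [0, 1]
    target_rgb: Tuple[float, float, float]    # sRGB of the background (target) color
    correct: bool                             # whether the selection matched the target
```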

We present the results of a preliminary investigation into using these RGB color matching data to extract the players' CMFs. We show that it is possible to devise a numerical method to extract the CMFs.

1.2 Contributions

In this work we present the following contributions.

  1. A mathematical framework for extracting the color vision cone response of a particular human observer (the so-called cone fundamentals) from data obtained from a popular color matching game [3, 8] in the iOS app store.

  2. An implementation of the mathematical framework using TensorFlow's optimizer backend.

  3. A demonstration that the implementation of the framework, applied to data from the Specimen dataset and across four different priors (uniform, Gaussian triple, and the 2° and 10° standard observers), provides meaningful color matching functions.

2 Problem Definition

Let $(R_c, G_c, B_c)$ be an incorrectly chosen color and $(R_t, G_t, B_t)$ be the correct target color, both in the sRGB color space with values normalized to the range $[0, 1]$.

Figure captions: (1) The shapes of the red, green, and blue primary emissions of the iPhone X display were extracted from the white point spectrum. (2) Flow chart showing the architecture used to learn the spectral correction term.

Also, let $r(\lambda)$, $g(\lambda)$, and $b(\lambda)$ be the spectra of the chosen display's red, green, and blue primaries. The spectrum of the chosen color, $S_c(\lambda)$, and of the target color, $S_t(\lambda)$, can be calculated using Equations 1 and 2 respectively.

$$S_c(\lambda) = R_c\, r(\lambda) + G_c\, g(\lambda) + B_c\, b(\lambda) \qquad (1)$$
$$S_t(\lambda) = R_t\, r(\lambda) + G_t\, g(\lambda) + B_t\, b(\lambda) \qquad (2)$$
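As a concrete illustration, a minimal Python/NumPy sketch of Equations 1 and 2; the Gaussian-shaped primaries below are placeholders, not the measured iPhone X spectra.

```python
import numpy as np

# Wavelength grid and placeholder primary spectra; in the paper these come
# from the iPhone X white-point measurement, here we use illustrative
# Gaussian-shaped emissions purely as stand-ins.
wavelengths = np.arange(390.0, 731.0, 1.0)                  # nm

def _gauss(mu, sigma):
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

r_spd, g_spd, b_spd = _gauss(610, 15), _gauss(530, 20), _gauss(460, 15)

def color_to_spectrum(rgb, r_spd=r_spd, g_spd=g_spd, b_spd=b_spd):
    """Equations 1-2: the emitted spectrum is the sum of the primary
    spectra, each scaled by its normalized sRGB channel value (the paper's
    simple display model uses the sRGB values directly as weights)."""
    R, G, B = rgb
    return R * r_spd + G * g_spd + B * b_spd

chosen_spectrum = color_to_spectrum((0.8, 0.3, 0.1))   # spectrum of a chosen color
target_spectrum = color_to_spectrum((0.7, 0.4, 0.1))   # spectrum of a target color
```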

Let $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ be the CMFs for the CIE 2° or 10° standard observers. The CIE tristimulus values for the chosen spectrum are calculated as,

$$X_c = \int S_c(\lambda)\, \bar{x}(\lambda)\, d\lambda \qquad (3)$$
$$Y_c = \int S_c(\lambda)\, \bar{y}(\lambda)\, d\lambda \qquad (4)$$
$$Z_c = \int S_c(\lambda)\, \bar{z}(\lambda)\, d\lambda \qquad (5)$$

Similarly, $[X_t, Y_t, Z_t]$ are obtained for the target spectrum $S_t(\lambda)$.
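On a sampled wavelength grid, the tristimulus integrals in Equations 3 to 5 reduce to weighted sums; a sketch continuing the previous example, where `cmfs` would be the standard observer table resampled to the same wavelengths.

```python
def spectrum_to_xyz(spectrum, cmfs, wavelengths=wavelengths):
    """Equations 3-5 as a discrete integral: weight the spectrum by each
    CMF and sum over wavelength. `cmfs` is an (N, 3) array holding the
    x-bar, y-bar, z-bar curves sampled on the same wavelength grid."""
    d_lambda = np.gradient(wavelengths)             # per-sample wavelength step
    return np.array([
        np.sum(spectrum * cmfs[:, 0] * d_lambda),   # X
        np.sum(spectrum * cmfs[:, 1] * d_lambda),   # Y
        np.sum(spectrum * cmfs[:, 2] * d_lambda),   # Z
    ])
```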

Let $\bar{x}_u(\lambda)$, $\bar{y}_u(\lambda)$, and $\bar{z}_u(\lambda)$ be the individual CMFs for the user. Also, let $\Delta(\lambda) = (\Delta_x(\lambda), \Delta_y(\lambda), \Delta_z(\lambda))$ be a correction term with the same domain as the standard CMFs such that $\bar{x}_u = \bar{x} + \Delta_x$, $\bar{y}_u = \bar{y} + \Delta_y$, and $\bar{z}_u = \bar{z} + \Delta_z$. The correction term represents the deviation of the user's vision from the standard observer color matching functions.

The user adjusted tristimulus values can be calculated using Equations 3 to 5 with the individual CMFs, $(\bar{x}_u, \bar{y}_u, \bar{z}_u)$, instead of the standard CMFs, $(\bar{x}, \bar{y}, \bar{z})$. Let $[X_c^u, Y_c^u, Z_c^u]$ and $[X_t^u, Y_t^u, Z_t^u]$ be these user adjusted tristimulus values for the chosen spectrum and the target spectrum respectively.

If $[X_c^u, Y_c^u, Z_c^u] = [X_t^u, Y_t^u, Z_t^u]$, this implies that the colors are perceptually similar to the user. The target is to find the correction term $\Delta(\lambda)$ that makes $[X_c^u, Y_c^u, Z_c^u] = [X_t^u, Y_t^u, Z_t^u]$. This can be posed as the following minimization problem,

$$\arg\min_{\Delta} \; \big\| [X_c^u, Y_c^u, Z_c^u] - [X_t^u, Y_t^u, Z_t^u] \big\|^2 \qquad (6)$$

To prevent the correction term from causing large deviations, we explored several regularization functions. The regularized optimization problem is given by the following equation,

$$\arg\min_{\Delta} \; \big\| [X_c^u, Y_c^u, Z_c^u] - [X_t^u, Y_t^u, Z_t^u] \big\|^2 + \alpha R(\Delta) \qquad (7)$$

where $R$ is a regularization function and $\alpha$ is its weight.

Equation 7 gives the optimization process for a single color selection event. However, the optimum value of $\Delta(\lambda)$ should be valid across all the color selection events for a given user. Thus, for the optimization, we consider all of a user's selection events in a batch and calculate the mean value of the mismatch term across this batch before evaluating the cost function given in Equation 7. By minimizing Equation 7, we can infer the individual CMFs from the RGB values of the mismatched color pairs.
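A minimal TensorFlow sketch of this batched objective, under our reading of Equation 7; the tensor shapes, names, and the 0.05 default weight (the value used later in the results) are our own choices.

```python
import tensorflow as tf

def batched_cost(delta, cmf_prior, chosen_spectra, target_spectra,
                 d_lambda, regularizer, reg_weight=0.05):
    """Equation 7 evaluated over a batch of a user's selection events.

    delta          : (N, 3) trainable correction term
    cmf_prior      : (N, 3) standard observer (or other) CMF prior
    chosen_spectra : (B, N) spectra of the incorrectly chosen colors
    target_spectra : (B, N) spectra of the target (background) colors
    d_lambda       : scalar wavelength step for the discrete integral
    """
    cmfs = cmf_prior + delta                                  # individual CMFs
    xyz_chosen = tf.matmul(chosen_spectra, cmfs) * d_lambda   # (B, 3) tristimulus
    xyz_target = tf.matmul(target_spectra, cmfs) * d_lambda   # (B, 3) tristimulus
    mismatch = tf.reduce_mean(
        tf.reduce_sum(tf.square(xyz_chosen - xyz_target), axis=1))
    return mismatch + reg_weight * regularizer(delta)
```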

Figure captions: (1) Without regularization, the optimization overfits and results in calculated values that are physically impossible (correction (above), CMFs (below)). (2) Max absolute regularization leads to slightly smoother curves than non-regularized optimization (correction (above), CMFs (below)).

3 Methodology

The Specimen app stopped collecting data in 2018. At the time, the only iOS phone with an OLED display was the iPhone X, which we used as the target platform. We use the fact that OLED displays have clearly delineated primary emission spectra in our analysis. We used the white-point spectral measurements of the display carried out by Raymond Soneira [2] as a starting point and manually isolated the shapes of the three primary emissions from the combined white spectrum. We used this as the display model.

In the Specimen dataset, we found 141 users who played on an iPhone X, totaling 21,250 color matches. From that set of players, we chose the player with the most incorrect color matches (344 incorrect matches out of 2,042 total matches) as the individual whose CMFs are to be learnt in this preliminary report.
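The selection step amounts to a simple filter over the analytics log; a sketch assuming the data sits in a pandas DataFrame with hypothetical column names matching the record structure sketched earlier.

```python
import pandas as pd

# Assume one row per color-selection event, with hypothetical column names
# (the real Specimen log schema may differ).
matches = pd.read_csv("specimen_matches.csv")        # illustrative path

iphone_x = matches[matches["device_model"] == "iPhone X"]
mismatches = iphone_x[iphone_x["correct"] == False]  # incorrect selections only

# Choose the player with the most incorrect matches as the study subject.
subject_id = mismatches["user_id"].value_counts().idxmax()
subject_mismatches = mismatches[mismatches["user_id"] == subject_id]
```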

We calculated the emission spectra for each mismatched chosen and target color pair by multiplying the normalized R, G, and B values with the corresponding subpixel emission spectra.

We then used TensorFlow's optimizer backend with the Adam optimizer [5], running for up to 10,000 iterations, to find the CMF correction term required to make the chosen spectrum and the target spectrum perceptually identical.
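A sketch of the corresponding training loop, assuming `cmf_prior`, `chosen_spectra`, `target_spectra`, and `d_lambda` are TensorFlow tensors prepared as in the earlier sketches; the learning rate is an illustrative choice.

```python
delta = tf.Variable(tf.zeros_like(cmf_prior))        # correction term, starts at zero
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

sum_of_squares = lambda d: tf.reduce_sum(tf.square(d))   # one of the regularizers listed below

for step in range(10_000):                           # up to 10,000 iterations
    with tf.GradientTape() as tape:
        loss = batched_cost(delta, cmf_prior, chosen_spectra,
                            target_spectra, d_lambda, sum_of_squares)
    grads = tape.gradient(loss, [delta])
    optimizer.apply_gradients(zip(grads, [delta]))

individual_cmfs = cmf_prior + delta                  # learned user CMFs
```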

We evaluated four priors for the CMFs; namely, the CIE 2° and 10° standard observer CMFs [1], a uniform prior, and a Gaussian mixture prior. For the Gaussian mixture, we used the peak wavelengths and intensities of the standard observer CMFs.
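One plausible construction of the Gaussian mixture prior, continuing with NumPy from the earlier sketches and assuming each CMF is replaced by a single Gaussian matching the standard observer's peak wavelength and intensity; the width is an assumed value, not one taken from the paper.

```python
def gaussian_prior(std_cmfs, wavelengths, sigma=40.0):
    """Replace each standard observer CMF with a single Gaussian whose peak
    wavelength and height match the original curve. `sigma` (nm) is an
    assumed width for illustration only."""
    prior = np.zeros_like(std_cmfs)
    for i in range(3):
        peak = np.argmax(std_cmfs[:, i])
        mu, amplitude = wavelengths[peak], std_cmfs[peak, i]
        prior[:, i] = amplitude * np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)
    return prior
```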

Additionally, we used the following regularizers (a code sketch of these follows the list):

  1. Max of absolute values: $\max_\lambda |\Delta(\lambda)|$

  2. Mean of absolute values: $\mathrm{mean}_\lambda\, |\Delta(\lambda)|$

  3. Sum of absolute values [14]: $\sum_\lambda |\Delta(\lambda)|$

  4. Root mean squares: $\sqrt{\mathrm{mean}_\lambda\, \Delta(\lambda)^2}$

  5. Mean of squares: $\mathrm{mean}_\lambda\, \Delta(\lambda)^2$

  6. Sum of squares [15]: $\sum_\lambda \Delta(\lambda)^2$
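Each regularizer reduces the correction term to a scalar penalty; a TensorFlow sketch of the six functions, applied to the full (N, 3) correction tensor.

```python
def max_abs(delta):       # 1. max of absolute values
    return tf.reduce_max(tf.abs(delta))

def mean_abs(delta):      # 2. mean of absolute values
    return tf.reduce_mean(tf.abs(delta))

def sum_abs(delta):       # 3. sum of absolute values (L1 / lasso)
    return tf.reduce_sum(tf.abs(delta))

def root_mean_sq(delta):  # 4. root mean squares
    return tf.sqrt(tf.reduce_mean(tf.square(delta)))

def mean_sq(delta):       # 5. mean of squares
    return tf.reduce_mean(tf.square(delta))

def sum_sq(delta):        # 6. sum of squares (L2 / Tikhonov)
    return tf.reduce_sum(tf.square(delta))
```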

We use the XYZ color matching functions instead of the cone fundamentals because doing so simplifies the optimization process; as outlined before, the cone fundamentals can be extracted from the CMFs using a known linear transformation [13].

4 Results

4.1 Standard Observer Priors

(a) 2° standard observer CMFs
(b) 10° standard observer CMFs
Figure 1: Optimizing with the sum of squares regularizer shows that both the 2° and 10° standard observer CMFs have been adjusted to better explain the observed color confusions.

The Figures 2, 1, 5 and 5 on the sidebar show the qualitative results of the different regularizers for the standard observer prior. We were unable to calculate gradients when using the root mean squares regularizer and thus were unable to optimize with it. The first three regularizers (max, mean, and sum of absolute values) led to CMFs that have negative values and are implausible: the CIE XYZ color space is defined such that the CMFs are strictly positive. Both the mean of squares and sum of squares regularizers led to smooth CMFs, with the former achieving a smaller final cost.

Figure captions: (1) Mean absolute regularization causes peaky artifacts (correction (above), CMFs (below)). (2) The standard observer CMFs were approximated with three Gaussians. (3) The CMFs learnt with the Gaussian approximation are similar to those obtained when using the standard observer prior.

Overall, the sum of squares regularizer led to the smoothest and the most qualitatively realistic CMFs. Figures 1(a) and 1(b) show the results for the chosen user for both the 2° and 10° standard observer priors, using a weight of 0.05 on the regularizer term. The figures show that the training process is able to learn corrections for the standard observer CMFs that better explain the observed color confusions without resulting in unnatural CMF shapes.

4.2 Uniform Priors

With a uniform prior, all three CMFs start off with a fixed value across the range of wavelengths. However, the chosen color and the target color in the XYZ space are coupled together because the calculation of both values relies on the same base CMFs (see Figure 2). This led the optimization process to produce CMFs that remain uniform, which is incorrect. We also tried adjusting the values of the CMF prior to match the peak intensities of the standard observer CMFs, which produced a similarly unrealistic result. This result is not surprising but was worth evaluating to provide a complete picture.

4.3 Gaussian Priors

With the Gaussian prior, the standard observer CMFs were approximated with three Gaussians (see Figure 1). Figure 1 shows the solution obtained with the sum of squares regularizer with a 0.05 weight (an identical setup to Figures 1(a) and 1(b)). The results show a striking similarity to those of the standard observer prior, showing that our method produces consistent results.

5 Future work

As outlined before, this work presents a purely numerical framework for extracting CMFs. Although the results show that the optimization process results in plausible, smooth CMFs, there is no guarantee that they are physically accurate.

In addition to that, this work makes the following approximations that limit the accuracy of the results. From the perspective of the display, we use a simple model that assumes the spectrum of the real display emission can be obtained by using the normalized RGB values as multipliers to rescale the peak emissions of the primaries. We also use normalized intensity rather than spectral radiance in calculating the [X, Y, Z] tristimulus values. Furthermore, we minimize the perceived difference in the CIE XYZ space instead of the CIE LAB space, which is better at representing perceptual differences.

As a necessary extension to this “late breaking results” submission, we are exploring the following avenues to forgo the approximations and make the results physically accurate.

We need three things to make our system feasible; namely, a model that incorporates physical attributes of the human eye, a more accurate display model, and a user study to validate the results.

To achieve the first goal, we aim to find the correction term in cone fundamental space instead of the XYZ CMF space. This would allow us to incorporate optical properties of the eye more easily into the regularization term. In addition to that, we aim to carry out the optimization in the CIE LAB space instead of the CIE XYZ space.

To build a better model of the display, we aim to measure the variation of the display's emission spectrum using an optical spectrometer while varying the display's RGB values. This would allow us to create a model that accurately maps from the RGB space to the spectral space.

For the user study, we hope to carry out a traditional color matching experiment to measure the users' true CMFs. Following that, we aim to have the users play a version of the Specimen game and predict their CMFs. We can then use the measured CMFs to learn the regularization needed to ensure that the calculated CMFs match the measured CMFs. Finally, we can use this validated method to extract physically accurate CMFs from the Specimen dataset for the 141 players.

Figure captions: (1) Sum of squares regularization leads to the smoothest and most realistic looking CMFs (correction (above), CMFs (below)). (2) The mean of squares regularizer results in a smooth set of CMFs but shows more fluctuation around the CMF prior compared to the sum of squares regularizer (correction (above), CMFs (below)).

6 Related Work

As outlined before, the study of human vision using color matching experiments goes back to the 19th century with Maxwell, Young, and Helmholtz [7]. Modern day color matching experiments are based on Guild's [4] and Wright's [16] work in the 1920s. In their experiments, they used a set of monochromatic target sources and a controllable additive mixture of red, green, and blue sources.

Conceptually, our work is similar to the extent that we also use controllable red, green, and blue primaries, but in the form of pixels on an opto-electronic display. However, instead of having an independent target that we are trying to match, both the target and the match are coupled by the properties of the display. To our knowledge, no other work reported in the literature approaches this problem numerically and at this scale. We are able to carry out this analysis because we have access to the Specimen dataset, which is not in the public domain.

The methods we describe in this work could form the basis for new color transformations which trade display power dissipation for color fidelity [10]. Such color approximation optimizations could in turn be combined with power-saving I/O encoding techniques [11] or even inferring permissible color approximation from programming languages that permit programmers to specify accuracy constraints [9].

7 Acknowledgements

This research is supported by an Alan Turing Institute award TU/B/000096 under EPSRC grant EP/N510129/1. C. Samarakoon is supported by the EPSRC DTP Studentship award EP/N509620/1.

References

  • [1] U. Color and Vision Research Lab (2020) CVRL Colour Matching Functions Dataset. Cited by: §3.
  • [2] DisplayMate Technologies (2017) iPhone X OLED Display Technology Shoot-Out. Cited by: §3.
  • [3] E. Gorochow, C. Whitney, S. Randazzo, and PepRally (2015) Specimen. Cited by: item 1, §1.1.
  • [4] J. Guild (1925-02) The geometrical solution of colour mixture problems. Transactions of the Optical Society 26 (3), pp. 139–174. External Links: ISSN 1475-4878 Cited by: §1, §6.
  • [5] D. P. Kingma and J. L. Ba (2015-12) Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, External Links: 1412.6980 Cited by: §3.
  • [6] A. Sarkar, L. Blondé, P. Le Callet, F. Autrusseau, J. Stauder, and P. Morvan (2010-07) Modern displays: Why we see different colors, and what it means?. In 2010 2nd European Workshop on Visual Information Processing, EUVIP2010, pp. 1–6. External Links: ISBN 9781424472871 Cited by: §1.
  • [7] J. Schanda (Ed.) (2007-07) Colorimetry: Understanding the CIE System. John Wiley & Sons, Inc., Hoboken, NJ, USA. External Links: ISBN 9780470049044 Cited by: §1, §6.
  • [8] K. St Clair (2020-05) The science of colour is upending our relationship with screens. Wired UK. Cited by: item 1.
  • [9] P. Stanley-Marbell and D. Marculescu (2006-09) A Programming Model and Language Implementation for Concurrent Failure-Prone Hardware. In Proceedings of the 2nd Workshop on Programming Models for Ubiquitous Parallelism, PMUP ’06, pp. 44–49. Cited by: §6.
  • [10] P. Stanley-Marbell, V. Estellers, and M. Rinard (2016) Crayon: saving power through shape and color approximation on next-generation displays. In Proceedings of the Eleventh European Conference on Computer Systems, EuroSys ’16, pp. 11:1–11:17. Cited by: §6.
  • [11] P. Stanley-Marbell and M. Rinard (2016) Reducing serial i/o power in error-tolerant applications by efficient lossy encoding. In Proceedings of the 53rd Annual Design Automation Conference, DAC ’16, pp. 62:1–62:6. Cited by: §6.
  • [12] P. Stanley-Marbell and M. Rinard (2018-07) Perceived-Color Approximation Transforms for Programs that Draw. IEEE Micro 38 (4), pp. 20–29. External Links: ISSN 0272-1732 Cited by: §1.
  • [13] A. Stockman (2019-12) Cone fundamentals and CIE standards. Current Opinion in Behavioral Sciences 30, pp. 87–93. External Links: ISSN 23521546 Cited by: §3.
  • [14] R. Tibshirani (1996) Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58 (1), pp. 267–288. External Links: 11/73273, ISBN 0849320240, ISSN 00359246 Cited by: item 3.
  • [15] R. A. Willoughby (1979-04) Solutions of Ill-Posed Problems (A. N. Tikhonov and V. Y. Arsenin). SIAM Review 21 (2), pp. 266–267. External Links: ISSN 0036-1445 Cited by: item 6.
  • [16] W. D. Wright (1929) A re-determination of the trichromatic coefficients of the spectral colours. Transactions of the Optical Society 30 (4), pp. 141–164 (en). External Links: ISSN 14754878 Cited by: §1, §6.