Colour vision contributes significantly to our perception of the world by providing valuable information about the properties of objects and by facilitating their segmentation from each other and from the background. Its evolution might have been guided by ecologically important tasks such as collecting ripe fruits or spotting predators. Besides that, our brains have evolved to communicate the perception of colour through natural language. Colour terms are used extensively in our day-to-day life. For instance, we tend to describe objects by their colour names (e.g. pass me the blue book; look at that orange house). Moreover, we explicitly benefit from colours to facilitate various tasks (e.g. software programmers colour-code their source code to aid interpretation; pedestrians and drivers rely on colour-coded city traffic lights).
Consequently, any computer application seeking to interact intuitively with humans (e.g. visual search, image labelling, and content retrieval) needs to embody colour naming in its routine. Furthermore, numerous computer vision algorithms (such as scene segmentation, high-dynamic-range (HDR) imaging, target tracking, object recognition, and texture classification) can greatly benefit from the segmentation of an image into its constituent colours, either by improving their accuracy or by lowering their computational complexity. Despite the omnipresence of colour in our lives and the prominent role played by our perceptual machinery, only a handful of computational colour naming models have been developed, and even fewer attempt to incorporate our knowledge of the perceptual system.
Colour naming (also referred to here as “colour categorisation”) is a highly multidisciplinary topic. A large-scale linguistic survey by the anthropologists Berlin & Kay hinted at eleven basic colour terms – i.e. black, blue, brown, green, grey, orange, pink, purple, red, white, and yellow – that are shared across most developed languages and cultures. The universality of these colour terms has been challenged on the grounds of linguistic context. Nevertheless, it has been reconfirmed in various other studies [8, 34, 20] and to a certain extent explained by physiological evidence demonstrating that low-level mechanisms contribute to categorical colour perception prior to language acquisition. The present general consensus favours an intermediate, language-free, low-level colour perception stage supported by non-verbal cognitive experiments.
Colour naming might at first appear to be fully deterministic (indeed, a few computational models have taken this approach [35, 21]). However, Kay & McDaniel suggested that the semantics of basic colour terms are grounded in the perceptual input to the language-processing parts of the brain, and therefore the underlying visual mechanism behind colour naming must be modelled with continuous mathematics, i.e. fuzzy logic. This insight (also supported by psychophysics) implies that in practice every pixel has a degree of “belongingness” (from zero to one hundred per cent) to each colour category, which can be computed directly from the measured reflectance spectrum of a surface at that point.
Initial work on fuzzy models started with Lammens, who fitted the data collected by Berlin & Kay with variations of Gaussian functions. Mojsilovic continued this approach with a new perceptual colour metric. Seaborn et al. clustered psychophysical colour points with a k-means algorithm, while Benavente et al. tackled the problem by means of a triple-sigmoidal parametric model, in which a few lightness planes are sliced into different colour categories and the rest is approximated through interpolation. Contrary to previous algorithms that fit colour categories to psychophysically obtained focal colours, van de Weijer et al. proposed to learn colour names from real-world images using probabilistic latent semantic indexing. Our proposal to capture colour terms with geometrical shapes is fundamentally different from current methods: (i) we benefit from parametric modelling [13, 6] with the added advantage of partitioning the colour space directly into three-dimensional shapes rather than interpolating from two-dimensional planes; and (ii) unlike algorithms that learn every pixel independently through histograms with no explicit constraints on colour regions, we impose ellipsoidal shapes that act as natural restrictions on such colour regions.
Acknowledging the fact that the concept of colour is a product of our brain, it naturally follows that the best way to address colour naming is to model what we know physiologically and psychophysically about the human cortical machinery. For example, it is widely accepted that colour categorisation has been shaped by evolution and neonatal adaptation to break down an extremely complex world into cognitively tractable entities, reducing the nearly two million colours that can be distinguished perceptually to about thirty categories that can be recalled by average subjects. In particular, the eleven universal colour categories are unlikely to be arbitrary and possibly reflect ideal divisions of an irregularly shaped perceptual colour space. In a recent psychophysical experiment, Parraga & Akbarinia observed that in a chromatically opponent space the categorical frontiers between these eleven universal colours form ellipsoidal shapes, in line with the elliptical isoresponses of V1 neurons reported in a physiological study by Horwitz & Hass.
Following this rationale, in this paper we present a biologically-inspired colour naming model based on an “ideal” partitioning of colour-opponent space (as suggested by Regier et al.) through parsimonious ellipsoidal shapes (as revealed by psychophysics and physiology). We extend the work of  by: (i) demonstrating that the parameters of the ellipsoids and the growth ratio can be learnt more ecologically from segmented images; (ii) accounting for rotations around each axis for all ellipsoids; (iii) showing that it is straightforward to incorporate new colour terms within the new framework; (iv) prototyping means of adapting the ellipsoids to image content in order to account for the phenomenon of colour constancy; and (v) testing our model on real-world images.
2 Ellipsoidal colour categorisation model
In this section we: (i) review relevant physiological and psychophysical facts about colour vision and colour naming; (ii) detail the theory of the proposed model; and (iii) explain different means of obtaining the parameters of our model.
2.1 Colour perception
At present, we have a fairly rigorous understanding of the cone photoreceptors that initiate colour vision by absorbing light at the back of the retina. Signals produced by these cells are combined in an antagonistic manner to form the opponent channels that convey information to the visual cortex through the lateral geniculate nucleus (LGN) [15, 26]. As we advance deeper inside the cortical areas, our knowledge of the cerebral mechanisms involved in colour vision becomes less clear. In the primary visual cortex (V1), there is a population of specialised neurons, called single- and double-opponent cells, that respond non-linearly to chromatic stimuli. A recent study by Horwitz & Hass analysed V1 neurons in terms of their isoresponses to three-dimensional shapes in colour-opponent space. A large subset of these neurons (termed Neuron-3) responded best to ellipsoids whose major and minor axes are aligned to the perceptual cardinal directions (see the schematics in Fig 1). Their findings show how V1 neurons can act jointly to process colour.
Similar ellipsoidal shapes have also emerged in psychophysical measures of colour boundaries, where subjects were asked to produce the intermediate colour between two basic colour terms on a calibrated CRT monitor. This does not come as a great surprise, since colour categories tend to occupy connected regions of colour space. However, these results could in turn explain the organisation of universal colour terms around foci, with perceptual constraints governing their position and shape, i.e. supporting the hypothesis that colour naming reflects optimal partitions of colour space.
2.2 Ellipsoidal partitioning of colour space (EPCS)
We modelled each colour category as an ellipsoid in a three-dimensional colour-opponent space for the following reasons:
The presence of Neuron-3 in V1  shows the plausibility of low-level models of opponent colour processing, i.e. ellipsoids are parsimonious shapes that can be implemented by low-level visual neurons.
Contours of ellipsoids provide an appropriate fit to psychophysically-measured categorical colour boundaries.
In this context, the centre of an ellipsoid can be interpreted as the focal colour and its geometrical properties determine the optimal partitioning .
An ellipsoid aligned to the axes of a Cartesian coordinate system is defined as:

$$\frac{(x - x_0)^2}{a^2} + \frac{(y - y_0)^2}{b^2} + \frac{(z - z_0)^2}{c^2} = 1,$$

where $(x_0, y_0, z_0)$ are the coordinates of the ellipsoid centre and $a$, $b$, $c$ represent the lengths of the semi-axes. To account for rotations around the axes of the coordinate system, we defined our complete set of ellipsoid parameters with nine values:

$$O = (x_0, y_0, z_0, a, b, c, \alpha, \beta, \gamma),$$

where $\alpha$, $\beta$, $\gamma$ are the rotational angles around each of the colour-opponent axes.
A naïve procedure to categorise pixels into different colour terms can be described as a simple binary test: if a pixel is inside an ellipsoid, it belongs to that category; otherwise it does not. However, there are two major flaws in this approach: (i) pixels outside all ellipsoids will be assigned to none of the colour terms; and (ii) the colour categorisation will lack the fuzziness proposed for its underlying visual mechanism. Thus, to simulate the large variability present in the categorisation decision, we utilised the sigmoid curve, a special case of the logistic function, given as

$$S(x) = \frac{1}{1 + e^{-sx}},$$

where $s$ is the steepness of the curve (also known as the growth ratio). Larger values of $s$ result in a more binarised categorisation, whereas smaller values increase the fuzziness of our model.
There are various ways to model the steepness of each colour category. The simplest is to set $s$ to a constant number. Another strategy is to establish a relation between the steepness of each category and the size of its ellipsoid. We favoured a more adjustable solution, in which $s$ is a free variable for each colour category. This allows our model to vary its level of fuzziness for different colour names. Therefore, in our model each colour term, $C$, consists of ten parameters:

$$C = (x_0, y_0, z_0, a, b, c, \alpha, \beta, \gamma, s).$$
The belongingness of a pixel $p$ to a colour category $C$ is computed by:

$$P(p \in C) = \frac{1}{1 + e^{\,s\,(\lVert p - o \rVert \,-\, \lVert t - o \rVert)}},$$

where $P(p \in C)$ is the probability of pixel $p$ belonging to colour term $C$; $s$ is the steepness of the corresponding colour category; $o$ is the centre of its ellipsoid; and $t$ is the half-height transition point, which in our model lies on the ellipsoid surface in the direction joining $o$ and $p$ (so $\lVert t - o \rVert$ is the distance from the centre of the ellipsoid to its surface in that direction).
Although trivial, it is worth mentioning that when a pixel falls inside an ellipsoid, $\lVert p - o \rVert$ is smaller than $\lVert t - o \rVert$; as a result, the argument of the natural exponential function becomes negative and the entire exponential term becomes smaller than one. The belongingness of a pixel to a colour category therefore increases as $p$ tends towards $o$, reaching its maximum at the centre of the ellipsoid, where the exponential term drops to $e^{-s \lVert t - o \rVert}$.
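A minimal sketch of this membership computation, simplified to axis-aligned ellipsoids (the rotation step is omitted for brevity):

```python
import math

def belongingness(p, centre, semi_axes, steepness):
    """Fuzzy membership of pixel p in a colour category modelled by an
    axis-aligned ellipsoid with the given centre and semi-axis lengths."""
    d = [p[i] - centre[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist == 0.0:
        # At the focal colour; use the shortest semi-axis as the
        # transition distance (the direction is undefined here).
        return 1.0 / (1.0 + math.exp(-steepness * min(semi_axes)))
    # Unit direction from the ellipsoid centre towards the pixel.
    u = [x / dist for x in d]
    # Radius at which the ray from the centre pierces the surface:
    # sum((r*u_i / a_i)^2) = 1  =>  r = 1 / sqrt(sum((u_i / a_i)^2)).
    transition = 1.0 / math.sqrt(sum((u[i] / semi_axes[i]) ** 2
                                     for i in range(3)))
    return 1.0 / (1.0 + math.exp(steepness * (dist - transition)))
```

On the ellipsoid surface the distance equals the transition radius, so the membership is exactly one half, matching the half-height interpretation of $t$.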
Deterministic colour naming requires a unique term for every pixel. This can be achieved through different strategies for combining the probabilities of all colour categories, for instance by considering perceptually neighbouring colours (e.g. red and orange, or pink and purple). However, this is beyond the scope of this paper and we adopted a simple max pooling mechanism: the colour category with the highest probability is assigned as the colour term of that pixel:

$$CT(p) = \operatorname*{arg\,max}_{C} \; P(p \in C).$$
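The max pooling rule reduces to a one-liner over the per-category belongingness values of a pixel:

```python
def colour_term(probabilities):
    """probabilities: mapping colour name -> belongingness of one pixel.
    Deterministic naming by simple max pooling."""
    return max(probabilities, key=probabilities.get)
```

More sophisticated combination rules (e.g. weighting perceptually neighbouring categories) would replace this single `max` call.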
2.3 Acquiring model parameters
2.3.1 Colour space
The first prerequisite for modelling the processes that occur in the visual cortex is to represent the chromatic signal in a colour-opponent space (resembling the signal arriving from the retina). We selected the CIE L*a*b* colour space because it is considered to be perceptually uniform and is widely used in computer vision and the visual sciences. Nevertheless, since our model employs ellipsoids to partition a given colour space into different colour categories, it is not dependent on CIE L*a*b* and should work equally well in other colour-opponent spaces, such as CIE L*u*v*, lsY and DKL.
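For readers working from camera values, a self-contained sRGB to CIE L*a*b* conversion might look as follows. The D65 white point and the sRGB linearisation matrix are standard colorimetric choices, not values stated in the text:

```python
# D65 reference white (an assumed choice; the text does not state one).
XN, YN, ZN = 0.95047, 1.0, 1.08883

def srgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB triplet to CIE L*a*b*: a minimal sketch of
    the colour-opponent representation the model operates in."""
    def linearise(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = (linearise(c) for c in (r, g, b))
    # Linear sRGB to XYZ (D65).
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    def f(t):
        d = 6 / 29
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4 / 29
    fx, fy, fz = f(x / XN), f(y / YN), f(z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

As a sanity check, pure white maps to roughly (100, 0, 0) and pure black to (0, 0, 0), the two poles of the achromatic (lightness) axis.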
2.3.2 Parameter optimisation
The proposed colour ellipsoids are parsimonious geometrical shapes whose parameters can be determined by different procedures. The simplest option would be to draw the ellipsoids manually and set the steepness to a constant value. Alternatively, the surface of each ellipsoid can be fitted to data points that represent the boundaries of a colour term, and the steepness of a category can be defined in terms of the average length of its ellipsoid semi-axes, similar to previous approaches. The most comprehensive solution would probably be to construct a ground truth for every point in a canonical colour-opponent space by means of psychophysical experiments. From this ground truth all ten parameters of our model can be learnt simultaneously in an optimisation framework.
However, in practice, collecting such an exhaustive ground truth from a large set of subjects is extremely time consuming. To overcome this issue we simulated the ground truth from the validation set of the Ebay colour naming dataset presented in  (eight images for each of the eleven basic colour names, making a total of 88). Given a pixel $p$, we counted the number of times it was categorised as each of the eleven basic colour names. Dividing this by the total number of times pixel $p$ was categorised resulted in its degree of membership to each colour term.
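The simulated ground truth construction boils down to normalising per-pixel category counts:

```python
def membership_from_counts(counts):
    """counts: mapping colour name -> number of times a given pixel was
    categorised under that name across the dataset.  Normalising by the
    total number of categorisations yields the degree of membership of
    that pixel to each colour term."""
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}
```

A pixel categorised three times as blue and once as purple, for example, receives memberships of 0.75 and 0.25 respectively.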
We learnt the parameters of our model with a sequential quadratic programming optimisation method (with fixed limits on the number of iterations and on the tolerance constraint), using the error function

$$E = \sum_{p} \sum_{C} \bigl( P(p \in C) - G(p, C) \bigr)^2,$$

where the outer sum runs over the pixels in the ground truth set; $P(p \in C)$ is defined in Eq. 5; and $G(p, C)$ is the ground truth value of pixel $p$ belonging to category $C$. We initialised the centre $(x_0, y_0, z_0)$ of each colour ellipsoid to the average coordinates (in CIE L*a*b* colour space) of all the pixels whose ground truth value for that category is non-zero. We did not set any constraints on the optimisation of the ellipsoid centres. Naturally, we restricted the lengths of the semi-axes to positive values and the rotational angles to a fixed range. The steepness of the sigmoidal function was likewise limited to a fixed positive range.
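The error function can be sketched as a plain sum of squared differences; in practice a constrained optimiser such as SciPy's SLSQP routine could minimise it over the ten parameters per category. Here `predict` is a stand-in for the full ellipsoid membership model:

```python
def epcs_error(predict, pixels, ground_truth, terms):
    """Sum of squared differences between the model's belongingness
    values and the simulated ground truth, over all pixels and terms.
    predict(pixel, term) -> model probability in [0, 1];
    ground_truth[pixel][term] -> simulated membership degree."""
    return sum((predict(p, t) - ground_truth[p][t]) ** 2
               for p in pixels for t in terms)
```

A perfect predictor yields zero error, and the squared form penalises large disagreements on individual pixels more heavily than many small ones.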
Fig 2 illustrates the eleven colour ellipsoids learnt from our simulated ground truth. A few aspects of the zenithal view show high congruence with our own colour perception:
The achromatic categories are placed at the centre of all the chromatic ellipsoids, in line with the hue circle first proposed by Newton.
The ellipsoids corresponding to opponent colours, i.e. red–green and yellow–blue, do not overlap. This is in line with Hering’s colour theory, which states that these colours cannot be perceived together.
3 Experiments and results
We learnt the parameters of our model – termed Ellipsoidal Partitioning of Colour Space (EPCS) – from two different ground truths: one derived from psychophysical colour naming experiments (EPCS [Ps]) and one derived from segmented real-world images (EPCS [Rw]).
We quantitatively evaluated the proposed model by conducting experiments on two different kinds of datasets: (i) colour chips categorised by psychophysical experiments; and (ii) colour segmented objects in real-world images.
Fig 3 panels: Munsell chart; EPCS [Ps] segmentation; EPCS [Rw] segmentation.
3.1 Munsell colour chart
The left panel of Fig 3 shows the Munsell chart, which contains 330 different chips (eight chromatic rows, each consisting of 40 hues in increments of 2.5, and one column of 10 achromatic lightness levels). Many colour naming studies have compared their categorisation results to the psychophysical experiments of Berlin & Kay (24 native speakers of 110 languages were asked to name each Munsell chip) and Sturges & Whitfield (20 English speakers named each Munsell sample twice). Our segmentation of the Munsell chart is illustrated in the right panel of Fig 3. Our results match the psychophysical experiment of Sturges & Whitfield perfectly, and vary on only five points (all caused by the white colour) compared to the survey of Berlin & Kay.
Table 1 quantitatively compares the accuracy of our model to seven state-of-the-art algorithms that have also reported their results on the Munsell chart. In comparison to the colour naming survey of Berlin & Kay, EPCS practically matches the best result reported in the literature (NICE), far ahead of the third-best colour naming models (SFKM, TSEM). With respect to the psychophysical experiment of Sturges & Whitfield, our model, along with SFKM, TSEM and NICE, obtains perfect accuracy.
We can observe a large difference between the two variants of our model, mainly caused by white pixels. EPCS [Rw] (learnt only from real-world images) categorises pixels with a faint colour as white, whereas EPCS [Ps] (learnt under the influence of colour naming experiments in a controlled environment) categorises those pixels into chromatic categories. This issue has been noted before as well.
Table 1 columns: Berlin & Kay; Sturges & Whitfield.
3.2 Real-world images
We evaluated the proposed model on two datasets of real-world images (the source code and all the materials are available at https://goo.gl/ZCBLJA). Along with our model we tested three state-of-the-art methods whose source codes are publicly available: Benavente et al.’s triple sigmoid elliptic model (TSEM), van de Weijer et al.’s probabilistic latent semantic analysis (PLSA), and Parraga & Akbarinia’s neural isoresponsive colour ellipsoids (NICE). We assessed each algorithm by its true positive ratio, $T_p / (T_p + F_p)$, where $T_p$ represents pixels whose colour names are correctly labelled and $F_p$ those that are mislabelled. Due to the nature of the available ground truths, which primarily contain one colour category per image, other evaluation metrics were inappropriate. Images of the tested datasets are of various sizes; in order to avoid a bias towards smaller images, we first computed the true positive ratio for each image, and the reported results are averaged over all images.
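The per-image-then-average evaluation protocol can be written as:

```python
def mean_true_positive_ratio(per_image_labels):
    """per_image_labels: list of (correct, mislabelled) pixel counts,
    one pair per image.  The ratio is computed per image first and then
    averaged, so large images do not dominate the score."""
    ratios = [c / (c + w) for c, w in per_image_labels]
    return sum(ratios) / len(ratios)
```

For example, an image with a 0.9 ratio and another with 0.5 average to 0.7, regardless of how many pixels each contains.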
3.2.1 Ebay dataset
The Ebay dataset consists of four sets of man-made objects: cars, dresses, pottery and shoes. Every set contains 110 images, i.e. ten images for each of the eleven basic colour terms. The ground truth masks are based on semi-automatic segmentation algorithms. To compensate for the absence of natural objects (such as fruits, vegetables, and flowers – where colour arguably plays an important role in recognition) we extended this dataset with an extra set of images containing natural objects, following the same procedure as the original authors.
Table 2 reports the true positive ratios of the four methods on the Ebay dataset. Evidently, EPCS [Rw] outperforms all other methods. We can also observe a large gap between the performance of EPCS [Ps] and PLSA in comparison to TSEM and NICE in all five subcategories. In three sets (dresses, shoes and natural) EPCS [Ps] obtains a higher true positive ratio than PLSA. The advantage of EPCS [Ps] over PLSA becomes more tangible when considering their respective performance on psychophysical data, where EPCS [Ps] performs notably better (see Table 1).
3.2.2 Small objects dataset
The small objects dataset contains 300 16-bit images of various materials (e.g. paper, plastic, metal, wood, fruits) captured under different types of illuminants. Each image comes with a manual segmentation of its constituent regions according to their colour names. However, the number of pixels for each of the eleven basic colour terms is not uniformly distributed.
Table 3 reports the true positive ratios of the four methods on the small objects dataset. We can observe a similar scenario to the Ebay dataset: EPCS [Rw] performs best overall, while EPCS [Ps] and PLSA obtain better results than TSEM and NICE.
Fig 4 illustrates one exemplary image from the small objects dataset. We can observe that EPCS [Ps] mislabels the white part of the wall as pink. This is the main reason why EPCS [Rw] clearly performs better than EPCS [Ps] on real-world images (refer to Tables 2 and 3). However, we would like to emphasise that the latter obtains an almost perfect true positive ratio on psychophysical data, where the environment is controlled and no noise is present (similar to NICE), while still performing reasonably well on real-world images (contrary to NICE). We believe high accuracy on psychophysical experiments is essential, because a colour naming model should first and foremost correctly categorise individual pixels. Other challenges of colour naming – e.g. faint colours appearing as white in an image context – are caused by phenomena such as colour constancy and induction. These challenges should be addressed by modelling colour naming in a dynamic fashion.
Fig 4 panels: original image; EPCS [Ps]; EPCS [Rw].
Fig 5 columns: (a) TSEM; (b) PLSA; (c) EPCS [Ps].
Fig 5 shows three examples from the real-world datasets. In each panel the original image is displayed alongside its respective results from each of the algorithms considered: TSEM, PLSA, and EPCS. The blue flowers (first row), which are largely misclassified as purple by TSEM and PLSA, are correctly assigned to the blue category by our model. We observed quite a few similar cases with other blue objects.
The brown pottery mug (Fig 5, second row) is almost entirely miscategorised as red by TSEM and PLSA. On the contrary, EPCS accurately labels it as brown. A closer inspection of the corresponding probability maps reveals that TSEM assigns the pixels of the mug to the red category with very high probability. PLSA labels them as red while granting some likelihood to the perceptually neighbouring colours orange and brown; however, this uncertainty spreads to the purple category as well. EPCS’s results show more consistency, placing most of the probability on the brownness of the mug while acknowledging that the neighbouring colour red is also probable. It is worth paying extra attention to the background as well, where TSEM misassigns a great portion of it to the blue category. Contrary to this, PLSA and EPCS have no difficulty in correctly labelling it as black.
The white car, shown in the third row of Fig 5, is a difficult case due to the green light cast over its body and surroundings. All three methods, in general, accurately label the car as white. Nonetheless, there are pixels near the back wheel and on the front door that are mistakenly categorised as green. This issue is most noticeable for TSEM and minimal in EPCS.
4.1 Model extension
There are situations where one might want to add extra categories to the eleven basic colour terms (e.g. some languages, like Russian, Italian and Spanish from the River Plate area, contain two names for “blue”). Alternatively, there are many intermediate colour terms used in everyday language (such as olive, turquoise, and cream) that arguably deserve their own category. New colour names are usually learnt by humans after the presentation of very few examples, and our model can simulate this process straightforwardly. As an illustration, we learnt the colour term “cream” from merely two images (see Fig 6), following the same procedure explained in section 2.3.2.
Fig 7 shows the impact of this newly introduced cream category on the colour segmentation of an image from the Pascal Project Dataset. Relying only on the eleven colour terms, EPCS incorrectly labels the wall at the back of the image as pink (although with a low probability on average). Segmenting with twelve categories allows us to correctly classify the wall as cream. The flexibility of our algorithm can be further exploited to create a personalised colour naming model which reflects the individual variability present in psychophysical data. This is very economical and can be achieved by segmenting a handful of images, for example from a personal digital assistant (PDA). Furthermore, an interactive application could allow subjects to manipulate the colour ellipsoids directly to achieve the colour categorisation they desire.
Fig 7 panels: original image; segmentation with 11 categories; segmentation with 12 categories.
4.2 Model adaptation
One important aspect of any colour naming model is its context adaptability. This is feasible within our model by dynamically adjusting the ellipsoids to the image, or even to the pixel, being processed. One of the greatest challenges for colour naming algorithms is the frontier between chromatic and achromatic colours, as we experienced in our experiments and as has been noted before. Against a neutral background, colours appear more saturated than in a colourful environment. As a proof of concept, we attempted to address this issue by adapting the achromatic ellipsoids to the level of colourfulness of an image: we stretched the chromatic (a*b*) semi-axes of the achromatic ellipsoids in the direction in which the average pixel of an image deviated from neutral grey. The results of this experiment are reported in Table 4.
Our naïve adaptation increases the true positive ratio on three sets of real-world images (the shoes and natural categories of the Ebay dataset, and the small objects dataset). This is by no means a finished adaptable model, but rather a demonstration that our model can capture variations in image content through simple extensions. This can be explored further by adapting the chromatic ellipsoids to the presence or absence of certain colour categories in the image, following reports that link them to the phenomenon of colour constancy. For instance, when the green signal is abundant, one could shrink the green ellipsoid or translate its centre. The adaptability of the ellipsoids in our model could in turn offer a framework in which colour constancy and colour categorisation are addressed simultaneously.
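The naïve achromatic adaptation described above might be sketched as follows; the `gain` factor is a hypothetical knob, not a parameter from the text:

```python
def adapt_achromatic_axes(a_semi, b_semi, mean_a, mean_b, gain=0.5):
    """Stretch the chromatic (a*, b*) semi-axes of an achromatic
    ellipsoid according to how far the image's average chromaticity
    (mean_a, mean_b) deviates from neutral grey (a* = b* = 0).
    gain is an assumed scaling factor."""
    # A neutral image (mean_a = mean_b = 0) leaves the ellipsoid untouched.
    return (a_semi + gain * abs(mean_a), b_semi + gain * abs(mean_b))
```

A colourful image thus widens the achromatic region along the direction of its colour cast, so faintly tinted pixels are more readily named white, grey or black.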
In this paper we presented a biologically-inspired colour categorisation model in which each colour term is represented by an ellipsoid in colour-opponent space. To capture the fuzzy nature of colour names and account for the non-linear operations performed by visual cortex neurons, we computed the final degree of membership to a category using a sigmoid curve. Theoretically, we justified our geometrical framework by linking it to physiological and psychophysical evidence. In practice, we showed that the parameters of our parsimonious model can be learnt through a simple optimisation procedure, and we conducted two kinds of experiments to verify its validity. Results obtained on the Munsell chart are in excellent agreement with the psychophysical results of colour naming. We also perform better than other popular algorithms on real-world images. The advantage of the proposed model becomes more tangible when one realises that, unlike all other state-of-the-art algorithms, it performs well on both types of datasets. This shows that our model can both explain psychophysically-based colour naming results and perform an accurate categorisation of real-world images.
Biologically-inspired chromatic models have been successful in a wide range of computational colour tasks, e.g. colour induction, colour constancy [27, 4], saliency, and boundary detection [3, 2]. This is not surprising, since colour is a sensation that originates within our brains, which are in turn the product of millions of years of evolution, adaptation and “learning” from the visual environment. From this point of view, we believe our approach to colour categorisation can compete with other deterministic and learning-based approaches. Along this line, we demonstrated that our model can easily be extended to incorporate more colour terms from a few examples (as human infants do) and can adapt itself to the content of an image. This implicitly demonstrates the potential of biologically-inspired colour categorisation modelling for different applications, such as image segmentation and image retrieval. Naturally, our model (like any other colour naming model) is likely to improve its accuracy in different environments when complemented with good colour constancy and colour induction algorithms, and fundamentally with larger and better ground truths.
There are at this point two main lines of investigation for the future. The first consists of improving the assignment of colour names by considering more sophisticated rules than simple max pooling. The second points to making the model dynamically responsive to context (either by rearranging the ellipsoids according to image content, or alternatively by supplementing the model with a centre-surround adaptation mechanism) to account for the well-known colour phenomena of induction and constancy.
This work was funded by the Spanish Secretary of Research and Innovation (TIN2013-41751-P and TIN2013-49982-EXP) and has been partially presented at the European Conference on Visual Perception (ECVP) .
-  A. Akbarinia and C. A. Parraga. Biologically plausible colour naming model. In Perception, volume 44, pages 115–115, 2015.
-  A. Akbarinia and C. A. Parraga. Feedback and surround modulated boundary detection. International Journal of Computer Vision, Jul 2017.
-  A. Akbarinia, C. A. Parraga, et al. Biologically-inspired edge detection through surround modulation. In Proceedings of the British Machine Vision Conference, pages 1–13, 2016.
-  A. Akbarinia, R. G. Rodríguez, and C. A. Parraga. Colour constancy: Biologically-inspired contrast variant pooling mechanism. In British Machine Vision Conference (BMVC), September 2017.
-  R. Benavente and M. Vanrell. Fuzzy colour naming based on sigmoid membership functions. In Conference on Colour in Graphics, Imaging, and Vision, volume 2004, pages 135–139. Society for Imaging Science and Technology, 2004.
-  R. Benavente, M. Vanrell, and R. Baldrich. Parametric fuzzy sets for automatic color naming. JOSA A, 25(10):2582–2593, 2008.
-  B. Berlin and P. Kay. Basic color terms: Their universality and evolution. Univ of California Press, 1991.
-  R. M. Boynton and C. X. Olson. Salience of chromatic basic color terms confirmed by three measures. Vision research, 30(9):1311–1317, 1990.
-  D. H. Brainard and A. Radonjic. Color constancy. The visual neurosciences, 1:948–961, 2004.
-  R. O. Brown and D. I. MacLeod. Color appearance depends on the variance of surround colors. Current Biology, 7(11):844–849, 1997.
-  CIE. Colorimetry, volume 15. CIE Publication, 3 edition, 2004.
-  G. Derefeldt and T. Swartling. Colour concept retrieval by free colour naming. identification of up to 30 colours without training. Displays, 16(2):69–77, 1995.
-  J. M. Lammens. A computational model of color perception and color naming. PhD thesis, State University of New York at Buffalo, 1994.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
-  M. D. Fairchild. Color appearance models. John Wiley & Sons, 2013.
-  C. L. Hardin and L. Maffi. Color categories in thought and language. Cambridge University Press, 1997.
-  G. D. Horwitz and C. A. Hass. Nonlinear analysis of macaque v1 color tuning reveals cardinal directions for cortical color processing. Nature Neuroscience, 15(6):913–919, 2012.
-  T. Indow. Multidimensional studies of munsell color solid. Psychological Review, 95(4):456, 1988.
-  P. Kay and C. K. McDaniel. The linguistic significance of the meanings of basic color terms. Language, pages 610–646, 1978.
-  P. Kay and T. Regier. Resolving the question of color naming universals. Proceedings of the National Academy of Sciences, 100(15):9085–9089, 2003.
-  H. Lin, M. Luo, L. MacDonald, and A. Tarrant. A cross-cultural colour-naming study. Part III: A colour-naming model. Color Research & Application, 26(4):270–277, 2001.
-  R. E. MacLaury, G. W. Hewes, P. R. Kinnear, J. Deregowski, W. R. Merrifield, B. Saunders, J. Stanlaw, C. Toren, J. Van Brakel, and R. W. Wescott. From brightness to hue: An explanatory model of color-category evolution [and comments and reply]. Current Anthropology, pages 137–186, 1992.
-  A. Mojsilovic. A computational model for color naming and describing color composition of images. Image Processing, IEEE Transactions on, 14(5):690–699, 2005.
-  N. Murray, M. Vanrell, X. Otazu, and C. A. Parraga. Saliency estimation using a non-parametric low-level vision model. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 433–440. IEEE, 2011.
-  X. Otazu, M. Vanrell, and C. A. Párraga. Multiresolution wavelet framework models brightness induction effects. Vision Research, 48(5):733–751, 2008.
-  C. A. Parraga. Color vision, computational methods for. Encyclopedia of Computational Neuroscience, Ed. D. Jaeger and R. Jung, SpringerReference, 10:58, 2013.
-  C. A. Parraga and A. Akbarinia. Colour constancy as a product of dynamic centre-surround adaptation. Journal of Vision, 16(12):214–214, 2016.
-  C. A. Parraga and A. Akbarinia. Nice: A computational solution to close the gap from colour perception to colour categorization. PLoS ONE, 11:1–32, 03 2016.
-  M. R. Pointer. On the number of discernible colours. Color Research & Application, 23(5):337–337, 1998.
-  T. Regier, P. Kay, and N. Khetarpal. Color naming reflects optimal partitions of color space. Proceedings of the National Academy of Sciences, 104(4):1436–1441, 2007.
-  M. Seaborn, L. Hepplewhite, and J. Stonham. Fuzzy colour category map for the measurement of colour similarity and dissimilarity. Pattern Recognition, 38(2):165–177, 2005.
-  R. Shapley and M. J. Hawken. Color in the cortex: single-and double-opponent cells. Vision research, 51(7):701–717, 2011.
-  W. T. Siok, P. Kay, W. S. Wang, A. H. Chan, L. Chen, K.-K. Luke, and L. H. Tan. Language regions of brain are operative in color perception. Proceedings of the National Academy of Sciences, 106(20):8140–8145, 2009.
-  J. Sturges and T. Whitfield. Locating basic colours in the munsell space. Color Research & Application, 20(6):364–376, 1995.
-  S. Tominaga. A colour-naming method for computer color vision. In Proceedings of the 1985 IEEE International Conference on Cybernetics and Society, volume 573, page 577, 1985.
-  J. van de Weijer and F. S. Khan. An overview of color name applications in computer vision. In Computational Color Imaging, pages 16–22. Springer, 2015.
-  J. Van De Weijer, C. Schmid, J. Verbeek, and D. Larlus. Learning color names for real-world applications. Image Processing, IEEE Transactions on, 18(7):1512–1523, 2009.
-  J. Vazquez-Corral, M. Vanrell, R. Baldrich, and F. Tous. Color constancy by category correlation. Image Processing, IEEE Transactions on, 21(4):1997–2007, 2012.
-  Z. Yuan, B. Chen, J. Xue, N. Zheng, et al. Illumination robust color naming via label propagation. In Proceedings of the IEEE International Conference on Computer Vision, pages 621–629, 2015.