Simple Primary Colour Editing for Consumer Product Images

06/06/2020 ∙ by Han Gong, et al.

We present a simple primary colour editing method for consumer product images. We show that by using colour correction and colour blending, we can automate the painstaking colour editing task and save time for consumer colour preference researchers. To improve the colour harmony between the primary colour and its complementary colours, our algorithm also tunes the other colours in the image. A preliminary experiment has shown promising results compared with a state-of-the-art method and human editing.



Code repository: prod_recolor (an image processing tool for consumer product re-coloring).



1 Introduction

Research and data have been increasingly used to optimise the design process [yang2020predicting]. Previous research shows that product-colour appearance can affect consumers’ purchase decisions, while consumers’ product-colour preferences vary from category to category [luo2019influence, yu2018role]. To understand consumer product-colour preference, standard marketing images have been manually recoloured using software such as Adobe Photoshop [ps] or GIMP [gimp]. The recolouring process requires researchers (or designers) to manually adjust colours by picking colours and adopting non-binary per-layer masking. However, visible artefacts and incompatible background colours often remain even after very careful editing. We identify four main requirements from colour preference researchers:
Minimum machine processing time. Slow processing is undesirable because colour modifications are usually applied to multiple products for at least dozens of target colours.
Minimum user manipulation time. A high demand on user interaction time would be undesirable. Methods that require multiple user strokes or manual selection of multiple colours are not ideal. We need a method that only requires a single primary colour specification and takes care of the rest of the colour processing automatically.
Artefact resiliency. Artefacts, such as JPEG blocks and unnatural edges, are usually introduced after re-colouring. The method is expected to preserve all image details except for the primary colour modification.
Colour harmony preservation. The chosen colour for change may not fit the product’s existing complementary colours. In some cases, tuning of complementary colours is desirable.

Figure 1: Primary colour editing pipeline. Given an input image, our method amends the product’s primary colour according to a target colour in 3-4 steps: A) Colour intensity clusters in CIE L*a*b* colour space [cie] are computed (left: original distributions; right: primary colour altered distributions); B) Cluster RGB correspondences are used for estimating a colour correction matrix. Note that the L* channel intensities are also used for clustering but are not illustrated on the exemplar graphs; C) An alpha-blending process is applied to remove colour changes, which are less relevant to the primary colour change, from the colour-corrected image; D) Some residual colour artefacts can be optionally removed using a gradient preservation method ’regrain’ [Pitie2] (see the later section for visualisation). The a*b* chromaticity gamut images are taken from Wikipedia [wiki_lab].

Our proposed method addresses these requirements by providing an alternative, fully automatic design tool. Existing studies suggest that colour manipulation offers the potential for software to generate recoloured images (target colour images). Promising applications of automatic colour manipulation are expected to lead the trend of generative design systems in colour and design [sbai2018design, yang2020predicting]. There have been a number of methods for colour manipulation, such as colour transfer [Pitie2, ReinhardTransfer, Nguyen], colour hint propagation [farbman2010diffusion, chen2014sparse, chen2012manifold, an2008appprop, levin2004colorization], and palette editing [nguyen2017group, chang2015palette, zhang2017palette]. However, none of the previous methods is directly applicable to the primary colour editing problem. Rapid digital workflows in practice also require automatic methods for evaluating and/or comparing colours and designs [anderson2018design].

In this paper, we propose a simple method which automates primary colour editing with low user and machine processing time and preserves colour harmony to some extent. Our method is based on the assumption that simulating colour change as a 2-D colour homography [finlayson2019color] (i.e. as a change of light) usually avoids image processing artefacts [gong2016recoding, finlayson2019color] such as JPEG blocks, sharp edges, and colour combination conflicts. Our colour editing pipeline is depicted in Figure 1, where the colour editing task is reformulated as a 2-D colour homography colour correction problem. Additionally, we may apply a gradient preservation step to remove some residual artefacts. Compared with previous recolouring methods, ours requires minimum user input and its design is relatively simple.

2 Related Work

Our work is relevant to the colour editing methods in three categories: A) colour transfer; B) colour hint propagation; C) palette-based colour editing.

2.1 Colour transfer

Colour transfer is an image editing process which adjusts the colours of a picture to match a target picture’s colour theme. This research was started by Reinhard et al. [ReinhardTransfer] and followed by others [Nguyen, Pitie2, MKL_ct]. Most of these methods align the colour distributions in different colour spaces, usually via statistics alignment [ReinhardTransfer, Nguyen, MKL_ct] or iterative distribution shifting [Pitie2].

2.2 Colour hint propagation

Some methods require user hints, e.g. strokes, to guide the recolouring of object surfaces. This direction of research was started by Levin et al. [levin2004colorization], who colourise grey-scale images based on user colour strokes by solving a large, sparse system of linear equations. Their key assumption is that the colours of neighbouring pixels with similar luminance should have similar chromaticities. More recent methods [an2008appprop, chen2012manifold, farbman2010diffusion] make use of masks, either soft or hard, to assist re-colourisation. Their colour modification model is based on a diagonal colour correction matrix used for white balance, e.g. [CM], with limits on the range of applicable colour changes. Some others, e.g. [chen2014sparse], have used sparse coding/learning, in which a sparse set of colour samples provides an intrinsic basis for an input image and the coding coefficients capture the linear relationship between all pixels and the samples. This branch of methods requires heavy user input and is therefore not immediately useful for our problem.

2.3 Palette-based colour editing

Some methods adopt colour intensity clustering, e.g. the k-means++ algorithm [kmeans++], to initially generate a colour palette of the input image. After palette adjustments, different approaches are applied for manipulating colour changes. Zhang et al. [zhang2017palette] decompose the colours of the image into a linear combination of basis colours before reconstructing a new image using the linear coding coefficients. Chang et al. [chang2015palette] adopt a monotonic luminance mapping and radial basis functions (RBFs) for interpolating/mapping chromaticities. This branch of methods is the closest to our solution; however, none of them is optimised for the particular task of rapid primary colour editing for consumer product images.

2.4 Colour homography

Our solution is based on the colour homography colour change model. The colour homography theorem [finlayson2019color, gong2016recoding, CIC2016, PICS2016] states that chromaticities across a change in capture conditions (light colour, shading, and imaging device) are a homography apart. Suppose that we map an RGB $\rho$ to a corresponding RGI (red-green-intensity) $c$ using a full-rank matrix $C$:

$c = C\rho$   (1)

The $r$ and $g$ chromaticity coordinates are written as $r = R/(R+G+B)$ and $g = G/(R+G+B)$. We treat the right-hand side of Equation 1 as a homogeneous coordinate, so $c \propto [r\ g\ 1]^{\top}$. When the shading is fixed, it is known that across a change in illumination or a change in device the corresponding RGBs are approximately related by a linear transform $M$, i.e. $\rho' = M\rho$, where $\rho'$ is the corresponding RGB under a second light or captured by a different camera [MARIMONT.WANDELL, MALONEY86B]. We then have $H = CMC^{-1}$, which maps colours in RGI form between illuminants. Due to different shading, the RGI triple under a second light is written as $c' = \alpha H c$, where $\alpha$ denotes the unknown scaling. Without loss of generality, we regard $c$ as a homogeneous coordinate, i.e. we assume its third component is 1. Then $[r'\ g'\ 1]^{\top} \propto H\,[r\ g\ 1]^{\top}$ (rg chromaticity coordinates are a homography apart). In this paper, we initially model the major colour change as a colour homography change but without considering the individual scale differences between the RGB correspondences, i.e. a linear transform of colour change is applied.
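The chromaticity relation above can be checked numerically. The sketch below uses our own notation ($C$ maps RGB to RGI, $M$ is a random linear light/device change) and verifies that rg chromaticities before and after the change are a homography apart:

```python
import numpy as np

rng = np.random.default_rng(0)

C = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [1., 1., 1.]])          # RGB -> RGI: (R, G, R+G+B)
M = rng.uniform(0.2, 1.0, (3, 3))     # hypothetical linear light/device change
H = C @ M @ np.linalg.inv(C)          # induced chromaticity homography

def rg_homogeneous(rho):
    """rg chromaticity of an RGB as the homogeneous vector (r, g, 1)."""
    c = C @ rho                       # RGI triple
    return c / c[2]                   # divide through by R+G+B

rho = rng.uniform(0.1, 1.0, 3)        # an RGB under the first light
rho2 = M @ rho                        # corresponding RGB under the second

v = H @ rg_homogeneous(rho)           # apply the homography ...
v = v / v[2]                          # ... and renormalise the coordinate
print(np.allclose(v, rg_homogeneous(rho2)))  # True
```

The unknown per-pixel scaling cancels in the normalisation step, which is exactly why shading drops out of the chromaticity relation.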

3 Simple Primary Colour Editing

Our algorithm starts from the simple observation that a 2-D colour homography model allows for a wider range of colour changes (as opposed to a diagonal colour correction matrix) and usually produces fewer colour combination conflicts [gong2016recoding, finlayson2019color]. In Figure 1, we overview the colour processing pipeline, which consists of three major steps and one optional step: A) Clustering: the CIE L*a*b* [cie] intensities of an input RGB image are clustered using MeanShift [meanshift]. The primary colour cluster is altered to match the target colour (see the red line), so that the cluster centres form the before-and-after sparse colour intensity correspondences; B) Colour correction: the L*a*b* colour correspondences are converted to RGB space before being used to estimate a 2-D colour homography matrix (without considering scale differences); C) Irrelevant colour change suppression: a soft alpha-blending mask is computed to suppress aggressive colour changes irrelevant to the primary colour change; D) Gradient preservation (optional): a gradient preservation step can be applied to remove further residual artefacts. We also note that the computational cost can be reduced by using down-sampled thumbnail images for model parameter estimation. We provide the algorithm details in the following sub-sections.

3.1 Intensity clustering and altering

To estimate a reliable colour change model, the first step is to extract the predominant colours which best capture the input image’s colour theme. We adopt MeanShift [meanshift] clustering to extract at most 5 predominant colours (i.e. cluster centres) from the input image. The intention of not collecting too many colours is to avoid noise and reduce computational cost. The cluster number of 5 is only an empirical value, e.g. 6 also works. Clearly, a fixed set of MeanShift parameters never guarantees a maximum of 5 colour clusters. We thus propose a simple adaptive MeanShift clustering procedure which gradually increases an initially small kernel bandwidth value, as shown in Algorithm 1.

1 $h \leftarrow h_0$; $k \leftarrow +\infty$;
2 repeat
3     $C \leftarrow \mathrm{MS}(I_{\mathrm{Lab}}, h)$;
4     $k \leftarrow \mathrm{rows}(C)$;
5     $h \leftarrow \gamma h$;
6 until $k \leq 5$;
Algorithm 1 Adaptive MeanShift clustering

where $\mathrm{MS}(\cdot, h)$ is the MeanShift function with a flat kernel of bandwidth $h$, $C$ is a $k \times 3$ matrix of cluster centres (each row is an L*a*b* intensity vector), $\mathrm{rows}(C)$ counts the number of cluster centres $k$, and $\gamma > 1$ is a factor controlling the kernel width growth rate in each iteration.
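Algorithm 1 might be sketched as follows. The minimal flat-kernel MeanShift below, and the parameter defaults (`h0` for the initial bandwidth, `gamma` for the growth factor), are our own stand-ins for the implementation used in the paper:

```python
import numpy as np

def mean_shift_flat(X, bandwidth, n_iter=50, merge_tol=None):
    """Minimal flat-kernel MeanShift: returns the cluster centres (k x d).
    O(n^2) per iteration, so intended for thumbnail-sized inputs."""
    X = np.asarray(X, float)
    if merge_tol is None:
        merge_tol = bandwidth / 2.0
    modes = X.copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            # shift each mode to the mean of the points within the bandwidth
            near = X[np.linalg.norm(X - modes[i], axis=1) <= bandwidth]
            if len(near):
                modes[i] = near.mean(axis=0)
    # merge converged modes that are closer than merge_tol
    centres = []
    for m in modes:
        if all(np.linalg.norm(m - c) >= merge_tol for c in centres):
            centres.append(m)
    return np.array(centres)

def adaptive_mean_shift(X, h0=2.0, gamma=1.3, k_max=5):
    """Algorithm 1: grow the bandwidth until at most k_max clusters remain."""
    h = h0
    centres = mean_shift_flat(X, h)
    while len(centres) > k_max:
        h *= gamma
        centres = mean_shift_flat(X, h)
    return centres
```

For example, clustering the flattened L*a*b* thumbnail intensities with `adaptive_mean_shift(lab_pixels)` yields at most five predominant colours.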

Given the obtained predominant colours, we construct the sparse colour correspondences to be supplied for colour change model estimation. Since we aim to change only the primary colour if possible, the target predominant colours are kept the same as the original predominant colours, except that the primary colour is replaced by the target primary colour. Through this, we construct a target predominant colour set denoted as $Q$ (see also Figure 1 (A) for an illustration).

3.2 Colour Homography colour change

Given the source and target colour sets $P$ and $Q$, we make use of a simple 2-D colour homography matrix to achieve the primary colour change while minimising colour artefacts. A full colour homography change is an optimised chromaticity mapping in RGB space. However, since the brightness of colour matters in this application, we omit the shading factor and only estimate a linear matrix transform $H$ (which is still a homography matrix) using weighted least-squares as follows:

$H = \operatorname{argmin}_{H} \|W(PH - Q)\|_F^2 + \lambda \|H - I\|_F^2$   (2)

where $\lambda$ is a regularisation term, $W$ is a $k \times k$ diagonal matrix whose diagonal elements are the associated normalised weights of all the predominant colours (i.e. cluster centre sizes), and $I$ is a $3 \times 3$ identity matrix. Denoting the ’flattened’ RGB intensities of the input image as an $N \times 3$ matrix $R$ ($N$ is the number of pixels), we can compute its primary-colour-changed RGB intensities as $R' = RH$. An intermediate processed example can be found in Figure 1 (B).
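Equation 2 admits a closed-form solution: setting the gradient with respect to $H$ to zero yields the regularised normal equations $(P^{\top}WP + \lambda I)H = P^{\top}WQ + \lambda I$. A sketch, with symbol names and an illustrative $\lambda$ of our choosing:

```python
import numpy as np

def estimate_colour_matrix(P, Q, w, lam=1e-3):
    """Closed-form solution of Eq. (2):
    argmin_H ||W^(1/2)(P H - Q)||_F^2 + lam * ||H - I||_F^2.
    P, Q: k x 3 source/target predominant colours; w: k normalised weights."""
    W = np.diag(w)
    A = P.T @ W @ P + lam * np.eye(3)   # left-hand side of normal equations
    B = P.T @ W @ Q + lam * np.eye(3)   # right-hand side
    return np.linalg.solve(A, B)
```

The corrected image is then `R @ H` for the flattened N x 3 image `R`; the regulariser pulls the estimate towards the identity (no change) when the correspondences are ambiguous.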

3.3 Irrelevant colour change suppression

Some of the colour changes after the linear transform may look aggressive, e.g. the pink ring of the ’Tide’ logo in Figure 1 (B). We adopt an alpha-blending procedure to address this as follows:

$O = \operatorname{diag}(\hat{\alpha}) R + (I_N - \operatorname{diag}(\hat{\alpha})) R'$   (3)

where $O$ is the modified RGB colour output, $\hat{\alpha}$ is an $N$-vector denoting per-pixel scaling factors (in the range of $[0, 1]$), $I_N$ is an $N \times N$ identity matrix, and $\operatorname{diag}(\cdot)$ places an $N$-vector along the diagonal of an $N \times N$ diagonal matrix. Our intuition is to smoothly reduce the impact of the colour changes that are irrelevant to the primary colour and to control this by $\hat{\alpha}$. We measure the irrelevance by the a*b* chromaticity difference between each colour (row) in $R'$ and the target primary colour:

$\tilde{\alpha}_i = \sqrt{\Delta a_i^2 + \Delta b_i^2}$   (4)

where $\Delta a_i$ and $\Delta b_i$ are the errors in the a* and b* channels. A higher $\tilde{\alpha}_i$ indicates a higher degree of irrelevance, but this value can sometimes be too big. Thus, we further cap and normalise $\tilde{\alpha}_i$ as:

$\hat{\alpha}_i = \min(\tilde{\alpha}_i, t) / t$   (5)

where $t$ is an upper threshold value and $\hat{\alpha}_i$ is the corresponding element of $\hat{\alpha}$. The processing result can be sensitive to $t$, which must therefore be carefully chosen. An exemplar visualisation of $\hat{\alpha}$ in its image grid form is shown in Figure 2 (A). Aiming at a blending result which preserves the edge details of the original image, we look for the optimum $t$ which minimises Equation 6.
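Equations 3-5 can be sketched as below (function and symbol names are our own); a weight of 1 keeps the original pixel, a weight of 0 keeps the colour-corrected one:

```python
import numpy as np

def irrelevance_alpha(ab_corr, ab_target, t):
    """Eqs. (4)-(5): per-pixel blending weights. ab_corr is the N x 2 a*b*
    chromaticity of the colour-corrected image; ab_target is the target
    primary colour's a*b*. Distances are capped at t and normalised."""
    d = np.linalg.norm(ab_corr - ab_target, axis=1)   # Eq. (4)
    return np.minimum(d, t) / t                       # Eq. (5)

def suppress_irrelevant(R, R_corr, alpha):
    """Eq. (3): blend the original flattened image R (N x 3) with the
    colour-corrected R_corr; alpha = 1 restores the original pixel."""
    a = alpha[:, None]
    return a * R + (1.0 - a) * R_corr
```

Implemented per pixel this way, the $N \times N$ diagonal matrices of Equation 3 never need to be materialised.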

$t = \operatorname{argmin}_{t} \sum_{c \in \{a^*, b^*\}} \operatorname{entropy}(|\operatorname{edge}(O_c) - \operatorname{edge}(R_c)|)$   (6)

where $|\cdot|$ outputs the per-element absolute value of a matrix, $c$ indicates an intensity channel of a* or b*, $O_c$ and $R_c$ indicate the grid images of the ’unflattened’ intensity matrices $O$ and $R$ respectively, edge is a binary edge detector using the Sobel approximation [sobel33x3] to the derivative (without edge-thinning), and entropy is a function which measures the amount of information (entropy [entropy]) as defined in Equation 7.

$\operatorname{entropy}(x) = -\sum_i x_i \log_2 x_i$   (7)

where $x$ is a normalised input vector (summing to 1) which, in our case, is a ’flattened’ error-of-edge image (e.g. Figure 2 (C)), and $i$ is an element index. When the entropy of the error of two edge images is low, it indicates a higher similarity of edge features between the two intensity images. However, we do not have a closed-form solution for its global minimum. In practice, a suitable local minimum in a reasonable range usually serves the purpose. We thus propose a brute-force search for a local minimum of $t$ over a fixed range with a fixed interval precision. A visualised example of $\hat{\alpha}$ and the plot of its search are shown in Figure 2.
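The pieces of Equations 6-7 might be sketched as follows; the edge-map threshold and the candidate grid for $t$ are assumed choices, not values from the paper:

```python
import numpy as np

def entropy(x):
    """Eq. (7): Shannon entropy of a non-negative vector normalised to sum 1."""
    x = np.asarray(x, float).ravel()
    s = x.sum()
    if s == 0:
        return 0.0
    x = x / s
    nz = x[x > 0]                      # 0 * log2(0) is taken as 0
    return float(-np.sum(nz * np.log2(nz)))

def sobel_edges(img, thresh=0.5):
    """Binary edge map from the 3x3 Sobel approximation (no edge-thinning);
    the threshold on normalised gradient magnitude is an assumed choice."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    gx = sum(kx[i, j] * p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    g = np.hypot(gx, gy)
    return g > thresh * g.max() if g.max() > 0 else np.zeros_like(g, bool)

def search_threshold(score, ts):
    """Brute-force search of Eq. (6): pick the candidate t with the lowest
    summed edge-difference entropy, as computed by score(t)."""
    return min(ts, key=score)
```

Here `score(t)` would rebuild the blended image for each candidate `t`, compare `sobel_edges` maps of the a* and b* channels, and sum the `entropy` of the absolute differences.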

Figure 2: Visualisations of the alpha blending. A) The visualised alpha-blending mask used in the irrelevant colour change suppression step. B) The associated entropy plot (horizontal axis: candidate values of $t$). C) Visualisation of the binary edge difference in channel a* for two values of $t$ (the latter more different). The bottom-right number indicates the corresponding value of $t$. See also Figure 1 (A) for the input image and the target primary colour.

3.4 Artefact cleansing

As the previous alpha-blending step has already attempted to minimise edge artefacts, users can usually obtain an artefact-free output image. However, for some rare cases, we also adopt an optional artefact cleansing step called ’regrain’, first proposed in [Pitie2]. It provides strong gradient preservation but also has side effects which may cause minor undesired blurs along edges. Please refer to the cited paper for the algorithm details. Figure 3 shows an example where this optional step improves the result by removing some JPEG block artefacts.

Figure 3: Example of the ’regrain’ [Pitie2] artefact cleansing.

3.5 Acceleration

Our colour manipulation pipeline requires the solution of 10 key model parameters, namely the nine elements of the colour correction matrix $H$ and the blending threshold $t$. Using full-resolution images is not necessary; we therefore adopt down-sampled thumbnail images for solving for $H$ and $t$, and apply the estimated parameters to a full-resolution input image to obtain a full-resolution output.
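The acceleration trick amounts to estimating on a small image and applying at full size. A minimal sketch (the stride-based resize and thumbnail size are assumed stand-ins for proper resampling):

```python
import numpy as np

def thumbnail(img, max_side=64):
    """Crude stride-based down-sampling of an H x W x 3 image; a stand-in
    for proper area-averaged resizing, sufficient for parameter estimation."""
    step = max(1, int(np.ceil(max(img.shape[:2]) / max_side)))
    return img[::step, ::step]
```

One would estimate $H$ (and search $t$) on `thumbnail(img)`, then apply them at full resolution, e.g. `img.reshape(-1, 3) @ H` followed by the alpha blend.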

4 Evaluation

In this section, we present the result comparison and some useful discussions about our method’s practical use-cases.

4.1 Results

We compare our method with a state-of-the-art palette-based re-colouring method [chang2015palette] and with manually edited results produced by a professional colour preference researcher. Figure 4 shows some visual comparisons. We found that our outputs are mostly comparable to the manually edited results, which take 2-5 minutes of labour time per image. Most of the human labour time is spent on masking the image (for primary colour pixels). Once the mask is completed, the remaining recolouring takes about 1 minute. All the results in Figure 4 were produced without the ’regrain’ step enabled.

Figure 4: Result visual comparison. The target colours are shown at the top right of the input images. The label ’Human’ refers to the column of results manually generated by a professional colour preference researcher.

Our method also has some failure cases as shown in Figure 5. These failures were caused by the initial step of colour clustering. When the input image only has one colour, the MeanShift clustering algorithm can mishandle the primary colour extraction. Lowering the maximum cluster number (i.e. 5 in Algorithm 1) can resolve this issue. That said, we could provide this as an optional parameter for users.

Figure 5: Failure cases. The input images are shown at the bottom right and the target primary colours are shown at the bottom left. A) Only a part of the primary colour was replaced. B) An incorrect primary colour was picked and replaced.

Our method provides practical editing efficiency without user intervention. Using the thumbnail acceleration trick, our unoptimised MATLAB implementation (without the regrain step) takes about 1 s to process a 1.2-megapixel image on a MacBook Pro 2015 laptop (2.5 GHz Quad-Core Intel Core i7 CPU).

4.2 Discussions

Previous work has suggested the use of colour to forecast consumer demand or resource-saving levels [schonberger2018reconstituting]. Colour has also been suggested as one of the most powerful visual elements in packaging. Thus, choosing an appropriate colour for the design of packaging or a product can significantly affect consumer decision-making [kauppinen2014strategic]. This work could be applied as a product-colour predictor for studying product-packaging colour in consumer purchase behaviour. Alternatively, as an image generation tool, it can help designers/researchers preview the multi-colour options of a product image.

We also acknowledge that more rigorous user experiments in controlled lighting/display conditions could be carried out after the UK Covid-19 lockdown [horton2020offline]. We therefore commit to providing our source code to the research community in the hope that its evaluation and further potential use-cases can be driven by other cross-discipline communities.

5 Conclusion

In this paper, we present a simple product re-colouring method for assisting consumer colour preference research. We show that by using a colour manipulation pipeline, we can automate this primary colour editing task for consumer colour preference researchers. The complementary colours in the product image are also adjusted to make the primary colour potentially fit better. Future work is required to explore more of its use-cases and strengthen its artefact resiliency.

6 Acknowledgment

We thank Dr Qianqian Pan from the University of Leeds for her useful discussions.

References