Image De-Quantization Using Generative Models as Priors

07/15/2020 · Kalliopi Basioti, et al. · Rutgers University

Image quantization is used in several applications to reduce the number of available colors in an image and, therefore, its size. De-quantization is the task of reversing the quantization effect and recovering the original multi-chromatic-level image. Existing techniques achieve de-quantization by imposing suitable constraints on the ideal image in order to make the recovery problem feasible, since it is otherwise ill-posed. Our goal in this work is to develop a de-quantization mechanism through a rigorous mathematical analysis based on classical statistical estimation theory. In this effort we incorporate generative modeling of the ideal image as suitable prior information. The resulting technique is simple and capable of successfully de-quantizing images that have experienced severe quantization effects. Interestingly, our method can recover images even if the quantization process is not exactly known and contains unknown parameters.


I Introduction

The visual quality of an image is largely affected by the diversity of its color palette, namely the number of available colors for its digital representation. In image quantization, the size of the original color palette is reduced by substituting every color of the original image with the closest color in the reduced-size palette. Image quantization occurs in many cases. Examples are photography with digital cameras that have limited color palettes, and image compression, where we decrease the color levels by reducing the number of bits per pixel used to represent the corresponding color. This in turn results in smaller storage space requirements or processing with smaller bit accuracy. Although image quantization can reduce the size of the color palette whenever this is deemed necessary, it is clearly a lossy procedure. Recovering the lost information, namely the original chromatic representation, is not straightforward, nor is the realistic enrichment of the number of colors.

There are various image quantization techniques, starting with the simplest where we uniformly quantize each chromatic component, and ending with more sophisticated methods where one can, for example, employ the K-means procedure to cluster the pixels, with each pixel represented by a 3-D vector containing the chromatic components, and then select a representative for each cluster [1]. In what follows, for simplicity, we focus mostly on uniform quantization, but our results are readily extendable to more complicated quantization schemes.

In our present work we are interested in the problem of de-quantization, namely, recovering the original multi-color-level version from a quantized image. It is obvious that the de-quantization problem is ill-posed, since many different images, after being quantized, can yield the same quantized version. In order to obtain a single solution it is therefore clear that we must impose meaningful constraints. These constraints can be loosely distinguished into two major categories. In the first, the constraints attempt to capture the structural properties of the image. Well-known methods in this category include: a) the total variation method [2], where one assumes that natural images tend to be smooth without very high-frequency components; b) low-rank methods [3], where the original image, when represented as a matrix, admits a low-rank decomposition; c) sparse methods [4], where the original image enjoys a sparse representation in the space spanned by a suitable dictionary.

The second category involves constraints that attempt to capture the statistical behavior of the original image. More precisely, the original image is assumed to be a realization of a random image for which we have available a statistical description in the form of a joint probability density of its elements (the chromatic components of each pixel). Instead of the probability density one may also employ an equivalent statistical representation such as, for example, a generative model. By properly engaging this prior knowledge one may solve the de-quantization problem very efficiently. Currently, there exist very few methods that fall under the second category. Early such techniques were using the generative model only partially [5] by limiting it to the generator function. This idea frequently resulted in convergence to visually incorrect estimates. In order to improve upon the original estimates, the discriminator function (coming from an adversarial design of the generator model) was also employed in [6, 7] in the form of a regularizer term. However, this idea increases the computational complexity significantly since it requires off-line tuning of the weighting parameter of the regularizer term.

It is clear that the two categories rely on completely different forms of constraints, with the first assuming some form of smoothness in the image or some transformed version of it, and the second relying on a probabilistic description of the original image, namely, how probable the occurrence of the observed realization is.

In this work we adopt the second category of constraints and assume that a generative model is available that captures the statistical behavior of the original image. We basically intend to specialize the results developed for image restoration in [8] to the de-quantization problem. This specialization is not straightforward, since it demands proper mathematical analysis and the definition of suitable functions which are not mentioned in the general problem treated in [8]. As in [8], we intend to reach our goal through a rigorous mathematical analysis based on classical statistical estimation theory, which will lead us to a well-defined optimization problem whose solution provides the desired estimate. Regarding the final optimization, we must add that it will not contain any parameters that require fine-tuning through proper pre-processing, as is the case with the existing methods mentioned before. Even more worth mentioning is the fact that we will be able to treat successfully problems which contain unknown parameters (e.g. not exactly known quantization strategies), something that has no equivalent in existing methods. Next, let us briefly recall relevant results from statistical estimation.

II Statistical Estimation Theory

Even though a more detailed version of the results we are going to use appears in [8], for completeness and self-sufficiency of our article we present, without major discussion, theoretical elements that are relevant to the problem of interest. We begin by considering two random vectors $X, Y$ where $X$ is hidden while $Y$ is observed. Using $Y$, we would like to estimate the hidden vector $X$ when the two vectors are statistically related, with their relationship captured by the joint probability density function $f(X, Y)$.

Using $f(X, Y)$ one can apply the well-known Bayesian estimation theory [9, pp. 259–279] in order to estimate $X$ from $Y$. We recall that an estimator $\hat{X}$ is any deterministic function of $Y$, and it is by the application of the Bayesian methodology that we can identify the optimum estimator. The Bayesian approach requires the definition of a cost function $\mathcal{C}(\hat{X}, X)$ which places a cost on each combination of estimate $\hat{X}$ and true value $X$. Optimum is considered the estimator that minimizes the average cost $\mathsf{E}[\mathcal{C}(\hat{X}, X)]$, where the expectation is with respect to $X$ and $Y$.

There are various meaningful cost functions and corresponding optimum estimators. We recall [9, pp. 267–268] the minimum mean square error (MMSE) criterion, which leads to the conditional mean estimator; the minimum mean absolute error (MMAE) criterion, which leads to the conditional median estimator [9, page 267]; and finally the maximum a-posteriori probability (MAP) estimator, which is defined as

$$\hat{X} = \arg\max_X f(X|Y), \qquad (1)$$

and proposes as optimum estimate the most likely $X$ given the observations $Y$. It is the MAP estimator we are going to adopt in our methodology, since it has proven itself to yield efficient solutions in other image restoration problems [8].

There is an additional reason we prefer the MAP estimator. In particular, it turns out that it also allows, without much extra complication, the treatment of estimation problems containing unknown parameters. As we argue in [8], if the joint density of $X$ and $Y$ is of the form $f(X, Y; \theta)$, containing a vector of unknown parameters $\theta$, then, following again the Bayesian approach and assuming the most uninformative prior density for $\theta$, in the form of an improper uniform, we arrive at the following extension of the MAP estimator in the presence of unknown parameters

$$\hat{X} = \arg\max_X\, \max_{\theta \in \Theta} f(X, Y; \theta), \qquad (2)$$

where $\Theta$ is some known set. As we can see, we first perform a maximum likelihood estimate of the parameter vector $\theta$ and then we MAP-optimize over the desired $X$. We would like to stress that the optimization proposed in (2) basically assumes that the parameters $\theta$ are realization dependent. In other words, if $Y$ denotes our quantized image then $\theta$ may be different for each image we are asked to process. This assumption clearly excludes methods where one may use past data to learn $\theta$ beforehand and then treat the joint density of $X$ and $Y$ as completely known.

III Generative Models and Parametric Transformations

A generative model is comprised of a generator function $G(Z)$ and an input distribution $f_Z(Z)$. Regarding the problem of interest, it is completely unimportant how the generative model is obtained. We can use adversarial approaches [10, 11, 12], non-adversarial approaches [13, 14] or VAEs [15]. Each one of these methods can provide the necessary pair $\{G(Z), f_Z(Z)\}$ which is needed for our analysis. We would like to stress that we do not require any discriminator function, as existing techniques do, since this function might not even exist if we do not train our generator using adversarial methods.

From the above it is clear that we are interested in a random vector $X$. Instead of describing its statistical behavior through the corresponding probability density, we assume that $X$ is the output of the known deterministic generator function $G(\cdot)$ with the input $Z$ being random and following the known probability density $f_Z(Z)$.

The vector $X$ denotes the ideal image and we subject it to a deformation through a deterministic function $T(X; \alpha)$, where $T(\cdot)$ has known functional form and a number of unknown parameters expressed with the vector $\alpha$. Finally, to this deformed outcome we add noise $W$, which leads to the observed vector $Y$. The noise vector $W$ follows a probability density $f_W(W; \beta)$ which has known functional form but may contain a number of unknown parameters expressed with the vector $\beta$. Summarizing, if $Y$ is our observation vector then

$$Y = T(X; \alpha) + W = T(G(Z); \alpha) + W. \qquad (3)$$

Regarding our problem of interest, $Y$ corresponds to the quantized image and $X$ to the original. We must add that $W$ may represent additive noise but also modeling errors. Indeed, since $G(\cdot)$ is usually some finite-dimensional neural network, it cannot express exactly the desired random vector $X$ and therefore $T(G(Z); \alpha)$ does not represent exactly the observations even if they are noiseless. It is this representation error that we also model as additive “noise”.

Of course, the goal is, from $Y$, to obtain an estimate $\hat{X}$ of $X$. However, such an estimate would require the joint density $f(X, Y)$ which is not available. We propose instead a two-step alternative: in the first step we estimate $\hat{Z}$, that is, the proper input to the generator that gives rise to the observations $Y$; in the second step we define $\hat{X} = G(\hat{Z})$. Indeed, this process makes sense. If the generator model represents very closely the statistical behavior of the random vector $X$, this means that to each realization of $X$ there exists a realization of $Z$ such that $X \approx G(Z)$. Consequently, it is the input to the generator we are seeking first, and then we use it to generate the corresponding output.

It is not difficult to verify (see [8] for details) that the joint density of $Y$ and $Z$ is given by

$$f(Y, Z; \alpha, \beta) = f_W\big(Y - T(G(Z); \alpha); \beta\big)\, f_Z(Z),$$

where the combination $(\alpha, \beta)$ replaces the parameter vector $\theta$ of our general theory. We can now apply (2) and this yields

$$\hat{Z} = \arg\max_Z\, \max_{\alpha, \beta} f_W\big(Y - T(G(Z); \alpha); \beta\big)\, f_Z(Z). \qquad (4)$$

If, additionally, we assume that the additive noise $W$ is Gaussian with mean 0 and covariance matrix $\sigma^2 I_n$, where $I_n$ is the identity matrix, i.e. $W \sim \mathcal{N}(0, \sigma^2 I_n)$ with $\sigma$ unknown, and that the input density is also Gaussian with mean 0 and identity covariance, that is, $Z \sim \mathcal{N}(0, I)$ (we recall that $f_Z(Z)$ can be selected by us, with the most common forms being i.i.d. uniform or i.i.d. standard normal), then the previous estimate becomes

$$\hat{Z} = \arg\min_Z\, \min_{\alpha} \Big\{ n \log \|Y - T(G(Z); \alpha)\|^2 + \|Z\|^2 \Big\}, \qquad (5)$$

where $n$ denotes the size of $Y$ and $W$, and where the previous expression is obtained from (4) by finding explicitly the optimum $\sigma$ and substituting.
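For the reader's convenience, here is a brief sketch of how the unknown noise level $\sigma$ is eliminated when passing from (4) to (5); the steps follow the Gaussian assumptions stated above and use the notation reconstructed in this section.

```latex
% Sketch (under the Gaussian assumptions above): with e = Y - T(G(Z); alpha),
\[
  f_W(e;\sigma)\, f_Z(Z) \;\propto\; \sigma^{-n}
  \exp\!\Big(-\tfrac{\|e\|^2}{2\sigma^2}\Big)\,
  \exp\!\Big(-\tfrac{\|Z\|^2}{2}\Big).
\]
% Maximizing over sigma gives sigma_hat^2 = ||e||^2 / n; substituting and taking -2 log
% (dropping constants) leaves
\[
  n \log \|e\|^2 \;+\; \|Z\|^2,
\]
% which is the cost minimized in (5).
```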

If we are going to employ some version of steepest descent to iteratively solve (5), then we can see that the previous problem is equivalent to the solution of the following minimization problem

$$\min_{Z, \alpha}\, \Big\{ \|Y - T(G(Z); \alpha)\|^2\, e^{\frac{\|Z\|^2}{n}} \Big\}, \qquad (6)$$

where each iteration of the steepest descent will provide parallel estimates of $Z$ and $\alpha$. Of course, once the iterations converge, we only need the final estimate of $Z$.
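To make the optimization in (6) concrete, the following minimal PyTorch sketch minimizes the cost jointly over the latent vector and a transformation parameter. The generator G, the differentiable quantizer T_soft, the observation y and the single scalar parameter alpha are placeholders assumed for illustration, not the authors' released code; any differentiable surrogate of the quantizer can be plugged in for T_soft.

```python
import torch

def dequantize(y, G, T_soft, latent_dim, n_iters=200_000, lr=0.1):
    """Minimize ||y - T_soft(G(z), alpha)||^2 * exp(||z||^2 / n), cf. (6)."""
    n = y.numel()                                    # size of the observation vector
    z = torch.randn(latent_dim, requires_grad=True)  # generator input to be estimated
    alpha = torch.tensor(10.0, requires_grad=True)   # transformation parameter (e.g. sharpness)
    opt = torch.optim.SGD([z, alpha], lr=lr, momentum=0.999)
    for _ in range(n_iters):
        opt.zero_grad()
        residual = y - T_soft(G(z), alpha)
        loss = residual.pow(2).sum() * torch.exp(z.pow(2).sum() / n)
        loss.backward()
        opt.step()
    return G(z).detach(), alpha.detach()             # restored image estimate and parameter
```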

When we apply this general theory to the problem of interest we will specify, more precisely, the transformation $T(\cdot)$. But even at this point we can claim a number of interesting characteristics that our proposed method is going to enjoy:

  • It is the result of a rigorous mathematical analysis based on the statistical estimation theory.

  • It does not contain any unknown weights that require fine-tuning with the help of training data.

  • It can accommodate transformations that contain unknown parameters which may change with every realization and which it is therefore impossible to fine-tune beforehand. The last characteristic is truly exceptional and not encountered in any other generative-model-based method that offers a solution to the problem of interest.

Let us now apply this general idea to various versions of the de-quantization problem.

IV Application to De-Quantization

Suppose that we have an image $X = [x_1, \ldots, x_N]$ where $x_1, \ldots, x_N$ are the image pixels. If the image is colored then we know that each $x_i$ is a 3-D vector containing the corresponding three chromatic components. We assume that each chromatic component takes values in the interval $[0, 1]$. If our quantization method produces $L$ colors $\{c_1, \ldots, c_L\}$, with each $c_j$ being a 3-D vector containing its corresponding chromatic components, then the transformation is applied to each pixel separately and we have $T(X) = [T(x_1), \ldots, T(x_N)]$ with $T(x_i)$ defined as follows

$$T(x_i) = c_{j_i}, \qquad j_i = \arg\min_{1 \le j \le L} \|x_i - c_j\|. \qquad (7)$$

In other words, for each pixel $x_i$, we select the color from $\{c_1, \ldots, c_L\}$ which is closest (in the Euclidean distance sense) to its 3-D chromatic vector. Unfortunately, transformations of the form of (7) are not differentiable, a property which is necessary when we employ steepest-descent-type iterative solvers for (6). Even though (7) is the quantizer used to produce the actual data $Y$, for the estimation of $Z$ we need to replace it with a function which is differentiable. We propose the following softmax alternative

$$T_\gamma(x_i) = \frac{\sum_{j=1}^{L} c_j\, e^{-\gamma \|x_i - c_j\|^2}}{\sum_{j=1}^{L} e^{-\gamma \|x_i - c_j\|^2}}. \qquad (8)$$

Indeed, we observe that as $\gamma \to \infty$ we have that $T_\gamma(x_i) \to T(x_i)$. In fact, it is better to let $\gamma$ be a parameter to be optimized instead of fixing it to some arbitrary value. This means that $\gamma$ will be a component of $\alpha$ in (6).
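A minimal NumPy sketch of the nearest-color quantizer (7) and its softmax relaxation (8); the palette array and the value of the sharpness parameter are illustrative assumptions.

```python
import numpy as np

def quantize_hard(x, palette):
    """(7): map each pixel to its closest palette color (Euclidean distance)."""
    # x: (N, 3) pixels in [0, 1]; palette: (L, 3) colors c_1, ..., c_L
    d2 = ((x[:, None, :] - palette[None, :, :]) ** 2).sum(-1)  # (N, L) squared distances
    return palette[d2.argmin(axis=1)]

def quantize_soft(x, palette, gamma):
    """(8): softmax-weighted average of the palette; tends to (7) as gamma grows."""
    d2 = ((x[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
    w = np.exp(-gamma * (d2 - d2.min(axis=1, keepdims=True)))  # numerically stabilized weights
    w /= w.sum(axis=1, keepdims=True)
    return w @ palette                                         # (N, 3) soft-quantized colors
```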

If we adopt a more classical quantization scheme, as for example uniform quantization with the same number $K$ of levels in each chromatic component, then the previous transformation simplifies and, instead of applying it per pixel using the 3-D chromatic vectors, we can apply the same idea to each pixel and each chromatic component separately. This means that if $X = [x_1, \ldots, x_N]$ and each pixel $x_i = [r_i, g_i, b_i]$, where $r_i, g_i, b_i$ denote the red, green and blue components, then $T(x_i) = [T(r_i), T(g_i), T(b_i)]$ with

$$T(r_i) = \sum_{k=1}^{K} \mu_k\, \mathbb{1}_{A_k}(r_i), \qquad (9)$$

where $\mu_1 < \cdots < \mu_K$ are the $K$ uniformly spaced quantization levels in $[0, 1]$, $A_k$ is the interval of values that are quantized to $\mu_k$, and $\mathbb{1}_{A}(\cdot)$ denotes the indicator function of the set $A$. We have a similar definition for $T(g_i)$ and $T(b_i)$. It is also clear that the total number of different colors is $K^3$. For the solution of (6) we need to replace the transformation $T(\cdot)$ with a differentiable alternative and we can use the softmax version

$$T_\gamma(r_i) = \frac{\sum_{k=1}^{K} \mu_k\, e^{-\gamma (r_i - \mu_k)^2}}{\sum_{k=1}^{K} e^{-\gamma (r_i - \mu_k)^2}}, \qquad (10)$$

where, again, as we can see from Fig. 1, we have $T_\gamma(r_i) \to T(r_i)$ as $\gamma \to \infty$.

Fig. 1: Softmax approximation defined in (10) of the uniform quantization function defined in (9) for a fixed number of levels $K$ and different values of the parameter $\gamma$.
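Similarly, a short NumPy sketch of the per-component uniform quantizer (9) and its softmax relaxation (10); taking the levels as K equispaced points in [0, 1] is one natural choice consistent with the description above, not necessarily the exact levels used in the experiments.

```python
import numpy as np

def uniform_quantize(r, K):
    """(9): snap each component in [0, 1] to the nearest of K equispaced levels."""
    levels = np.linspace(0.0, 1.0, K)                      # mu_1, ..., mu_K
    return levels[np.abs(r[..., None] - levels).argmin(-1)]

def uniform_quantize_soft(r, K, gamma):
    """(10): softmax relaxation of (9); recovers (9) as gamma -> infinity."""
    levels = np.linspace(0.0, 1.0, K)
    d2 = (r[..., None] - levels) ** 2
    w = np.exp(-gamma * (d2 - d2.min(-1, keepdims=True)))  # stabilized softmax weights
    w /= w.sum(-1, keepdims=True)
    return (w * levels).sum(-1)
```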

Let us now present different de-quantization challenges using the popular CelebA [16] dataset, which contains 202,599 RGB images that were cropped and resized and then separated into two sets: 202,499 for training and 100 for testing. The proposed methodology will be tested against the 100 testing images.

Since our method requires a generative model, we use the training set to design it, and the same model is used for all de-quantization problems we present next. We employ the adversarial approach and, in particular, the progressive GAN method developed in [11]. We adopt the following configuration. For the generator we use an input of size 512 with i.i.d. Gaussian elements, as discussed in Section III. The generator has five layers, each consisting of two convolutions, with the first and last layers differing slightly in their kernel configuration. After each convolution a leaky ReLU is applied, except for the last convolution where no nonlinear function is employed. The intermediate layers also involve an upsampling operation. The discriminator has six layers in total; the first five layers have two convolutions each, with the first layer containing an additional layer and the last layer using its own kernel configuration. After each convolution we apply a leaky ReLU, except for the last kernel where no nonlinearity is used. In the intermediate layers we apply downsampling, except for the last layer. Finally, we employ a fully connected part that provides the scalar output of the discriminator. We recall that in our approach the discriminator is not used, since the generator along with the input density are supposed to completely capture the statistical behavior of the dataset.

Before we proceed with our simulations, we would like to discuss a slight variation of our main de-quantization method, based on an idea proposed in [7], as a simpler alternative to (6). Instead of using the transformation $T(\cdot)$, in [7] it is replaced with the identity, namely $T(X) = X$. This idea, used in our approach, consists in replacing (6) with the simpler optimization problem

$$\min_{Z}\, \Big\{ \|Y - G(Z)\|^2\, e^{\frac{\|Z\|^2}{n}} \Big\}, \qquad (11)$$

where no quantization scheme is applied on the output of the generator and the latter is simply matched to the observations $Y$. As we will have the chance to confirm in the simulations that follow, this simplification has quite satisfactory performance if the number of colors after quantization is relatively high. However, when this number is small it may fail miserably.

IV-A Uniform quantization on all chromatic components

Let us start with the simplest version of the problem, where we quantize the chromatic components uniformly as described in (9). We are going to examine the performance for different values of the number of levels $K$. For quantization we use the transformation defined in (9), but for de-quantization in (6) we employ the approximation defined in (10).

Fig. 2: De-quantization results with $K$ levels per chromatic component. Row a) Original; b) Quantized; c) [7], identity transform; d) Proposed, identity transform; e) Proposed, softmax transform.

If $N$ is the number of pixels then in this case $n = 3N$, since each pixel contains three chromatic components.

In all competing methods, in this and all subsequent simulations, we apply the momentum gradient descent [17] with normalized gradients, where the momentum hyperparameter is set to 0.999, the learning rate to 0.1, and each algorithm runs for 200,000 iterations.
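For reference, a sketch of one update of the momentum scheme described above; the exact gradient normalization used by the authors is not specified here, so normalizing each parameter's gradient by its Euclidean norm is an assumption.

```python
import torch

def momentum_step(params, velocities, lr=0.1, momentum=0.999, eps=1e-12):
    """One momentum-gradient-descent step with normalized gradients."""
    for p, v in zip(params, velocities):
        g = p.grad / (p.grad.norm() + eps)  # normalized gradient
        v.mul_(momentum).add_(g)            # velocity accumulation
        p.data.add_(v, alpha=-lr)           # parameter update
    return params, velocities
```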

As we can see in Fig. 2, when the number of quantization levels $K$ is small, of the order of 2 or 3, the proposed method employing the softmax transformation defined in (10) has notably superior performance compared to the other two alternatives. When, however, $K$ increases to 4 and 5, resulting in a high overall number of colors, this difference becomes less pronounced.

Colors (K^3) |      Yeh       | Prop. Identity | Prop. Softmax
             | Error    PSNR  | Error    PSNR  | Error    PSNR
     8       | 0.071   13.105 | 0.046   13.549 | 0.012   19.496
    27       | 0.008   20.500 | 0.005   23.091 | 0.004   23.774
    64       | 0.004   23.221 | 0.003   25.369 | 0.003   25.848
   125       | 0.003   25.008 | 0.002   26.743 | 0.002   26.746
TABLE I: Reconstruction Errors and PSNRs

What we observe from the figure is also corroborated by Table I, where we present the reconstruction errors and the peak signal-to-noise ratio (PSNR). As we see, for the smallest number of levels our proposed method based on the softmax approximation of the quantizer has 7 and 4 times smaller error compared to the method in [6] and the simplified version of our approach, respectively. These factors become 2 and 1.2, respectively, for the largest number of levels.

IV-B De-quantization and colorization

Let us now combine de-quantization with the colorization problem. In particular, we assume that we first apply a linear transformation where, from the RGB representation, we obtain a grayscale intensity equivalent, and then we apply a scalar uniform quantization scheme on the intensity of each pixel. Let us make the resulting transformation explicit. First, we define three matrices $\mathsf{M}_r, \mathsf{M}_g, \mathsf{M}_b$ where each one isolates the corresponding RGB chromatic component. In particular, $\mathsf{M}_r X$ isolates the red component of each pixel. The components are then combined as $0.299\, \mathsf{M}_r X + 0.587\, \mathsf{M}_g X + 0.114\, \mathsf{M}_b X$ to form the intensity vector. We recall that the combination coefficients $(0.299, 0.587, 0.114)$ are considered the ideal values to transform an RGB image to grayscale. Once we have the intensity, we quantize each pixel intensity uniformly using (9), with the intensity replacing $r_i$, and this constitutes the monochromatic quantized vector $Y$. The goal, as before, is from the quantized grayscale image $Y$ to recover the original RGB image $X$.

The next simulation focuses on exactly this problem. Specifically, we adopt a rather severe quantization policy with only two levels, 0 and 1. If $t_i$ denotes the intensity of the $i$-th pixel, then it is quantized to the value $\mathbb{1}_{\{t_i \geq 0.5\}}$. In other words, we use as threshold the middle of the intensity range, generating a uniform two-level quantization. Of course, if we would like to use this scheme for de-quantization we need to replace the indicator function with a differentiable alternative. As such we can use the sigmoid $1/(1 + e^{-\gamma(t_i - 0.5)})$ where, again, as $\gamma \to \infty$ we approach the ideal quantizer. Of course, we consider $\gamma$ to be a parameter to be optimized through the optimization problem defined in (6), in order to find the sigmoid approximation that best fits the data. In this case, if $N$ is the number of pixels then $n = N$, since for each pixel we have only one intensity value.
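A short NumPy sketch of the forward transformation used in this experiment and its sigmoid relaxation; the 0.299/0.587/0.114 grayscale weights are the standard values referred to above and the threshold is the known value 0.5.

```python
import numpy as np

GRAY = np.array([0.299, 0.587, 0.114])  # standard RGB-to-grayscale weights

def gray_binary(rgb):
    """Known transformation: RGB -> intensity -> two-level quantization at 0.5."""
    intensity = rgb @ GRAY              # rgb: (N, 3) pixels in [0, 1]
    return (intensity >= 0.5).astype(rgb.dtype)

def gray_binary_soft(rgb, gamma):
    """Sigmoid surrogate used inside (6); approaches the hard threshold as gamma grows."""
    intensity = rgb @ GRAY
    return 1.0 / (1.0 + np.exp(-gamma * (intensity - 0.5)))
```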

Fig. 3: De-quantization and colorization of 2-level gray images. Row a) Original; b) Black and white version; c) [7], identity transform; d) Proposed, identity transform; e) Proposed, sigmoid transform.

As we can see from Fig. 3, only our full method, with the quantization scheme approximated by the sigmoid, obtains extremely satisfactory results. We not only de-quantize but also colorize, successfully recovering the correct chromatic components. This is rather extraordinary considering the very limited information contained in the quantized black-and-white data. Apparently, the generative model contains sufficient prior information which is capable, at least for this class of images, of filling the gaps and successfully restoring the original images.

We must stress that in this example the transformation leading to the quantized values was considered completely known. In other words, the combination coefficients and the quantizer threshold 0.5 were used in the restoration process.

IV-C De-quantization, colorization and parameter estimation

Let us now consider again the transformation from RGB to grayscale using the same combination we mentioned in our previous example. However, in this example the two-level quantization threshold is no longer set to 0.5 but to some value $\eta$, with the latter being unknown during the restoration process. Such a situation arises when, for instance, for the two-level quantization we use the Otsu thresholding scheme, where the threshold is image (realization) dependent and computed with the method proposed in [18]. Consequently, at each pixel, quantization is of the form $\mathbb{1}_{\{t_i \geq \eta\}}$ where, as mentioned, for the de-quantization and colorization procedure $\eta$ is an unknown parameter. For restoration we need to approximate the indicator function with a differentiable alternative. We propose a sigmoid of the form $1/(1 + e^{-\gamma(t_i - \eta)})$ which is centered around the unknown threshold $\eta$. Both parameters $\gamma$ and $\eta$ are considered unknown, and the optimization problem in (6) must include a minimization with respect to them. As in the previous experiment, $n = N$ since, again, each pixel has one intensity value.
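Relative to the previous sketch, the only change is that the threshold now enters as a second unknown parameter, optimized in (6) together with the sharpness; a minimal illustration under the same assumptions:

```python
import numpy as np

def gray_binary_soft_eta(rgb, gamma, eta):
    """Sigmoid surrogate with unknown threshold eta; gamma and eta are both estimated in (6)."""
    intensity = rgb @ np.array([0.299, 0.587, 0.114])  # grayscale intensity per pixel
    return 1.0 / (1.0 + np.exp(-gamma * (intensity - eta)))
```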

Fig. 4: De-quantization and colorization of 2-level gray images. Row a) Original; b) Black and white version; c) [7], identity transform; d) Proposed, identity transform; e) Proposed, sigmoid transform and threshold estimation.

Fig. 4 captures the corresponding results when $\eta$ is set to 0.4. In the method of [7], and in our version with the simplified optimization problem where the transformation is replaced by the identity, the exact knowledge of the quantizer is not required. In the same figure, the last row contains our results when the threshold is unknown and we estimate it along with $Z$ and $\gamma$.

Fig. 5: De-quantization and colorization of 2-level gray images. Row a) Original; b) Black and white version; c) Proposed, identity transform; d) Proposed, sigmoid transform and threshold estimation.

Fig. 5 contains a similar experiment, only now the threshold is set by the Otsu thresholding scheme. In this case the method in [7] cannot be applied, because it is impossible to fine-tune the weight of the regularizer term since this requires a fixed transformation $T(\cdot)$. As we mentioned, this is not the case here because the transformation contains parameters that are realization dependent (they change with every image). For this reason we included only our simplified method, with the results appearing in the third row, while we reserved the last row for the full method.

Fig. 6: Estimation error variance of the threshold estimate $\hat{\eta}$ as a function of the number of iterations.

Fig. 6 contains the evolution of the estimation error variance of $\hat{\eta}$ with the number of iterations. We have included the two cases we presented before, namely $\eta = 0.4$ (blue) and $\eta$ set by the Otsu thresholding mechanism (red). As we can see, our parallel estimator provides relatively reasonable estimates, considering the severe deformation the original image has been subjected to.

Threshold        |      Yeh       | Prop. Identity | Prop. Softmax
                 | Error    PSNR  | Error    PSNR  | Error    PSNR
0.5 (known)      | 0.092   11.770 | 0.076   11.474 | 0.029   15.841
0.4 (unknown)    | 0.112   11.693 | 0.087   10.956 | 0.029   15.750
Otsu (unknown)   |   --      --   | 0.085   11.079 | 0.033   15.744
TABLE II: Reconstruction Errors and PSNRs

The satisfactory performance of our full method is also depicted in Table II, where the error and PSNR values suggest that it enjoys significantly better performance compared to its competitors.

V Conclusion

We have presented a de-quantization method based on generative modeling of the ideal image. The proposed technique requires the solution of an optimization problem that was derived through a rigorous mathematical analysis of the de-quantization problem, using ideas borrowed from statistical estimation theory. Our optimization problem is completely specified, without any unknown weights attached to a regularizer term that would require fine-tuning beforehand. We must also mention that our processing scheme is capable of treating problems with unknown parameters, as for example in two-level quantization when the quantization threshold is unknown. In such cases it simultaneously estimates the parameters and restores the original image very successfully.

Acknowledgement

This work was supported by the US National Science Foundation under Grant CIF 1513373, through Rutgers University.

References