An Adaptive Statistical Non-uniform Quantizer for Detail Wavelet Components in Lossy JPEG2000 Image Compression

Abstract

The paper presents a non-uniform quantization method for the Detail components in the JPEG2000 standard. Incorporating the fact that the coefficients lying towards the ends of the histogram plot of each Detail component represent the structural information of an image, the quantization step sizes become smaller as they approach the ends of the histogram plot. The variable quantization step sizes are determined by the actual statistics of the wavelet coefficients. Mean and standard deviation are the two statistical parameters used iteratively to obtain the variable step sizes. Moreover, the mean of the coefficients lying within the step size is chosen as the quantized value, contrary to the deadzone uniform quantizer, which selects the midpoint of the quantization step size as the quantized value. The experimental results of the deadzone uniform quantizer and the proposed non-uniform quantizer are objectively compared by using Mean-Squared Error (MSE) and Mean Structural Similarity Index Measure (MSSIM), to evaluate the quantization error and reconstructed image quality, respectively. Subjective analysis of the reconstructed images is also carried out. Through the objective and subjective assessments, it is shown that the non-uniform quantizer performs better than the deadzone uniform quantizer in the perceptual quality of the reconstructed image, especially at low bitrates. More importantly, unlike the deadzone uniform quantizer, the non-uniform quantizer accomplishes better visual quality with only a few quantized values.

Keywords: Mean and standard deviation, quantization, discrete wavelet transform, image compression, image processing, human visual system, JPEG2000 standard.

1 Introduction

The JPEG2000 standard (hereinafter: standard) is a state-of-the-art compression system for diverse applications and platforms, embedded into a single system and a single compressed bit stream [1, 2, 3]. Apart from providing higher compression efficiency and improved image quality compared to baseline JPEG, additional features like multiresolution representation, embedded coding, signal-to-noise-ratio scalability, region-of-interest coding, and error robustness are included in the standard [2, 3, 4].

One major reason for the better performance of the standard over baseline JPEG and other image compression methods is the introduction of the discrete wavelet transform (DWT) into the standard. The perfect reconstruction property of DWT has enabled lossy and lossless compression in a unified system. Moreover, DWT possesses multiresolution capability, which naturally brings this property into the standard. Further, DWT provides high energy compaction, resulting in a better compression ratio. Also, DWT removes blocking effects because it is applied to the complete image, yielding higher decorrelation [4].

Another important factor that results in the improved performance is the embedded coding of DWT coefficients, which is accomplished at present by using a uniform quantizer [4, 5]. Fig. 1 displays the fundamental block diagram of the standard. The quantization process contributes most to the lossy compression because the quantization step size controls the compression ratio and bitrate. For example, in any uniform quantizer, large step sizes will result in higher compression compared to small step sizes. Other encoding steps like DWT and entropy coding, as well as the choice of color space, also contribute to higher compression, but unlike quantization, there is no direct objective correlation between the variation in these encoding steps and the amount of compression that can be achieved; their influence and control are limited. Therefore, the right choice of quantizer and quantization step size is important to obtain maximum compression for the perceived quality of an image.

The Parts I-II of the standard employ a deadzone uniform quantizer for compression, which incorporates Shannon's rate-distortion (R-D) theory [6, 7] to select quantization step sizes for each subband. Mean-Squared Error (MSE) is the most common measure used to optimize R-D for the given data. However, in the case of images (and also videos), MSE may not relate to the perceived quality of the image [8, 9] because it does not incorporate the Human Visual System (HVS) into its calculations, and HVS is the factor that determines the visual quality of an image. To incorporate HVS, the standard allows the assignment of a weighting factor to each subband depending upon its visual significance in the image. This technique is called visual frequency weighting [10], and it is categorized into fixed visual weighting [10] and visual progressive weighting [11]. In fixed visual weighting, the weights are computed based on the viewing distance of the reconstructed image, whereas for visual progressive weighting, the weights are selected from a table of weights corresponding to a specific bitrate. It enables the inclusion of a common visual factor in both the quantization and the embedding process for effective compression. Moreover, distortion-adaptive visual progressive weighting is also present in the standard [12], which considers visual distortions due to suprathresholds before choosing an appropriate weight for a subband.

Figure 1: Block Diagram of the JPEG2000 Standard.

The standard allows the use and implementation of HVS algorithms to produce perceptually superior lossy images. Many perceptual quantization schemes have been proposed that incorporate HVS in lossy image compression. Nadenau and Reichel [13] analyzed the contrast sensitivity curves for different color spaces, and accordingly, calculated the visual weighting factors for each subband. In their book [14], Taubman and Marcellin suggested a visual masking technique for the standard. Zeng et al. [15] applied a masking function to retain the significance of the image edges. Watson et al. [16] characterized the visibility thresholds by modelling uniform noise using the subband level, orientation, and display resolution; this mathematical model tried to capture the quantization distortion in advance. Liu et al. [17] incorporated the visibility thresholds, measured from the uniform noise modelling, into the standard. Using psychophysical experiments, Larabi et al. [18] applied visual thresholds on each subband to obtain visually lossless compressed images for digital cinema; it was shown that the maximum bitrate, i.e., 250 Mbps, needs to be increased to achieve this. Similarly, Ramos and Hemami [19, 20] proposed a method to select the quantization step sizes using visibility thresholds obtained through psychophysical experiments for low bitrate wavelet-based image compression. Subsequently, incorporating the previous results [19, 20], Chandler and Hemami [21] proposed a unified contrast-based quantization scheme which provided competitive visual quality at high bitrates and improved visual quality at low bitrates. Gaubatz et al. [22, 23, 24] proposed a spatially selective quantization scheme for wavelet-based image compression that performed similarly to the standard in visual quality. Introducing adaptive quantization step sizes for a subband, Albanesi and Guerrini [25] utilized the contrast sensitivity and luminance masking to calculate the visibility thresholds independently for each component, i.e., Approximation, Horizontal, Vertical, and Diagonal. In a similar work, Liu and Chou [26] captured perceptual redundancy as the noise detection thresholds for each component, and accordingly, adjusted the quantization step sizes for each component. In their two papers [27, 28], Sreelekha and Sathidevi presented an adaptive quantization scheme which extended the luminance component visual modelling to the chrominance component. They used the contrast thresholds to initially eliminate insignificant components, followed by a k-means clustering approach to quantize the remaining coefficients. Wu et al. [29, 30] proposed a mechanism that used visual pruning and human vision modelling to remove visually insignificant coefficients in the JPEG2000 compressed image. More recently, Oh et al. [31] calculated the visibility thresholds using a distortion model that considers the deadzone uniform quantizer properties and the statistical features of wavelet coefficients to obtain visually lossless image compression in the standard.

Alternatively, Reichel et al. [32] further investigated the reversible Integer-to-Integer Wavelet Transform (IWT) by testing various uniform quantization schemes. They concluded that the application of DWT for lossy compression in the standard results in far superior image quality for a small number of quantization step sizes, compared to reversible IWT; however, at high compression ratios, both transforms performed similarly in terms of MSE and visual quality. Long et al. [33] overcame the ineffective performance of reversible IWT and carried out lossy image compression using the 5/3 IWT and a uniform quantizer. The paper proposed two different selections of quantization step sizes for the uniform quantizer, both showing better performance in terms of PSNR. Reichel et al. independently proposed one of the quantization step size schemes presented in Long et al.'s paper. The two uniform quantizers can also be implemented in the standard with the CDF 9/7 wavelet transformed image; however, their effects are unknown.

The current literature on visually lossy or lossless compression in the standard only produces better visual quality compared to the non-perceptual quantizers for a given bitrate. It lacks the capability to directly correlate the quantization of wavelet coefficients with visual quality. Most importantly, in all the cases, a uniform quantization mechanism is used, i.e., the uniform quantizer, deadzone uniform quantizer, and k-means clustering, to name a few. The perceptual quantization algorithms either re-adjust the quantization step sizes and thresholds in the uniform quantizer, or reassign the coefficient values through weights based on their visual significance.

Even though the perceptual thresholding using the deadzone uniform quantizer in the standard produces effective and efficient results, there are some major drawbacks when it is used on the Detail components of the wavelet transformed image:

  1. The uniform quantizer itself does not incorporate HVS. It relies on HVS algorithms for visually lossless and lossy compression.

  2. Except for the case where coefficient values lie nearby zero, the deadzone uniform quantizer treats the information provided by each fixed step size in the image as equally important. This assumption is flawed because the Detail components represent the high frequency coefficients in the horizontal, vertical, and diagonal directions of the image, implying that the coefficients lying farther from the origin are more important. These high frequency coefficients capture the edge information in the images, and hence, depict the overall structure of the image. Quantization with the large step sizes using the deadzone uniform quantizer poses the problem of enhanced structural deformation. On the other hand, if smaller step sizes are used, then the overall bitrate is increased.

  3. The deadzone uniform quantizer has a deadzone region where all the coefficient values near zero are assigned the quantized value zero. Although these coefficients are the least important, carrying minimal information, they constitute the majority of the coefficients and still require one or more representative quantized values.

  4. The number of quantization step sizes required to maintain a certain image quality is not optimal, because the same quality can be achieved with fewer step sizes.

A non-uniform quantizer can potentially overcome the above disadvantages of the deadzone uniform quantizer. A non-uniform quantizer in conformity with HVS and the information represented by the Detail components can be a natural match for HVS algorithms, especially in selecting thresholds for each step size. Moreover, the combination of the non-uniform quantizer and HVS algorithms is likely to perform better than the current approach. To effectively replace the deadzone uniform quantizer, the non-uniform quantizer is required to have the following qualities:

  1. The quantized value within a quantization step size should have overall minimum error with the original coefficients.

  2. The number of the quantization step sizes should be minimum for a given acceptable error. This would allow higher compression for a given visual quality of the compressed image.

  3. The quantizer should be able to distinguish between the essential and non-essential coefficients, and accordingly, choose the appropriate step sizes for the coefficients.

  4. The quantizer should be adaptive so that the step sizes can be instantaneously decided based on the actual wavelet coefficients.

Figure 2: Block Diagram of the Proposed Quantization Scheme.

This paper presents a non-uniform quantizer for the Detail components of a wavelet transformed image. It uses variable step sizes, with the range of the step sizes reducing as they approach the ends of the histogram plot of each Detail component. The non-uniform quantizer was first used in [34] for image segmentation and object separation. Keeping the Approximation component unchanged, Srivastava and Panigrahi [35] recently applied the presented non-uniform quantizer on the Detail components of wavelet transformed images obtained from Daubechies (DB) series wavelets. The results showed that the reconstructed images have high Peak Signal to Noise Ratio (PSNR) and Mean Structural Similarity Index Measure (MSSIM) [36], suggesting potential use in the standard. Building on the previous work [35], this paper applies the algorithm on the Detail components in the standard and compares it with the deadzone uniform quantizer results. The Approximation component is quantized with the deadzone uniform quantizer. The block diagram of the proposed method for the lossy image compression is shown in Fig. 2, and a code sketch of the overall flow is given below.
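To make the flow of Fig. 2 concrete, the following is a minimal sketch of the proposed scheme for one decomposition level, assuming the PyWavelets library (where 'bior4.4' is the CDF 9/7 biorthogonal wavelet used for lossy coding) and the helper functions deadzone_quantize and nonuniform_quantize sketched in sections 2 and 3; it is an illustration, not the reference implementation.

```python
import pywt  # PyWavelets

def compress_one_level(image, delta_a, n_steps=8):
    # One-level 2-D DWT: Approximation (cA) plus the Horizontal,
    # Vertical, and Diagonal Detail components (cH, cV, cD).
    cA, (cH, cV, cD) = pywt.dwt2(image, 'bior4.4')
    # Deadzone uniform quantization of the Approximation component
    # (helper sketched in section 2).
    qA = deadzone_quantize(cA, delta_a)
    # Proposed non-uniform quantization of each Detail component
    # (helper sketched in section 3).
    qH, qV, qD = (nonuniform_quantize(c, n_steps) for c in (cH, cV, cD))
    return qA, (qH, qV, qD)
```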

The paper is organized as follows. Section 2 describes the deadzone uniform quantizer currently used in the standard, followed by section 3, which illustrates the non-uniform quantizer used in this paper. In section 4, the experimental results of the deadzone uniform and non-uniform quantizer are examined and compared. Then, the discussion on the experimental results and the non-uniform quantizer's application in the standard is carried out in section 5. Lastly, section 6 concludes the paper with the future work.

2 Deadzone Uniform Quantizer

The Part I of the standard uses a uniform scalar quantizer having a deadzone around the origin, i.e., the deadzone uniform quantizer. The quantization step size is the same throughout a subband, but step sizes vary among subbands; the step size reduces for subbands representing higher decomposition levels. The quantized values for the wavelet coefficients are calculated as,

$q_b[n] = \mathrm{sign}(y_b[n]) \left\lfloor \frac{|y_b[n]|}{\Delta_b} \right\rfloor$   (1)

where $q_b[n]$ is the quantized value, $y_b[n]$ is the input DWT coefficient, and $\Delta_b$ is the quantization step size. The subscript $b$ represents the subband. For the deadzone region, the quantization step size is $2\Delta_b$.

In the Part II of the standard, the deadzone region is devised to be flexible, i.e., the deadzone region can be of variable length. The rest of the intervals have fixed width $\Delta_b$. Similar to the Part I, the formula for calculating the quantized values is given by,

$q_b[n] = \mathrm{sign}(y_b[n]) \max\left(0, \left\lfloor \frac{|y_b[n]|}{\Delta_b} + \gamma_b \right\rfloor\right)$   (2)

where $\gamma_b$ is the parameter which varies the deadzone step size. The quantization step size of the deadzone region is $2(1 - \gamma_b)\Delta_b$.

The quantization step size ($\Delta_b$) can be obtained from the following equation,

$\Delta_b = 2^{R_b - \epsilon_b} \left(1 + \frac{\mu_b}{2^{11}}\right)$   (3)

where $R_b$ is the predicted bitdepth of the wavelet coefficients at subband $b$, $\epsilon_b$ is the exponent, and $\mu_b$ is the mantissa of the step size. For irreversible wavelets (i.e., lossy compression), $\epsilon_b$ and $\mu_b$ are 5-bit and 11-bit integers, respectively.

In the deadzone uniform quantizer with fixed as well as variable deadzone width, the number of bits required to represent all the quantized coefficient magnitudes is $M_b = G + \epsilon_b - 1$, where $G$ is the number of guard bits. For detailed information about the quantization in the standard, refer to [5, 37].
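As an illustration, equations 1-3 can be written compactly as follows; this is a sketch, with the reconstruction offset r (commonly 1/2) and the function names being assumptions rather than part of the standard's reference software.

```python
import numpy as np

def step_size(R_b, eps_b, mu_b):
    # Equation (3): step size from the 5-bit exponent eps_b and the
    # 11-bit mantissa mu_b of subband b with predicted bitdepth R_b.
    return 2.0 ** (R_b - eps_b) * (1.0 + mu_b / 2.0 ** 11)

def deadzone_quantize(y, delta, gamma=0.0):
    # Equations (1)-(2): sign-magnitude deadzone quantizer indices.
    # gamma = 0 recovers the Part I quantizer (deadzone width 2*delta);
    # other values of gamma vary the deadzone width as in Part II.
    return np.sign(y) * np.maximum(0.0, np.floor(np.abs(y) / delta + gamma))

def deadzone_dequantize(q, delta, r=0.5):
    # Reconstruction at the decoder: nonzero indices map into the interior
    # of their interval (midpoint for r = 0.5); index 0 maps back to 0.
    return np.where(q != 0, np.sign(q) * (np.abs(q) + r) * delta, 0.0)
```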

3 Non-uniform Quantizer

The Detail components in a wavelet transformed image are the high frequency components of the image in the horizontal, vertical, and diagonal directions, acquired from the low pass-high pass (LH), high pass-low pass (HL), and high pass-high pass (HH) filters, respectively. From the human visual perspective, the high frequency components represent the structure of an image, mostly comprising the visible edges in the image. The information about the structure and edges of the image is captured by the high coefficient values of the high frequency components. In other words, the higher the value of a high frequency coefficient, the more information it contains about the structure and edges. Therefore, the high value coefficients need to be maximally preserved and should have minimum error during the quantization process. The coefficient values close to zero, or within a certain range around zero, are least significant or perhaps insignificant in the context of human visual sensitivity and selectivity. Considering this fact, the deadzone uniform quantizer has a step size at and around zero (the deadzone region) twice as large as the step sizes for the rest of the high frequency coefficients. In lossy compression, the distortions that humans are unable to distinguish should be maximally exploited to increase image compression. Incorporating this factor, the presented non-uniform quantization algorithm starts with large quantization step sizes close to zero, and then reduces the step sizes for every next set of coefficients to be quantized, i.e., $\Delta_1 > \Delta_2 > \cdots > \Delta_{N/2}$, where $\Delta_1$ and $\Delta_{N/2}$ are the quantization step sizes for the coefficients with the lowest and highest magnitude, respectively.

The non-uniform quantizer calculates the varying step sizes with the help of the mean ($\mu$) and standard deviation ($\sigma$). The formulae for obtaining the boundaries of the step sizes are given as,

$b_{l,i} = \mu_{l,i} - t_l\,\sigma_{l,i}$   (4)
$b_{r,i} = \mu_{r,i} + t_r\,\sigma_{r,i}$   (5)

where $\mu$ is the mean, $\sigma$ is the standard deviation, $t$ is the skewness parameter, $b$ is the boundary of the quantization step size, and $l$ and $r$ are the subscripts referring to the left and right part of the histogram plot from the mean of the coefficients, respectively. For example, $\mu_{l,i}$ and $\mu_{r,i}$ are the means of the coefficients lying on the left and right side of the overall mean in the histogram plot.

As can be seen, equations 4 and 5 select the boundaries as the variation of coefficients from the mean in the left and right parts of the histogram plot with the help of the standard deviation. This process naturally reduces the standard deviation values as we move towards the ends of the histogram plot because there is a decrease in the number of coefficients with high magnitudes. The reduction in standard deviation reduces the variation from the previous boundary point, resulting in a smaller step size compared to the previous boundaries. Additionally, the real statistics of the coefficients allow an adaptive step size based on the mean and standard deviation of the coefficients considered for the boundary determination. For instance, the step sizes would vary for different images with the same resolution. Moreover, even for the same image, the step sizes belonging to the left and right parts would vary unless the distribution of the coefficients is symmetric.

Here, $t$ is similar to the $\gamma_b$ of the deadzone uniform quantizer with the variable deadzone. It allows the non-uniform quantizer to further vary the step sizes. In general, $t_l$ and $t_r$ can have different values, but it is suggested that $t_l = t_r$ as it serves the purpose in most cases.

For the process of boundary selection, let $C$ be the set of coefficients of a Detail component and $N$ be the even number of quantization step sizes, in addition to the parameters mentioned above. The algorithm to obtain the boundaries is given in the following steps:

  1. Input $C$ and $N$.

  2. $C_{l,0} = \{c \in C : c < \mu(C)\}$ and $C_{r,0} = \{c \in C : c \geq \mu(C)\}$

  3. $b_{l,0} = \mu(C)$ and $b_{r,0} = \mu(C)$

  4. Loop $i = 1$ to $N/2 - 1$ with unit increment.

  5. $\mu_{l,i} = \mu(C_{l,i-1})$, $\sigma_{l,i} = \sigma(C_{l,i-1})$ and $\mu_{r,i} = \mu(C_{r,i-1})$, $\sigma_{r,i} = \sigma(C_{r,i-1})$

  6. $b_{l,i} = \mu_{l,i} - t_l\,\sigma_{l,i}$ and $b_{r,i} = \mu_{r,i} + t_r\,\sigma_{r,i}$ (equations 4 and 5)

  7. $C_{l,i} = \{c \in C : c < b_{l,i}\}$ and $C_{r,i} = \{c \in C : c > b_{r,i}\}$

  8. End loop.

  9. $b_{l,N/2} = \min(C)$ and $b_{r,N/2} = \max(C)$

In most cases, the number of quantization step sizes is even, so $N$ is even in the above boundary selection algorithm. However, the algorithm can be modified to produce an odd number of quantization step sizes. For example, either the left or the right part of the histogram plot can be skipped at any iteration of the loop, merging that part with the coefficients to be considered in the next iteration. A code sketch of the boundary selection follows.
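Below is a minimal sketch of the boundary selection, assuming NumPy; the bookkeeping mirrors steps 1-9 above, and it assumes both tails of the histogram retain coefficients at every iteration (very small components may need an explicit guard).

```python
import numpy as np

def nuq_boundaries(coeffs, n_steps, t_l=1.0, t_r=1.0):
    # Iterative mean/standard-deviation boundary selection (eqs. 4 and 5).
    # Returns the N+1 sorted boundaries for N (even) quantization steps.
    c = np.asarray(coeffs, dtype=float).ravel()
    mu = c.mean()
    left, right = c[c < mu], c[c >= mu]        # step 2: split at the overall mean
    b_left, b_right = [mu], [mu]               # step 3: central boundary
    for _ in range(n_steps // 2 - 1):          # step 4: loop
        b_l = left.mean() - t_l * left.std()   # steps 5-6: boundary from the
        b_r = right.mean() + t_r * right.std() # statistics of the current tail
        b_left.append(b_l)
        b_right.append(b_r)
        left = left[left < b_l]                # step 7: keep only the coefficients
        right = right[right > b_r]             # lying beyond the new boundaries
    b_left.append(c.min())                     # step 9: outermost boundaries
    b_right.append(c.max())
    return np.array(b_left[::-1] + b_right[1:])
```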

The following two features of the above algorithm need to be noted:

  1. The boundaries are selected based on the mean and standard deviation of the considered part of the Detail component histogram plot, making it adaptive.

  2. For a higher number of quantization step sizes, only the current leftmost and rightmost boundaries (excluding $\min(C)$ and $\max(C)$) are needed to find the next boundaries. Strictly speaking, none of the existing boundaries change with an increased number of step sizes; only new boundaries are added towards the ends of the histogram plot.

After implementing the above algorithm, the boundaries of the non-uniform quantizer can be defined as,

$B = \{\beta_0, \beta_1, \ldots, \beta_N\}$   (6)

where,

$\beta_0 = b_{l,N/2} < \beta_1 = b_{l,N/2-1} < \cdots < \beta_{N/2} = \mu(C) < \cdots < \beta_{N-1} = b_{r,N/2-1} < \beta_N = b_{r,N/2}$

All the values lying within their respective boundaries are quantized to their mean. The mean is the centroid, or the center of mass, of the data, and hence it minimizes the total error within a single quantization step. A similar approach is applied in the deadzone uniform quantizer by taking the midpoint of the two boundaries as the quantized value, but the midpoint yields the minimum error only if the coefficients are uniformly distributed within the step, which is unlikely in most cases. The quantized values from the step size boundaries are calculated as,

$\hat{q}_j = \frac{1}{n_j} \sum_{c \in [\beta_{j-1}, \beta_j)} c$   (7)

where $\hat{q}_j$ is the quantized value of the $j$-th quantization step, $j \in \{1, 2, \ldots, N\}$, and $n_j$ is the number of coefficients lying in $[\beta_{j-1}, \beta_j)$ (the last interval also includes its right endpoint $\beta_N$).
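Equation 7 then amounts to replacing every coefficient with the mean of its interval. A sketch, reusing nuq_boundaries from the previous sketch:

```python
import numpy as np

def nonuniform_quantize(coeffs, n_steps, t_l=1.0, t_r=1.0):
    # Equation (7): map each coefficient to the mean (centroid) of its
    # quantization interval, minimizing the squared error per interval.
    c = np.asarray(coeffs, dtype=float)
    bounds = nuq_boundaries(c, n_steps, t_l, t_r)
    # Interval index of every coefficient; only interior boundaries are
    # passed, so min(C) and max(C) fall into the first and last intervals.
    idx = np.digitize(c, bounds[1:-1])
    out = np.empty_like(c)
    for j in range(n_steps):
        mask = (idx == j)
        if mask.any():
            out[mask] = c[mask].mean()
    return out
```

Because the quantized value is itself the reconstruction value, no separate dequantization mapping is needed at the decoder.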

4 Experimental Results

Experiments were conducted on various standard test images obtained from the USC-SIPI Image Database (http://sipi.usc.edu/database/). However, the results are only shown for the Lenna, Pepper, and Baboon images to include as much variety as possible in the limited space. As all three test images are distinct, the effectiveness, and the degree of effectiveness, of the non-uniform quantizer over the deadzone uniform quantizer can be observed, tested, and evaluated. All the images were in gray scale and of the same dimensions. The experiment can easily be extended to color images by applying the non-uniform quantization to each luminance-chrominance color channel; the goal here is to compare the two quantizers, and the results for one channel would be replicated by the other color channels.

The results are divided into three sections. In section 4.1, the quantized values of the Approximation and Detail components obtained from the deadzone uniform quantizer and non-uniform quantizer for all three test images are shown and discussed. The results of the non-uniform quantizer with different step sizes are also examined in this section. Section 4.2 carries out the objective evaluation of the non-uniform quantizer at various quantization step sizes using MSE and MSSIM, and simultaneously compares it with the deadzone uniform quantizer. Lastly, the reconstructed lossy images from the deadzone uniform quantizer and non-uniform quantizer are displayed and perceptually examined in section 4.3 for subjective evaluation.

4.1 Assessment and Comparison of Quantized Values

As shown in Fig. 2, in the proposed methodology, the deadzone uniform quantizer is applied on the Approximation component and the non-uniform quantizer is applied on each Detail component. Figs. 3-5 show the histogram plot of the original coefficient values of each component, followed by the histogram plots of their quantized values from both quantizers at decomposition level 1, for the test images Lenna, Pepper, and Baboon, respectively. For comparison, the deadzone uniform quantization step size is calculated using equation 3 with fixed values of $R_b$, $\epsilon_b$, and $\mu_b$. This results in the same quantized Approximation component for both quantization schemes (see subfigs. b-d for each of Figs. 3-5), allowing the comparison between the quantized values from the deadzone uniform quantizer and non-uniform quantizer for each Detail component. In other words, the comparison between the uniformly and non-uniformly quantized Detail components can only be conducted when the Approximation component is the same for both of them. The number of quantization step sizes for the non-uniform quantizer is provided manually.

Additionally, the differences in the histogram plots of the Detail components quantized by the non-uniform quantizer at the quantization levels ($N$) 4 and 8 can be observed in Figs. 3-5. As mentioned in section 3, the increase of variable step sizes from 4 to 8 does not change the statistics of the quantized values which lie inside the leftmost and rightmost quantized values in the histogram plot. It is the leftmost and rightmost quantized values at $N = 4$ that have been further divided into 6 quantized values at $N = 8$.

Figure 3: Lenna: (a,e,i,m) Histogram plots of the Approximation, Horizontal, Vertical, and Diagonal component, respectively; (b-d) Histogram plots of the Approximation component after the deadzone uniform quantization (UQ) and non-uniform quantization (NUQ); (f-h) Histogram plots of the Horizontal component after UQ and NUQ; (j-l) Histogram plots of the Vertical component after UQ and NUQ; (n-p) Histogram plots of the Diagonal component after UQ and NUQ. N is the number of quantized values.
Figure 4: Pepper: (a,e,i,m) Histogram plots of the Approximation, Horizontal, Vertical, and Diagonal component, respectively; (b-d) Histogram plots of the Approximation component after the deadzone uniform quantization (UQ) and non-uniform quantization (NUQ); (f-h) Histogram plots of the Horizontal component after UQ and NUQ; (j-l) Histogram plots of the Vertical component after UQ and NUQ; (n-p) Histogram plots of the Diagonal component after UQ and NUQ. N is the number of quantized values.

It can be seen in Figs. 3-5 (a,e,i,m) that the coefficient distributions in the histogram plots for the Approximation component and Detail components are very different. The Approximation component is more evenly distributed, whereas each Detail component has a skewed distribution of coefficients, centered at zero in its respective histogram plot. Only the degree of skewness varies: Figs. 3 (e,m) are the most skewed, followed by Fig. 3(j) and Figs. 4(e,j), and then Fig. 4(m) and Figs. 5(e,j,m). The original histogram of each Detail component has a majority of coefficients in the vicinity of zero, with only a few coefficients far from it. Based on their histogram distributions, separate quantization approaches for the Approximation component and Detail components are desirable. The deadzone uniform quantizer quantizes all the components with the same predefined fixed step size, leading to over-quantization in the Detail components and higher bitrates. In contrast, the proposed methodology applies the non-uniform quantizer to each Detail component, mapping all the coefficients into 4 and 8 quantized values using the variable quantization step sizes; see Figs. 3-5 (g,k,o) and Figs. 3-5 (h,l,p), respectively. The contribution of far away coefficients is low because they are extremely few in number. This also shows the capability and adaptability of the non-uniform quantizer to exploit the available statistics in choosing the appropriate step sizes. In this case, more importance is given to the coefficients lying in the vicinity of zero. However, in general, the degree of skewness should determine the number of quantization step sizes. Objective and subjective results for the same are discussed later in sections 4.2 and 4.3.

Figure 5: Baboon: (a,e,i,m) Histogram plots of the Approximation, Horizontal, Vertical, and Diagonal component, respectively; (b-d) Histogram plots of the Approximation component after the deadzone uniform quantization (UQ) and non-uniform quantization (NUQ); (f-h) Histogram plots of the Horizontal component after UQ and NUQ; (j-l) Histogram plots of the Vertical component after UQ and NUQ; (n-p) Histogram plots of the Diagonal component after UQ and NUQ. N is the number of quantized values.

Another benefit of the non-uniform quantizer is that it can predetermine the value of $N$, the number of quantized values, which is exactly the same as the number of quantization step sizes; the deadzone uniform quantizer can only decide on the quantization step size, having no control over the number of quantized values. Moreover, the quantization step size is the same for all the Detail components (and also the Approximation component) in the deadzone uniform quantizer. In contrast, the non-uniform quantizer has the flexibility of choosing a different number of quantization step sizes (or quantized values) for each Detail component. See Figs. 3-5 (f,j,n), Figs. 3-5 (g,k,o), and Figs. 3-5 (h,l,p) for comparison.

4.2 Objective Analysis

The deadzone uniform quantizer and non-uniform quantizer are objectively evaluated and compared using MSE and MSSIM. MSE captures the overall quantization error for each component after quantization. Only the Detail components are considered for the MSE comparison because the two different quantizers are applied on them. As mentioned earlier, the Approximation component is quantized by the deadzone uniform quantizer in both approaches, and hence, is not considered for the MSE results. On the other hand, MSSIM shows the performance of both quantizers on the overall reconstructed image. MSSIM is a better metric than PSNR as the former incorporates HVS in its measurement [36, 38]. The quantized Approximation coefficient values being the same, MSSIM reflects the ability of the two quantizers and the extent to which they can visually reproduce the original image. The quantization step sizes for the deadzone uniform quantizer are obtained from equation 3 by varying its parameters, which results in increasing numbers of quantized values. For the non-uniform quantizer, the values of $N$ are specified manually.

MSE and MSSIM are calculated from the following equations,

$\mathrm{MSE} = \frac{1}{L} \sum_{i=1}^{L} (x_i - \hat{x}_i)^2$   (8)

where $L$ is the component length, $x$ is the original component, and $\hat{x}$ is the quantized component.

$\mathrm{MSSIM}(X, Y) = \frac{1}{W} \sum_{w=1}^{W} \frac{(2\mu_{x_w}\mu_{y_w} + C_1)(2\sigma_{x_w y_w} + C_2)}{(\mu_{x_w}^2 + \mu_{y_w}^2 + C_1)(\sigma_{x_w}^2 + \sigma_{y_w}^2 + C_2)}$   (9)

where $W$ is the number of local windows of the image, $w$ is the window number, $X$ is the original image, $Y$ is the reconstructed image after quantization, $\mu$ is the mean, $\sigma$ is the standard deviation ($\sigma_{xy}$ the covariance), and $C_1$ and $C_2$ are arbitrary constants to avoid unstable results when the means and variances are near zero. For the calculations, $W = 1$ (MSSIM of the entire image is considered without windows), with $C_1$ and $C_2$ set to the small default constants of [36].
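A sketch of the two metrics as used here, assuming NumPy; with W = 1, MSSIM reduces to a single global SSIM term, and the constants shown are illustrative defaults for images scaled to [0, 1] rather than the paper's exact values.

```python
import numpy as np

def mse(x, x_hat):
    # Equation (8): mean-squared error between the original and the
    # quantized component.
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return np.mean((x - x_hat) ** 2)

def mssim_global(x, y, c1=1e-4, c2=9e-4):
    # Equation (9) with W = 1: a single SSIM term over the whole image.
    # c1 and c2 stabilize the ratio when the means and variances are
    # near zero; the values here are illustrative, not from the paper.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```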

Image      Horizontal               Vertical                 Diagonal
           Uniform     Non-uniform  Uniform     Non-uniform  Uniform     Non-uniform
           N    MSE    N    MSE     N    MSE    N    MSE     N    MSE    N    MSE
Lenna      3    16.84  4    4.05    4    41.86  4    10.57   2    34.07  4    7.29
           6    16.46  6    2.66    8    39.95  6    6.07    4    33.97  6    5.27
           12   15.12  8    2.41    17   35.68  8    5.55    8    33.10  8    4.90
           23   11.74  10   2.39    34   26.96  10   5.51    15   29.83  10   4.84
           44   5.88   12   2.38    65   12.87  12   5.50    29   22.58  12   4.84
           84   0.30   14   2.38    125  0.30   14   5.50    53   10.95  14   4.84
Pepper     7    55.90  4    22.02   7    62.08  4    25.22   3    93.73  4    16.24
           14   53.21  6    8.91    14   59.09  6    10.45   5    93.38  6    13.54
           27   47.32  8    7.55    27   52.45  8    8.86    11   89.50  8    12.92
           52   35.53  10   7.53    53   39.30  10   8.84    20   78.65  10   12.78
           103  16.71  12   7.53    105  18.43  12   8.84    37   58.79  12   12.76
           194  0.31   14   7.53    204  0.31   14   8.84    68   27.71  14   12.76
Baboon     9    574.24 4    108.91  7    196.88 4    39.80   5    445.61 4    84.56
           17   539.11 6    79.09   13   185.78 6    27.97   9    434.24 6    62.54
           34   471.13 8    76.74   26   162.80 8    26.72   18   408.04 8    60.00
           68   348.72 10   76.56   51   120.99 10   26.61   36   356.72 10   59.74
           136  158.78 12   76.56   97   55.87  12   26.60   70   264.27 12   59.73
           264  0.32   14   76.56   185  0.32   14   26.60   132  120.85 14   59.73
Table 1: Comparison of MSE at various quantization levels ($N$) by the deadzone uniform and proposed non-uniform quantizer for the Detail components of the three test images.

Table 1 displays the MSE values obtained from the two quantizers at varying numbers of quantized values $N$ for each Detail component of all three test images. It can be seen that the non-uniform quantizer outperforms the deadzone uniform quantizer in terms of MSE values. In general, the MSE of the non-uniform quantizer is substantially less than that of the deadzone uniform quantizer, except in some cases when $N$ is very high for the deadzone uniform quantizer. For the same $N$, the MSE of the deadzone uniform quantizer is many times higher than that of the non-uniform quantizer. Conversely, for a particular MSE value, the deadzone uniform quantizer requires far more quantized values than the non-uniform quantizer. In addition, the MSE values of the non-uniform quantizer start saturating at $N = 10$, whereas for the deadzone uniform quantizer, they saturate at very high values of $N$ ranging from 84 to 264.

(a) Lenna (b) Pepper (c) Baboon
Figure 6: MSSIM of the test images at different quantized values of the Approximation, Horizontal, Vertical, and Diagonal components, using the deadzone uniform and proposed non-uniform quantizer. The x-axis is the number of quantized values in the Approximation component, while the y-axis shows the MSSIM value. The legends show the numbers of quantized values $N_H$, $N_V$, and $N_D$ in the Horizontal, Vertical, and Diagonal components, respectively, for the non-uniform quantizer. $N_H$, $N_V$, and $N_D$ for the deadzone uniform quantizer are provided at the top of the plot at each point.
Figure 7: Lenna: Comparison of the original image (a) with the reconstructed images produced by the uniform ((b),(f)) and non-uniform quantizer ((c),(d),(e),(g),(h),(i)) at various quantization step sizes. The quantized values for each decomposition component are given in ($N_A$, $N_H$, $N_V$, $N_D$) format, representing the Approximation, Horizontal, Vertical, and Diagonal components, respectively.
Figure 8: Pepper: Comparison of the original image (a) with the reconstructed images produced by the uniform ((b),(f)) and non-uniform quantizer ((c),(d),(e),(g),(h),(i)) at various quantization step sizes. The quantized values for each decomposition component are given in ($N_A$, $N_H$, $N_V$, $N_D$) format, representing the Approximation, Horizontal, Vertical, and Diagonal components, respectively.
Figure 9: Baboon: Comparison of the original image (a) with the reconstructed images produced by the uniform ((b),(f)) and non-uniform quantizer ((c),(d),(e),(g),(h),(i)) at various quantization step sizes. The quantized values for each decomposition component are given in ($N_A$, $N_H$, $N_V$, $N_D$) format, representing the Approximation, Horizontal, Vertical, and Diagonal components, respectively.

In Fig. 6, MSSIM plots are shown after reconstructing the images from the quantized values of each component. To relate the MSSIM values with the number of quantized values associated with the two quantizers, the number of quantized values in the Approximation component is marked on the x-axis. It must be recalled that both quantization schemes have the same Approximation component. The numbers of quantized values for the non-uniform quantizer are shown in the figure legend, in the order of the Horizontal, Vertical, and Diagonal components, while for the deadzone uniform quantizer, they are given at the top of their respective MSSIM values. For the test image Lenna, the MSSIM values for both quantizers are very high and close to each other. Within the non-uniform quantizer, the MSSIM values among different numbers of quantized values are either equal or nearly equal. Like Lenna, the test image Pepper has high MSSIM values for both quantizers, but for numbers of quantized values above 4, the non-uniform quantizer's MSSIM values are either slightly above or slightly below those of the deadzone uniform quantizer. This shows that the non-uniform quantizer with few quantized values can replicate the results of the deadzone uniform quantizer with many quantized values.

However, for the test image Baboon, the MSSIM values are somewhat different for the two quantizers: the non-uniform quantizer performs better when the Approximation component has fewer quantized values, whereas the deadzone uniform quantizer does better with a larger number of quantized Approximation values. This suggests that more non-uniform quantized values are needed to reach the MSSIM level of the deadzone uniform quantizer. Still, they would be far fewer than the number of quantized values required by the deadzone uniform quantizer.

From the above results, it can be seen that the non-uniform quantizer not only captures the information of the original coefficients better than (in terms of MSE) or at par with (in terms of MSSIM) the deadzone uniform quantizer but also requires fewer quantized values to achieve it.

4.3 Subjective Analysis

In this section, the reconstructed images are displayed after the application of both quantizers. Figs. 7-9 exhibit the original and reconstructed images of Lenna, Pepper, and Baboon, respectively. As can be seen, at a few quantized values (subfigs. (b-e)), the reconstructed images from the non-uniform quantizer are much smoother and perceptually closer to the original image than those from the deadzone uniform quantizer. Images from the deadzone uniform quantizer are segmented, whereas the segmentation in the non-uniformly quantized images is low in contrast. On the other hand, when the Approximation component has a large number of quantized values (subfigs. (f-i)), the reconstructed images from both quantizers are visually the same, in addition to being very close to the original image. The only difference is that the deadzone uniform quantizer uses far more quantized values than the non-uniform quantizer in the Detail components. Interestingly, the MSSIM values of the uniform quantizer at those quantized values are higher than those of the non-uniform quantizer, revealing limitations of objective measurement techniques for HVS.

Two conclusions can be drawn from the above results: (1) the non-uniform quantizer results in better visual image quality for a particular number of quantized values, and (2) for a given perceived image quality, fewer quantized values are required by the non-uniform quantizer.

5 Discussion

The non-uniform quantizer preserves the edge information and also allows intrinsic preferential weighting of the high value coefficients by having quantization step sizes of varying length. It is important to minimize the quantization error as the quantization step size approaches the ends of the histogram plot of the Detail component to avoid structural deformation. In an attempt to preserve the structural information, the deadzone uniform quantizer over-preserves the coefficients lying towards zero that represent minimal edge information. This comes at the cost of a large number of quantized values (i.e., small quantization step sizes) leading to high bitrates. At low bitrates, the deadzone uniform quantizer is constrained to have large quantization step sizes (see equation 3), distorting edge information. Both of these problems are due to the fixed step sizes. The non-uniform quantizer with variable step sizes combines the advantages of large and small quantization step sizes and also overcomes their limitations.

The quantized values assigned by the non-uniform quantizer are the means of all the coefficients present in their respective quantization step sizes. This minimizes the quantization error at every step size based on the real statistics of the Detail component for a given image. Unlike the deadzone uniform quantizer with its deadzone region, the non-uniform quantizer does not assign a predetermined quantized value of zero. Embedded coding in the standard assigns as many bits to zero as it assigns to the other quantized values, and therefore, replacing zero with a quantized value would not affect the overall compression.

The number of quantized values required by both quantizers is in direct correlation with the skewness of each Detail component's histogram plot: the more skewed the histogram plot, the fewer quantized values are required. The adaptive approach used by the non-uniform quantizer requires fewer quantization step sizes for histogram plots of different skewness, compared to the deadzone uniform quantizer, which does not consider the coefficients while choosing the step sizes.

Experimental results show that the non-uniform quantizer produces visually improved lossy images compared to the deadzone uniform quantizer. Due to the smaller number of quantized values in the Detail components and the same number of quantized values in the Approximation component, the non-uniform quantizer would also achieve higher image compression. Therefore, its utilization in the standard would result in better image quality as well as higher compression in comparison to the deadzone uniform quantizer. Furthermore, it can easily be embedded into the standard. Another advantage is that the non-uniform quantizer does not require inverse quantization at the decoder, reducing time, resources, and computational complexity.

In the case when the bitrate is fixed, the use of the non-uniform quantizer would allow more bits to represent the Approximation component. In other words, the difference in bits used by the deadzone uniform and proposed non-uniform quantizer for the Detail components can be allocated to the Approximation component, which uses the deadzone uniform quantizer in the proposed scheme. The Approximation component carries more information about the image than the Detail components because the Approximation component is not only the low resolution version of the original image but also represents the low pass frequency coefficients of the image, which are of much more uniform significance compared to a Detail component's coefficients.

Apart from the fixed bitrate applications, bits saved by the non-uniform quantizer can be transferred to the Approximation component in visually lossless compressed images. This would allow HVS algorithms more flexibility in selecting the visibility thresholds to obtain the appropriate quantization step sizes from the deadzone uniform quantizer in the Approximation component. In addition, HVS algorithms can be incorporated in obtaining the boundaries for the variable quantization step sizes in the non-uniform quantizer, especially in determining the $t$ values in equations 4 and 5.

As the non-uniform quantizer is biased towards the coefficients at the ends of the histogram plot, it is incapable of obtaining the desired thresholds for effective quantization of the Approximation component. The non-uniform quantizer works well for histogram plots of skewed distributions. In this paper, it is applied on the Detail components at decomposition level 1. However, it can easily be applied to the Detail components at higher decomposition levels as long as they possess skewed histogram distributions of their respective coefficients.

6 Future Work

The future work would require further development of a comprehensive and robust theory for more quantitative analysis. One key challenge is to find the minimum or an optimal number of variable quantization step sizes required by the non-uniform quantizer for each Detail component to achieve a particular bitrate or visual quality. This would also determine the number of bits to be allocated to the Approximation component, eventually determining its quantization step sizes. Another important theoretical challenge would be to find the relationship among the quality factor, compression ratio, required number of step sizes, and the impact of varying $t$. Further, including current or developing HVS algorithms to adjust the boundaries of the step sizes in accordance with human perception would be useful.

Acknowledgement

The authors would like to thank Aadhar Jain and Parinita Nene of Cornell University for their useful comments.

References

  • [1] T. Acharya and P.-S. Tsai. JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures. Wiley-Interscience, 2004.
  • [2] A. Skodras, C. Christopoulos, and T. Ebrahimi. The JPEG 2000 still image compression standard. IEEE Signal Processing Magazine, 18:36–58, 2001.
  • [3] D.S. Taubman and M.W. Marcellin. JPEG2000: standard for interactive imaging. Proceedings of the IEEE, 90(8):1336–1357, Aug 2002.
  • [4] M. Rabbani and R. Joshi. An overview of the JPEG 2000 still image compression standard. Signal Processing: Image Communication, 17(1):3 – 48, 2002.
  • [5] M.W. Marcellin, M.A. Lepley, A. Bilgin, T.J. Flohr, T.T. Chinen, and J.H. Kasner. An overview of quantization in JPEG 2000. Signal Processing: Image Communication, 17(1):73 – 84, 2002.
  • [6] C.E. Shannon. Coding theorems for a discrete source with a fidelity criterion. In IRE Nat. Conv. Rec., Pt. 4, pages 142–163. 1959.
  • [7] T. Berger and J.D. Gibson. Lossy source coding. IEEE Trans. Inform. Theory, 44:2693–2723, 1998.
  • [8] A.B. Watson, editor. Digital Images and Human Vision. MIT Press, Cambridge, MA, USA, 1993.
  • [9] H.R. Wu and K.R. Rao. Digital Video Image Quality and Perceptual Coding (Signal Processing and Communications). CRC Press, Inc., Boca Raton, FL, USA, 2005.
  • [10] Information technology – JPEG 2000 image coding system – part 1: Core coding system. Technical report, ISO/IEC 15444-1:2000, 2000.
  • [11] J. Li. Visual progressive coding. In Proc. SPIE, volume 3653, pages 1143–1154, 1998.
  • [12] W. Zeng, S. Daly, and S. Lei. An overview of the visual optimization tools in JPEG 2000. Signal Processing: Image Communication, 17(1):85–104, 2002.
  • [13] M.J. Nadenau and J. Reichel. Opponent color, human vision and wavelets for image compression. In Proc. of the 7th Color Imaging Conference, pages 237–242, 1999.
  • [14] D.S. Taubman and M.W. Marcellin. JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer Academic Publishers, Norwell, MA, USA, 2001.
  • [15] W. Zeng, S. Daly, and S. Lei. Point-wise extended visual masking for JPEG-2000 image compression. In IEEE International Conference on Image Processing, volume 1, pages 657–660, 2000.
  • [16] A.B. Watson, G.Y. Yang, J.A. Solomon, and J. Villasenor. Visibility of wavelet quantization noise. IEEE Transactions on Image Processing, 6(8):1164–1175, Aug 1997.
  • [17] Z. Liu, L.J. Karam, and A.B. Watson. JPEG2000 encoding with perceptual distortion control. IEEE Transactions on Image Processing, 15(7):1763–1778, July 2006.
  • [18] M.-C. Larabi, P. Pellegrin, G. Anciaux, F.-O. Devaux, O. Tulet, B. Macq, and C. Fernandez. HVS-based quantization steps for validation of digital cinema extended bitrates. In Proc. SPIE, volume 7240, pages 72400V–72400V–9, 2009.
  • [19] M.G. Ramos and S.S. Hemami. Perceptual quantization for wavelet-based image coding. In IEEE International Conference on Image Processing, volume 1, pages 645–648, 2000.
  • [20] M.G. Ramos and S.S. Hemami. Suprathreshold wavelet coefficient quantization in complex stimuli: psychophysical evaluation and analysis. J. Opt. Soc. Am. A, 18(10):2385–2397, Oct 2001.
  • [21] D.M. Chandler and S.S. Hemami. Dynamic contrast-based quantization for lossy wavelet image compression. IEEE Transactions on Image Processing, 14(4):397–410, April 2005.
  • [22] M.D. Gaubatz, D.M. Chandler, and S.S. Hemami. Spatial quantization via local texture masking. In Proc. Human Vision and Electronic Imaging, 2005.
  • [23] M.D. Gaubatz, D.M. Chandler, and S.S. Hemami. Spatially-selective quantization and coding for wavelet-based image compression. In International Conference on Acoustics, Speech, and Signal Processing, pages 209–212, 2005.
  • [24] M. Gaubatz, S. Kwan, B. Chern, D. Chandler, and S.S. Hemami. Spatially-adaptive wavelet image compression via structural masking. In IEEE International Conference on Image Processing, pages 1897–1900, Oct 2006.
  • [25] M.G. Albanesi and F. Guerrini. An HVS-based adaptive coder for perceptually lossy image compression. Pattern Recognition, 36(4):997–1007, 2003.
  • [26] K.-C. Liu and C.-H. Chou. Locally adaptive perceptual compression for color images. IEICE Trans. Fundam. Electron. Commun. Comput. Sci., E91-A(8):2213–2222, August 2008.
  • [27] G. Sreelekha and P.S. Sathidevi. A wavelet-based perceptual image coder incorporating a new model for compression of color images. International Journal of Wavelets, Multiresolution and Information Processing, 07(05):675–692, 2009.
  • [28] G. Sreelekha and P.S. Sathidevi. An HVS based adaptive quantization scheme for the compression of color images. Digital Signal Processing, 20(4):1129 – 1149, 2010.
  • [29] D. Wu, D.M. Tan, M. Baird, J. DeCampo, C. White, and H.R. Wu. Perceptually lossless medical image coding. IEEE Transactions on Medical Imaging, 25(3):335–344, March 2006.
  • [30] D. Wu, D.M. Tan, and H.R. Wu. Perceptual coding at the threshold level for the digital cinema system specification. In 2010 IEEE International Conference on Multimedia and Expo (ICME), pages 796–801, July 2010.
  • [31] H. Oh, A. Bilgin, and M.W. Marcellin. Visually lossless encoding for JPEG2000. IEEE Transactions on Image Processing, 22(1):189–201, Jan 2013.
  • [32] J. Reichel, G. Menegaz, M.J. Nadenau, and M. Kunt. Integer wavelet transform for embedded lossy to lossless image compression. IEEE Transactions on Image Processing, 10(3):383–392, Mar 2001.
  • [33] M. Long, H.-M. Tai, and S. Yang. Quantisation step selection schemes in JPEG2000. Electronics Letters, 38(12):547–549, Jun 2002.
  • [34] M. Srivastava, S.K. Singh, and P.K. Panigrahi. A semi-automated statistical algorithm for object separation. Circuits, Systems, and Signal Processing, 32(6):3059–3078, 2013.
  • [35] M. Srivastava and P.K. Panigrahi. Non-uniform quantization of detail components in wavelet transformed image for lossy JPEG2000 compression. In ICPRAM, pages 604–607, 2013.
  • [36] Z. Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, April 2004.
  • [37] R.C. Gonzalez, R.E. Woods, and S.L. Eddins. Digital Image Processing Using MATLAB. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2003.
  • [38] Z. Wang and A.C. Bovik. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine, 26(1):98–117, Jan 2009.