Nowadays, data is abundant and exists everywhere, and data privacy is receiving more and more attention, so data hiding technology is developing fast. The traditional way of protecting data is encryption, but encryption exposes the fact that additional data is being transmitted. Data hiding appeared to make up for this shortcoming: it hides the additional data in a cover medium that can be shared publicly, so the behavior of transferring additional data is invisible. As technology advances, some applications have stricter requirements for data hiding; they may need both the additional data and the cover to be lossless. That is to say, the cover must be fully recovered after the additional data is extracted. This technology is named reversible data hiding (RDH) [IEEEexample:ni2003reversible, IEEEexample:1227616, IEEEexample:lin2008multilevel, IEEEexample:sachnev2009reversible, IEEEexample:tsai2009reversible, IEEEexample:chen2013reversible, IEEEexample:li2015efficient, IEEEexample:wang2017rate, IEEEexample:8283771], and it can be employed in many fields, such as the military, medicine, forensics and so on.
Images are a frequently used form of data in daily life, so many RDH algorithms are designed for images. Furthermore, the Joint Photographic Experts Group (JPEG) format is widely used on the network, so research on RDH in JPEG images is very popular. RDH in the spatial domain is growing rapidly, but the corresponding algorithms cannot be directly utilized in JPEG images. This is because RDH in the spatial domain exploits the redundancy of images, while JPEG images are obtained by compressing away that redundancy, which means there is less redundant space in JPEG images. In addition, RDH in spatial images need not consider the file size, but RDH in JPEG images, a compressed format, must take it into consideration. Table I gives the differences between RDH in the spatial domain and in the JPEG domain. Obviously, besides the image quality and the payload, the file size is also an important metric for RDH in JPEG images, because the very purpose of compression is to reduce the file size.
RDH in JPEG images has also developed in recent years. It can be divided into four categories. The first is based on lossless compression, first proposed in [IEEEexample:fridrich2002lossless]; that paper includes not only a method based on lossless compression but also a data hiding approach that modifies quantization table entries according to their parity. The second is based on quantization table modification [IEEEexample:fridrich2002lossless, IEEEexample:wang2013high]. Wang proposed an RDH scheme that modifies the DCT coefficients to embed data while changing the corresponding values in the quantization table [IEEEexample:wang2013high]. It achieves good capacity and image quality, but the file size is relatively large. The third is based on modifying the Huffman table [IEEEexample:qian2012lossless, IEEEexample:hu2013improved, IEEEexample:du2018improved], which keeps the file size of the JPEG image unchanged; however, its embedding capacity is small. The fourth category is based on the modification of quantized DCT coefficients [IEEEexample:huang2016reversible, IEEEexample:wedaj2017improved, IEEEexample:hong2018improved, IEEEexample:xie2018reversible, IEEEexample:hou2018reversible, IEEEexample:liu2018reversible, IEEEexample:xuan2019minimum]. In 2016, Huang [IEEEexample:huang2016reversible] applied histogram shifting (HS) to RDH for JPEG images, where the non-zero AC coefficients valued '1' and '-1' are used as the peak points to embed additional data, the other non-zero AC coefficients shift to make room for the additional data, and the remaining coefficients keep unchanged. Huang also adopted a block selection strategy based on the number of zero-valued AC coefficients in each block to decide which blocks are chosen first for embedding. Good image quality and embedding capacity are realized in [IEEEexample:huang2016reversible], and the file size is kept well.
Then Wedaj [IEEEexample:wedaj2017improved] proposed an improved RDH scheme for JPEG images based on a new coefficient selection strategy, building on Huang's work; the additional data is embedded according to the embedding cost at each position of the block. In addition, Hong [IEEEexample:hong2018improved] and Hou [IEEEexample:hou2018reversible] also made improvements on the basis of Huang's work. Hou [IEEEexample:hou2018reversible] proposed a method based on DCT frequency and block selection. They considered the influence of the quantization step on the change of quantized DCT coefficients and simulated the distortion in blocks before embedding; the scheme combines the selection strategies of [IEEEexample:huang2016reversible] and [IEEEexample:wedaj2017improved]. The method keeps good visual quality and also keeps a small expansion in file size. Liu [IEEEexample:liu2018reversible] utilized difference expansion (DE) in an RDH scheme for JPEG images, which obtains high capacity, but the image quality is not as good, because this method modifies the quantized coefficients to a large extent and thus causes bigger distortion.
From all of the above, we can see that image quality and embedding capacity are very important, and file size is equally important for RDH in JPEG images. However, none of the previous works has taken the file size into consideration separately while designing methods. Here, we use the file size expansion to better display the change of the cover file size after embedding data. Table II shows the considerations of RDH in JPEG images while designing the algorithms. It is clear that the state-of-the-art RDH schemes for JPEG images do not take the file size expansion into account. Thus, we propose a scheme considering not only the rate and distortion performance but also the file size expansion. A multi-objective optimization strategy is used to balance the two, achieving different effects according to different requirements. In addition, the distortion and the file size expansion of JPEG images are carefully designed to better map to the image quality and the file size. Experimental results show that the method outperforms the previous works in both image quality and file size expansion.
The rest of this paper is organized as follows. Section II gives the related works, which explain the necessary theoretical background and some other schemes. The details of the proposed scheme are shown in Section III. Section IV displays the experimental results, and the paper ends by summarizing our work in Section V.
II Related Works
In order to better present our proposed scheme, the prerequisite knowledge is shown first. Understanding the compression from spatial images to JPEG images helps to design the RDH scheme using its characteristics and to comprehend the principle of some RDH schemes in JPEG images. What's more, in our scheme, a multi-objective optimization model is used to express the target of RDH in JPEG images: minimizing the file size expansion and the distortion simultaneously. Thus, the overview of JPEG compression and multi-objective optimization are demonstrated in this section. The base schemes which we improve on are also displayed here.
II-A Overview of JPEG Compression
The essence of compression is to remove or reduce redundant information in images, so that the file size of compressed images can be small. The procedure of JPEG compression is shown in Fig. 1. First, the image is divided into non-overlapping 8×8 sized blocks. Next, the DCT transformation is performed on the divided blocks, and this process can be described by (1).
Here, x and y represent the indices of a pixel in the spatial image, f(x, y) is the pixel value at position (x, y), u and v are the indices of a coefficient in the JPEG image, and F(u, v) is the DCT coefficient value at position (u, v).
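For concreteness, the 8×8 DCT of (1) can be sketched directly from its definition (a straightforward, unoptimized implementation; the function name is ours):

```python
import numpy as np

def dct2_block(block):
    """2-D DCT of one 8x8 spatial block, written directly from Eq. (1):
    F(u,v) = (1/4) C(u) C(v) sum_x sum_y f(x,y) cos((2x+1)u*pi/16) cos((2y+1)v*pi/16),
    with C(0) = 1/sqrt(2) and C(k) = 1 otherwise."""
    N = 8
    C = lambda k: 1.0 / np.sqrt(2.0) if k == 0 else 1.0
    F = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += block[x, y] * np.cos((2 * x + 1) * u * np.pi / 16) \
                                     * np.cos((2 * y + 1) * v * np.pi / 16)
            F[u, v] = 0.25 * C(u) * C(v) * s
    return F
```

A constant block produces a single nonzero DC coefficient and all-zero AC coefficients, which is why flat image regions compress so well.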
After the DCT transformation, the DCT matrix is gained and its entries are called DCT coefficients. In each block, the first coefficient is named the DC coefficient and the other 63 coefficients are AC coefficients. Then, quantization is performed on the DCT matrix using the predefined quantization table for each quality factor (QF); the specific operation is that the DCT matrix is divided by the quantization table point by point. Fig. 2 shows the standard quantization table when QF = 50; every position in the 8×8 sized block is called a frequency. The process of quantization causes the loss from spatial images to JPEG images, and the QF represents the degree of compression: the smaller the QF, the stronger the compression. At this point we obtain the compressed data, the quantized DCT coefficients, which our proposed method modifies in this paper. After the compression, the quantized DCT coefficients have to be coded. Due to the different characteristics of DC and AC coefficients, different coding methods are adopted: Differential Pulse Code Modulation (DPCM) encodes the DC coefficients because of the correlation of DC coefficients between adjacent blocks, and Run Length Encoding (RLE) encodes the AC coefficients; besides, the AC coefficients valued '0' do not need to be encoded. After the DPCM and RLE, Huffman encoding is further used to acquire the final binary coding of the quantized DCT coefficients. The file size is closely tied to the coding length of the quantized DCT coefficients, so keeping the coding length is a good way to keep the file size.
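The quantization and inverse quantization steps can be sketched as pointwise division and multiplication; the flat table below is only a toy stand-in for the standard tables of Fig. 2:

```python
import numpy as np

# Toy quantization table (a real JPEG table, as in Fig. 2, varies per frequency).
Q = np.full((8, 8), 16.0)

def quantize(dct_block, q_table):
    """Pointwise division plus rounding: the lossy step of JPEG compression."""
    return np.round(dct_block / q_table).astype(int)

def dequantize(coeffs, q_table):
    """Inverse quantization: multiply the quantized coefficients pointwise."""
    return coeffs * q_table
```

The rounding is where the information loss happens: a DCT value of 33 quantized with step 16 becomes 2, which dequantizes back to 32, not 33.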
From the process of compressing spatial images into JPEG images, we can clearly see that the DCT procedure changes the distribution of the coefficients, and the quantization achieves the essence of the compression. Hence, the quantization table helps a lot in reducing the file size expansion when designing an RDH scheme for JPEG images.
II-B Multi-objective Optimization
Multi-objective optimization refers to maximizing or minimizing multiple objectives while satisfying constraints. In general, the sub-goals of a multi-objective optimization problem are contradictory: the improvement of one sub-goal may degrade the performance of the others. That is, simultaneously optimizing all sub-goals is not possible, but they can be coordinated and compromised so that each sub-goal is optimized as much as possible. There are basically the following methods for solving multi-objective optimization. One is to reduce the multiple objectives to a single objective that is easier to solve, such as the main target method, the linear weighting method, the ideal point method, etc. Another is called the hierarchical sequence method: the targets are ordered according to their importance, and each time the optimal solution of the next target is searched within the optimal solution set of the previous target, until the common optimal solution is obtained; it can also be combined with the simplex method. A further method, called the analytic hierarchy process, is a multi-objective decision-making and analysis method combining qualitative and quantitative techniques, and it is more practical for cases where the target structure is complex and necessary data is lacking.
The general multi-objective model is shown as (3) and (4), where x is the independent variable, f_i(x) is the i-th objective to be maximized or minimized, g_j(x) is the j-th constraint, and there are n variables, m objectives and k constraints in (3) and (4). The final goal is to maximize or minimize the objectives while satisfying the constraints by changing the independent variable x. Moreover, the numbers of variables, objectives and constraints make great differences in the final result, including the running time and the optimized result.
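As a small illustration of the linear weighting method mentioned above (the concrete objectives and weights here are invented for the example, not taken from the paper):

```python
# Linear weighting: reduce two conflicting sub-goals f1, f2 to one scalar.
def scalarized(x, w1=0.5, w2=0.5):
    f1 = (x - 3) ** 2     # first sub-goal: prefer x near 3
    f2 = x                # second sub-goal: prefer small x (conflicts with f1)
    return w1 * f1 + w2 * f2

# Minimize over a tiny discrete feasible set {0, ..., 4}.
best = min(range(5), key=scalarized)
```

Neither sub-goal reaches its own optimum (x = 3 and x = 0 respectively); the weighted sum settles on a compromise between them.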
II-C Other Schemes
The multi-objective optimization is employed to minimize the distortion and the file size expansion in RDH for JPEG images, and it is adopted on the basis of other schemes in this paper. Therefore, the other schemes that we improve on are displayed here.
II-C1 Huang's Scheme
Huang proposed an RDH scheme for JPEG images using the existing HS technology. They analyzed the characteristics of DC and AC coefficients and found that the histogram of DC coefficients is rather flat while that of AC coefficients is much sharper. Another reason they chose AC coefficients to embed the additional data is that changes in DC coefficients may cause great distortion in JPEG images, because the DC coefficients carry more of the image's information. What's more, in the histogram of AC coefficients, the amplitude of value '0' is the largest and the amplitude of absolute value '1' is the second largest. For the sake of the file size expansion, the AC coefficients valued '0' are not considered for embedding; instead, the AC coefficients valued '1' and '-1' are chosen to embed data with HS. After the embedded coefficients are determined, which block is selected first for embedding should be taken into account. They ordered the blocks according to the number of zero-valued AC coefficients in each block: the smaller the number, the later the block is used. In general, a block with more zero-valued AC coefficients is a flat block, that is to say, there will be less distortion in this block. Following the two stages, coefficient choice and block selection, the additional data is embedded into the chosen coefficients in the selected blocks. The embedding process of Huang's scheme is shown in Fig. 3.
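Huang's block ordering can be sketched as follows (the function names are ours; the rule is simply "more zero-valued AC coefficients first"):

```python
import numpy as np

def embedding_order(blocks):
    """Return block indices ordered for embedding: blocks with more
    zero-valued AC coefficients (flatter blocks) come first."""
    def zero_ac(b):
        ac = b.flatten()[1:]          # skip the DC coefficient
        return int(np.sum(ac == 0))
    return sorted(range(len(blocks)), key=lambda i: -zero_ac(blocks[i]))
```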
The scheme achieves good image quality and low file size expansion, and the block selection strategy was first proposed for RDH in JPEG images in [IEEEexample:huang2016reversible]. Huang's work gave a great push to RDH for JPEG images, but the scheme only considers the smoothness of blocks, and the number of zero-valued AC coefficients cannot exactly represent the smoothness. From the overview of JPEG compression, we can see that the coefficients in JPEG images are not the same as those in spatial images; they are obtained through the DCT and quantization. The quantization table influences the distortion, so the distortion differs for the same coefficient change at different positions.
II-C2 Hou's Scheme
Hou designed an RDH scheme for JPEG images on the basis of Huang's work; they made up for the shortcoming of Huang's work that the quantization table is not taken into consideration. First, the method calculates the average distortion of each frequency in the quantized DCT blocks. Second, it selects the top k frequencies according to the descending order of the average distortion for embedding. Third, it computes the distortion of each block using only the selected frequencies. Fourth, it selects the frequencies of blocks with small distortion to embed data. Finally, the second to fourth steps are repeated with different k to search for the best PSNR of the stego image.
The stage of calculating the average distortion of each frequency in the quantized DCT blocks embodies the consideration of the quantization step. The average distortion value is evaluated by (5)-(7):
where (u, v) represents the position of a frequency, d(u, v) means the modification in the spatial image caused by the operation in the JPEG domain, which can be gained by (14) and indicates the frequency's corresponding distortion in the spatial domain. n_1 is the number of AC coefficients valued '1' and '-1', n_{>1} means the number of AC coefficients whose absolute values are bigger than 1, D(u, v) shows the total distortion of the frequency over all blocks, and D̄(u, v) means the average distortion of each frequency, which is the basis for frequency sorting. It can be seen that the quantization table is fully utilized in the computation of D̄(u, v).
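Since (5)-(7) are not reproduced here, the following is only a schematic sketch of the computation described in words above: accumulate a per-frequency spatial-domain cost over the coefficients that would be modified, then average. The helper `spatial_cost` is a placeholder for the IDCT-mapped distortion of one modification step; its exact form is given in [IEEEexample:hou2018reversible].

```python
import numpy as np

def average_frequency_distortion(blocks, q_table, spatial_cost):
    """Schematic version of Eqs. (5)-(7): per frequency (u, v), sum the
    spatial-domain cost of modifying each coefficient that HS would touch
    (|value| >= 1), then divide by the number of such coefficients.
    `spatial_cost(q_table)` is a stand-in returning an 8x8 cost matrix."""
    total = np.zeros((8, 8))
    count = np.zeros((8, 8))
    for b in blocks:
        touched = np.abs(b) >= 1       # embedded (+-1) or shifted (|v| > 1)
        total += touched * spatial_cost(q_table)
        count += touched
    return np.divide(total, count, out=np.zeros_like(total), where=count > 0)
```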
It is undeniable that Hou's method achieves better image quality and file size expansion. However, the methods of Huang and Hou consider only the distortion, and the file size is not considered separately. Besides, the block selection strategy can also be improved: both methods embed data sequentially in ascending order of block distortion. But there may exist situations where a combination of two blocks with larger distortion yields a better overall result, in distortion and file size together, than a combination of three blocks with smaller distortion. Multi-objective optimization can resolve both disadvantages through its model and its balance of distortion and file size expansion.
III Proposed Scheme
In this section, the proposed scheme is detailed. The framework of the proposed algorithm is shown in Fig. 4. We can see from the figure that obtaining the optimized signal combination is the last step before embedding data and the core step of our proposed scheme, so multi-objective optimization is applied in our method to get the optimized signal combination, and the math model suitable for our proposed scheme is described here. Moreover, the cost sets of RDH in JPEG images include three aspects: the embedding capacity, the distortion, and the file size expansion. The embedding capacity is fixed in our proposed scheme since we use the HS of [IEEEexample:huang2016reversible]; besides, it is significant to better measure the distortion and the file size expansion of the cover. In the remainder of this section, the calculation of the three costs is described in detail.
III-A Math Model
The metrics of RDH in JPEG images are the payload, the image quality, and the file size. The ultimate goal of the algorithm is to make the image quality best and to minimize the file size while keeping the payload. Thus, the objectives of RDH in JPEG images with multi-objective optimization are the image quality and the file size, and the constraint is the payload. According to the characteristics of the objectives in our problem, namely that the file size may decrease as the image quality improves, this paper uses the main target method to solve the multi-objective optimization.
In order to facilitate the visual display of the multi-objective optimization formula of RDH in JPEG images, several mathematical parameters are used here to represent the various information of the JPEG image. The cover is divided into equal-sized blocks to form a signal set S = {s_1, s_2, ..., s_n}, where s_i represents the i-th signal. Correspondingly, C = {c_1, c_2, ..., c_n} is the set of embedding capacities of the signals, where c_i denotes the capacity of the i-th signal. D = {d_1, d_2, ..., d_n} is the image distortion cost set, where d_i means the distortion of the i-th signal after embedding data. E = {e_1, e_2, ..., e_n} is the file size expansion cost set, where e_i indicates the i-th signal's file size expansion after embedding data. After the JPEG image is represented by these mathematical parameters, the math model can be designed as (8):
where x is the decision variable, expressed in the form x = (x_1, x_2, ..., x_n) with x_i ∈ {0, 1}: if x_i equals 0, the i-th signal will not be used to embed data; on the contrary, if x_i equals 1, the i-th signal will be selected for embedding. In addition, Σ e_i x_i represents the total file size expansion, the sum of the file size expansions of the selected signals, and Σ d_i x_i means the total distortion, the sum of the distortions after embedding in the selected signals. min(·) is the minimization function making our goals minimized, L is the length of the additional data, Σ c_i x_i represents the total embedding capacity of the selected signals, and Σ c_i x_i ≥ L is the constraint that the selected signals must satisfy the payload.
To better solve this multi-objective problem and reduce the computational complexity, we convert one of the goals into a constraint, as in (9). The model in (9) means that we convert the file size expansion into a constraint and minimize the distortion, where e* is the optimal value of the file size expansion target alone, w·e* is the limit of the file size expansion in the model, and the weight w means the degree of constraint that the file size expansion needs to satisfy. Applying (9), the multi-objective optimization problem in the proposed scheme can be solved very well.
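For very small signal sets, the constrained model in (9) can be illustrated by exhaustive search over the decision variable (a toy sketch with invented symbols; the actual scheme applies an optimization solver to much larger instances):

```python
from itertools import product

def select_signals(c, d, e, L, limit):
    """Minimize total distortion sum(d_i * x_i) subject to the payload
    constraint sum(c_i * x_i) >= L and the file size expansion constraint
    sum(e_i * x_i) <= limit, with x_i in {0, 1}. Exhaustive search, so
    only feasible for tiny n."""
    best_x, best_dist = None, float("inf")
    for x in product((0, 1), repeat=len(c)):
        cap = sum(ci * xi for ci, xi in zip(c, x))
        exp = sum(ei * xi for ei, xi in zip(e, x))
        dist = sum(di * xi for di, xi in zip(d, x))
        if cap >= L and exp <= limit and dist < best_dist:
            best_x, best_dist = x, dist
    return best_x, best_dist
```

Note how the solver may skip a high-capacity but high-distortion signal in favor of two cheaper ones, which is exactly the flexibility that sequential block ordering lacks.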
III-B Cost Computation
III-B1 Embedding Capacity
The embedding capacity of each signal is the number of bits the signal can embed, and it depends on the embedding method. If the adopted embedding method is HS on AC coefficients, then the embedding capacity is the number of AC coefficients selected for embedding, usually those valued '1' and '-1'. All the coefficients that can embed data are called embeddable coefficients. In our proposed scheme, HS is utilized to embed data as in Huang's scheme, so the embedding capacity of a signal is the number of AC coefficients valued '1' and '-1' in the signal.
The embedding capacity of each signal is computed by (10), where c_{i,j} represents the j-th coefficient in the i-th signal, and the indicator in (10) means whether c_{i,j} is embeddable or not.
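Under the HS rule, (10) amounts to counting the coefficients valued '1' or '-1' in each signal (a direct sketch; the function name is ours):

```python
import numpy as np

def signal_capacity(signal):
    """Embedding capacity of one signal: the number of AC coefficients
    valued 1 or -1 (the first, DC, coefficient carries no data)."""
    ac = signal.flatten()[1:]
    return int(np.sum(np.abs(ac) == 1))
```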
III-B2 Distortion Design
Image quality is a criterion to measure the performance of RDH in JPEG images, so a distortion function is designed to reflect the image quality, to which it is inversely proportional. The image quality is reflected in the spatial domain, but we modify coefficients in the DCT domain to embed data. Therefore, if we want to measure the distortion of spatial images from the modification of the quantized DCT coefficients, we must establish a link from the DCT domain to the spatial domain for the distortion. From the generation process of JPEG images shown in Section II-A, the inverse quantization and the inverse DCT transformation should be executed in order on the modification of the quantized DCT coefficients to exactly reflect the modification in spatial images. The inverse process is displayed in Fig. 5. According to this theoretical analysis, the distortion function in [IEEEexample:hou2018reversible] is well designed, so we use it as our distortion function to measure the image quality, as shown in the following:
Assume that s' is the current signal that emulates embedding the additional data, s is the original signal, and Δ = s' − s represents the modification of the quantized DCT coefficients due to the embedding of the additional data. The operator ⊗ means multiplying point by point, Q denotes the table of quantization steps, and Δ ⊗ Q is the process of inverse quantization. IDCT is the function that performs the inverse DCT transformation, computed as (14):
where F(u, v) represents the coefficient in the quantized DCT matrix, (u, v) is the index of the position of the coefficient in the quantized DCT matrix, and C(u), C(v) have the same meaning as in (2). Besides, (x, y) is the index of the position of the pixel in the spatial domain. After the two inverse transformations, the changes in the DCT domain are well mapped to the spatial domain, and then they can well reflect the quality of stego images.
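The distortion mapping can be sketched as follows, using the matrix form of the orthonormal 8×8 IDCT (equivalent to the sum in (14)); the squared-error measure on the spatial difference is our illustrative choice:

```python
import numpy as np

# Orthonormal 8x8 DCT basis: row u holds C(u)/2 * cos((2x+1)u*pi/16).
_B = np.array([[(np.sqrt(0.5) if u == 0 else 1.0) * 0.5
                * np.cos((2 * x + 1) * u * np.pi / 16)
                for x in range(8)] for u in range(8)])

def idct2_block(F):
    """2-D inverse DCT of one 8x8 coefficient block, matrix form of (14)."""
    return _B.T @ F @ _B

def embedding_distortion(delta, q_table):
    """Map a change `delta` of the quantized DCT coefficients into the
    spatial domain: dequantize (pointwise multiply by the quantization
    steps), apply the IDCT, and take the squared error."""
    spatial_change = idct2_block(delta * q_table)
    return float(np.sum(spatial_change ** 2))
```

Because the DCT is orthonormal, a unit change of one quantized coefficient with step q contributes a squared spatial error of exactly q², which is why larger quantization steps cause more distortion.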
III-B3 File Size Design
The significance of the file size expansion during embedding is obvious from the analysis in Sections I and III-A, but the file size has never been considered separately in the design of existing RDH schemes for JPEG images. Thus, how to design the file size expansion of a signal is very important.
The file size expansion can be calculated by (15), where s_i is the i-th signal in the cover, and L(·) is the function that counts the coding length of a signal. e_i means the relative increase of the stego signal's file size over the original signal's file size, i.e., the percentage increase of the file size after embedding. We use the percentage increase of the file size instead of the absolute file size to measure the effects of the scheme, because the percentage intuitively reflects the file size change relative to the original signal.
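A schematic version of (15) follows; the coding-length function here is only a crude stand-in (counting nonzero coefficients, since zero-valued ACs are not coded), whereas the paper uses the actual entropy-coded length:

```python
import numpy as np

def coding_length(signal):
    """Stand-in for L(.): the number of nonzero coefficients. The real
    scheme counts the Huffman/RLE bits of the signal."""
    return int(np.sum(signal != 0))

def file_size_expansion(original, stego):
    """Per-signal expansion: (L(stego) - L(original)) / L(original),
    i.e. the fractional increase of the coding length after embedding."""
    lo = coding_length(original)
    return (coding_length(stego) - lo) / lo
```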
III-C Data Embedding and Extraction
This subsection gives the processes of data embedding and extraction. The embedding method is the same HS as in [IEEEexample:huang2016reversible], where the AC coefficients valued '1' and '-1' are used to embed the additional data: if the bit is '0', the embeddable coefficient keeps unchanged; if the bit is '1', the embeddable coefficient moves away from zero by 1. The other AC coefficients whose absolute values are bigger than those of the embeddable coefficients shift away from zero by 1, and the AC coefficients valued '0' are immovable.
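The embedding and extraction rules above can be sketched on a sequence of AC coefficients (a simplified sketch: it treats the coefficients as one flat list and ignores block selection):

```python
def hs_embed(coeffs, bits):
    """Histogram shifting on AC coefficients: +-1 carries one bit
    (bit 1 pushes it away from zero by 1), |v| > 1 shifts away from
    zero by 1, and zeros are left untouched."""
    out, k = [], 0
    for v in coeffs:
        if v in (1, -1) and k < len(bits):
            out.append(v + bits[k] * (1 if v > 0 else -1))
            k += 1
        elif abs(v) > 1:
            out.append(v + (1 if v > 0 else -1))
        else:
            out.append(v)
    return out, k

def hs_extract(stego, nbits):
    """Inverse of hs_embed: read the bits back and undo every shift,
    restoring the original coefficients exactly (reversibility)."""
    bits, out = [], []
    for v in stego:
        if v in (1, -1) and len(bits) < nbits:
            bits.append(0)
            out.append(v)
        elif v in (2, -2) and len(bits) < nbits:
            bits.append(1)
            out.append(1 if v > 0 else -1)
        elif abs(v) > 1:
            out.append(v - (1 if v > 0 else -1))
        else:
            out.append(v)
    return bits, out
```

Round-tripping any coefficient sequence recovers both the embedded bits and the original values, which is exactly the reversibility that RDH requires.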
III-C1 Data Embedding
1) Get the quantized DCT coefficients from the original JPEG image by decoding.
2) Divide the DCT coefficients into non-overlapping signals of the same size. The size of the signal in this paper is 8×8, but it can be any size.
3) Adopt the multi-objective optimization proposed in Section III-A to generate the decision variable matrix x, which guides which signals are selected for embedding. Notice that the multi-objective optimization must satisfy the given payload while keeping the distortion and the file size expansion optimal.
4) Embed the additional data into the signals according to the decision variable matrix x.
5) Encode the modified JPEG image to get the stego image.
Note that the decision variable matrix x needs to be compressed; it then replaces the LSBs of some special signals in the signal set, such as the first or the last signal, and the original LSBs of these signals are embedded following the additional data.
III-C2 Data Extraction
1) The quantized DCT coefficients are gained from the stego JPEG image in the same way as the first step in Section III-C1.
2) Perform the same block division on the quantized DCT coefficients as in embedding, to obtain non-overlapping signals of the same size.
3) Extract the data and recover the signals according to the decision variable matrix x, which is first extracted from the LSBs used in the embedding process.
4) Encode the recovered signals again to get the recovered JPEG image, which is identical to the original JPEG image.
IV Experimental Results
In this section, we display the better performance of our proposed scheme in image quality and file size expansion, based on Huang's scheme and Hou's scheme. Four gray images of size 512×512, Airplane, Baboon, Lena, and Peppers, shown in Fig. 6, are used to produce the cover signal set in our experiments. They are compressed to JPEG images with different quality factors (QF = 30, 50, 70, 90), and 96 gray images [IEEEexample:g512] of size 512×512 are also used to test the average performance. Besides, the weight w in (9) is set to 1: if it is too small, the optimal decision variable matrix cannot be gained, because the file size expansion is limited to too small a range, and if the weight is too large, the optimal decision variable matrix stays unchanged. Therefore, we choose 1 as the weight according to the best results of our experiments on different weights. We discuss the performance of the algorithm as follows.
Iv-a Influence of quantization step
The impact of the quantization step has been mentioned in the previous section. Fig. 7 gives the influence of the quantization step in different quantization tables; different QFs correspond to different standard quantization tables. The abscissa indicates the position of the quantization step in the quantization table, changing from 1 to 64, which means there are 64 quantization steps in the 8×8 DCT matrix. The ordinate represents the corresponding change in the spatial domain if there is a unit change of the DCT coefficient at one position. We test the influence of the quantization step through the modification in the spatial domain mapped from the corresponding change in the DCT domain: the DCT coefficient of only one position is changed by one at a time, and then the corresponding change in the spatial domain is calculated by the inverse quantization and IDCT transformation shown in (12)-(14). The influences of the quantization steps are clearly seen in Fig. 7: a low amplitude means fewer changes in the spatial domain, and thus less distortion. The different influences of different positions are evident, so the quantization step is applied in our distortion function design to well reflect the changes in the spatial domain caused by embedding additional data in the DCT coefficients. From the general trend, we can observe that the lower the frequency of the position, the smaller the distortion.
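The experiment of Fig. 7 can be reproduced in sketch form: perturb the quantized coefficient at one position by one step, map it to the spatial domain, and measure the change (the orthonormal matrix IDCT below is our implementation of (14)):

```python
import numpy as np

# Orthonormal 8x8 DCT basis matrix (row u: C(u)/2 * cos((2x+1)u*pi/16)).
_B = np.array([[(np.sqrt(0.5) if u == 0 else 1.0) * 0.5
                * np.cos((2 * x + 1) * u * np.pi / 16)
                for x in range(8)] for u in range(8)])

def quantization_step_influence(q_table):
    """For each frequency (u, v), change the quantized coefficient there
    by one, dequantize, apply the IDCT, and record the squared
    spatial-domain change (cf. Fig. 7)."""
    impact = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            delta = np.zeros((8, 8))
            delta[u, v] = q_table[u, v]        # unit change = one quantization step
            spatial = _B.T @ delta @ _B        # inverse DCT
            impact[u, v] = np.sum(spatial ** 2)
    return impact
```

Since the IDCT is orthonormal, the squared impact at (u, v) equals q(u, v)²; positions with larger quantization steps therefore cause larger spatial changes, matching the trend in Fig. 7.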
[Table III: embedding capacity, PSNR (dB), and file size expansion of Huang's method [IEEEexample:huang2016reversible] and the proposed method for the four test images under QF = 30, 50, 70, 90.]
The comparisons with [IEEEexample:huang2016reversible] and [IEEEexample:hou2018reversible] are both based on the average results of the 96 images for better contrast. Further, the results of the four test images are also shown in Table III and Table IV to clearly show the improvement. The reason why the proposed method is compared with the two schemes individually is that it is carried out on the basis of each scheme.
IV-B Comparison with Huang's Scheme
As we can see from the process of Huang's scheme, the operation is done in units of 8×8 sized DCT blocks. Therefore, we combine the multi-objective optimization with Huang's method, taking the set of 8×8 sized DCT blocks as the cover in our proposed scheme.
In this part, the comparison with Huang's scheme is shown in Fig. 8. All the results are gained by averaging over the 96 images. The x-axis represents the payload, and the y-axis represents the Peak Signal-to-Noise Ratio (PSNR) and the file size expansion in the left and right columns, respectively. The PSNR reflects the visual quality of the stego cover: the higher it is, the smaller the image distortion caused by the embedding process. In addition, the file size expansion expresses the increase of the file size after embedding data, which is expected to be very small. The proposed scheme is named Huang-P in the experimental results. We can observe that the results of our proposed scheme are higher than those of Huang's scheme in the left graph and lower in the right graph; on the left, a higher line means better image quality of the stego image, and on the right, a lower line indicates a smaller file size expansion. So, it is obvious that the proposed scheme does better than Huang's scheme in both image quality and file size expansion.
What's more, the improvement of our proposed multi-objective optimization gets smaller as the QF of the JPEG images increases. The closer results show that the advantage of the multi-objective optimization weakens for less compressed images. This is because a bigger QF means the image retains more redundant space, so the baseline scheme already performs well and there is less room for the optimization to improve on.
Furthermore, the specific values for the four test images with the four different QFs are shown in Table III. For each QF of each image, the first line is the embedding capacity, the second and third lines are respectively the PSNR and the increased file size of the stego cover generated by Huang's method, and the fourth and fifth lines are the PSNR and the file size expansion of our generated stego cover.
[Table IV: PSNR (dB) of the stego covers generated by Hou's method [IEEEexample:hou2018reversible] for the four test images under four QFs at five payload levels.]
IV-C Comparison with Hou's scheme
It can be seen from the analysis in Section II that the cover of Hou's scheme consists of 8×8 sized blocks that differ from the original 8×8 sized DCT blocks: each cover block keeps the values at some chosen frequencies of the original block and sets the values at the other frequencies to 0. In this experiment, the multi-objective optimization is performed in combination with Hou's scheme, and Fig. 9 shows the improvement of the proposed scheme.
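The cover construction described above (keep selected frequencies, zero the rest) can be sketched as follows; the function name and the set-of-(row, col) representation of the kept frequencies are our own illustrative choices.

```python
def build_cover_block(dct_block, keep_freqs):
    """Form a cover block from an 8x8 DCT coefficient block by keeping
    the coefficients at the chosen (row, col) frequencies and setting
    all other positions to zero, per the description of Hou's cover."""
    return [[dct_block[r][c] if (r, c) in keep_freqs else 0
             for c in range(8)]
            for r in range(8)]
```

For example, keeping only two frequency positions yields a block that is zero everywhere except at those two coefficients.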
The meanings of the x-axis and y-axis are the same as described in Section IV-B, and the result of our method is marked as Hou-P in this experiment. Not only is the image quality obtained by our scheme higher than that of Hou's, which is evident in the left graph of Fig. 9 where the red line stays above the blue one as the payload increases, but the file size expansion of our method is also lower than that of Hou's, expressed as the red line lying below the blue line in the right graph of Fig. 9. In addition, the specific values of the four test images compared with Hou's method are shown in Table IV. The meaning of each line in the table is almost the same as that in Table III, except that the second and third lines under each QF for each image are respectively the PSNR and the increased file size of the stego cover generated by Hou's method.
A novel RDH scheme in JPEG images that applies multi-objective optimization is proposed in this paper. Most state-of-the-art RDH schemes in JPEG images only consider the image quality and ignore the file size expansion when designing methods. However, the file size is an important aspect of JPEG images, so multi-objective optimization is used in our proposed scheme to take both objectives into consideration: the image quality and the file size expansion. The multi-objective optimization strategy is carried out on top of other schemes: it selects the optimized combination of signals according to the division of the cover in each scheme, and the additional data is then embedded into the selected signals. The signal in this paper is an 8×8 sized block of DCT coefficients, but it can also be of other sizes. By using the multi-objective optimization strategy to choose the combination of signals for embedding, higher image quality and lower file size expansion can be achieved under a given payload constraint. Experimental results show that the proposed method yields better performance than the state-of-the-art methods of [IEEEexample:huang2016reversible] and [IEEEexample:hou2018reversible]. Moreover, the cover can be any format of data in the JPEG image, such as the coding domain; in the future, we will make improvements on the coding domain combined with the multi-objective optimization.
This research work is partly supported by the National Natural Science Foundation of China (61872003, U1636206, 61860206004).