Minimizing Embedding Distortion with Weighted Bigraph Matching in Reversible Data Hiding

12/18/2017 ∙ by Hanzhou Wu, et al.

For a required payload, existing reversible data hiding (RDH) methods always aim to reduce the embedding distortion as much as possible, e.g., by utilizing a well-designed predictor, by exploiting the carrier-content characteristics, and/or by improving the modification efficiency. However, due to the diversity of natural images, it is very hard to accurately model their statistical characteristics, which has limited the practical use of traditional RDH methods that rely heavily on content characteristics. From this perspective, instead of directly exploiting the content characteristics, in this paper we model the embedding operation on a weighted bipartite graph to reduce the distortion introduced by data embedding, which is proved to be equivalent to a graph problem called minimum weight maximum matching (MWMM). By solving the MWMM problem, we can find the optimal histogram shifting strategy under the given conditions. Since the proposed method is essentially a general embedding model for RDH, it can be utilized in the design of RDH schemes. In our experiments, we incorporate the proposed method into some related works; the results show that it can significantly improve the payload-distortion performance, indicating that the proposed method is desirable and promising for practical use and for the design of RDH schemes.


I Motivation

Unlike steganography [1], reversible data hiding (RDH) [2] allows both the hidden information and the host content to be perfectly reconstructed at the receiver, which makes it applicable to sensitive scenarios that tolerate no degradation of the host data, such as military and remote-sensing applications. Both steganography and RDH aim to minimize the embedding distortion subject to a fixed payload. Since steganography has no need to recover the original content, one could use techniques such as syndrome-trellis codes (STCs) [3] and the Gibbs construction [4] to minimize or simulate the embedding impact. However, due to the requirement of reversibility, these optimization methods suited to steganography cannot be directly applied to RDH, which has motivated us to study the distortion optimization of RDH in this paper.

As an efficient embedding strategy, histogram shifting (HS) [2] has been widely utilized in reported RDH works [5, 6]. In most HS-based RDH methods, the data hider processes the cover pixels using a well-designed pixel prediction and selection rule, so that the generated difference (or prediction-error) histogram is sharply distributed [7, 8], which benefits data embedding. Since the pixel prediction and selection procedure often relies heavily on the carrier-content characteristics, the payload-distortion behavior varies with the diversity of natural images, which, to a certain extent, has limited the practical use of these RDH methods.

Fig. 1: Two examples of histogram shifting used in RDH.

On the other hand, given a generated histogram, one should choose suitable peak bins (i.e., the bins usually with maximum occurrences) to embed the secret data. Moreover, some of the other histogram bins should be shifted to ensure reversibility. It is quite desirable to choose peak bins that can carry the secret data while keeping the distortion low. Though the bins shifted for reversibility do not carry secret data, they often introduce larger distortion than the peak bins. For single-layer embedding, one may easily find the optimal shifting strategy, since a pixel is increased or decreased by at most one. However, when multi-layer embedding is adopted, since existing works shift the bins along the corresponding direction with a fixed step value, the cover pixels may change greatly, resulting in significant degradation of the image quality.

Based on the above perspective, instead of designing a specific HS-based RDH scheme (that relies heavily on image content), we model the HS-based embedding operation as a general framework. Specifically, we optimize the shifting operation for multi-layer embedding, so that the distortion can be significantly reduced. In our work, we model the shifting operation on a weighted bipartite graph, in which the vertices represent the histogram bins to be modified and the edges indicate the shifting relationship among the vertices. Every edge is assigned a weight (or cost) specifying the corresponding shifting distortion. By solving a standard graph matching problem called minimum weight maximum matching (MWMM), we can finally find the optimal histogram shifting strategy, which ensures the minimum distortion.

The rest of this paper is organized as follows. The shifting operation problem is formulated in Section II. In Section III, we introduce the proposed optimization model of minimizing embedding distortion with weighted bigraph matching. Experimental results and analysis are provided in Section IV. Finally, we conclude this paper in Section V.

II Problem Formulation

Before we formulate the optimization problem, let us start with an example. As shown in Fig. 1, we use two peak bins “0” and “1” to hide a message. The traditional RDH method, i.e., Fig. 1 (a), first shifts the other bins along the corresponding direction by a step value of 1. Then, the embedding space reserved at “-1” and “2” can be exploited to carry the message by shifting “0” and “1”, respectively. This fixed empirical shifting pattern may not be optimal when multi-layer embedding is adopted, since the prediction-error (PE) of a pixel will become larger. Moreover, no existing work has explicitly demonstrated that this operation still introduces the lowest distortion. For example, Fig. 1 (b) may be the optimal shifting operation (or one that outperforms the traditional one) for a higher-layer embedding. In this paper, we optimize this shifting operation to reduce the distortion for multi-layer embedding.
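To make the conventional shifting pattern of Fig. 1 (a) concrete, here is a minimal sketch in Python (the function names and the flat prediction-error list are our own illustrative choices, not the paper's notation). It embeds bits into the peak bins “0” and “1”, shifts the outer bins by 1 to vacate “-1” and “2”, and inverts the process exactly:

```python
def hs_embed(errors, bits):
    # Single-layer histogram shifting as in Fig. 1 (a): peak bins 0 and 1.
    out, it = [], iter(bits)
    for e in errors:
        if e <= -1:
            out.append(e - 1)         # shift left outer bins, vacating -1
        elif e >= 2:
            out.append(e + 1)         # shift right outer bins, vacating 2
        elif e == 0:
            out.append(-next(it))     # peak bin 0: bit 0 -> 0, bit 1 -> -1
        else:                         # e == 1
            out.append(1 + next(it))  # peak bin 1: bit 0 -> 1, bit 1 -> 2
    return out

def hs_extract(marked):
    # Exact inverse: recovers both the payload bits and the original errors.
    bits, errors = [], []
    for m in marked:
        if m in (0, -1):
            bits.append(-m); errors.append(0)
        elif m in (1, 2):
            bits.append(m - 1); errors.append(1)
        elif m <= -2:
            errors.append(m + 1)      # undo the left shift
        else:                         # m >= 3
            errors.append(m - 1)      # undo the right shift
    return bits, errors
```

Because every case above is invertible, the receiver recovers both the message and the original prediction errors, which is exactly the reversibility requirement.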

We call the cover image after it has been embedded times. Obviously, is the original image without any hidden bits. For simplicity, let be an -pixel cover image with the pixel range , e.g., for 8-bit grayscale images. For a payload, we will use and to generate the marked image with the HS operation. Our goal is to minimize the distortion between and . We here limit ourselves to an additive distortion of the form:

(1)

where exposes the cost of changing to . In RDH, we often use the squared error to evaluate the distortion, which can be generalized by Eq. (1). So, by default, we will use the mean squared error (MSE) as the measure, i.e.,

(2)
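As a small worked example, the MSE between a cover image and its marked version can be computed as follows (a sketch; pixel sequences are assumed to be flat lists of intensities, which is our simplification):

```python
def mse(cover, marked):
    # Mean squared error over all N pixels: (1/N) * sum of (y_i - x_i)^2
    assert len(cover) == len(marked)
    return sum((y - x) ** 2 for x, y in zip(cover, marked)) / len(cover)
```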

In RDH, we need to predict the pixels to be embedded in , and generate the corresponding pixel prediction-error histogram (PEH). Without loss of generality, let be the occurrence of the PEH bin with a value of . Here, we have , where represents the size of a set. To hide a message with the generated PEH, one should shift some PEs to vacate empty bin-positions, and then embed the secret bits by shifting the peak bins into the empty bin-positions. Mathematically, let and denote a set including all PEH bins and one containing all non-zero-occurrence bins, respectively. It means that and . For a peak-bin set , we first find two injective functions and such that and , where . Then, another injective function is also required. Supposing the bit-size of the message is no more than , for data embedding, according to , we first shift all PEH bins in into some bin-positions of . Thereafter, since the bin-positions of are empty (i.e., with zero occurrence), one can easily embed the secret bits by shifting the bins in into the bin-positions of .

Fig. 2: An example of weighted bipartite graph.

We here take Fig. 1 for explanation. It can be inferred that, and . In both cases, we have . However, in Fig. 1 (a), we have , while in Fig. 1 (b), . In Fig. 1 (a), the injective function maps {-3, -2, -1, 2, 3, 4} to {-4, -3, -2, 3, 4, 5}, respectively; and in Fig. 1 (b), it maps {-3, -2, -1, 2, 3, 4} to {-4, -1, -3, 4, 2, 5}, respectively.

Obviously, when , and are fixed, it is quite desirable to find the best such that can be minimized. Once this optimization problem is solved, one can enumerate , and to minimize the global distortion for a payload since is often small, e.g., . We will study along this direction.

Let , represent all the cover pixels to be embedded. For compactness, we sometimes consider and as the pixel set containing all the cover pixels to be embedded and the -th pixel with a value of , respectively. Similarly, we denote the prediction of and its marked version by and . Thus, we can find the PEs between and by

(3)

The relationship of and can be described as:

(4)

Here, is the -th (current) bit to be embedded.

We use to denote the original pixel values of in . It is pointed out that, for the pixels not belonging to , the distortion can be roughly considered as fixed since we will not embed secret data into these pixels (though we may alter some pixels prior to embedding, e.g., to empty some LSBs to store the secret key). Therefore, for -layer () embedding (i.e., to generate ), our optimization task is

(5)

where is a constant. With Eq. (4), we have

(6)

For RDH, the secret bits can be orderly embedded into since can be generated by a key or some specified rule. When , and are fixed, with the secret message, one can consider as fixed. Actually, even without the message, one can use a random bitstring to simulate it so that,

can be roughly estimated. It is obvious that,

is fixed as well. Therefore, we have

(7)

which actually requires us to minimize

(8)

Now we need to propose an efficient algorithm to find such that Eq. (8) can be minimized for fixed , and . Thereafter, by enumerating , and , we can find the optimal , , and for based on and .

III Minimizing Embedding Distortion with Weighted Bigraph Matching

In this section, we will introduce the method called weighted bigraph matching to minimize in Eq. (8).

III-A Model Derivation

Without loss of generality, we rewrite as:

(9)

where need not be injective. Therefore, we have

(10)

In RDH, to avoid the underflow/overflow problem, we need to adjust pixels with boundary values into a reliable range in advance. Therefore, should be bounded at the very beginning so that the boundary pixels can be processed beforehand, i.e.,

where is a positive integer threshold, e.g., . Actually, and should be bounded as well. For simplicity, it is considered that and .

Let denote all the elements in . Eq. (10) can therefore be rewritten as:

(11)

where

where if , otherwise ; and,

(12)

Now, our problem is to find the best injective function for Eq. (11), which can be addressed by applying the weighted bigraph matching method introduced in the following.

III-B Weighted Bigraph Matching

For RDH, every element in should be matched by exactly one (unique) element in according to . Note that, in RDH, we often have . On the other hand, we expect to find an optimal matching scheme such that Eq. (11) is minimized. Accordingly, our optimization task is finally generalized as:

(13)

subject to

(14)

Obviously, with Eq. (12), all possible can be easily determined in advance. Without loss of generality, we will use to represent the elements in . We model the optimization problem of Eq. (13) on a weighted bipartite graph. A bipartite graph, or bigraph, is a graph whose vertices can be partitioned into two disjoint sets and such that every edge connects a vertex in and one in . If every edge in a bipartite graph is assigned a weight, the graph is called a weighted bipartite graph (or weighted bigraph).

To build a weighted bipartite graph, we first denote the two disjoint sets by and . With Eq. (14), for every possible index-pair , if , we assign an edge between and in the bipartite graph. This indicates that it is possible that . Meanwhile, every edge is assigned the corresponding weight. Specifically, if there exists an edge between and , the assigned weight should be , meaning that, if , the corresponding shifting distortion should be .
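A hedged sketch of this graph construction in Python (the threshold bound `T` and the edge weight of occurrence times squared shift distance are our own assumptions, chosen to be in line with the MSE measure rather than the paper's exact notation):

```python
def build_bigraph(bins_to_shift, target_bins, hist, T):
    # Edge (b, t) exists when bin b may be shifted to position t, i.e.
    # when the shift distance is nonzero and bounded by T (a hypothetical
    # bound standing in for the paper's threshold). The weight is the
    # occurrence of b times the squared shift distance, matching an
    # MSE-style cost (our assumption).
    edges = {}
    for b in bins_to_shift:
        for t in target_bins:
            if 0 < abs(b - t) <= T:
                edges[(b, t)] = hist.get(b, 0) * (b - t) ** 2
    return edges
```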

Fig. 3: An example of maximum matching for Fig. 2.

We take Fig. 1 as an example. Suppose that , and ; then we have and . The weighted bigraph can therefore be built as in Fig. 2. In Fig. 2, the weight of each edge is determined according to Eq. (12), e.g., the weight between “-3” (in ) and “-5” (in ) is .

A matching of a bigraph is a set of pairwise non-adjacent edges, i.e., no two edges in share a common vertex. A maximum matching of a bigraph is a matching that contains the largest possible number of edges. In particular, a maximum matching is not a subset of any other matching, and every edge in the bigraph shares a vertex with some edge in .

Fig. 3 shows an example of a maximum matching for the bigraph built in Fig. 2; e.g., in Fig. 3, “-2” is matched with “-1”. Obviously, a bigraph may have many maximum matchings. A maximum matching guarantees that no two edges share the same vertex and that the total number of edges in the matching is maximum. For any maximum matching of a bigraph, there must be .

As is injective, we can infer that, in the corresponding weighted bigraph, corresponds to such a matching that , where is also a maximum matching since , i.e.,

Proposition 1. corresponds to such a maximum matching that , where .

Moreover, according to Eq. (13), requires that the sum of edge weights in be the minimum. Therefore, to find , we have to determine

(15)

Namely,

Proposition 2. has the minimum sum of edge-weights.

In graph theory, for a weighted bipartite graph, a minimum weight maximum matching (MWMM) is a maximum matching for which the sum of the weights of the edges in the matching is minimal; it can be found by an optimized Hungarian algorithm with a time complexity of [9]. Therefore, we can determine in the weighted bigraph with the Hungarian algorithm, and then easily construct from . We will not introduce the Hungarian algorithm in detail; we refer the reader to [9].
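As an illustration, the MWMM step can be sketched with SciPy's rectangular assignment solver, which implements a Hungarian-style algorithm (the names `X`, `Y` and the `BIG` sentinel for absent edges are our own hypothetical choices):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # sentinel cost standing in for an absent (forbidden) edge

def mwmm(edges, X, Y):
    # Solve minimum weight maximum matching via the rectangular assignment
    # solver in SciPy (a Hungarian-style algorithm); edges maps (x, y)
    # vertex pairs to shifting costs.
    cost = np.full((len(X), len(Y)), BIG)
    for (x, y), w in edges.items():
        cost[X.index(x), Y.index(y)] = w
    rows, cols = linear_sum_assignment(cost)
    match = {}
    for r, c in zip(rows, cols):
        assert cost[r, c] < BIG, "no feasible maximum matching exists"
        match[X[r]] = Y[c]
    return match
```

With |X| ≤ |Y|, every source bin gets a unique target position, and the returned assignment minimizes the total edge weight over all such maximum matchings.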

When producing (based on and ), the traditional HS operation corresponds to only one particular maximum matching, which may not ensure the minimum distortion. Therefore, in theory, the payload-distortion performance of an RDH scheme equipped with our optimization method will not be worse than that with the traditional operation. If the traditional HS strategy is optimal in some cases, our method will find it.

III-C Complexity Analysis

We have introduced the method to find the best for fixed , and . To find the globally optimal strategy, we have to further enumerate all possible combinations of , and . Since in applications is often small, one can easily find all possible . For example, if , the time complexity is , where (e.g., and ) since the generated PEH is often sharply distributed (centered at the zero bin). Actually, as should be no less than the bit-length of the required payload, the total number of usable candidates can be significantly reduced during enumeration.
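The peak-bin enumeration described above can be sketched as follows (a simplified illustration; the capacity test of total occurrence versus payload size is an assumption standing in for the paper's exact constraint):

```python
from itertools import combinations

def candidate_peak_sets(hist, k, payload_bits):
    # Enumerate size-k peak-bin sets whose total occurrence can carry the
    # payload; each surviving candidate would then be scored by the
    # bigraph-matching step to pick the global optimum.
    bins = [b for b, h in hist.items() if h > 0]
    return [s for s in combinations(bins, k)
            if sum(hist[b] for b in s) >= payload_bits]
```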

For a fixed , we need to enumerate all possible and . As and , the time complexity to enumerate and is . This requires us to choose small and/or , since the time complexity is exponential. Note that, for some , there is no difference between and , since the original message is always encrypted before embedding. That is why we use here, rather than .

From an empirical (or heuristic) point of view, one can set

(i.e., ), which has been utilized in the traditional HS strategy. Thus, the time complexity to enumerate and is reduced to , which is significantly lower than the original one, yet still high. Actually, if we set , our task is to find the optimal and , which can be merged into the above optimization model. More generally, once and are fixed, we can find the optimal and by calling the introduced weighted bigraph matching approach. Specifically, we update and as and , respectively. Thus, the number of vertices in the corresponding weighted bigraph is . Suppose that and . For every possible index-pair , if and , we add an edge between and , and the weight is determined as according to Eq. (12). Otherwise, if and , we add an edge between and , and the weight is determined as:

(16)

where means the k-th bit to be embedded. Note that the difference between Eq. (12) and Eq. (16) is that the PEH bins in Eq. (12) are shifted to ensure reversibility, while those in Eq. (16) are shifted to hide message bits.

We take Fig. 2 as an example. Let . Fig. 4 (a) shows the new weighted bipartite graph, in which we can see that new vertices and edges have been added. Fig. 4 (b) shows an example of a maximum matching. If it is optimal, then and . The remaining elements in are also matched. Therefore, in applications, one can also enumerate all possible and heuristically set , e.g., ; the optimal and can then be found by the weighted bigraph matching method with a relatively low time complexity. Note that, for fixed and , one can find the optimal and . Thereafter, one should further determine the globally optimal , , and with Eqs. (5) and (6).

III-D Reversibility

For reversibility, the data hider should self-embed the information of the optimal , , and . As is small, the space to store , and will be small. Note that self-embedding , and means embedding all required integer-pairs, e.g., tells us to self-embed (“3”,“4”). To self-embed , one can first sort all elements in in increasing order, where the difference between any two adjacent elements in the ordered sequence is often small (e.g., “-1” in most cases) since the PEH is sharply distributed and centered at the zero bin. This indicates that we can use run-length encoding (RLE) to losslessly compress the differences. Meanwhile, the corresponding elements in should be recorded as well. Let be , where . We can compress by RLE or other efficient lossless algorithms, since these differences are all bounded by a well-tuned , e.g., .
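A minimal sketch of the RLE step (the representation of runs as `(difference, run length)` pairs is our own illustrative choice):

```python
def rle_encode(sorted_bins):
    # Run-length encode the first-order differences of the sorted bin list;
    # for a sharply peaked PEH the adjacent differences are mostly the same
    # small value, so the runs compress well.
    diffs = [b - a for a, b in zip(sorted_bins, sorted_bins[1:])]
    runs, i = [], 0
    while i < len(diffs):
        j = i
        while j + 1 < len(diffs) and diffs[j + 1] == diffs[i]:
            j += 1
        runs.append((diffs[i], j - i + 1))  # (difference, run length)
        i = j + 1
    return runs
```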

There are different methods to achieve self-embedding. For example, the data hider can choose a subset of pixels (not in ) to store the above-mentioned auxiliary data. The LSBs of these pixels are kept unchanged throughout the specified-layer embedding, so that one can successfully extract the hidden bits and recover the image content. The original LSBs are treated as part of the secret data.

Fig. 4: An example to find both and for fixed and : (a) the weighted bipartite graph, (b) a maximum matching.

IV Experimental Results and Analysis

We incorporate the proposed optimization model into three state-of-the-art RDH algorithms, i.e., PC-HS [5], GF-HS Algorithm 1 [6] and DCSPF [7], to evaluate the payload-distortion performance. In our experiments, for each RDH algorithm, we only optimize the data embedding operation with the proposed method; the other components, such as pixel prediction, pixel selection and the local-complexity function, are the same as in the original algorithms. Since both PC-HS and DCSPF use PEH bin-pairs to carry the message bits, for fair comparison we set for their optimized versions, denoted by “PC-HS opt” and “DCSPF opt”, respectively. For simplicity, is set to 2 for “GF-HS Algorithm 1 opt” as well. Therefore, one can set any small (e.g., ) since (to ensure that a maximum matching can always be found).

In PC-HS and GF-HS Algorithm 1, the data hider has to use non-overlapping pixel-blocks to carry the message bits. Here, the block size is set to for both methods, the same as described in the two methods. In GF-HS Algorithm 1, before data embedding, the authors use a pixel selection parameter (that relies on the local-complexity function) to take advantage of smooth pixels as much as possible. In our simulation, when using the proposed optimization method, there is no need to determine directly, since we can sort the local complexities in increasing order so that smooth pixels are utilized for data embedding, which is equivalent to using . In DCSPF, the data hider needs to set two important parameters, namely the pixel-blocking rate and the number of selection layers. As recommended in the method, we enumerate the pixel-blocking rate from 10% to 90% with a step of 10%, and vary the number of selection layers from 3 to 6 with a step of 3.

During data embedding, the multiple-pass embedding strategy [6] is applied for both PC-HS and GF-HS Algorithm 1. Additionally, for a required payload, a given image may be embedded several times (namely, multi-layer embedding). For multi-layer embedding, since the payload size of each layer is free to set, by default we embed as many message bits as possible into a given layer with a payload step, until it cannot carry additional bits, and higher-layer embedding is then applied. Note that this strategy may not be optimal.

Fig. 5: The payload-distortion performance comparison for different RDH algorithms with/without the proposed optimization method.

We take four grayscale images, Airplane, Lena, Baboon and Sailboat, sized , for experiments. The MSE defined in Eq. (2) is used as the distortion measure. Fig. 5 shows the payload-distortion performance for different RDH algorithms with/without the proposed optimization method. It can be seen from Fig. 5 that our optimization method can significantly reduce the distortion introduced by data embedding, implying that the proposed method could be promising for both practical use and RDH design. It is noted that, for DCSPF, when the embedding rate is lower than 0.5 bpp, the performance improvement is not significant for Airplane, Lena and Sailboat, since the authors also use an efficient approximation algorithm to find near-optimal PEH bin-pairs. This indicates that, when , to a certain extent, the approximation algorithm proposed by Wu et al. [7] can be used as an approximate solution of the proposed model.

V Conclusion and Discussion

In this paper, we have proven that the traditional HS operation corresponds to a maximum matching in the corresponding bigraph. To reduce the embedding distortion, based on and , we model the HS operation as a minimum weight maximum matching problem, and use the MWMM technique to find the best HS strategy for RDH. We have incorporated our optimization model into some related works, and experimental results show that the optimization method improves the payload-distortion performance. For the proposed optimization model, in applications, the time complexity to enumerate all usable would still be very high for a large . In the future, we expect to study heuristic algorithms to find a near-optimal .

References

  • [1] J. Fridrich, “Steganography in digital media: principles, algorithms, and applications,” Cambridge Univ. Press, New York, 2010.
  • [2] Z. Ni, Y. Shi, N. Ansari and W. Su, “Reversible data hiding,” IEEE Trans. Circuits Syst. Video Technol., vol. 16, no. 3, pp. 354-362, Mar. 2006.
  • [3] T. Filler, J. Judas and J. Fridrich, “Minimizing additive distortion in steganography using syndrome-trellis codes,” IEEE Trans. Inf. Forensics Security, vol. 6, no. 3, pp. 920-935, Sept. 2011.
  • [4] T. Filler and J. Fridrich, “Gibbs construction in steganography,” IEEE Trans. Inf. Forensics Security, vol. 5, no. 4, pp. 705-720, Sept. 2010.
  • [5] P. Tsai, Y. Hu and H. Yeh, “Reversible image hiding scheme using predictive coding and histogram shifting,” Signal Process., vol. 89, no. 6, pp. 1129-1143, Jun. 2009.
  • [6] X. Li, B. Li, B. Yang and T. Zeng, “General framework to histogram-shifting-based reversible data hiding,” IEEE Trans. Image Process., vol. 22, no. 6, pp. 2181-2191, Jun. 2013.
  • [7] H. Wu, H. Wang and Y. Shi, “Dynamic content selection-and-prediction framework applied to reversible data hiding,” In: IEEE Int. Workshop Inf. Forensics Security, online available, Dec. 2016.
  • [8] H. Wu, H. Wang and Y. Shi, “PPE-based reversible data hiding,” In: Proc. ACM Workshop Inf. Hiding Multimed. Security, pp. 187-188, Jun. 2016.
  • [9] F. Roberts and B. Tesman, “Applied combinatorics,” CRC Press, Taylor & Francis Group, 2005.