In recent years, the emerging field of signal processing on graphs has gained increasing attention . Unlike classical signal processing, this field considers signals that lie on irregular domains, where the signal values are defined on the nodes of a weighted graph and the edge weights reflect the pairwise relationships between these nodes. Particular attention has been given to the design of flexible graph signal representations, opening the door to new structure-aware transform coding techniques and, eventually, to more efficient signal and image compression frameworks. As an illustrative example, an image can be represented by a graph whose nodes are the image pixels and whose edge weights capture the similarity between adjacent pixels. Such a flexible representation makes it possible to go beyond traditional transform coding by moving from classical fixed transforms, such as the discrete cosine transform (DCT) , to graph-based transforms that are better adapted to the actual signal structure, such as the graph Fourier transform (GFT) . Hence, it is possible to obtain a more compact representation of an image, as the energy of the image signal is concentrated in the lowest frequencies. This provides a strong advantage over the classical DCT, especially when the image contains arbitrarily shaped discontinuities. In this case, the DCT coefficients are not necessarily sparse and may contain many high-frequency coefficients with high energy. The GFT, on the other hand, can lead to sparser representations and eventually more efficient compression.
However, one of the biggest challenges in graph-based signal compression remains the design of the graph and the corresponding transform. A good graph for effective transform coding should lead to easily compressible signal coefficients, at the cost of a small overhead for coding the graph. Most graph-based coding techniques focus mainly on images, and they construct the graph by considering pairwise similarities among pixel intensities [4, 5] or by using a lookup table that stores the most popular GFTs . It has been shown that these methods can provide a significant gain in the coding of piecewise smooth images. For natural images, however, the cost required to describe the graph often outweighs the coding gain provided by the adaptive graph transform, which often leads to unsatisfactory results. The problem of designing a graph transform thus remains critical and may actually represent the major obstacle towards effective compression of signals that live on an irregular domain.
In this work, we build on our previous work  and introduce a new graph-based signal compression scheme, which we apply to image coding. First, we propose a novel graph-based compression framework that takes into account the coding of the signal values as well as the cost of transmitting the graph. Second, we introduce a novel way of coding the graph by treating its edge weights as a graph signal that lies on the dual graph. We then compute the graph Fourier transform of this signal and code its quantized transform coefficients. The choice of the graph is thus posed as a rate-distortion optimization problem. The cost of coding the signal is captured by minimizing the smoothness of the graph signal on the adapted graph. The transmission cost of the graph itself is controlled by penalizing the sparsity of the graph Fourier coefficients of the edge weight signal that lies on the dual graph. The solution of our optimization problem is a graph that provides an effective tradeoff between the sparsity of the signal transform and the graph coding cost.
We apply our method to two different types of signals, namely natural images and piecewise smooth images. Experimental results on natural images confirm that the proposed algorithm can efficiently infer meaningful graph topologies, which eventually lead to improved coding results compared to non-adaptive methods based on classical transforms such as the DCT. Moreover, we show that our method significantly improves over the classical DCT on piecewise smooth images, and it even achieves results comparable to state-of-the-art graph-based depth image coding solutions. However, contrary to these dedicated algorithms, our framework is quite generic and can be applied to very different types of signals.
The outline of the paper is as follows. We first discuss related work in Section II. We then introduce some preliminary definitions on graphs in Section III. Next, we present the proposed graph construction problem in Section IV. The application of the proposed graph construction algorithm to image coding and the entire compression framework are described in Section V. Then, the experimental results on natural images and piecewise smooth images are presented in Sections VI and VII, respectively. Finally, we draw some conclusions in Section VIII.
2 Related work
In this section, we first provide a brief overview of transform coding. Then, we focus on graph-based coding and learning methods that are closely related to the framework proposed in this work.
2.1 Transform coding
Lossy image compression usually employs a 2D transform to produce a new image representation that lies in the transform domain . Usually, the obtained transform coefficients are approximately uncorrelated and most of the information is contained in only a few of them. It has been proved that the Karhunen-Loève transform (KLT) optimally decorrelates a signal with Gaussian entries . However, since the KLT is based on the eigendecomposition of the covariance matrix, this matrix or the transform itself has to be sent to the receiver. For this reason, the KLT is not practical in most circumstances . The most common transform in image compression is the DCT , which employs a fixed set of basis vectors. It is known that the DCT is asymptotically equivalent to the KLT for signals that can be modelled as a first-order autoregressive process. Nevertheless, this model fails to capture the complex and nonstationary behavior that is typically present in natural images. In light of the above, transform design is still an active research field, and in recent years many signal-adaptive transforms have been presented. In this paper, we focus on a specific type of adaptive transforms, namely graph-based transforms.
2.2 Graph-based image coding
In recent years, graph signal processing has been applied to different image coding applications, especially for piecewise smooth images. In [4, 5], the authors propose a graph-based coding method where the graph is defined by considering pairwise similarities among pixel intensities. Another efficient graph construction method for piecewise smooth images has been proposed in , where the authors use a lookup table that stores the most popular graphs. Then, for each signal, they perform an exhaustive search to choose the best GFT in rate-distortion terms. Furthermore, a new graph transform, called the signed graph Fourier transform, has been presented in . This transform targets the compression of depth images, and its underlying graph contains negative edges that describe negative correlations between pixel pairs.
Recently, a number of methods using a graph-based approach have also been proposed for transform coding of inter and intra predicted residual blocks in video compression. A novel graph-based method for intra-frame coding has been presented in , which introduces a new generalized graph Fourier transform. A graph-based method for inter predicted video coding has been introduced in , where the authors design a set of simplified graph templates capturing the basic statistical characteristics of inter predicted residual blocks. Furthermore, a few separable graph-based transforms for residual coding have also been introduced. In , for example, the authors propose a new class of graph-based separable transforms for intra and inter predictive video coding. The proposed transform is based on two separate line graphs, whose edge weights are optimized using a graph learning problem. Another graph-based separable transform for inter predictive video coding has been presented in . In this case, the proposed transform, called the symmetric line graph transform, has symmetric eigenvectors and can therefore be implemented efficiently.
Finally, a few graph-based methods have also been presented for natural image compression. In , a new graph construction technique targeted at image compression is proposed. This method employs innovative edge metrics, quantization and edge prediction techniques. Moreover, in , a new class of transforms called graph template transforms has been introduced for natural image compression, focusing in particular on texture images. Finally, a method for designing sparse graph structures that capture the principal gradients in image code blocks is proposed in . However, in all these methods, it is still not clear how to define a graph whose corresponding transform provides an effective tradeoff between the sparsity of the transform coefficients and the graph coding cost.
2.3 Graph construction
Several attempts to learn a structure, and in particular a graph, from data observations have recently been proposed, though not necessarily from a compression point of view. In [19, 20, 21], the authors formulate graph learning as a precision matrix estimation with generalized Laplacian constraints. The same approach is also used in [14, 15], where the authors employ a graph learning problem in order to find the generalized graph Laplacian that best approximates residual video data. Moreover, in [22, 23], a sparse combinatorial Laplacian matrix is estimated from the data samples under a smoothness prior. Furthermore, in , the authors use a graph template to impose a sparsity pattern on the graph Laplacian and approximate the empirical inverse covariance based on that template.
Although all the methods presented above include some constraints on the sparsity of the graph, none of them explicitly takes into account the actual cost of representing and coding the graph. In addition, most of them do not really target images. In this paper, we go beyond prior art and fill this gap by defining a new graph construction problem that takes the graph coding cost into account. Moreover, we show how our generic framework can be used for image compression.
3 Basic definitions on graphs
For any graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ and $\mathcal{E}$ represent respectively the node and edge sets with $|\mathcal{V}| = N$ and $|\mathcal{E}| = M$, we define the weighted adjacency matrix $W \in \mathbb{R}^{N \times N}$, where $W_{ij}$ is the weight associated to the edge connecting nodes $i$ and $j$. For undirected graphs with no self loops, $W$ is symmetric and has null diagonal. The graph Laplacian is defined as $L = D - W$, where $D$ is a diagonal matrix whose $i$-th diagonal element $D_{ii}$ is the sum of the weights of all the edges incident to node $i$. Since $L$ is a real symmetric matrix, it is diagonalizable by an orthogonal matrix:
$$L = \chi \Lambda \chi^T,$$
where $\chi$ is the eigenvector matrix of $L$ that contains the eigenvectors $\{\chi_i\}_{i=1}^{N}$ as columns, and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$ is the diagonal eigenvalue matrix, with eigenvalues sorted in ascending order.
In the next sections, we will also use an alternative definition of the graph Laplacian that relies on the incidence matrix $B \in \mathbb{R}^{N \times M}$, which is defined as follows:
$$B_{ie} = \begin{cases} +1 & \text{if node } i \text{ is the head of edge } e, \\ -1 & \text{if node } i \text{ is the tail of edge } e, \\ 0 & \text{otherwise,} \end{cases}$$
where an orientation is chosen arbitrarily for each edge. Let $W_d \in \mathbb{R}^{M \times M}$ be a diagonal matrix where $(W_d)_{ee} = w_e$, the weight of edge $e$. Then, we can define the graph Laplacian as
$$L = B W_d B^T. \qquad (1)$$
It is important to underline that the graph Laplacian obtained using (1) is independent of the edge orientation chosen in $B$.
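As a sanity check of these definitions, the following sketch builds the Laplacian of a hypothetical 4-node toy graph both as $L = D - W$ and via the incidence matrix of (1), and verifies that the two constructions agree (the edge list and weights are purely illustrative):

```python
import numpy as np

# Toy undirected graph on 4 nodes with 4 weighted edges (illustrative example).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
w = np.array([0.9, 0.2, 0.7, 0.5])          # edge weights
N, M = 4, len(edges)

# Weighted adjacency matrix W and combinatorial Laplacian L = D - W.
W = np.zeros((N, N))
for (i, j), we in zip(edges, w):
    W[i, j] = W[j, i] = we
L = np.diag(W.sum(axis=1)) - W

# Incidence matrix B with an arbitrary orientation per edge.
B = np.zeros((N, M))
for e, (i, j) in enumerate(edges):
    B[i, e], B[j, e] = 1.0, -1.0

# Eq. (1): L = B diag(w) B^T, independent of the chosen orientation.
L_alt = B @ np.diag(w) @ B.T
assert np.allclose(L, L_alt)
```

Flipping the orientation of any edge changes the sign of one column of $B$ but leaves $B W_d B^T$ unchanged, which is why the orientation can be chosen arbitrarily.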
3.1 Graph Fourier Transform
A graph signal in the vertex domain is a real-valued function $f: \mathcal{V} \to \mathbb{R}$ defined on the nodes of the graph $\mathcal{G}$, such that $f(i)$, $i \in \mathcal{V}$, is the value of the signal at node $i$. For example, for an image signal we can consider an associated graph where the nodes of the graph are the pixels of the image. Then, the smoothness of $f$ on $\mathcal{G}$ can be measured using the Laplacian:
$$f^T L f = \frac{1}{2} \sum_{i,j} W_{ij} \big(f(i) - f(j)\big)^2. \qquad (2)$$
Eq. (2) shows that a graph signal is considered smooth if strongly connected nodes have similar signal values. This equation also shows the importance of the graph: with a good graph representation, discontinuities should be penalized by low edge weights, in order to obtain a smooth representation of the signal. Finally, the eigenvectors of the Laplacian are used to define the graph Fourier transform (GFT)  of the signal as follows:
$$\hat{f} = \chi^T f. \qquad (3)$$
The graph signal $f$ can be easily retrieved from $\hat{f}$ by inversion, namely $f = \chi \hat{f}$. Analogously to the Fourier transform in the Euclidean domain, the GFT describes the graph signal in the Fourier domain.
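The GFT, its inverse, and the smoothness measure of Eq. (2) can be illustrated with a short numpy sketch (the path graph and the signal values below are illustrative choices, not taken from the paper):

```python
import numpy as np

# Laplacian of a small unweighted path graph (illustrative).
N = 6
W = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(W.sum(axis=1)) - W
lam, chi = np.linalg.eigh(L)        # eigenvalues ascending, orthonormal eigenvectors

f = np.array([1.0, 1.1, 0.9, 4.0, 4.2, 3.9])   # piecewise smooth graph signal
f_hat = chi.T @ f                   # GFT, Eq. (3)
f_rec = chi @ f_hat                 # inverse GFT
assert np.allclose(f, f_rec)

# Smoothness, Eq. (2): f^T L f = (1/2) sum_ij W_ij (f(i) - f(j))^2
quad = f @ L @ f
pairwise = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                     for i in range(N) for j in range(N))
assert np.isclose(quad, pairwise)
```

Note that most of the energy of $\hat{f}$ sits in the low-frequency coefficients, except for the contribution of the jump between the two smooth segments.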
3.2 Comparison between KLT and GFT
As stated in Section 2, the KLT is the transform that optimally decorrelates a signal with Gaussian entries. In this section, we discuss the connection between the graph Fourier transform and the KLT, showing that the GFT can be seen as an approximation of the KLT.
Let us consider a signal $x$ that follows a Gaussian Markov Random Field (GMRF) model with respect to a graph $\mathcal{G}$, with mean $\mu$ and precision matrix $Q$. Notice that the GMRF is a very generic model, where the precision matrix can be defined with much freedom, as long as its non-zero entries encode the partial correlations between random variables and their locations correspond to the edges of the graph. It has been proved that, if the precision matrix $Q$ of the GMRF model corresponds to the Laplacian $L$, then the KLT of the signal is equivalent to the GFT .
As shown before, the graph Laplacian has a very specific structure where the non-zero components correspond to the edges of the graph; for this reason, it is a sparse matrix, since typically $M \ll N^2$. Since the precision matrix in general does not have such a fixed structure, we now study the KLT of a signal whose model is a GMRF with a generic precision matrix $Q$. In this case, the GFT does not correspond to the KLT anymore, and the GFT should be considered as an approximation of the KLT, where the precision matrix is forced to follow this specific structure. In order to find the GFT that best approximates the KLT, we introduce a maximum likelihood estimation problem, using an approach similar to the one presented in . The density function of a GMRF has the following form:
$$p(x \mid Q) = (2\pi)^{-N/2} \, |Q|_+^{1/2} \exp\Big(-\frac{1}{2}(x-\mu)^T Q\, (x-\mu)\Big). \qquad (4)$$
The log-likelihood function can then be computed as follows:
$$\log p(x \mid Q) = \frac{1}{2} \log |Q|_+ - \frac{1}{2}(x-\mu)^T Q\, (x-\mu) + \mathrm{const}.$$
Given $K$ observations $\{x_k\}_{k=1}^{K}$ of the signal, we find the Laplacian matrix that best approximates $Q$ by solving the following problem:
$$\max_{L} \; \log |L|_+ - \frac{1}{K}\,\mathrm{tr}(X^T L X), \qquad (5)$$
where $X$ is the matrix whose columns are the observation vectors $x_k$ and $|\cdot|_+$ is the pseudo-determinant (since $L$ is singular). The optimization problem in (5) defines the graph whose GFT best approximates the KLT. The advantage of using the GFT instead of the KLT is that we force the precision matrix to follow the specific sparse structure defined by the Laplacian. In this way, the transform matrix can be transmitted to the decoder in a more compact way. In the next section, we will highlight the connection between the proposed graph construction problem and the maximum likelihood estimation problem presented in (5).
4 Graph-transform optimization
Graph-based compression methods use a graph representation of the signal through its GFT, in order to obtain a data-adaptive transform that captures the main characteristics of the signals. The GFT coefficients are then encoded, instead of the original signal values themselves. In general, a signal that is smooth on a graph has its energy concentrated in the low frequency coefficients of the GFT, hence it is easily compressible. To obtain good compression performance, the graph should therefore be chosen such that it leads to a smooth representation of the signal. At the same time, it should also be easy to encode, since it has to be transmitted to the decoder for signal reconstruction. Often, the cost of the graph representation outweighs the benefits of using an adaptive transform for signal representation. In order to find a good balance between graph signal representation benefits and coding costs, we introduce a new graph construction approach that takes into consideration the above mentioned criteria.
We first pose the problem of finding the optimal graph as a rate-distortion optimization problem defined as
$$\min_{w} \; D + \gamma\,(R_x + R_{\mathcal{G}}), \qquad (6)$$
where $\gamma$ is a Lagrangian multiplier and $D$ is the distortion between the original signal and the reconstructed one, defined as
$$D = \|x - \tilde{x}\|_2^2,$$
where $x$ and $\tilde{x}$ are respectively the original signal and the signal reconstructed via its graph transform on $\mathcal{G}$. The total coding rate is composed of two representation costs, namely the cost $R_x$ of the signal transform coefficients and the cost $R_{\mathcal{G}}$ of the graph description. Each of these terms depends on the graph, characterized by $w$, and on the coding scheme. We describe them in more detail in the rest of the section.
4.1 Distortion approximation
The distortion is defined as follows:
$$D = \|x - \tilde{x}\|_2^2 = \|\hat{x} - \tilde{\hat{x}}\|_2^2,$$
where $x$ and $\tilde{x}$ are respectively the original and the reconstructed signal, and $\hat{x}$ and $\tilde{\hat{x}}$ are respectively the transform coefficients and the quantized transform coefficients. The equality holds due to the orthonormality of the GFT. Considering a uniform scalar quantizer with the same step size $\Delta$ for all the transform coefficients, if $\Delta$ is small the expected value of the distortion can be approximated as follows :
$$E[D] \approx \frac{N \Delta^2}{12}.$$
With this high-resolution approximation, the distortion depends only on the quantization step size $\Delta$ and not on the chosen graph. For simplicity, in the rest of the paper we adopt this assumption. Therefore, the optimization problem (6) is reduced to finding the graph that minimizes the rate terms.
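The high-resolution approximation above can be checked numerically. The sketch below (hypothetical Gaussian transform coefficients and an illustrative step size, not values from the paper) compares the empirical distortion of a uniform scalar quantizer with $N\Delta^2/12$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Delta = 64, 0.05                                 # block size and a small step size
x_hat = rng.standard_normal((10000, N))             # synthetic transform coefficients
q = Delta * np.round(x_hat / Delta)                 # uniform scalar quantizer
D = ((x_hat - q) ** 2).sum(axis=1).mean()           # empirical expected distortion

# High-resolution model: quantization error ~ uniform on [-Delta/2, Delta/2],
# so E[D] ≈ N * Delta^2 / 12.
D_model = N * Delta ** 2 / 12
assert abs(D - D_model) / D_model < 0.05
```

The approximation degrades when $\Delta$ is comparable to the coefficient spread, which is why the text restricts it to small step sizes.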
4.2 Rate approximation of the transform coefficients
We approximate the rate of the transform coefficients with the smoothness of the signal on the graph:
$$R_x \approx x^T L x = \sum_{i=1}^{N} \lambda_i \,(\chi_i^T x)^2 = \sum_{i=1}^{N} \lambda_i \hat{x}_i^2, \qquad (7)$$
where $\lambda_i$ and $\chi_i$ are respectively the $i$-th eigenvalue and $i$-th eigenvector of $L$. Therefore, $R_x$ is an eigenvalue-weighted sum of squared transform coefficients. It assumes that the coding rate decreases when the smoothness of the signal over the graph defined by $L$ increases. In addition, (7) relates the measure of the signal smoothness to the sparsity of the transform coefficients. The approximation in (7) does not take into account the coefficients that correspond to $\lambda_i = 0$ (i.e., the DC coefficients). Thus, (7) does not capture the variable cost of DC coefficients in cases where the graph contains a variable number of connected components. However, in our work we ignore this cost, as we impose that the graph is connected.
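The identity behind this approximation, i.e., that the Laplacian quadratic form equals an eigenvalue-weighted sum of squared GFT coefficients, can be verified on a toy example (the path graph and signal below are illustrative):

```python
import numpy as np

# Laplacian of a small unweighted path graph (illustrative).
N = 8
W = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(W.sum(axis=1)) - W
lam, chi = np.linalg.eigh(L)

x = np.sin(np.linspace(0, np.pi, N))   # a smooth signal on the path
x_hat = chi.T @ x                      # GFT coefficients

# x^T L x = sum_i lambda_i * x_hat_i^2 : smoothness as a spectral sum.
assert np.isclose(x @ L @ x, np.sum(lam * x_hat ** 2))
```

For a smooth signal the large coefficients pair with small eigenvalues, so the sum, and hence the rate proxy, stays small.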
It is also interesting to point out that there is a strong connection between (7) and (5). In fact, if we suppose that $K = 1$ and we consider $x$ as the only observation of the signal, then the second term of the log-likelihood in (5) is equal to $x^T L x$. For this reason, we can say that the solution of our optimization problem can be seen as an approximation of the KLT.
4.3 Rate approximation of the graph description
The graph description cost depends on the method used to code the graph. Generally, a graph could have an arbitrary topology. However, in order to reduce the graph transmission cost, we choose to use a fixed incidence matrix $B$ for the graph and to vary only the edge weights. Therefore, the graph can be defined simply by a vector $w \in \mathbb{R}^{M}$, where $w_e$ with $1 \le e \le M$ is the weight of the edge $e$. Then, by using (1) we can define the graph Laplacian $L = B\,\mathrm{diag}(w)\,B^T$.
In order to compress the edge weight vector $w$, we propose to treat it as a graph signal that lies on the dual graph $\mathcal{G}_d$. Given a graph $\mathcal{G}$, we define its dual graph $\mathcal{G}_d$ as an unweighted graph where each node of $\mathcal{G}_d$ represents an edge of $\mathcal{G}$, and two nodes of $\mathcal{G}_d$ are connected if and only if their corresponding edges in $\mathcal{G}$ share a common endpoint. An example of a dual graph is shown in Fig. 1. We choose this graph representation for the edge weight signal because consecutive edges often have similar weights, since the signals often have smooth regions or smooth transitions between regions; the latter is generally true for images. In this way, the dual graph can provide a smooth representation of $w$. We can define the graph Laplacian matrix $L_d$ of the dual graph and the corresponding eigenvector and eigenvalue matrices $\chi_d$ and $\Lambda_d$ such that $L_d = \chi_d \Lambda_d \chi_d^T$. We highlight that, since $\mathcal{G}_d$ is an unweighted graph, it is independent of the choice of $w$, and consequently $\chi_d$ and $\Lambda_d$ are also independent of $w$.
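A minimal sketch of the dual-graph construction described above, for a hypothetical toy topology: each dual node corresponds to an edge of the original graph, two dual nodes are adjacent iff their edges share an endpoint, and the dual Laplacian provides the GFT basis for the edge-weight signal:

```python
import numpy as np

# Edges of the original graph G (illustrative topology).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
M = len(edges)

# Unweighted dual graph: adjacency between edges sharing an endpoint.
A_dual = np.zeros((M, M))
for a in range(M):
    for b in range(a + 1, M):
        if set(edges[a]) & set(edges[b]):        # common endpoint
            A_dual[a, b] = A_dual[b, a] = 1.0
L_dual = np.diag(A_dual.sum(axis=1)) - A_dual
lam_d, chi_d = np.linalg.eigh(L_dual)            # GFT basis for edge weights

# An edge-weight signal on the dual graph (illustrative values):
# smooth except for one weak edge across a discontinuity.
w = np.array([0.9, 0.85, 0.8, 0.9, 0.1])
w_hat = chi_d.T @ w                              # coefficients to quantize and code
assert np.allclose(chi_d @ w_hat, w)             # invertible, as any GFT
```

Since the dual graph is unweighted and depends only on the fixed topology, `chi_d` can be precomputed once and shared by encoder and decoder.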
Since $w$ can be represented as a graph signal, we can compute its GFT as
$$\hat{w} = \chi_d^T w.$$
Therefore, we can use $\hat{w}$ to describe the graph, and we evaluate the cost of the graph description by measuring the coding cost of $\hat{w}$. It has been shown that the total bit budget needed to code a vector is proportional to the number of its non-zero coefficients ; thus, we approximate the cost of the graph description by measuring the sparsity of $\hat{w}$ as follows:
$$R_{\mathcal{G}} \approx \|\hat{w}\|_0 = \|\chi_d^T w\|_0. \qquad (8)$$
We highlight that we use two different types of approximations for $R_x$ and $R_{\mathcal{G}}$, even if both $x$ and $w$ are treated as graph signals. This is due to the fact that the two signals have different characteristics. In the case of an image signal $x$, we impose that the signal is smooth over $\mathcal{G}$, building the graph with this purpose. Instead, for $w$, even if we suppose that consecutive edges usually have similar values, we have no guarantee that $w$ is smooth on $\mathcal{G}_d$, since $\mathcal{G}_d$ is fixed and not adapted to the image signal. Therefore, in the second case, a sparsity constraint is more appropriate for capturing the characteristics of the edge weight signal $w$.
To be complete, we finally note that the dual graph has already been used in graph learning problems in the literature. In particular, in  the authors propose a method for joint denoising and contrast enhancement of images using the graph Laplacian operator, where the weights of the graph are defined through an optimization problem that involves the dual graph. Moreover,  presents a graph-based dequantization method that jointly optimizes the desired graph signal and the similarity graph, where the weights of the graph are treated as another graph signal defined on the dual graph. The approximation of $R_{\mathcal{G}}$ presented in (8) may look similar to the one used in . The main difference between the two formulations is that in (8) we minimize the sparsity of $\hat{w}$ in the GFT domain in order to code the signal $w$ in a lossy way; instead, in , the authors minimize the differences between neighboring edges in order to optimize the graph structure without actually coding it.
4.4 Graph construction problem
Combining the rate approximations (7) and (8), we obtain the following rate-distortion formulation:
$$\min_{w} \; x^T L x + \alpha \,\|\chi_d^T w\|_0, \qquad (9)$$
where $\alpha$ is a weighting constant parameter that allows us to balance the contribution of the two terms.
Building on the rate-distortion formulation of (9), where the non-convex $\ell_0$-norm is relaxed to the $\ell_1$-norm, we design the graph by solving the following optimization problem:
$$\min_{w \in \mathbb{R}^{M}} \; x^T L x + \alpha \,\|\chi_d^T w\|_1 - \beta\, \mathbf{1}^T \log(w) \quad \text{s.t.} \quad L = B\,\mathrm{diag}(w)\,B^T, \;\; w \le \mathbf{1}, \qquad (10)$$
where $\alpha$ and $\beta$ are two positive regularization parameters and $\mathbf{1}$ denotes the constant one vector. The inequality constraint guarantees that all the weights are in the range $(0, 1]$, which is the same range as the most common normalized weighting functions . The logarithmic term penalizes low weight values and avoids the trivial solution $w = \mathbf{0}$. In addition, this term guarantees that $w_e > 0$, $\forall e \in \mathcal{E}$, so that the graph is always connected. A logarithmic barrier is often employed in graph learning problems . In particular, it has further been shown that a graph with Gaussian weights can be seen as the result of a graph learning problem with a specific logarithmic barrier on the edge weights.
The problem in (10) can be cast as a convex optimization problem with a unique minimizer. To solve it, we write the first term in the following form:
$$x^T L x = \mathrm{tr}(L\, x x^T) = \mathrm{vec}(x x^T)^T \mathrm{vec}(L) = \mathrm{vec}(x x^T)^T P\, w,$$
where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix, $\mathrm{vec}(\cdot)$ is the vectorization operator, and $P \in \mathbb{R}^{N^2 \times M}$ is the matrix that converts the vector $w$ into $\mathrm{vec}(L)$. Then, we can rewrite problem (10) as
$$\min_{w \in \mathbb{R}^{M}} \; \mathrm{vec}(x x^T)^T P\, w + \alpha \,\|\chi_d^T w\|_1 - \beta\, \mathbf{1}^T \log(w) \quad \text{s.t.} \quad w \le \mathbf{1}. \qquad (11)$$
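The key fact exploited here is that $x^T L x$ is linear in the edge weights: $x^T L x = \sum_e w_e (x_i - x_j)^2$. The sketch below checks this on a hypothetical toy topology (the name `P` for the weight-to-Laplacian map is our own notation):

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # fixed topology (illustrative)
N, M = 4, len(edges)
x = np.array([2.0, 1.5, -0.5, 0.3])        # an arbitrary block signal

# Column e of P is vec(L_e), the vectorized Laplacian of edge e with unit weight,
# so that vec(L) = P @ w for any weight vector w.
P = np.zeros((N * N, M))
for e, (i, j) in enumerate(edges):
    Le = np.zeros((N, N))
    Le[i, i] = Le[j, j] = 1.0
    Le[i, j] = Le[j, i] = -1.0
    P[:, e] = Le.flatten()

w = np.array([0.4, 1.0, 0.2, 0.7])
L = (P @ w).reshape(N, N)
# x^T L x = tr(L x x^T) = vec(x x^T)^T P w : linear in w.
assert np.isclose(x @ L @ x, np.outer(x, x).flatten() @ P @ w)
```

Because the first term is linear in `w` and the remaining terms are convex, the overall problem can be handled by standard convex solvers.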
5 Graph-based image compression
We now describe how the graph construction problem of the previous section can be applied to block-based image compression. It is important to underline that the main goal of this section is to present an application of our framework. Therefore, we do not present an optimization of the full coding process, but we mainly focus on the transform block.
As pointed out in the previous sections, given an image block we have two different types of information to transmit to the decoder: the transform coefficients of the image signal and the description of the graph. The image coefficients are quantized and then coded using an entropy coder. Under the assumption of high bitrate, the optimal entropy-constrained quantizer is the uniform quantizer . Moreover, it has been proved that, under the assumption that all the transform coefficients follow the same probability distribution, the transform code is optimized when the quantization steps of all coefficients are equal. For these reasons, we quantize the image transform coefficients using a uniform quantizer with the same step size $\Delta$ for all the coefficients. Then, since we assume that the non-zero coefficients are concentrated in the low frequencies, we code the quantized coefficients up to the last non-zero coefficient using an adaptive bitplane arithmetic encoder  and we transmit the position of the last significant coefficient.
The graph itself is transmitted through its GFT coefficient vector $\hat{w}$, which is quantized and then sent to the decoder using an entropy coder. In order to reduce the cost of the graph description, we reduce the number of elements in $\hat{w}$ by keeping only the first $T$ coefficients, which are usually the most significant ones, and setting the remaining coefficients to zero. The reduced signal $\hat{w}_r$ is quantized using the same step size $\Delta_w$ for all its coefficients and then coded with the same entropy coder used for the image signal.
Given an image signal, we first solve the optimization problem in (11), obtaining the optimal solution $w^*$. To transmit $w^*$ to the decoder, we first compute its GFT coefficients and the reduced vector $\hat{w}_r$; then we quantize and code it using the entropy coder described above. It is important to underline that, since we perform a quantization of $\hat{w}_r$, the reconstructed weight vector is not strictly equal to the original $w^*$, and its quality depends on the quantization step size used. The graph described by the reconstructed weights is then used to define the GFT for the image signal.
Since it is important to find the best tradeoff between the quality of the graph and its transmission cost, for each block in an image we test different quantization step sizes $\Delta_w$ for a given graph represented by $\hat{w}_r$. To choose the best quantization step size, we solve the following rate-distortion problem:
$$\min_{\Delta_w} \; D_x + \lambda\,(R_x + R_w), \qquad (12)$$
where $R_w$ is the rate of the coefficient vector $\hat{w}_r$ quantized with $\Delta_w$, and $D_x$ and $R_x$ are respectively the distortion and the rate of the reconstructed image signal obtained using the graph transform described by the quantized $\hat{w}_r$. We point out that the choice of $\Delta_w$ depends on the quantization step size $\Delta$ used for the image transform coefficients. In fact, at high bitrate (small $\Delta$) we expect to have a smaller $\Delta_w$ and thus a more precise graph; at low bitrate (large $\Delta$), we will have a larger $\Delta_w$, which corresponds to a coarser graph approximation. We also underline that, in (12), we evaluate the actual distortion and rate without using the approximations introduced previously in (6), (7), (8). The actual coding methods described above are used to compute the rates $R_x$ and $R_w$. The principal steps of the proposed image compression method are summarized in Fig. 2.
6 Experimental results on natural images
In this section, we evaluate the performance of our illustrative graph-based encoder for natural images. We first describe the general experimental settings, then we present the obtained experimental results.
6.1 Experimental setup
First of all, we subdivide the image into non-overlapping 16×16 pixel blocks. For each block, we define the edge weights using the graph learning problem described in the previous sections. The chosen topology of the graph is a 4-connected grid: this is the most common graph topology for graph-based image compression, since its number of edges is not too high and thus the coding cost is limited. In a 4-connected square grid with $N$ nodes, we have $M = 2\sqrt{N}(\sqrt{N}-1)$ edges. In all our experiments on natural images, we use a fixed set of possible quantization step sizes for $\hat{w}$, and we fix the length $T$ of the reduced coefficient vector $\hat{w}_r$. In order to set the value of the parameter $\alpha$ in (11), we first have to perform a block classification. In fact, we recall that the parameter $\alpha$ in (11) weights the $\ell_1$-norm of $\hat{w}$, the GFT coefficients of the edge weight signal that lies on the dual graph. As explained previously, the motivation for using the dual graph is that consecutive edges usually have similar values. However, this statement is not always true; it depends on the characteristics of the block. In smooth blocks, nearly all the edges will have similar values. Instead, in piecewise smooth blocks there could be a small percentage of edges whose consecutive ones have significantly different values. Finally, in textured blocks this percentage may increase significantly. For this reason, we perform a priori a block classification using a structure tensor analysis, as done in . The structure tensor is a matrix derived from the gradient of an image patch, and it is commonly used in many image processing algorithms, such as edge detection  and corner detection [38, 39, 40]. Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of the structure tensor, where $\lambda_1 \ge \lambda_2$. We classify the image blocks in the following way:
Class 1: smooth blocks, if $\lambda_1$ and $\lambda_2$ are both small;
Class 2: blocks with a dominant principal gradient, if $\lambda_1 \gg \lambda_2$;
Class 3: blocks with a more complex structure, if $\lambda_1$ and $\lambda_2$ are both large.
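A possible implementation of this classification is sketched below. The thresholds `t_low` and `t_high` are purely illustrative choices of our own, not the values used in the paper:

```python
import numpy as np

def classify_block(block, t_low=1e-3, t_high=1e-2):
    """Classify a block via the eigenvalues of its 2x2 structure tensor.

    Thresholds are illustrative, not the values used in the paper.
    """
    gy, gx = np.gradient(block.astype(float))           # image gradients
    T = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    l2, l1 = np.linalg.eigvalsh(T)                       # ascending: l1 >= l2
    if l1 < t_low:
        return 1          # Class 1: smooth block (both eigenvalues small)
    if l2 < t_high:
        return 2          # Class 2: dominant principal gradient (l1 >> l2)
    return 3              # Class 3: complex structure / texture

smooth = np.full((16, 16), 0.5)                          # constant block
ramp = np.tile(np.linspace(0, 1, 16), (16, 1))           # single strong gradient
assert classify_block(smooth) == 1
assert classify_block(ramp) == 2
```

In practice the thresholds would be tuned on training data, since the gradient magnitudes depend on the intensity range of the images.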
Fig. 3 shows an example of block classification. For each block class, we have set the values of the parameters $\alpha$ and $\beta$ by fine tuning: we use a different value of $\alpha$ for blocks of each of the three classes, while for all three classes we set the same value for the other optimization parameter $\beta$.
[Table 1: average PSNR gain (dB) with respect to the DCT, for the learned graph and the Gaussian graph, reported per block class (Class 1, Class 2, Class 3) and in total, for each test image.]
We compare the performance of the proposed method to a baseline coding scheme built on the classical DCT. In order to obtain comparable results, we code the transform coefficients of the image signal using the same entropy coder for the graph-based method and for the DCT-based encoder. In the first case, in addition to the bitrate of the image transform coefficients, we count the bitrate due to the transmission of $\hat{w}_r$ and the additional bits per block needed to signal the chosen quantization step size $\Delta_w$. For both methods, we vary the quantization step size of the transform coefficients to vary the encoding rate. In addition, in our method, for each block we compare the RD-cost of the GFT with that of the DCT. We then code the block with the transform that has the lowest RD-cost, using 1 additional bit per block to signal whether the GFT or the DCT is used.
In order to show the advantages of the proposed graph construction problem, we compare our method with a classical graph construction technique that uses a Gaussian weight function  to define the edge weights:
$$w_{ij} = \exp\left(-\frac{(x_i - x_j)^2}{2\sigma^2}\right),$$
where $\sigma$ is a parameter of the Gaussian kernel, set empirically. In order to have comparable results, we use the coding scheme described in Sec. V also for the Gaussian graph.
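A sketch of this classical weighting (the kernel is the standard Gaussian similarity; the sample intensities and the value of $\sigma$ below are illustrative):

```python
import numpy as np

def gaussian_weights(x, edges, sigma):
    """Gaussian edge weights w_ij = exp(-(x_i - x_j)^2 / (2 sigma^2));
    sigma is a free parameter here (set empirically in practice)."""
    return np.array([np.exp(-(x[i] - x[j]) ** 2 / (2 * sigma ** 2))
                     for i, j in edges])

x = np.array([10.0, 10.5, 200.0, 201.0])   # two smooth regions, one jump
edges = [(0, 1), (1, 2), (2, 3)]
w = gaussian_weights(x, edges, sigma=30.0)
assert w[0] > 0.9 and w[2] > 0.9           # within-region edges: strong
assert w[1] < 0.01                         # across the discontinuity: weak
```

Unlike the proposed learned graph, these weights depend only on local intensity differences, with no control over the cost of describing the resulting graph.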
The experiments are performed on six classical grayscale images (House, Lena, Boat, Peppers, Stream, Couple) . This dataset contains different types of natural images: some have smooth regions (e.g., House and Peppers), while others are more textured (e.g., Boat, Lena, Couple and Stream). In Table 1, we show the obtained performance in terms of average PSNR gain compared to the DCT, evaluated through the Bjontegaard metric . Moreover, in Fig. 4 we show the rate-distortion curves for the image Peppers, and in Fig. 5 a visual comparison between the DCT and the proposed method for the same image. We see that, in the second and third classes, the proposed method outperforms the DCT, providing an average PSNR gain of 0.6 dB for blocks in the second class and 0.64 dB for blocks in the third class. It should be pointed out that there is no significant difference in performance between the second class and the third one. This is probably due to the fact that the proposed graph construction method is able to adapt the graph and its description cost to the characteristics of each block. Instead, in the first class, which corresponds to smooth blocks, the gain is nearly 0, as the DCT is already optimal in this case. Finally, we notice that, in the classes where the DCT is not optimal, the learned graph always outperforms the Gaussian graph.
7 Experimental results on piecewise smooth images
In this section, we evaluate the performance of the proposed method on piecewise smooth images, comparing it with the classical DCT and with the state-of-the-art graph-based coding method of . We first describe the specific experimental setting used for this type of signal, and then present the obtained results.
7.1 Experimental setup
We choose as piecewise smooth signals six depth maps taken from [43, 44]. As for natural images, we split them into non-overlapping 16×16 pixel blocks, and the chosen graph topology is a 4-connected grid. In addition, we keep the same setting as the one used for natural images. Then, to define the class parameters, we again subdivide the image blocks into classes using structure tensor analysis. In , the authors identify three block classes for piecewise smooth images: smooth blocks, blocks with weak boundaries (e.g., boundaries between different parts of the same foreground/background) and blocks with sharp boundaries (e.g., boundaries between foreground and background). In our experiments, since we have observed that the first two classes behave similarly, we consider only two classes:
Class 1: smooth blocks and blocks with weak edges, if .
Class 2: blocks with sharp edges, if ,
where λ1 and λ2, with λ1 ≥ λ2, are the two eigenvalues of the structure tensor. An example of block classification is shown in Fig. 6. As done for natural images, for each class we set the parameters by fine tuning.
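The classification above can be sketched as follows; the gradient operator, the absence of smoothing, and the `threshold` value are simplifying assumptions rather than the paper's exact structure tensor analysis:

```python
import numpy as np

def structure_tensor_eigs(block):
    """Eigenvalues (lambda1 >= lambda2) of the 2x2 structure tensor of a block,
    accumulated over all pixels of the block."""
    gy, gx = np.gradient(block.astype(float))    # vertical / horizontal gradients
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    lam2, lam1 = np.linalg.eigvalsh(J)           # eigvalsh returns ascending order
    return lam1, lam2

def classify_block(block, threshold):
    """Two-class split for depth maps: sharp-edge blocks (class 2) vs the rest
    (class 1). `threshold` is a hypothetical tuning value, not the paper's."""
    lam1, _ = structure_tensor_eigs(block)
    return 2 if lam1 > threshold else 1
```

A flat block has a zero structure tensor and falls in class 1, while a block with a strong step edge has a large dominant eigenvalue and falls in class 2.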
With this type of signal, we have observed that the coefficients of the learned graph are very sparse, as shown in Fig. 7. For this reason, we modify the coding method accordingly. As done for natural images, we reduce the number of elements to be coded by taking into account only the first coefficients. Then, we use an adaptive binary arithmetic encoder to transmit a significance map that signals the non-zero coefficients, so that an adaptive bitplane arithmetic encoder only has to code the values of the non-zero coefficients. This strongly reduces the number of coefficients that have to be transmitted to the decoder.
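The significance-map idea can be illustrated as below; a real codec would entropy-code both streams with adaptive (binary and bitplane) arithmetic coders, which we omit here:

```python
import numpy as np

def encode_sparse(coeffs, n_keep):
    """Split the first n_keep quantized coefficients into a binary significance
    map plus the list of non-zero values. Only these two streams would then be
    entropy-coded; everything beyond n_keep is dropped."""
    kept = np.asarray(coeffs[:n_keep])
    sig_map = (kept != 0).astype(np.uint8)       # 1 where a coefficient survives
    values = kept[kept != 0]                     # only non-zero values are sent
    return sig_map, values

def decode_sparse(sig_map, values, n_total):
    """Inverse of encode_sparse; coefficients beyond n_keep are assumed zero."""
    out = np.zeros(n_total)
    out[:len(sig_map)][sig_map.astype(bool)] = values
    return out
```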
Similarly to the case of natural images, we compare our method to a transform coding method based on the classical DCT. However, in the specific case of depth map coding it has been shown that graph-based methods significantly outperform the classical DCT. For this reason, we also provide a comparison with a graph-based coding scheme that is specifically designed for piecewise smooth images. The method presented in  achieves state-of-the-art performance in graph-based depth image coding. It uses a table-lookup based graph transform: the most popular GFTs are stored in a lookup table, and for each block an exhaustive search is performed to choose the best GFT in rate-distortion terms. In this way, the side information that has to be sent to the decoder is only the table index. Moreover, the method in  incorporates a number of coding tools, including a multiresolution coding scheme and edge-aware intra-prediction. Since we are interested in evaluating the performance of the transform, we focus only on the transform part and use as reference method a simplified version of the method in  that is similar to the one used in . The simplified version that we implemented employs 16×16 blocks and does not make use of edge-aware prediction or multiresolution coding. Since the transform used in  is based on a lookup table, we use 40 training depth images to build the table, as suggested in . In the training phase, we identify the most common graph transforms; the resulting lookup table contains 718 transforms. In the coding phase, each block is then coded using one of the transforms contained in the lookup table or the DCT. The coding method used for the table index is the same as in , while for the transform coefficients, in order to have comparable results, we use the coding method described in Sec. 5.
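Each GFT stored in such a lookup table is the eigenbasis of a graph Laplacian. A minimal sketch of deriving a GFT basis from a set of edge weights (the helper names are hypothetical):

```python
import numpy as np

def gft_basis(weights, n):
    """Graph Fourier transform basis: eigenvectors of the combinatorial
    Laplacian L = D - W, ordered by increasing eigenvalue (graph frequency).
    `weights` maps node pairs (i, j) to edge weights, e.g. on a grid graph."""
    W = np.zeros((n, n))
    for (i, j), w in weights.items():
        W[i, j] = W[j, i] = w                   # undirected graph
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)        # ascending eigenvalues
    return eigvecs                              # columns = GFT basis vectors

def gft(signal, basis):
    """Forward GFT: project the graph signal onto the Laplacian eigenvectors."""
    return basis.T @ signal
```

For a connected graph, a constant signal projects entirely onto the first (zero-frequency) eigenvector, which is the energy-compaction property exploited throughout the paper.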
|Image||Class 1||Class 2||Total|
The coding results on depth maps are summarized in Table 2, where we show the average PSNR gain over DCT. In Table 3, we show the Bjontegaard average PSNR gain of the proposed method over the reference method described above. Moreover, in Fig. 8 we show the rate-distortion curves for the image Dolls. Finally, Fig. 9 shows an example of a decoded image obtained with the proposed method.
The results show that the proposed technique provides a significant quality gain compared to the DCT, displaying a behavior similar to other graph-based techniques. Moreover, it is important to highlight that the performance of the proposed method is close to that of the state-of-the-art method , even though our method is not optimized for piecewise smooth images but is a more general method that can be applied to a variety of signal classes. In particular, for the blocks belonging to the second class, in 4 out of 6 images (namely Cones, Art, Dolls and Moebius) we are able to outperform the reference method, in some cases reaching a quality gain larger than 1 dB (see Table 3). Overall, with our more generic compression framework, we outperform the reference method in approximately half of the test images. In general, we observe that the proposed method outperforms the reference one on blocks that contain several edges or edges that are not straight. This is probably due to the fact that, in these cases, it is more difficult to represent the graph using a lookup table. It is also worth noting that our method performs better at low bitrates, as can be seen in Fig. 8.
8 Conclusion
In this paper, we have introduced a new graph-based framework for signal compression. First, in order to obtain an effective coding method, we have formulated a new graph construction problem targeted for compression. The solution of the proposed problem is a graph that provides an effective tradeoff between the energy compaction of the transform and the cost of the graph description. Then, we have also proposed an innovative method for coding the graph by treating the edge weights as a new signal that lies on the dual graph. We have tested our method on natural images and on depth maps. The experimental results show that the proposed method outperforms the classical DCT and, in the case of depth map coding, even compares to the state-of-the-art graph-based coding method.
We believe that the proposed technique contributes to opening a new research direction in graph-based image compression. As future work, it would be interesting to investigate other possible representations for the edge weights of the graph, such as graph dictionaries or graph wavelets. This may lead to further improvements in the coding performance of the proposed method.
This work was partially supported by Sisvel Technology.
References
-  D. Shuman, S. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83–98, 2013.
-  N. Ahmed, T. Natarajan, and K. Rao, “Discrete cosine transform,” IEEE Trans. Computers, vol. C-23, no. 1, pp. 90–93, 1974.
-  D. K. Hammond, P. Vandergheynst, and R. Gribonval, “Wavelets on graphs via spectral graph theory,” Applied and Computational Harmonic Analysis, vol. 30, no. 2, pp. 129–150, 2011.
-  G. Shen, W. S. Kim, S. K. Narang, A. Ortega, J. Lee, and H. Wey, “Edge-adaptive transforms for efficient depth map coding,” in Proc. Picture Coding Symposium (PCS), 2010, pp. 2808–2811.
-  W. Kim, S. K. Narang, and A. Ortega, “Graph based transforms for depth video coding,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012, pp. 813–816.
-  W. Hu, G. Cheung, A. Ortega, and O. C. Au, “Multiresolution graph Fourier transform for compression of piecewise smooth images,” IEEE Trans. on Image Process., vol. 24, no. 1, pp. 419–433, 2015.
-  G. Fracastoro, D. Thanou, and P. Frossard, “Graph transform learning for image compression,” in Proc. Picture Coding Symposium (PCS), 2016, pp. 1–5.
-  K. Sayood, Introduction to data compression. Newnes, 2012.
-  V. K. Goyal, J. Zhuang, and M. Vetterli, “Transform coding with backward adaptive updates,” IEEE Transactions on Information Theory, vol. 46, no. 4, pp. 1623–1633, 2000.
-  A. K. Jain, “A sinusoidal family of unitary transforms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 4, pp. 356–365, 1979.
-  W.-T. Su, G. Cheung, and C.-W. Lin, “Graph Fourier transform with negative edges for depth image coding,” arXiv preprint arXiv:1702.03105, 2017.
-  W. Hu, G. Cheung, and A. Ortega, “Intra-prediction and generalized graph Fourier transform for image coding,” IEEE Signal Process. Lett., vol. 22, no. 11, pp. 1913–1917, 2015.
-  H. E. Egilmez, A. Said, Y.-H. Chao, and A. Ortega, “Graph-based transforms for inter predicted video coding,” in Proc. IEEE International Conference on Image Processing (ICIP), 2015, pp. 3992–3996.
-  H. E. Egilmez, Y.-H. Chao, A. Ortega, B. Lee, and S. Yea, “GBST: Separable transforms based on line graphs for predictive video coding,” in Proc. IEEE International Conference on Image Processing (ICIP), 2016, pp. 2375–2379.
-  K.-S. Lu and A. Ortega, “Symmetric line graph transforms for inter predictive video coding,” in Proc. Picture Coding Symposium (PCS), 2016.
-  G. Fracastoro and E. Magli, “Predictive graph construction for image compression,” in Proc. IEEE International Conference on Image Processing (ICIP), 2015, pp. 2204–2208.
-  E. Pavez, H. E. Egilmez, Y. Wang, and A. Ortega, “GTT: Graph template transforms with applications to image coding,” in Proc. Picture Coding Symposium (PCS), 2015, pp. 199–203.
-  I. Rotondo, G. Cheung, A. Ortega, and H. E. Egilmez, “Designing sparse graphs via structure tensor for block transform coding of images,” in Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2015, pp. 571–574.
-  E. Pavez and A. Ortega, “Generalized Laplacian precision matrix estimation for graph signal processing,” in Proc. IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2016, pp. 6350–6354.
-  H. E. Egilmez, E. Pavez, and A. Ortega, “Graph learning from data under structural and Laplacian constraints,” arXiv preprint arXiv:1611.05181, 2016.
-  E. Pavez, H. E. Egilmez, and A. Ortega, “Learning graphs with monotone topology properties and multiple connected components,” arXiv preprint arXiv:1705.10934, 2017.
-  X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst, “Learning Laplacian matrix in smooth graph signal representations,” IEEE Trans. Signal Process., vol. 64, no. 23, pp. 6160–6173, 2016.
-  V. Kalofolias, “How to learn a graph from smooth signals,” in Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), 2016, pp. 920–929.
-  J. Gallier, “Elementary spectral graph theory applications to graph clustering using normalized cuts: a survey,” arXiv preprint arXiv:1311.2492, 2013.
-  D. Zhou and B. Schölkopf, “A regularization framework for learning from graph data,” in Proc. ICML Workshop on Statistical Relational Learning and its Connections to other Fields, 2004, pp. 132–137.
-  C. Zhang and D. Florêncio, “Analyzing the optimality of predictive transform coding using graph-based models,” IEEE Signal Process. Lett., vol. 20, no. 1, pp. 106–109, 2013.
-  H. Rue and L. Held, Gaussian Markov random fields: theory and applications. CRC Press, 2005.
-  R. M. Gray and D. L. Neuhoff, “Quantization,” IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2325–2383, 1998.
-  S. Mallat and F. Falzon, “Analysis of low bit rate image transform coding,” IEEE Trans. on Signal Process., vol. 46, no. 4, pp. 1027–1042, 1998.
-  X. Liu, G. Cheung, and X. Wu, “Joint denoising and contrast enhancement of images using graph Laplacian operator,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 2274–2278.
-  W. Hu, G. Cheung, and M. Kazui, “Graph-based dequantization of block-compressed piecewise smooth images,” IEEE Signal Process. Lett., vol. 23, no. 2, pp. 242–246, 2016.
-  L. J. Grady and J. Polimeni, Discrete calculus: Applied analysis on graphs for computational science. Springer Science & Business Media, 2010.
-  S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
-  T. Wiegand and H. Schwarz, Source coding: Part I of fundamentals of source and video coding. Now Publishers Inc, 2010.
-  S. Mallat, A wavelet tour of signal processing: the sparse way. Academic press, 2008.
-  I. H. Witten, R. M. Neal, and J. G. Cleary, “Arithmetic coding for data compression,” Communications of the ACM, vol. 30, no. 6, pp. 520–540, 1987.
-  U. Köthe, “Edge and junction detection with an improved structure tensor,” in Joint Pattern Recognition Symposium. Springer, 2003, pp. 25–32.
-  C. Harris and M. Stephens, “A combined corner and edge detector.” in Alvey vision conference, vol. 15, no. 50. Manchester, UK, 1988, pp. 10–5244.
-  C. S. Kenney, M. Zuliani, and B. Manjunath, “An axiomatic approach to corner detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1. IEEE, 2005, pp. 191–197.
-  W. Förstner, “A feature based correspondence algorithm for image matching,” International Archives of Photogrammetry and Remote Sensing, vol. 26, no. 3, pp. 150–166, 1986.
-  USC-SIPI, “Image database, volume 3: Miscellaneous,” http://sipi.usc.edu/database/database.php?volume=misc.
-  G. Bjontegaard, “Calculation of average PSNR differences between RD-curves,” Doc. VCEG-M33 ITU-T Q6/16, Austin, TX, USA, 2-4 April 2001, 2001.
-  D. Scharstein and R. Szeliski, “High-accuracy stereo depth maps using structured light,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 2003, pp. 195–202.
-  D. Scharstein and C. Pal, “Learning conditional random fields for stereo,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007, pp. 1–8.
-  D. Zhang and J. Liang, “Graph-based transform for 2d piecewise smooth signals with random discontinuity locations,” IEEE Trans. on Image Process., vol. 26, no. 4, pp. 1679–1693, 2017.