Scalable image coding based on epitomes

06/28/2016 ∙ by Martin Alain et al.

In this paper, we propose a novel scheme for scalable image coding based on the concept of epitome. An epitome can be seen as a factorized representation of an image. Focusing on spatial scalability, the enhancement layer of the proposed scheme contains only the epitome of the input image. The pixels of the enhancement layer not contained in the epitome are then restored using two approaches inspired by local learning-based super-resolution methods. In the first method, a locally linear embedding model is learned on base layer patches and then applied to the corresponding epitome patches to reconstruct the enhancement layer. The second approach learns linear mappings between pairs of co-located base layer and epitome patches. Experiments show that significant rate-distortion improvements can be achieved compared to an SHVC reference.







I Introduction

The latest HEVC standard [1] is among the most efficient codecs for image and video compression [2]. However, the ever increasing spatial and/or temporal resolution, bit depth, or color gamut of modern images and videos, coupled with the heterogeneity of the distribution networks, calls for scalable coding solutions. Thus, a scalable extension of HEVC named SHVC was developed [3, 4], which can encode enhancement layers with the scalability features mentioned above by using the appropriate inter-layer processing. Experiments demonstrate that SHVC outperforms simulcast as well as the previous scalable standard SVC [5]. In this paper, we focus on spatial scalability and we propose a novel scalable coding scheme based on the concept of epitome, first introduced in [6, 7]. The epitome in [6] is defined as patch-based appearance and shape probability models learned from the image patches. The authors have shown that these probability models, together with appropriate inference algorithms, are useful for content analysis, inpainting, or super-resolution. A second form of epitome has been introduced in [8], which can be seen as a summary of the image. This epitome is constructed by searching for self-similarities within the image using methods such as the KLT tracking algorithm. This type of epitome has been used for still image compression in [9], where the authors propose a rate-distortion optimized epitome construction method. The image is represented by its epitome together with a transformation map as well as a reconstruction residue. A novel image coding architecture has also been described in [10] which, instead of the classical block processing in raster scan order, inpaints the epitome with in-loop residue coding.

We describe in this paper a novel spatially scalable image coding scheme in which the enhancement layer is only composed of the input image epitome. This factorized representation of the image is then used by the decoder to reconstruct the entire enhancement layer using single-image super-resolution (SR) techniques. Single-image SR methods can be broadly classified into two main categories: the interpolation-based methods [11, 12, 13] and the example-based methods [14, 15, 16, 17, 18, 19, 20, 21, 22], which we consider here, focusing on two different techniques based on neighbor embedding [16] and linear mappings [20]. The epitome patches transmitted in the enhancement layer (EL) and the corresponding base layer (BL) patches form a dictionary of pairs of high-resolution and low-resolution patches.

The first method, based on neighbor embedding, assumes that the BL and EL patches lie on two low- and high-resolution manifolds which share a similar local geometrical structure. In order to reconstruct an EL patch not belonging to the epitome, a local model of the corresponding BL patch is learned as a weighted combination of its nearest neighbors in the dictionary. The restored EL patch is then obtained by applying this weighted combination to the corresponding EL patches in the dictionary. The second approach, based on linear mappings, relies on a similar assumption, but directly models a projection function between BL patches and the corresponding EL patches in the dictionary. The projection function is learned using multivariate regression and is then applied to the current BL patch in order to obtain its restored EL version. This super-resolution step reconstructs the full enhancement layer while we only transmit the epitome. The proposed scheme thus allows significant bit-rate reductions compared to traditional scalable coding schemes such as SHVC.

This paper is organized as follows. In section II we review the background on epitomic models. Section III describes the proposed scheme, the epitome generation and encoding at the encoder side, and the epitome-based restoration at the decoder side. Finally, we present in section IV the results compared with SHVC.

II Background on epitomes

The concept of epitome was first introduced by N. Jojic and V. Cheung in [6, 7]. It is defined as a condensed representation (meaning its size is only a fraction of the original size) of an image signal containing the essence of the textural properties of this image. This original epitomic model is based on a patch-based probabilistic approach. It was shown in [23] to be of high “completeness”, but to introduce undesired visual artifacts, characterized as a lack of “coherence”. Indeed, since the model is learned by compiling patches drawn from the input image, patches that were not present in the input image can appear in the epitome. The original epitomic model was also extended into a so-called Image-Signature-Dictionary (ISD) optimized for sparse representations [24].

The aforementioned epitomic models have been successfully applied to segmentation, de-noising, recognition, indexing, or texture synthesis. The model of [6, 7] was also used in [25] for intra coding. However, this epitomic model is not designed for image coding applications, and thus has to be coded losslessly, which limits the compression performance.

Fig. 1: Epitome of a Foreman frame (left, 34% of the input image) and the corresponding reconstruction (right, PSNR = 35.53 dB).

Thus, the work presented in this paper is derived from the approach introduced in [9]. This epitomic model is dedicated to image coding, and was inspired by the factorized image representation of Wang et al. [8]. In this approach, the input image is factored into an epitome, which is composed of disjoint texture pieces called epitome charts (see Fig. 1). The input image is divided into a regular grid of non-overlapping blocks (block-grid), and each block is reconstructed from an epitome patch. A so-called assignation map links the patches of the epitome to the reconstructed image blocks. This epitomic model is obtained through a two-step procedure which first searches for self-similarities within the input image, and then iteratively grows the epitome charts. The second step, which creates the epitome charts, is notably based on a rate-distortion optimization (RDO) criterion that minimizes the distortion between the input and the reconstructed image together with the rate of the epitome, evaluated as its number of pixels.

A still image coding scheme based on this epitomic model is also described in [9], where the epitome and its associated assignation map are encoded. The reconstructed image can thus be used as a predictor, and the corresponding prediction residue is further encoded. The results show that the scheme is efficient against H.264 Intra. However, the coding performance of the assignation map is limited, which reduces the overall rate-distortion (RD) gains.

A novel coding scheme was thus proposed in [10], where only the epitome is encoded. The blocks not belonging to the epitome are then predicted in an inpainting fashion, together with an in-loop encoding of the residue. The prediction tools notably include efficient template-based neighbor embedding techniques such as the Locally Linear Embedding (LLE) [26, 27]. The results show that significant bit-rate savings are achieved with respect to H.264 Intra.

In the next section, we describe the proposed scheme, which can be seen as an extension of the latter work to scalable coding.

III Epitomic enhancement layer for scalable image coding

In this section, we describe a scalable coding scheme in which the enhancement layer consists of an epitome of the input image. Consequently, at the decoder side, the EL patches not contained in the epitome are missing, but the corresponding BL patches are known. We thus propose to restore the full enhancement layer by taking advantage of the known representative texture patches available in the EL epitome charts. The proposed scheme is shown in Fig. 3.

We first summarize below the two-step procedure for constructing the epitome, as well as its encoding process. Then, we explain in detail how to perform the restoration using local learning-based techniques.

III-A Epitome generation and encoding

The epitome generation method used in this paper is derived from [9], and consists of a self-similarities search step followed by an epitome charts creation step. However, the self-similarities search is here performed with a fast two-step method that we proposed in [28]. Moreover, we choose to use an error minimization criterion for the epitome chart creation instead of the RDO criterion of [9], as we noticed that in practice this RDO criterion has a limited impact on our application.

Fig. 2: Two-step clustering-based self-similarities search.
Fig. 3: Proposed scheme for scalable image coding. At the encoder side, an epitome of the input image is generated, and encoded as the enhancement layer. At the decoder side, the enhancement layer patches not contained in the epitome are reconstructed from the base layer.
Fig. 4: Epitome chart extension process with inferred blocks.

III-A1 Self-similarities search

The goal of this step is to find, for each block, a list of matching patches such that the mean square error (MSE) between the block and each of its matching patches is below a matching threshold. This threshold eventually determines the size of the epitome, and several values are considered in the experiments (see Table II). In this paper, the lists of matching patches are obtained through a two-step clustering-based approach illustrated in Fig. 2. The first step groups similar blocks into clusters, such that the distance from a block to the centroid of its cluster is below an assignation threshold. In the second step, a list of matching patches is computed for each cluster by finding the patches whose MSE with respect to the block closest to the cluster centroid is below the matching threshold. This list of matching patches is then assigned to all the blocks in the cluster.
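As an illustration, the two-step search above can be sketched as follows. This is a simplified sketch, not the implementation of [28]: blocks and candidate patches are plain NumPy arrays, the clustering is a naive greedy pass, and the helper names (`cluster_blocks`, `match_lists`) are hypothetical.

```python
import numpy as np

def cluster_blocks(blocks, assign_thresh):
    """Greedy online clustering: a block joins the first cluster whose
    centroid is within `assign_thresh` (in MSE), otherwise it starts a
    new cluster. Centroids are updated as running means."""
    centroids, members = [], []
    for i, b in enumerate(blocks):
        for c, cen in enumerate(centroids):
            if np.mean((b - cen) ** 2) <= assign_thresh:
                members[c].append(i)
                cen += (b - cen) / len(members[c])  # running-mean update
                break
        else:
            centroids.append(np.array(b, dtype=float))
            members.append([i])
    return centroids, members

def match_lists(blocks, candidates, centroids, members, match_thresh):
    """Per cluster, match the candidate patches against the block closest
    to the centroid; every block of the cluster inherits the list."""
    lists = [None] * len(blocks)
    for cen, idxs in zip(centroids, members):
        rep = min(idxs, key=lambda i: np.mean((blocks[i] - cen) ** 2))
        matches = [j for j, p in enumerate(candidates)
                   if np.mean((p - blocks[rep]) ** 2) <= match_thresh]
        for i in idxs:
            lists[i] = matches
    return lists
```

Matching against a single representative block per cluster, instead of every block, is what makes the two-step search fast compared to an exhaustive per-block search.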

From all the lists of matching patches, we can then compute reverse lists which indicate, for each matching patch, the set of image blocks it can represent. Next, we describe how these lists are used to build the epitome charts.

III-A2 Epitome charts creation

The epitome charts are iteratively grown; both the initialization and the iterative extension of an epitome chart are based on a criterion which minimizes the error between the input image and the reconstructed image.

Formally, we denote by ΔE_k, k = 1, …, N, the candidate regions that can be added to the epitome E, where N is the number of candidates. When initializing a new epitome chart, a valid candidate region is a matching patch which is not yet in an epitome chart and is spatially disconnected from any existing epitome chart. Conversely, when extending an epitome chart EC, a valid candidate region is a matching patch which is not yet in an epitome chart and overlaps with EC. The region actually added to the epitome is obtained by minimizing the following criterion:

\Delta E_{opt} = \arg\min_{\Delta E_k} MSE\left(I, I'_{E \cup \Delta E_k}\right) \qquad (1)

where I'_{E ∪ ΔE_k} is the reconstructed image when the candidate region ΔE_k is added to the epitome, and MSE(·,·) computes the mean square error between the reconstruction and the source image I. The reconstructed image comprises the blocks reconstructed from the existing epitome charts and the new blocks reconstructed from the candidate region. During the extension of an epitome chart, additional reconstructed blocks can be obtained by considering so-called inferred blocks, which are the potential matching patches that overlap the current chart and its extension (see Fig. 4). Note that for the pixels of I'_{E ∪ ΔE_k} which are not yet reconstructed, the MSE cannot be computed directly; in our implementation, we assign the maximal MSE value to these pixels. This tends to favor the selection of candidate regions which reconstruct large areas of the image, and thus speeds up the epitome chart creation.
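A minimal sketch of the candidate selection driven by this criterion is given below. The helper names (`chart_growth_cost`, `select_candidate`) are hypothetical, and the maximal MSE value is assumed here to be the squared peak value of 8-bit pixels, which the text does not specify.

```python
import numpy as np

MAX_ERR = 255.0 ** 2  # assumed maximal squared error for 8-bit pixels

def chart_growth_cost(source, recon, covered):
    """Cost of a candidate: MSE between source and reconstruction, where
    pixels not reconstructed by any epitome patch (covered == False)
    are assigned the maximal error value."""
    err = (np.asarray(source, float) - np.asarray(recon, float)) ** 2
    err[~covered] = MAX_ERR
    return err.mean()

def select_candidate(source, candidates):
    """Pick the candidate region minimizing the chart growth criterion.
    `candidates` is a list of (reconstruction, coverage-mask) pairs."""
    return int(np.argmin([chart_growth_cost(source, r, c)
                          for r, c in candidates]))
```

Because uncovered pixels carry the maximal penalty, a candidate that reconstructs more of the image is preferred even if its per-pixel error is slightly higher, which matches the behavior described above.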

The extension of an epitome chart stops when no more valid candidate regions can be found. A new epitome chart is then initialized at a new location. The global process stops when the whole image is reconstructed. Note that the epitome charts are originally obtained at pixel accuracy, but for coding purposes they are then padded to be aligned with the block structure of the encoder.

III-A3 Epitome encoding

The epitomes are encoded with a scalable scheme as an enhancement layer. The blocks not belonging to the epitome are directly copied from the decoded base layer, so their rate cost is negligible.

III-B Epitome-based restoration

Fig. 5: The K-NN of the current BL patch are found in search windows (SW) corresponding to the epitome charts. The BL/EL pairs of patches can then be exploited to restore the current missing EL patch.

The non-epitome part of the enhancement layer is processed by considering overlapping patches, separated by a fixed step in both rows and columns. After restoration, when several estimates are obtained for a pixel, they are averaged in order to obtain the final estimate. Note that before performing the restoration, the BL image is up-sampled to the resolution of the EL using the inter-layer processing filter.
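The averaging of overlapping estimates can be sketched as follows, assuming square patches restored in raster-scan order and a hypothetical `aggregate_patches` helper:

```python
import numpy as np

def aggregate_patches(shape, patches, step):
    """Average overlapping patch estimates into a single image: accumulate
    each estimate and the per-pixel number of contributions, then divide."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    n = patches[0].shape[0]  # square patches of size n x n
    k = 0
    for i in range(0, shape[0] - n + 1, step):
        for j in range(0, shape[1] - n + 1, step):
            acc[i:i + n, j:j + n] += patches[k]
            cnt[i:i + n, j:j + n] += 1
            k += 1
    return acc / np.maximum(cnt, 1)  # guard against uncovered pixels
```

A step smaller than the patch size yields several estimates per pixel, and the averaging acts as a simple deblocking of the restored EL.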

The restoration methods described below are derived from local learning-based SR methods [16, 17, 21, 20, 22], and can be summarized in the three following steps: K-NN search, learning step, and processing step. These steps are shown in Fig. 5, and described in detail below.

III-B1 K-NN search

Considering the current BL patch to be processed, denoted x^l, we first search for its K nearest neighbor (K-NN) BL patches within search windows corresponding to the epitome chart locations (see Fig. 5). The K-NN BL patches are then stored, in vectorized form, in the columns of a matrix M^l. For each neighbor, we have a corresponding EL patch in the epitome, stored in a matrix M^h. We thus obtain K BL/EL pairs of training patches. In classical SR applications, the pairs of training patches are obtained from a dictionary whose construction is a critical step [14, 15, 16, 19, 22, 18, 21]. Since the patches in the epitome are representative of the full image, we consider that they constitute a suitable dictionary for the local learning-based restoration.
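A minimal sketch of this K-NN pairing step, assuming an exhaustive search over the BL patches extracted from the search windows (the actual implementation uses a k-d tree based search), with the hypothetical helper `knn_pairs`:

```python
import numpy as np

def knn_pairs(query, bl_patches, el_patches, K):
    """Return the K nearest BL patches to `query` (Euclidean distance
    over vectorized patches) together with their co-located EL patches,
    stacked column-wise as the matrices (Ml, Mh)."""
    X = np.stack([np.ravel(p) for p in bl_patches]).astype(float)  # N x d
    d2 = ((X - np.ravel(query)) ** 2).sum(axis=1)
    idx = np.argsort(d2)[:K]
    Ml = X[idx].T                                                  # d x K
    Mh = np.stack([np.ravel(el_patches[i]) for i in idx]).astype(float).T
    return Ml, Mh
```

The column ordering of Ml and Mh is identical, so the k-th column of each matrix forms one BL/EL training pair.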

Next, we present the learning and processing steps, which exploit the correlation between the pairs of training patches to perform learning-based restoration. We describe two methods to restore the missing EL patches, inspired by SR techniques based on neighbor embedding (NE) [16, 17, 21] and linear mappings [20, 21, 22], but any other learning-based method could be included in the proposed scheme. Note that many SR methods can be improved using iterative back-projection [29], which constrains the reconstructed high-resolution image to be consistent with the low-resolution input image. However, this technique is not considered in the proposed scheme, as it tends to propagate quantization noise from the BL image to the EL reconstruction.

III-B2 Epitome-based Locally Linear Embedding

a) Learning: estimate the current BL patch as a linear combination of its K-NN BL patches.

b) Processing: apply the weights of the linear combination learned previously to the corresponding K-NN EL patches to obtain the restored EL patch.

Fig. 6: Two-step local learning-based restoration based on LLE.

First, we describe a method relying on LLE, denoted “epitome-based Locally Linear Embedding” (E-LLE). As in other NE-based restoration techniques, we assume that the local geometry of the manifolds on which the BL and EL patches lie is similar (see Fig. 6). Using LLE, we first learn the linear combination of the K-NN BL patches which best approximates the current patch, and then apply this combination to the corresponding EL patches in order to obtain a good estimate of the missing EL patch.

Let w = (w_1, …, w_K)^T be the vector containing the combination weights w_k. The weights are obtained by solving the following constrained least squares problem:

w = \arg\min_{w'} \left\| x^l - M^l w' \right\|_2^2 \quad \text{s.t.} \quad \sum_k w'_k = 1 \qquad (2)

where x^l denotes the current (vectorized) BL patch and M^l is the matrix containing its K-NN BL patches in its columns. The weight vector is computed in closed form as:

w = \frac{G^{-1}\mathbf{1}}{\mathbf{1}^T G^{-1}\mathbf{1}} \qquad (3)

The term G = (x^l \mathbf{1}^T - M^l)^T (x^l \mathbf{1}^T - M^l) denotes the local covariance matrix (i.e., in reference to x^l) of the K-NN stacked in M^l, and \mathbf{1} is the column vector of ones. In practice, instead of an explicit inversion of the matrix G, the linear system of equations G w = \mathbf{1} is solved, and the weights are then rescaled so that they sum to one.

The restored EL patch \hat{x}^h is finally obtained as:

\hat{x}^h = M^h w \qquad (4)

where M^h is the matrix containing the EL patches corresponding to the K-NN in its columns.
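The E-LLE learning and processing steps can be sketched as follows. A small ridge term is added to the local covariance matrix for numerical stability; this regularization is an implementation assumption, not specified in the text, and the helper names (`lle_weights`, `e_lle`) are hypothetical.

```python
import numpy as np

def lle_weights(x, Ml, reg=1e-6):
    """Solve min_w ||x - Ml w||^2 subject to sum(w) = 1, via the local
    covariance (Gram) matrix of the neighbors."""
    D = x[:, None] - Ml               # differences between x and each neighbor
    G = D.T @ D                       # local covariance matrix
    G = G + reg * np.eye(G.shape[0])  # small regularization for stability
    w = np.linalg.solve(G, np.ones(G.shape[0]))  # solve G w = 1 ...
    return w / w.sum()                # ... then rescale to sum to one

def e_lle(x_bl, Ml, Mh):
    """Restore the EL patch by applying the BL-learned weights to the
    corresponding EL patches."""
    return Mh @ lle_weights(x_bl, Ml)
```

Solving the linear system and rescaling, rather than inverting G explicitly, mirrors the practical computation described above.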
In practice, several versions of this method can be derived, e.g. by using another NE-based technique such as non-negative matrix factorization [30], or by adapting the weight computation as in the non-local means algorithm (exponential weights) [31]. However, with such methods the weights are computed based on the BL patches only. In the next section, we propose a method which aims at better exploiting the correlation between the pairs of training patches, based on linear regression.

III-B3 Epitome-based Local Linear Mapping

a) Learning: a mapping function is learned between the pairs of BL/EL patches using multivariate linear regression.

b) Processing: the function learned previously is applied to the current BL patch in order to obtain the restored EL patch.

Fig. 7: Two-step local learning-based restoration based on linear regression.

We describe here a method based on linear regression, which we denote “epitome-based Local Linear Mapping” (E-LLM). We want to further exploit the correlation between the pairs of training patches by directly learning a function mapping the BL patches to the corresponding EL patches (see Fig. 7). This function can then be applied to the current BL patch to restore the EL patch.

The mapping function is learned using multivariate linear regression. The problem is then to search for the matrix P minimizing:

\left\| M^h - P M^l \right\|_F^2 \qquad (5)

where M^l and M^h are the matrices containing in their columns the K-NN BL patches and their corresponding EL patches, respectively. This is an ordinary least squares problem (corresponding to the linear regression model M^h = P M^l + R, with R the matrix of residuals). The minimization of Eq. (5) gives the least squares estimator:

\hat{P} = M^h (M^l)^T \left( M^l (M^l)^T \right)^{-1} \qquad (6)

We finally obtain the restored EL patch as:

\hat{x}^h = \hat{P} x^l \qquad (7)

where x^l is the current (vectorized) BL patch.
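A sketch of the E-LLM steps is given below. A small ridge term is added for stability when the number of neighbors is small; this regularization and the helper names (`learn_mapping`, `e_llm`) are assumptions for illustration.

```python
import numpy as np

def learn_mapping(Ml, Mh, reg=1e-6):
    """Least-squares estimator of the mapping P minimizing ||Mh - P Ml||_F^2,
    with a small ridge term guarding against an ill-conditioned Ml Ml^T."""
    A = Ml @ Ml.T + reg * np.eye(Ml.shape[0])
    return Mh @ Ml.T @ np.linalg.inv(A)

def e_llm(x_bl, Ml, Mh):
    """Restore the EL patch by applying the learned mapping to the
    current BL patch."""
    return learn_mapping(Ml, Mh) @ x_bl
```

Unlike E-LLE, the weights implicitly encoded in P depend on both the BL and EL training patches, which is the extra correlation the method aims to exploit.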
Now that we have formally defined the proposed methods, we study their performances in the next section.

IV Simulations and results

IV-A Experimental conditions

The experiments are performed on the test images listed in Table I, obtained from the HEVC test sequences. The base layer images are obtained by down-sampling the input images using the SHVC down-sampling filter available with the SHM software (ver. 9.0) [32]. The BL images are encoded with HEVC, using the HM software (ver. 15.0) [33].

We then use the SHM software (ver. 9.0) [32] to encode the corresponding enhancement layers. Thanks to the hybrid codec scalability feature of SHVC, the decoded BL images are first up-sampled using the separable 8-tap SHVC filter, and directly used as input to the SHM software. Both layers are encoded with the following quantization parameters: QP = 22, 27, 32, 37.

Class Image Size Epitome block size
B BasketballDrive
B Cactus
B Ducks
B Kimono
B ParkScene
B Tennis
B Terrace
C BasketballDrill
C Keiba
C Mall
C PartyScene
D BasketballPass
D BlowingBubbles
D RaceHorses
D Square
E City
TABLE I: Test images

For each input image, 3 to 4 matching threshold values are selected in order to generate epitomes whose sizes range from 30% to 90% of the input image size. The threshold values required to reach such sizes vary depending on the input image, and were selected manually. The selected matching thresholds and corresponding epitome sizes are shown in Table II.

Matching threshold
Image 9 16 25 49 100 225
BasketballDrive 90.62 64.10 49.33 32.34
Cactus 79.85 71.24 60.66 48.33
Ducks 89.63 77.41 48.28
Kimono 90.13 75.53 59.36 35.34
ParkScene 86.58 73.55 61.99 47.18
Tennis 64.49 50.44 43.12 32.22
Terrace 78.46 66.39 53.31 43.50
BasketballDrill 87.05 59.94 42.63 28.53
Keiba 93.59 81.28 63.53 40.77
Mall 92.95 76.28 66.15 50.26
PartyScene 94.82 81.12 67.56 49.13
BasketballPass 77.76 66.60 56.41 42.31
BlowingBubbles 87.56 73.33 58.85 36.92
RaceHorses 91.03 79.23 58.14 36.67
Square 80.77 71.41 61.09 48.72
City 91.59 82.44 66.81 39.52
TABLE II: Epitome sizes (as % of input images)

The post-processing is performed using overlapping patches, with a fixed overlapping step, and the number of nearest neighbors K is kept fixed across all experiments.

IV-B Rate-distortion performances

We assess in this section the performance of the proposed scheme against the SHVC reference EL. The distortion is evaluated using the PSNR of the decoded EL, while the rate is calculated as the sum of the BL and EL rates. The RD performances are computed using the Bjontegaard rate gain measure (BD-rate) [34] over the four QP values.
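For reference, the BD-rate measure can be sketched as follows: a cubic polynomial is fitted to log10(rate) as a function of PSNR for each codec, and the average gap between the fitted curves over the overlapping PSNR range gives the average bit-rate difference. This is a standard implementation sketch of [34], not the authors' code.

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Bjontegaard average bit-rate difference (%) of a test codec against
    a reference: cubic fits of log10(rate) vs PSNR, integrated over the
    overlapping PSNR range. Negative values mean bit-rate savings."""
    p_ref = np.polyfit(psnr_ref, np.log10(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))   # overlapping PSNR range
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0
```

With four RD points per codec (one per QP), the cubic fit passes exactly through the measured points.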

We show in Fig. 8 the BD-rates averaged over all sequences depending on the epitome size. The complete results are given in Table IV. Overall, significant bit-rate reductions can be achieved compared to SHVC: up to about 15% on average, and 20% for images such as BasketballDrive, Ducks, or Tennis. The best performances are achieved with the biggest epitomes. In fact, smaller epitomes provide a reduced set of BL/EL patches while more BL patches need to be processed; eventually, the post-processing step cannot effectively compensate for the quality loss. Overall, E-LLE performs better than E-LLM; however, for the best performances (biggest epitomes), both methods perform similarly.

Fig. 8: Average RD performances of the different restoration methods against SHVC depending on the epitome size.

In order to better understand the performances of the proposed methods, we show in Fig. 9 the RD curve of the City image for its best configuration (biggest epitome), whose behavior is representative of the set of test images. We can see that at high bit-rates (QP=22), the bit-rate of the proposed scheme is substantially reduced compared to the SHVC reference EL; however, even with the proposed post-processing, we observe a loss of quality. At low bit-rates (QP=37), the bit-rate reduction with respect to the SHVC reference EL is smaller, but the post-processing yields a better quality. This behavior explains the overall significant bit-rate reduction achieved by the proposed scheme.

Fig. 9: RD performances of the City image for both E-LLE and E-LLM methods, epitome size = 91.59% of input image.

In addition, we show in Fig. 10 the RD curves of the LLE-based restoration for the PartyScene image depending on the epitome size. We can see that for smaller epitome sizes, at high bit-rates (QP=22), even though the bit-rate is considerably reduced compared to the SHVC reference EL, the quality loss does not allow an improvement in the RD performances, which corroborates our previous analysis.

Fig. 10: RD performances of the PartyScene image using the E-LLE method, with different epitome sizes.

We give in Figs. 13 and 14 visual examples of the enhancement layers for the City and Cactus images. Note that these examples were chosen for their visual clarity, and do not necessarily correspond to the best RD performances. In order to demonstrate the relevance of the post-processing step, we show on the top row the epitome EL before restoration, and on the bottom row the corresponding EL after applying the E-LLE and E-LLM methods. Before restoration, the blocks not belonging to the epitome are particularly visible, as they are directly copied from the up-sampled decoded BL, and clearly lack high frequency details. An obvious improvement of the quality can be observed after restoration for the high-frequency pseudo-periodic textures, such as the building of Fig. 13, or the calendar of Fig. 14. Although the E-LLM usually yields lower PSNR than the E-LLE method, we can see that it can perform visually better on high-frequency stochastic textures such as in the highlighted red rectangle of Fig. 14.

IV-C Elements of complexity

We give in this section some indications of the complexity of the proposed methods, evaluated as their running time. We evaluate the complexity depending on the epitome size and the input image size. The results are averaged for each image class (which corresponds to an image size, see Table I). Note that the epitome generation algorithm was implemented in C++, while the restoration methods were implemented in Matlab.

We give in Fig. 11 the complexity of the epitome generation. The epitome generation running time mainly varies with the input image size, while the epitome size has a limited impact. For the biggest epitomes, which correspond to the best RD results, we observed that on average 50% to 90% of the complexity is dedicated to the self-similarities search step. As this step is highly parallelizable, the total running time could be reduced by using a parallel implementation, e.g. on GPU.

Fig. 11: Epitome generation running time depending on the epitome size for different image classes.

We show in Fig. 12 the complexity of the post-processing step. The processing time is similar for the E-LLE and E-LLM methods, and naturally increases with the size of the image. However, we can observe that the complexity is overall reduced for the biggest epitomes, which interestingly correspond to the best RD performances. In fact, when bigger epitomes are transmitted, fewer patches outside the epitome have to be processed.

The simulations showed that on average, about 95% of the post-processing complexity is dedicated to the K-NN search. The K-NN search was performed with the Matlab knnsearch function, based on a k-d tree [35, 36]. In order to reduce the total running time, the complexity of the K-NN search could be further reduced by using more advanced approximate nearest neighbor search techniques [37, 38, 39, 40] or a parallel implementation, possibly on GPU [41].

Fig. 12: Post-processing running time of the different restoration methods depending on the epitome size for different image classes.

IV-D Extension to scalable video coding

The work presented in this paper is dedicated to scalable single image coding; however, a straightforward extension to scalable video coding can be considered by applying the proposed method to each frame of the sequence. Preliminary experiments are conducted on a set of 3 test sequences, consisting of 9 frames at CIF resolution in order to limit the computation time. The epitomes are generated using a single matching threshold value. In order to exploit the temporal redundancies, the K-NN search step is performed in the epitomes of the two closest frames in addition to the current one.

We show the RD performances measured with the Bjontegaard rate gains in Table III. These preliminary results indicate that the proposed scheme can also be expected to bring significant bit-rate reductions when extended to full video sequences. These results were not obvious to predict, since the inter-layer prediction here also competes with inter-frame prediction modes, which are much more efficient than the intra prediction modes.

Sequence Epitome size (%) BD rate gains (%)
(averaged over all frames) E-LLE E-LLM
City 56.66 -26.56 -26.59
Macleans 79.93 -2.24 -2.15
Mobile 82.22 -10.12 -10.20
TABLE III: Bjontegaard rate gains against SHVC

V Conclusion and future work

We propose in this paper a novel scheme for scalable image coding, based on an epitomic factored representation of the enhancement layer and appropriate restoration methods at the decoder side. Significant bit-rate reductions are achieved for the spatial scalability application when compared to the SHVC reference EL. These achievements are possible because of the specific epitomic model we use, which provides relevant texture information and is especially suitable for scalable encoding. Note that improvements of the standard tools have recently been proposed, such as the generalized inter-layer residual prediction [42, 43, 44], or enhanced in-loop prediction mechanisms for the EL [45, 46]. The proposed approach is compatible with such improvements of scalable coding schemes, as the coding of the epitome as an enhancement layer would be improved as well.

The proposed scheme could be improved by studying alternative restoration approaches. For instance, different regression techniques could be considered for the E-LLM instead of the direct least squares approach, such as (kernel) ridge regression [21, 47]. Alternatively, the restoration step at the decoder side could be considered as an inpainting problem with prior knowledge on the “holes” to be filled, in the form of low resolution patches. Inpainting has been extensively studied over the last decades (see [48] and references therein for more details), and the exemplar-based multi-scale approaches [49, 50] are well suited to our context.

Future work also includes the adaptation of the proposed scheme to scalable video coding, as preliminary results indicate promising RD performances. The epitomes were here generated separately for each frame; the RD performances would benefit from an epitomic model which takes the temporal redundancies into account. Furthermore, the image self-similarities used to build the current epitome are found using a single block matching algorithm, while the application at the decoder side is based on multi-patch techniques. New epitomic models have been designed to take the epitome application into account in the generation process. For example, an epitome dedicated to multi-patch super-resolution is proposed in [51], which showed that a more compact representation can be obtained for a similar image reconstruction quality. Such a model could thus be considered in the proposed scheme in order to improve the RD performances, at the cost of an increased complexity. In addition, the distortion minimization criterion of Eq. 1 used for the epitome chart creation could be changed into a rate-distortion optimization criterion, as in [9]. Ideally, the distortion would be directly computed on the restored EL instead of the reconstructed image, and the rate directly evaluated as the EL rate.

Finally, the proposed scheme can be extended to other scalable applications, such as color gamut or LDR/HDR scalabilities. Even though we use LLE in this paper for super-resolution, it has been proven efficient for many different applications such as de-noising [52], image prediction [27], or inpainting [48]. We can thus expect the LLE-based restoration methods to be efficient for different scalable applications.

Epitome EL before restoration
Epitome EL + E-LLE Epitome EL + E-LLM
Fig. 13: City enhancement layer encoded with QP=22. The epitome size is 39.52% of the input image. Before restoration, we can clearly notice in the red rectangle the blocks not belonging to the epitome from their blurry aspect. The quality of these blocks is obviously improved after restoration.
Epitome EL before restoration
Epitome EL + E-LLE Epitome EL + E-LLM
Fig. 14: Cactus enhancement layer encoded with QP=22. The epitome size is 48.33% of the input image. Before restoration, we can clearly notice in the red rectangle and in the calendar on the bottom right the blocks not belonging to the epitome from their blurry aspect. The E-LLM gives visually superior results compared to the E-LLE for the stochastic texture highlighted in the red rectangle. Both methods improve the quality of the straight lines in the calendar.
| Sequence        | Epitome size (% of input image) | E-LLE BD rate gain (%) | E-LLM BD rate gain (%) |
|-----------------|---------------------------------|------------------------|------------------------|
| BasketballDrive | 90.62 | **-20.07** | -19.61 |
|                 | 64.10 | -15.94 | -13.70 |
|                 | 49.33 | -13.66 | -8.69  |
|                 | 32.34 | -9.70  | -0.08  |
| BasketballDrill | 87.05 | **-6.52** | -5.50 |
|                 | 59.94 | -2.82  | 1.08   |
|                 | 42.63 | -1.62  | 4.44   |
|                 | 28.53 | 3.18   | 13.08  |
| Cactus          | 79.85 | **-18.19** | -16.46 |
|                 | 71.24 | -17.67 | -15.14 |
|                 | 60.66 | -16.33 | -13.01 |
|                 | 48.33 | -11.63 | -7.42  |
| Keiba           | 93.59 | **-6.71** | -6.42 |
|                 | 81.28 | -3.69  | -1.99  |
|                 | 63.53 | 3.24   | 7.75   |
|                 | 40.77 | 16.06  | 23.52  |
| Ducks           | 89.63 | **-19.52** | -19.07 |
|                 | 77.41 | -16.71 | -14.21 |
|                 | 48.28 | 2.88   | 10.28  |
| Mall            | 92.95 | **-18.20** | -16.76 |
|                 | 76.28 | -0.50  | -2.13  |
|                 | 66.15 | -4.13  | 3.04   |
|                 | 50.26 | 6.75   | 27.54  |
| Kimono          | 90.13 | **-21.75** | -21.42 |
|                 | 75.53 | -15.98 | -12.63 |
|                 | 59.36 | -17.37 | -15.08 |
|                 | 35.34 | -15.82 | -12.31 |
| PartyScene      | 94.82 | **-5.44** | -4.29 |
|                 | 81.12 | -1.18  | 7.20   |
|                 | 67.56 | 8.83   | 25.96  |
|                 | 49.13 | 26.89  | 57.15  |
| ParkScene       | 86.58 | **-16.88** | -16.45 |
|                 | 73.55 | -15.10 | -13.84 |
|                 | 61.99 | -10.69 | -7.50  |
|                 | 47.18 | -3.94  | 2.89   |
| BasketballPass  | 77.76 | **-16.17** | -13.15 |
|                 | 66.60 | -14.07 | -7.25  |
|                 | 56.41 | -0.53  | 10.48  |
|                 | 42.31 | 5.72   | 28.21  |
| Tennis          | 64.49 | **-23.04** | -21.91 |
|                 | 50.44 | -22.03 | -19.61 |
|                 | 43.12 | -19.90 | -16.59 |
|                 | 32.22 | -18.42 | -13.13 |
| BlowingBubbles  | 87.56 | **-6.33** | -3.29 |
|                 | 73.33 | -2.73  | 3.65   |
|                 | 58.85 | 3.27   | 13.95  |
|                 | 36.92 | 16.81  | 44.03  |
| Terrace         | 78.46 | **-13.27** | -12.49 |
|                 | 66.39 | -11.32 | -9.57  |
|                 | 53.31 | -6.81  | -3.01  |
|                 | 43.50 | -0.50  | 9.03   |
| RaceHorses      | 91.03 | **-16.08** | -15.67 |
|                 | 79.23 | -4.49  | -3.03  |
|                 | 58.14 | 6.69   | 20.64  |
|                 | 36.67 | 23.45  | 63.23  |
| City            | 91.59 | **-10.05** | -8.76 |
|                 | 82.44 | -6.24  | -1.59  |
|                 | 66.81 | 3.27   | 17.75  |
|                 | 39.52 | 28.00  | 59.96  |
| Square          | 80.77 | **-6.45** | -2.97 |
|                 | 71.41 | -5.88  | 1.08   |
|                 | 61.09 | -2.00  | 4.75   |
|                 | 48.72 | 9.80   | 31.68  |

TABLE IV: Bjontegaard rate gains (%) against SHVC depending on the epitome size (expressed as a percentage of the input image size). Negative values are rate savings. (For each image, the best rate saving is indicated in bold.)
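The BD rate gains reported in Table IV follow the Bjontegaard metric [34]. As a reminder of how such numbers are obtained, the sketch below fits the log-rates as cubic polynomials of PSNR, integrates their difference over the common PSNR range, and maps the average difference back to a percentage. It is a standard re-derivation of the metric, not the authors' evaluation script.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta rate (%); negative values mean rate savings."""
    log_ref, log_test = np.log10(rates_ref), np.log10(rates_test)
    # Fit log-rate as a cubic polynomial of PSNR for each codec.
    p_ref = np.polyfit(psnr_ref, log_ref, 3)
    p_test = np.polyfit(psnr_test, log_test, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    # Average log-rate difference, mapped back to a percentage.
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100
```

For instance, a codec that needs 10% less rate than the reference at every PSNR point yields a BD rate of -10%, matching the sign convention of Table IV.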


  • [1] G. J. Sullivan, J. R. Ohm, W. J. Han, and T. Wiegand, “Overview of the high efficiency video coding (HEVC) standard,” IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, pp. 1649–1668, 2012.
  • [2] J. R. Ohm, G. J. Sullivan, H. Schwarz, T. K. Tan, and T. Wiegand, “Comparison of the coding efficiency of video coding standards-including high efficiency video coding (HEVC),” IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 12, pp. 1669–1684, 2012.
  • [3] G. J. Sullivan, J. M. Boyce, Y. Chen, J.-R. Ohm, C. A. Segall, and A. Vetro, “Standardized extensions of High Efficiency Video Coding (HEVC),” IEEE J. Sel. Top. Signal Process., vol. 7, no. 6, pp. 1001–1016, 2013.
  • [4] Y. Ye and P. Andrivon, “The Scalable Extensions of HEVC for Ultra-High-Definition Video Delivery,” IEEE Multimed., vol. 21, no. 3, pp. 58–64, 2014.
  • [5] A. Kessentini, T. Damak, M. A. Ben Ayed, and N. Masmoudi, “Scalable high efficiency video coding (SHEVC) performance evaluation,” in World Congr. Inf. Technol. Comput. Appl., 2015, pp. 1–4.
  • [6] N. Jojic, B. Frey, and A. Kannan, “Epitomic analysis of appearance and shape,” IEEE Int. Conf. Comput. Vis., pp. 34–, 2003.
  • [7] V. Cheung, B. J. Frey, and N. Jojic, “Video epitomes,” Int. J. Comput. Vis., vol. 76, no. 2, pp. 141–152, 2008.
  • [8] H. Wang, Y. Wexler, E. Ofek, and H. Hoppe, “Factoring repeated content within and among images,” ACM Trans. Graph., vol. 27, p. 1, 2008.
  • [9] S. Cherigui, C. Guillemot, D. Thoreau, P. Guillotel, and P. Perez, “Epitome-based image compression using translational sub-pel mapping,” in IEEE Int. Work. Multimed. Signal Process., 2011, pp. 1–6.
  • [10] S. Cherigui, M. Alain, C. Guillemot, D. Thoreau, and P. Guillotel, “Epitome inpainting with in-loop residue coding for image compression,” in IEEE Int. Conf. Image Process., 2014, pp. 5581–5585.
  • [11] X. Li and M. T. Orchard, “New edge-directed interpolation,” IEEE Trans. Image Process., vol. 10, no. 10, pp. 1521–1527, 2001.
  • [12] M. F. Tappen, B. C. Russell, and W. T. Freeman, “Exploiting the sparse derivative prior for super-resolution and image demosaicing,” in IEEE Int. Workshop Statist. Comput. Theories Vis., 2003.
  • [13] R. Fattal, “Image upsampling via imposed edge statistics,” ACM Trans. Graph., vol. 26, no. 3, p. 95, 2007.
  • [14] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael, “Learning low-level vision,” Int. J. Comput. Vis., vol. 40, no. 1, pp. 25–47, 2000.
  • [15] W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl., vol. 22, no. 2, pp. 56–65, 2002.
  • [16] H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in IEEE Conf. Comput. Vis. Pattern Recognit., 2004, pp. 275–282.
  • [17] W. Fan and D. Y. Yeung, “Image hallucination using neighbor embedding over visual primitive manifolds,” in IEEE Conf. Comput. Vis. Pattern Recognit., 2007, pp. 1–7.
  • [18] D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” IEEE Int. Conf. Comput. Vis., pp. 349–356, 2009.
  • [19] J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process., vol. 19, no. 11, pp. 2861–2873, 2010.
  • [20] J. Yang, Z. Lin, and S. Cohen, “Fast Image Super-Resolution Based on In-Place Example Regression,” in IEEE Conf. Comput. Vis. Pattern Recognit., 2013, pp. 1059–1066.
  • [21] M. Bevilacqua, A. Roumy, C. Guillemot, and M.-L. Alberi Morel, “Single-Image Super-Resolution via Linear Mapping of Interpolated Self-Examples,” IEEE Trans. Image Process., vol. 23, no. 12, pp. 5334–5347, 2014.
  • [22] K. Zhang, D. Tao, X. Gao, X. Li, and Z. Xiong, “Learning Multiple Linear Mappings for Efficient Single Image Super-Resolution,” IEEE Trans. Image Process., vol. 24, no. 3, pp. 846–861, 2015.
  • [23] D. Simakov, Y. Caspi, E. Shechtman, and M. Irani, “Summarizing visual data using bidirectional similarity,” in IEEE Conf. Comput. Vis. Pattern Recognit., 2008, pp. 1–8.
  • [24] M. Aharon and M. Elad, “Sparse and Redundant Modeling of Image Content Using an Image-Signature-Dictionary,” SIAM J. Imaging Sci., vol. 1, no. 3, pp. 228–247, 2008.
  • [25] Q. Wang, R. Hu, and Z. Wang, “Intracoding and refresh with compression-oriented video epitomic priors,” IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 5, pp. 714–726, 2012.
  • [26] S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
  • [27] M. Türkan and C. Guillemot, “Image prediction based on neighbor embedding methods,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 1885–1898, 2012.
  • [28] M. Alain, C. Guillemot, D. Thoreau, and P. Guillotel, “Clustering-based Methods for Fast Epitome Generation,” in Eur. Signal Process. Conf., 2014, pp. 211–215.
  • [29] M. Irani and S. Peleg, “Improving resolution by image registration,” CVGIP Graph. Model. Image Process., vol. 53, no. 3, pp. 231–239, 1991.
  • [30] M. Bevilacqua, A. Roumy, C. Guillemot, and M.-L. Alberi Morel, “Low-Complexity Single-Image Super-Resolution based on Nonnegative Neighbor Embedding,” in Br. Mach. Vis. Conf., 2012, pp. 1–10.
  • [31] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in IEEE Conf. Comput. Vis. Pattern Recognit., vol. 2, no. 0, 2005, pp. 60–65.
  • [32] “SHM software, ver. 9.0.” [Online; accessed 07-Jun-2016].
  • [33] “HM software, ver. 15.0.” [Online; accessed 07-Jun-2016].
  • [34] G. Bjontegaard, “Calculation of average PSNR differences between RD-curves,” Doc. VCEG-M33, ITU-T VCEG Meet., 2001.
  • [35] J. L. Bentley, “Multidimensional binary search trees used for associative searching,” Commun. ACM, vol. 18, no. 9, pp. 509–517, 1975.
  • [36] J. H. Freidman, J. L. Bentley, and R. A. Finkel, “An algorithm for finding best matches in logarithmic expected time,” ACM Trans. Math. Softw., vol. 3, no. 3, pp. 209–226, 1977.
  • [37] C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein, “The generalized PatchMatch correspondence algorithm,” Lect. Notes Comput. Sci., vol. 6313, pp. 29–43, 2010.
  • [38] N. Dowson and O. Salvado, “Hashed nonlocal means for rapid image filtering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, pp. 485–499, 2011.
  • [39] A. Cherian, S. Sra, V. Morellas, and N. Papanikolopoulos, “Efficient Nearest Neighbors via Robust Sparse Hashing,” IEEE Trans. Image Process., vol. 23, no. 8, pp. 3646–3655, 2014.
  • [40] W. Zhu, W. Ding, J. Xu, Y. Shi, and B. Yin, “2-D Dictionary Based Video Coding for Screen Contents,” Data Compression Conf., pp. 43–52, 2014.
  • [41] V. Garcia, E. Debreuve, and M. Barlaud, “Fast k nearest neighbor search using GPU,” in Conf. Comput. Vis. Pattern Recognit. Work., 2008, pp. 1–6.
  • [42] X. Li, J. Chen, K. Rapaka, and M. Karczewicz, “Generalized inter-layer residual prediction for scalable extension of HEVC,” in IEEE Int. Conf. Image Process., 2013, pp. 1559–1562.
  • [43] T. Laude, X. Xiu, J. Dong, Y. He, Y. Ye, and J. Ostermann, “Scalable extension of HEVC using enhanced inter-layer prediction,” in IEEE Int. Conf. Image Process., 2014, pp. 3739–3743.
  • [44] A. Aminlou, J. Lainema, K. Ugur, M. M. Hannuksela, and M. Gabbouj, “Differential Coding Using Enhanced Inter-Layer Reference Picture for the Scalable Extension of H.265/HEVC Video Codec,” IEEE Trans. Circuits Syst. Video Technol., vol. 24, no. 11, pp. 1945–1956, 2014.
  • [45] X. HoangVan, J. Ascenso, and F. Pereira, “Improving enhancement layer merge mode for HEVC scalable extension,” in Pict. Coding Symp., 2015, pp. 15–19.
  • [46] ——, “Improving SHVC Performance with a Joint Layer Coding Mode,” in IEEE Int. Conf. Acoust. Speech Signal Process., 2016.
  • [47] K. Kim and Y. Kwon, “Single Image Super-Resolution Using Sparse Regression and Natural Image Prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 6, pp. 1127–1133, 2010.
  • [48] C. Guillemot and O. Le Meur, “Image inpainting: Overview and recent advances,” IEEE Signal Process. Mag., vol. 31, no. 1, pp. 127–144, 2014.
  • [49] I. Drori, D. Cohen-Or, and H. Yeshurun, “Fragment-based image completion,” ACM Trans. Graph., vol. 22, no. 3, p. 303, 2003.
  • [50] O. Le Meur, M. Ebdelli, and C. Guillemot, “Hierarchical super-resolution-based inpainting,” IEEE Trans. Image Process., vol. 22, no. 10, pp. 3779–3790, 2013.
  • [51] M. Turkan, M. Alain, D. Thoreau, P. Guillotel, and C. Guillemot, “Epitomic image factorization via neighbor-embedding,” in IEEE Int. Conf. Image Process., 2015, pp. 4141–4145.
  • [52] I.-F. Shen, “Image Denoising through Locally Linear Embedding,” Int. Conf. Comput. Graph. Imaging Vis., pp. 147–152, 2005.