I Introduction
The vast majority of current consumer digital cameras place Color Filter Arrays (CFA) on the light-sensing units, so as to capture only one of the three primary color components at each pixel [1]. Fig.1 shows the most frequently used CFA layout, the Bayer pattern: in each subblock, the diagonal sensing units respond to the green wavelength component and the antidiagonal ones respond to the red and blue wavelength components of light rays. Recovering the missing primary color values to form a standard RGB color image is called demosaicking.
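For concreteness, the Bayer sampling described above can be sketched in a few lines of NumPy. The GRBG phase used below (green on the diagonal of each 2x2 block, red top-right, blue bottom-left) is one common convention, assumed here purely for illustration; real sensors use any of the four Bayer phases.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Subsample an RGB image with a Bayer pattern (GRBG phase assumed:
    green on the diagonal of each 2x2 block, red/blue on the antidiagonal).
    Returns a single-channel CFA image of the same height and width."""
    h, w, _ = rgb.shape
    cfa = np.zeros((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 1]  # G at (even row, even col)
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 1]  # G at (odd row, odd col)
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 0]  # R at (even row, odd col)
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 2]  # B at (odd row, even col)
    return cfa
```

Demosaicking is the inverse problem: from `cfa` alone, estimate the two missing components at every pixel.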
A faithful demosaicking algorithm not only enables obtaining good-quality images at low hardware cost, but also provides a potential solution to image compression. Therefore, demosaicking has been of intense interest in both academic research and industry. As with many ill-posed image recovery problems, what demosaicking research really aims to solve is demosaicking in non-regular regions such as edges and textures. Thus the numerous existing demosaicking methods commonly focus on how to accurately detect the direction of least variation from the CFA-sampled image data. They differ in: 1) the domain in which to conduct finite differencing; 2) the measurement of directional variation by differencing; 3) the strategy of steering interpolation along the local dominant direction; 4) the exploitation of inter- and intra-channel correlation for higher accuracy; 5) the procedure to enhance the quality of a fully interpolated RGB image.
Differencing Domain As a natural indicator of variation, the magnitude of first- or second-order finite differencing can be computed in each CFA-sampled channel (e.g., [2][3], and [4] by Shao and Rehman), or across different channels (e.g., [5][6]). Alternatively, differencing can be conducted in each tentatively estimated color channel (e.g., [7]), or in their color-difference planes (e.g., [8][9]). Many recent works perform differencing in the residual planes, i.e., the differences between the CFA samples and intermediately interpolated channels, and have shown promising results (e.g., [10][11][12][13]).
Measuring Variation To measure directional variation, the Hamilton-Adams (HA) method combines the first- and second-order differencing magnitudes in two channels at the current single pixel [2]. This measure is also adopted in subsequent works such as [6] and [14]. A more robust approach is to accumulate the directional differencing magnitudes in a local neighbourhood (e.g., [15][10][5][13]), or further over a mixture of scales (e.g., [16]).
Edge Directed Interpolation HA compares the horizontal and vertical variations and selects the smoother direction to perform interpolation; in case of a tie, the interpolations in both directions are averaged [2]. Su extended this idea by fusing the horizontal and vertical interpolations using machine-learned weights [17]; Chung-Chan selected the local dominant direction based on the variance of directional color differences [14]; in [18], the local dominant direction is determined by voting among the horizontal, vertical and no-edge hypotheses; Wu et al. relaxed the strict prerequisite for the no-edge judgment to approximate equality [6]. More methods estimate missing values by a weighted summation of the estimates from the north, south, west and east directions, where the weights are obtained from the directional variations (e.g., [15][10][5][13][19]) and spatial distances (e.g., [3][20]). Ref.[21] tests multiple direction hypotheses and chooses the one that shows the highest consistency among all channels. Deciding the edge direction by maximizing a posteriori probability is adopted in [22] and [23].
Exploitation of Cross-Channel Correlation The color channels of a natural image generally show strong correlation, meaning that the color or edge information of one channel is also implied by the other channels. Hence cross-channel priors are extensively explored for demosaicking. For example, HA assumes that the color-difference planes are locally bilinear. As this assumption fails at edges, Ref.[12] proposes compensating inter-channel interpolation with intra-channel interpolation if evidence of nonlinearity is present; Ref.[21] assumes that the three channels have consistent edge directions; regularization is investigated to formulate the inter- and intra-channel correlation (e.g., [24][8]). Ref.[25] assumes, in the intermediate estimation step, that a local region in one channel is a linear transform of the same region in another channel.
Quality Enhancement Due to the existence of cross-channel correlation, many works refine each channel from the other channels' reconstructions, alternately and iteratively (e.g., [26][27][28][17][29][30]). Post-processing techniques such as nonlocal regularization and median filtering have also been widely employed to suppress spurious high-frequency components [31][15][8][23]. Nonlocal regularization and median filtering are essentially series of iterative linear filterings, robust to outliers but computationally heavy [32]. For efficiency, Wu et al. proposed a post-processing technique based on machine-learned regression priors.
Deep Learning Demosaicking Convolutional Neural Networks (CNN) have recently attracted the attention of demosaicking research ([33][34][35][36]). Some of these works achieve the top demosaicking accuracy on benchmarks, yet are faster than many classical methods (e.g., [34]). An implicit cost of CNNs is the nontrivial memory required to store the trained model (e.g., [34][36]).
The accuracy of recent demosaicking methods keeps increasing, and so does the associated computational cost (see Sec.IV). For real-time visualization, sophisticated demosaicking may entail expensive processing hardware and high power supply, contradicting the intention of using CFAs. Among the vast demosaicking literature, the HA method is extremely simple. Seemingly such simplicity should only yield baseline accuracy; however, it performs surprisingly well. Buades et al. tested the HA algorithm and well-known methods on images selected from the McM dataset [21], and the HA algorithm achieved the least Mean Square Error (MSE) [15]. Gharbi et al. compared the HA algorithm with high-impact demosaicking methods of the literature (up to the year 2016) [34]. While HA is the second fastest (only slower than bilinear interpolation), its Peak Signal to Noise Ratio (PSNR) accuracy on the benchmark datasets Kodak [37] and McM is higher than that of a number of these methods.
Although the HA algorithm does not put much effort into edge detection, it is highly effective. Based on a close look at the HA algorithm, we propose a high-quality, fast, edge-sensing image demosaicking scheme that adopts the HA pipeline. In particular, we recover the green channel first, and then the green-red and green-blue color-difference planes. For adaptive edge sensing, we replace HA's selective directional interpolation in the green channel with a blend of the directional estimates, weighted by a logistic functional of the difference between the directional variations. We extend this edge-sensing strategy to the green-red and green-blue colour-difference planes. This extension is not straightforward, since the Bayer CFA samples the green channel twice as densely as the red or blue channel. Our approach is to derive a logistic functional that blends the diagonal and antidiagonal estimates, leveraging the diagonal symmetry of the Bayer pattern. The green channel interpolation scheme then becomes applicable to computing the remaining missing values in the green-red and green-blue difference planes. The proposed demosaicking process is highly parallelizable: although the red and blue channels have to be estimated subsequently to the green channel, the restoration at each pixel in each step is independent of the restoration of other pixels. This makes our method very suitable for Graphics Processing Unit (GPU) and Field Programmable Gate Array (FPGA) implementations, achieving instant image visualization in real applications.
The rest of the paper is organized as follows. Section II analyzes the strengths and weaknesses of the HA algorithm. Section III then formulates a new fast edge-sensing demosaicking technique. Section IV compares the efficiency and accuracy of the proposed method with state-of-the-art methods through extensive experiments. Section V concludes our work.
II Hamilton-Adams Demosaicking
Assuming the mosaicked image has rows and columns, let be the set of all pixel positions. According to the Bayer mosaicking pattern (Fig.1), we define to be the sets of positions where the green, red and blue values are originally available, respectively. Hence their complementary sets , and are the sets of positions where the green, red and blue values are to be recovered, respectively. We use , and to denote the original RGB components of a pixel , and denote its estimated color components by adding a “hat” symbol to the corresponding notation. For example, if a pixel , we write its true RGB values as and its estimated RGB values as .
II-A HA Green Channel Demosaicking
Let . The HA algorithm first computes the horizontal and vertical intensity variations at a pixel, then selects the direction of smaller variation to perform interpolation. In particular, at pixel , the horizontal first- and second-order partial derivatives (denoted by and ), as well as the vertical first- and second-order derivatives (denoted by and ) of , are computed by
(1) 
Note that in Eq.1, pixels that are one unit away from are in , and pixels that are two units away are in the same set as .
The HA algorithm defines the horizontal variation and vertical variation as
(2) 
Let and be the average of the neighbouring green values in the horizontal and vertical directions respectively, i.e.,
(3) 
Finally, is estimated by
(4) 
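Since the bodies of Eqs. 1-4 were lost in extraction, the following sketch assumes the standard HA formulation from the original patent [2]: the first-order term compares the two green neighbours one unit away, the second-order term uses the two same-colour neighbours two units away, and each directional estimate is the green-neighbour average plus a quarter of the second-order correction. This is a reconstruction for illustration, not a copy of the paper's exact equations.

```python
import numpy as np

def ha_green_at(cfa, i, j):
    """HA green estimate at a red/blue CFA site (i, j): neighbours one step
    away hold green samples; two steps away hold the same colour as (i, j)."""
    x = cfa.astype(float)
    # directional variations: first-order (green) + second-order (same colour)
    dh = abs(x[i, j-1] - x[i, j+1]) + abs(2*x[i, j] - x[i, j-2] - x[i, j+2])
    dv = abs(x[i-1, j] - x[i+1, j]) + abs(2*x[i, j] - x[i-2, j] - x[i+2, j])
    # directional estimates: neighbour average plus a second-order correction
    gh = (x[i, j-1] + x[i, j+1]) / 2 + (2*x[i, j] - x[i, j-2] - x[i, j+2]) / 4
    gv = (x[i-1, j] + x[i+1, j]) / 2 + (2*x[i, j] - x[i-2, j] - x[i+2, j]) / 4
    if dh < dv:          # horizontal is smoother
        return gh
    if dv < dh:          # vertical is smoother
        return gv
    return (gh + gv) / 2  # tie: average both directions
```

Note the hard selection: except for exact ties, only one direction ever contributes, which is the behaviour Section III replaces with a smooth blend.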
II-B HA Red and Blue Channel Demosaicking
The HA demosaicking method utilizes the recovered green plane to regulate the blue and red recovery. For the Bayer CFA pattern in particular, this is a typical 2x super-resolution problem, with the available subsamples evenly spaced at every other row and column. Instead of directly enlarging the red plane and blue plane , the HA algorithm enlarges the colour-difference planes and , based on the observation that and are generally smoother than and , respectively. The magnification is simply performed by bilinear interpolation
(5a)
(5b) 
where indexes the pixels in the local neighbourhood with (in Eq.5a) or (in Eq.5b) values available for bilinear interpolation; and are the corresponding bilinear interpolation coefficients, determined by the spatial distance between and .
Finally the missing red and blue values are recovered by
(6) 
and
(7) 
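The bilinear magnification of the colour-difference planes (Eqs. 5-7) can be sketched as a normalized 3x3 convolution, where fixed kernel weights play the role of the distance-based coefficients in Eq. 5. This is an illustrative stand-in under that assumption, not the authors' exact implementation.

```python
import numpy as np

def bilinear_fill(values, mask):
    """Fill missing samples by a normalized 3x3 convolution: each missing
    pixel becomes the weight-averaged value of its known neighbours, with
    weights decreasing with spatial distance (axial 2, diagonal 1)."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    h, w = values.shape
    vp = np.pad(values * mask, 1)          # known samples, zero elsewhere
    mp = np.pad(mask.astype(float), 1)     # indicator of known samples
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            num += k[di, dj] * vp[di:di+h, dj:dj+w]
            den += k[di, dj] * mp[di:di+h, dj:dj+w]
    # keep original samples; elsewhere divide weighted sum by weight mass
    return np.where(mask, values, num / np.maximum(den, 1e-12))

# usage sketch (Eq. 6): r_hat = g_hat + bilinear_fill(r - g_hat, red_mask),
# taking the difference plane at the red CFA sites; the sign convention
# (R - G versus G - R) is immaterial here.
```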
II-C Advantages and Limitations of the HA Algorithm
The high effectiveness of the HA method stems from taking full advantage of the green channel, which is sampled more densely than the other two channels. The green channel is recovered first from the available samples, and is then used to regulate the recovery of the red and blue channels. In other words, HA trusts the sampling frequency more than edge detection. Such a strategy suits today's digital CFA cameras, whose resolution is generally several megapixels. This high sampling frequency means that the intensity at each pixel is highly correlated with its local neighbours; whereas performing edge detection in a non-local neighbourhood, especially when part of the information at each pixel is lost, can be time-consuming.
Nevertheless, HA's smoothness assumption is oversimplified. In real applications, due to the presence of noise, the HA scheme restores the green component at a pixel exclusively from either its horizontal or its vertical neighbours, as the variations in the two directions and are hardly ever equal (see Eq.2 and Eq.4). This is disadvantageous in smooth regions, where more neighbours should be used to smooth out random noise. Moreover, its red and blue channel recovery assumes that the color-difference planes are locally bilinear, which is seriously violated in edge or texture areas and results in “false color” artifacts [31]. In the next section, we propose a more adaptive and flexible edge-sensing demosaicking scheme, which lifts the HA demosaicking accuracy to a level comparable with state-of-the-art methods, while remaining fast.
III Edge-Sensing Demosaicking by a Logistic Functional of the Difference between Directional Variations
III-A Green Channel Demosaicking
The green channel demosaicking process of the HA algorithm, as shown in Eq.4, can be rewritten as
(8) 
where
(9) 
In practice, even in a very flat region, and are rarely equal because of noise. A more practical solution is to relax the strict equality requirement to the approximate equality , which can be expressed by the inequality , where is the allowed noise level. That is,
(10) 
Although defined by Eq.10 is more flexible than by Eq.9, the value of has to be carefully chosen for each image, as a small bias in may lead to the opposite interpolation decision. Desirably, should be a continuous function that smoothly blends the estimates from both directions, so that a small bias does not cause the demosaicking result to vary abruptly. In particular, rather than using the step function defined by Eq.10, we seek a smooth function that meets the following criteria:

, when ;

, when ;

, if ;
Note that, and should have the same form. That is, if there is a function , such that , then should hold. In other words,
(11) 
It can be shown that the logistic function
(12) 
where is a positive real number adjusting the convergence of , fulfills all requirements on . Thus we define
(13) 
It can be verified that
(14) 
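The blending rule of Eqs. 8 and 12-14 can be sketched as follows. The steepness `k` below is a placeholder, not the tuned value used in the paper; the key property is that the direction with smaller variation receives the larger weight, and equal variations give an exact average.

```python
import math

def logistic_weight(dv, dh, k=1.0):
    """Logistic functional of the difference between the directional
    variations (a sketch of Eqs. 12-13). Returns the weight of the
    horizontal estimate: near 1 when the vertical variation dominates."""
    return 1.0 / (1.0 + math.exp(-k * (dv - dh)))

def blend(gh, gv, dv, dh, k=1.0):
    """Blend the horizontal and vertical estimates (sketch of Eq. 8):
    the smoother direction contributes more; a tie yields the average."""
    w = logistic_weight(dv, dh, k)
    return w * gh + (1.0 - w) * gv
```

Unlike HA's hard selection, a small perturbation of the variations here changes the result only slightly, which is exactly the continuity criterion listed above.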
Algorithm 1 summarizes the green channel demosaicking pipeline. To examine the influence of the hyperparameter on demosaicking performance, we run the algorithm on high-quality natural images from the Waterloo Exploration Database [38][39], with varying from to at a step size of . We observe that yields the highest PSNR (averaged over the training images), and hence fix to in this work. Fig.2 plots the function curve of . Note that the high-pass filtering involved in the interpolation scheme does not preserve energy. Consequently, might fall outside the range , hence we clip such values to or , whichever is closer, at the final step of the algorithm.
III-B Red and Blue Channel Demosaicking
We transform and estimation into and interpolation. We treat the two channels in the same fashion, hence this section only articulates the red channel demosaicking; its blue channel counterpart can be derived by simply exchanging the roles of red and blue in the algorithm.
To respect edges and textures, we apply our edge-sensing strategy to the red channel as well. This cannot be done by a straightforward extension from the green to the red channel: due to the Bayer CFA sensor arrangement, at a pixel in the green channel, all of its horizontal and vertical neighbours have original green values available; in contrast, if , at most two of its horizontal and vertical neighbours have green-red difference values available. Our approach is to leverage the diagonal symmetry of the red sensors' positions. We first derive the edge-sensing interpolation scheme for , where , using its diagonal and antidiagonal neighbours. This makes the green-red difference values available at the horizontal and vertical neighbours of each of the remaining pixels. We then infer from and the estimated .
III-B1 Estimating red values at
As shown in Fig.3, the nearest available red values around a pixel are , , , and , located in the diagonal and antidiagonal directions. To obtain edge information, we compute the difference between the diagonal and antidiagonal intensity variations (in the mosaicked image plane ). We then use the logistic function value of this difference to weight the contributions of at , , , and to restore .
In particular, we compute the first and second order diagonal and antidiagonal partial derivatives of at by
(15) 
The local intensity variations in the diagonal and antidiagonal directions are computed as
(16) 
Let be the logistic function value of , i.e.,
(17) 
where the hyperparameter is fixed to , as described in Section III-A for the green channel recovery.
Define
(18) 
which compute the diagonal and antidiagonal means of at . Furthermore, the second-order partial derivatives in the colour-difference plane at are approximated by the central differencing scheme,
(19) 
We infer by fusing the directional estimates using , that is,
(20) 
which recovers by
(21) 
for .
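The diagonal step of Section III-B1 can be sketched as below, under assumptions made explicit in the comments: the variation and logistic forms mirror the green-channel ones rotated by 45 degrees, and the second-order colour-difference correction of Eq. 19 is omitted for brevity, so this is a simplified illustration of Eqs. 15-21, not the full scheme.

```python
import math
import numpy as np

def red_at_blue(cfa, g_hat, i, j, k=1.0):
    """Estimate R at a blue CFA site (i, j) from the four diagonal red
    neighbours, blending the diagonal/antidiagonal means of the green-red
    difference with a logistic weight. `k` is a placeholder steepness;
    the 2nd-order correction of Eq. 19 is deliberately omitted."""
    x = cfa.astype(float)
    g = np.asarray(g_hat, dtype=float)
    # diagonal / antidiagonal variations in the mosaicked plane (Eqs. 15-16)
    dd = abs(x[i-1, j-1] - x[i+1, j+1]) + abs(2*x[i, j] - x[i-2, j-2] - x[i+2, j+2])
    da = abs(x[i-1, j+1] - x[i+1, j-1]) + abs(2*x[i, j] - x[i-2, j+2] - x[i+2, j-2])
    # green-red difference at the four diagonal red neighbours
    dgr = lambda a, b: g[a, b] - x[a, b]
    m_d = (dgr(i-1, j-1) + dgr(i+1, j+1)) / 2.0   # diagonal mean (Eq. 18)
    m_a = (dgr(i-1, j+1) + dgr(i+1, j-1)) / 2.0   # antidiagonal mean (Eq. 18)
    w = 1.0 / (1.0 + math.exp(-k * (da - dd)))    # smoother diagonal -> larger w (Eq. 17)
    d_hat = w * m_d + (1.0 - w) * m_a             # fused difference estimate (cf. Eq. 20)
    return g[i, j] - d_hat                        # R = G - (G - R), cf. Eq. 21
```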
III-B2 Estimating red values at
Once is available, can be estimated from its horizontal and vertical neighbours, as shown in Fig.3. Note that in this step, for each , either or values have already been computed at the four nearest neighbours , , and . For notational simplicity, we denote them uniformly by . In the green-red difference plane at pixel , we compute the horizontal and vertical average values and by
(22) 
Moreover, we approximate the second order partial derivatives of at by the central differencing scheme as
(23) 
where , , and are further approximated by central differencing,
(24)
Subsequently, is given by
(25) 
where is computed by the same formula as in Eq.1, Eq.2 and Eq.13. Finally, is restored by Eq.21 for . Algorithm 2 summarizes the estimation process of the missing red components.
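The second stage can be sketched analogously. As in the previous sketch, the second-order correction (Eq. 23) is omitted and the directional variations are crudely approximated within the difference plane itself, so this illustrates the structure of Eqs. 22 and 25 rather than reproducing them exactly; `k` is again a placeholder steepness.

```python
import math

def red_at_green(g_hat, d_hat, i, j, k=1.0):
    """At a green CFA site (i, j), blend the horizontal and vertical means
    of the green-red difference plane d_hat, which the previous step has
    already filled at the four nearest neighbours."""
    m_h = (d_hat[i][j-1] + d_hat[i][j+1]) / 2.0   # horizontal mean (cf. Eq. 22)
    m_v = (d_hat[i-1][j] + d_hat[i+1][j]) / 2.0   # vertical mean (cf. Eq. 22)
    # directional variations would come from Eqs. 1-2 in the full scheme;
    # here the difference-plane spans serve as a crude stand-in
    dh = abs(d_hat[i][j-1] - d_hat[i][j+1])
    dv = abs(d_hat[i-1][j] - d_hat[i+1][j])
    w = 1.0 / (1.0 + math.exp(-k * (dv - dh)))    # logistic weight (Eq. 13)
    d = w * m_h + (1.0 - w) * m_v                 # fused difference (cf. Eq. 25)
    return g_hat[i][j] - d                        # R = G - (G - R), Eq. 21
```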
At image boundaries where pixels required for central differencing or averaging are unavailable, we simply restore the missing colour components by linear interpolation or nearestneighbour interpolation.
IV Experimental Results
We experimentally evaluate the proposed algorithm, which we name Logistic Edge-Sensing Demosaicking (LED). To examine the pure effectiveness of steering demosaicking by a logistic functional of the difference between directional variations, we do not enhance the restoration quality with any post-processing or refinement technique.
Datasets Following the literature convention, we first test LED on the traditional benchmarks Kodak [37] and McM [8]. The Kodak dataset contains 24 images of size and the McM dataset contains 18 images of size . As each of these test images has less than one megapixel, whereas current consumer cameras typically offer several megapixels, experiments on the traditional McM and Kodak datasets may not fully reflect real applications. To examine the potential performance of LED in practice, we further test it on the (about megapixels) version of the Kodak dataset [40], whose resolution is comparable to the megapixel resolution of the iPhone 6. We term this modern-resolution Kodak dataset MR Kodak.
Comparison and Metrics Besides comparing to the HA algorithm, we extensively compare LED with 28 existing demosaicking methods by running their publicly available source codes. The comparison is conducted in terms of demosaicking accuracy and efficiency. We measure the accuracy of the demosaicked images by Peak Signal to Noise Ratio (PSNR), Structural SIMilarity (SSIM) and S-CIELAB [41]. For competing methods with MATLAB source code, we measure efficiency by timing the demosaicking process proper, which outputs the final RGB image from the Bayer-mosaicked input, on the McM images. More specifically, if the demosaicking process is implemented as a single function in the source code, we record its running time with the MATLAB timing function timeit; if the demosaicking process consists of multiple functions, we use the MATLAB timing functions tic and toc. For competing methods with C source code, we use the time-library function clock.
Due to the “warm-up” factor, demosaicking generally takes longer on the first test image than on the others, hence we report the median running time over the 2nd to the 18th McM images as the final time measurement. All experiments are conducted on a 2.8GHz Intel i7-4900MQ CPU with 8GB RAM, unless otherwise specified.
Parameter Settings The only hyperparameter of our method is the logistic-function steepness coefficient in Eq.13, which is fixed to (see Section III-A) in all experiments. Many existing works shave off image boundaries of various widths before measuring demosaicking accuracy (for example, pixels in Ref.[42] and pixels in Ref.[8]); we also shave off pixels-wide image boundaries. If the source code of a competing method does not specify the shave-off boundary width, we also set it to . For methods that simultaneously address demosaicking and denoising, we set the additional noise level to zero in their source codes. We leave other parameters (including the shave-off boundary size) at their default values in the original code, since these presumably yield the optimal performance. Nevertheless, we suggest that future research take image boundaries into account, as the demosaicked image should not shrink in real applications.
IV-A Numerical Evaluation on Low-Resolution Images
Table I and Table II present the demosaicking accuracy of the proposed LED algorithm, quantitatively measured by per-channel PSNR, whole-image PSNR (a.k.a. cPSNR), SSIM and S-CIELAB, on each individual image from Kodak and McM, respectively. Table III compares the accuracy and computation time of LED against the traditional HA method under the same implementation settings. Significantly, LED improves upon HA by dB and dB in PSNR, and in SSIM, and in S-CIELAB on Kodak and McM respectively, at an extra cost of merely seconds for pixels.
method      | Kodak                   | McM                     | time
            | cPSNR  SSIM   SCIELAB   | cPSNR  SSIM   SCIELAB   | (sec)
HA          |        0.971  1.246     | 33.49  0.958  1.622     | 0.024
LED (ours)  | 38.31  0.982  0.979     | 35.23  0.968  1.308     | 0.062
We compare LED with previous demosaicking methods by running their available source codes, mostly found via [43] and [44]. (The deep convolutional neural network based method proposed in [34] has MATLAB code available online; however, as deep learning methods trade memory for computation time and accuracy, they are in a very different vein from our method, and hence Ref.[34] is not included in this experiment.) The compared methods are: Alternating Projection (AP) [26] (using the implementation by Y. M. Lu in [29]); High Quality Linear Interpolation (HQLI) [7] (using the MATLAB built-in function demosaic); Primary Consistent Soft Decision (PCSD) [21]; Adaptive Homogeneity-Directed (AHD) [28]; Directional Linear Minimum Mean Square-Error Estimation (DLMMSE) [45]; Weighted Edge and Color Difference (WECD) [17]; Total Least Square (TLS) [46]; Directional Filtering and A Posteriori Decision (DFAPD) [22]; Wavelet Analysis of Luminance Component (WALC) [47]; Heterogeneity-Projection Hard-Decision (HPHD) [48]; Regularization Approach (RA) [24]; Self-Similarity Driven (SSD) [15]; Contour Stencils (CS) [49]; One Step Alternating Projections (OSAP) [29]; Local Directional Interpolation and Nonlocal Adaptive Thresholding (LDINAT) [8]; Directional Filtering and Weighting (DFW) [50]; Residual Interpolation (RI) [10]; Multiscale Gradient (MSG) [5]; Least Square Luma-Chroma Demultiplexing and Noise Estimation (LSLCDNE) [42]; Flexible Image Processing Framework (FlexISP) [51] (using the implementation by Tan et al. in [52]; the original implementation computes PSNR for each channel first, then averages them as cPSNR, whereas we modified this computation to use the mean squared error over all pixels and all channels); Inter-color Correlation (ICC) [12]; Adaptive Residual Interpolation (ARI) [11]; Multidirectional Weighted Interpolation (MDWI) [53] (using the implementation found on GitHub); Directional Difference Regression (DDR) [13]; Minimized-Laplacian Residual Interpolation (MLRI) [25]; Sequential Energy Minimization (SEM) [54]; and Alternating Direction Method of Multipliers (ADMM) [52] (with the same cPSNR modification as for FlexISP). Web addresses of these source codes are provided along with the bibliography of this work.
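The two cPSNR conventions mentioned for FlexISP differ in where the averaging happens; the distinction can be made concrete as follows (a small illustrative sketch, not taken from any of the compared codebases).

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """PSNR in dB from the mean squared error over all supplied values."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def cpsnr(ref_rgb, est_rgb, peak=255.0):
    """cPSNR as used in this paper's comparison: the MSE is pooled over
    all pixels AND all channels before conversion to dB, rather than
    averaging the three per-channel PSNR values."""
    return psnr(np.asarray(ref_rgb).ravel(), np.asarray(est_rgb).ravel(), peak)
```

Because the dB conversion is concave in the MSE, the pooled-MSE cPSNR is never larger than the average of per-channel PSNRs, so the two conventions are not interchangeable when channel errors differ.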
For clarity of comparison, Table IV shows only the accuracy (measured by cPSNR) and the efficiency (measured by running time) of each competing method on the McM dataset, which is more challenging than the Kodak dataset [19]. It is observed that:
- None of the competing methods outperforms the proposed method in both cPSNR and computation cost, whereas the proposed LED clearly outperforms out of methods in both cPSNR and running time.
- OSAP and LSLCDNE have running times similar to the proposed method, but their cPSNRs are about dB and dB lower, respectively. SSD and FlexISP have cPSNRs similar to (slightly higher than) the proposed method, but they are about and times slower.
- The proposed LED has lower cPSNR than RI, ICC, MLRI, CS, DDR, MDWI, ARI and LDINAT, but is about , , , , , , and times faster than them, respectively.
method  time (sec)  cPSNR  shave width 

HQLI [7]  0.002  34.34  4 
HA [2]  0.024  33.49  4 
OSAP [29]  0.04  33.26  10 
LSLCDNE [42]  0.05  34.08  11 
WECD [17]  0.13  32.19  4 
HPHD [48]  0.14  34.75  10 
PCSD [21]  0.14  34.93  3 
AP [26]  0.29  33.27  10 
RA [24]  0.30  34.29  4 
DFAPD [22]  0.45  34.28  4 
DFW [50]  0.52  34.58  6 
WALC [47]  0.65  33.85  4 
RI [10]  0.75  36.50  10 
AHD [28]  0.97  33.52  4 
ICC [12]  1.03  36.79  10 
MLRI [25]  1.65  36.91  10 
CS [49]  1.68  35.59  4 
DLMMSE [45]  1.88  34.40  20 
SSD [15]  4.32  35.38  4 
MSG [5]  7.29  34.72  10 
DDR [13]  7.32  37.17  4 
MDWI [53]  17.09  36.16  10 
ARI [11]  25.23  37.49  10 
TLS [46]  151.08  30.67  4 
FlexISP [51]  158.06  35.45  4 
LDINAT [8]  264.10  36.18  20 
SEM [54]  568.69  34.19  7 
ADMM [52]  587.97  32.25  4 
LED (ours)  0.06  35.23  4 
IV-B Visual Performance on Low-Resolution Images
Fig.4 to Fig.6 show examples where LED compares visually favorably with state-of-the-art methods. Fig.4 shows a local region taken from the 1st image of McM. In this region, the demosaicking results of ICC and MLRI suffer from noticeable “false color” artifacts, whereas the DDR and LED recoveries look more natural. Fig.5 shows another example, taken from the 9th McM image. MLRI incompletely recovers the black lines in the example region; LED and ICC both slightly blur the black lines into the red background, but their recoveries are visually more acceptable. In the example shown in Fig.6, DDR produces obvious “smearing” artifacts, while the demosaicking results of ICC, MLRI and LED are all visually close to the original image.
IV-C Evaluation on Modern-Resolution Images
The resolution of the MR Kodak dataset is similar to that of today's popular daily-use cameras, which call for fast demosaicking. Table V compares the proposed LED with the faster algorithms HA and HQLI, as well as with the more sophisticated algorithms RI, ICC, MLRI, CS and DDR, which achieve higher cPSNR than LED on the low-resolution McM dataset. In this experiment, we exclude methods that are more than times slower than LED, since they target different application scenarios. On test images of modern resolution, LED and DDR achieve the highest average SSIM value. LED outperforms CS in cPSNR, SSIM and running time. The cPSNR of LED is still notably (more than 1 dB) higher than those of HA and HQLI, while it is comparable to the top-performing state-of-the-art methods that are tens of times slower: LED takes only a few seconds, whereas RI, ICC and MLRI take tens of seconds and DDR takes hundreds of seconds.
metrics  HQLI  HA  RI  ICC  CS  MLRI  DDR  LED 

time(sec)  0.034  0.820  20.96  30.87  33.71  50.21  192.90  2.86 
cPSNR  41.23  39.90  42.50  42.55  41.60  42.74  42.79  42.28 
SSIM  0.975  0.967  0.974  0.978  0.971  0.975  0.980  0.980 
V Conclusion
We have proposed a very low-cost edge-sensing strategy, termed LED, for color image demosaicking. It guides the green channel interpolation and the color-difference plane interpolation by a logistic functional of the difference between directional variations. Among the 29 demosaicking methods tested by running code, our method is one of the fastest. Without using any refinement or post-processing technique, LED achieves accuracy higher than many recently proposed methods on low-resolution images, and comparable to the top performers on images of currently popular resolution. Our extensive experiments suggest that accurate non-local edge detection for demosaicking is generally difficult and time-consuming; leveraging the originally captured values of the nearest neighbours is much more efficient.
Our algorithm is highly parallelizable, hence its GPU or FPGA implementation can easily restore very high-resolution images in real time. This is desirable for the digital camera industry, as camera resolution is increasing rapidly. Furthermore, in demosaicking applications where speed may be traded for accuracy, the proposed method provides a quick, high-quality initialization, which is generally needed by sophisticated iterative demosaicking algorithms.
References
 [1] R. Szeliski, Computer vision: algorithms and applications. Springer Science & Business Media, 2010.
 [2] J. F. Hamilton Jr. and J. E. Adams, “Adaptive color plane interpolation in single sensor color electronic camera,” U.S. Patent 5,629,734, 1997.
 [3] C. Zhang, Y. Li, J. Wang, and P. Hao, “Universal demosaicking of color filter arrays,” IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5173–5186, 2016.
 [4] L. Shao and A. U. Rehman, “Image demosaicing using content and colourcorrelation analysis,” Signal Processing, vol. 103, pp. 84–91, 2014.
 [5] I. Pekkucuksen and Y. Altunbasak, “Multiscale gradientsbased color filter array interpolation,” IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 157–165, 2013, [Online Code: https://sites.google.com/site/ibrahimepekkucuksen/publications; accessed 27Feb2018].
 [6] J. Wu, M. Anisetti, W. Wu, E. Damiani, and G. Jeon, “Bayer demosaicking with polynomial interpolation,” IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5369–5382, 2016.
 [7] H. S. Malvar, L. He, and R. Cutler, “Highquality linear interpolation for demosaicing of bayerpatterned color images,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004, pp. iii–485–8, [Matlab buildin function demosaic()].
 [8] L. Zhang, X. Wu, A. Buades, and X. Li, “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding,” Journal of Electronic Imaging, vol. 20, no. 2, p. 023016, 2011, [Online Code: http://www4.comp.polyu.edu.hk/~cslzhang/CDM_Dataset.htm; accessed 27-Feb-2018].
 [9] I. Pekkucuksen and Y. Altunbasak, “Gradient based threshold free color filter array interpolation,” in Proc. IEEE International Conference on Image Processing, 2010, pp. 137–140.
 [10] D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Residual interpolation for color image demosaicking,” in Proc. IEEE International Conference on Image Processing, 2013, pp. 2304–2308, [Online Code: http://www.ok.sc.e.titech.ac.jp/res/DM/RI.html; accessed 27-Feb-2018].
 [11] Y. Monno, D. Kiku, M. Tanaka, and M. Okutomi, “Adaptive residual interpolation for color image demosaicking,” in Proc. IEEE International Conference on Image Processing, 2015, pp. 3861–3865, [Online Code: http://www.ok.sc.e.titech.ac.jp/res/DM/RI.html; accessed 27-Feb-2018].
 [12] S. P. Jaiswal, O. C. Au, V. Jakhetiya, Y. Yuan, and H. Yang, “Exploitation of inter-color correlation for color image demosaicking,” in Proc. IEEE International Conference on Image Processing, 2014, pp. 1812–1816, [Online Code: http://spjaiswal.student.ust.hk/color_demosaicing.html; accessed 27-Feb-2018].
 [13] J. Wu, R. Timofte, and L. Van Gool, “Demosaicing based on directional difference regression and efficient regression priors,” IEEE Transactions on Image Processing, vol. 25, no. 8, pp. 3862–3874, 2016, [Online Code: http://www.vision.ee.ethz.ch/~timofter/; accessed 27-Feb-2018].
 [14] K.-H. Chung and Y.-H. Chan, “Color demosaicing using variance of color differences,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 2944–2955, 2006.
 [15] A. Buades, B. Coll, J.-M. Morel, and C. Sbert, “Self-similarity driven color demosaicking,” IEEE Transactions on Image Processing, vol. 18, no. 6, pp. 1192–1202, 2009, [Online Code: http://www.ipol.im/pub/art/2011/bcmsssdd/; accessed 27-Feb-2018].
 [16] S. Tajima, R. Funatsu, and Y. Nishida, “Chromatic interpolation based on anisotropy-scale-mixture statistics,” Signal Processing, vol. 97, pp. 262–268, 2014.
 [17] C. Y. Su, “Highly effective iterative demosaicing using weighted-edge and color-difference interpolations,” IEEE Transactions on Consumer Electronics, vol. 52, no. 2, pp. 639–645, 2006, [Online Code: http://web.ntnu.edu.tw/~scy/heid_demo.html; accessed 27-Feb-2018].
 [18] X. Chen, G. Jeon, and J. Jeong, “Voting-based directional interpolation method and its application to still color image demosaicking,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 2, pp. 255–262, 2014.
 [19] Y. Kim and J. Jeong, “Four-direction residual interpolation for demosaicking,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 5, pp. 881–890, 2016.
 [20] J. Wang, J. Wu, Z. Wu, G. Jeon, and J. Jeong, “Bilateral filtering and directional differentiation for Bayer demosaicking,” IEEE Sensors Journal, vol. 17, no. 3, pp. 726–734, 2017.
 [21] X. Wu and N. Zhang, “Primary-consistent soft-decision color demosaicking for digital cameras (patent pending),” IEEE Transactions on Image Processing, vol. 13, no. 9, pp. 1263–1274, 2004, [Online Code: www.ece.mcmaster.ca/%7Exwu/executables/pcsd.rar; accessed 27-Feb-2018].
 [22] D. Menon, S. Andriani, and G. Calvagno, “Demosaicing with directional filtering and a posteriori decision,” IEEE Transactions on Image Processing, vol. 16, no. 1, pp. 132–141, 2007, [Online Code: http://www.danielemenon.netsons.org/pub/dfapd/dfapd.php; accessed 27-Feb-2018].
 [23] J. Duran and A. Buades, “Self-similarity and spectral correlation adaptive algorithm for color demosaicking,” IEEE Transactions on Image Processing, vol. 23, no. 9, pp. 4031–4040, 2014.
 [24] D. Menon and G. Calvagno, “A regularization approach to demosaicking,” in Proc. of IS&T/SPIE Visual Communications and Image Processing, 2008, p. 68221L, [Online Code: http://www.dei.unipd.it/~menond/pub/rad/rad.html; accessed 27-Feb-2018].
 [25] D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Beyond color difference: residual interpolation for color image demosaicking,” IEEE Transactions on Image Processing, vol. 25, no. 3, pp. 1288–1300, 2016, [Online Code: http://www.ok.sc.e.titech.ac.jp/res/DM/RI.html; accessed 27-Feb-2018].
 [26] B. K. Gunturk, Y. Altunbasak, and R. M. Mersereau, “Color plane interpolation using alternating projections,” IEEE Transactions on Image Processing, vol. 11, no. 9, pp. 997–1013, 2002, [Online Code: lu.seas.harvard.edu/software/demosaickingmatlabcodeimplementingfastdemosaickingalgorithmdescribedfollowing; accessed 27-Feb-2018].
 [27] X. Li, “Demosaicing by successive approximation,” IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 370–379, 2005.
 [28] K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 360–369, 2005, [Online Code: http://issl.udayton.edu/index.php/software/; accessed 27-Feb-2018].
 [29] Y. M. Lu, M. Karzand, and M. Vetterli, “Demosaicking by alternating projections: theory and fast one-step implementation,” IEEE Transactions on Image Processing, vol. 19, no. 8, pp. 2085–2098, 2010, [Online Code: lu.seas.harvard.edu/software/demosaickingmatlabcodeimplementingfastdemosaickingalgorithmdescribedfollowing; accessed 27-Feb-2018].
 [30] W. Ye and K.-K. Ma, “Color image demosaicing using iterative residual interpolation,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5879–5891, 2015.
 [31] W. Lu and Y.-P. Tan, “Color filter array demosaicking: new method and performance measures,” IEEE Transactions on Image Processing, vol. 12, no. 10, pp. 1194–1210, 2003.
 [32] Y. Niu, A. Dick, and M. Brooks, “Locally oriented optical flow computation,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1573–1586, 2012.
 [33] Y.-Q. Wang, “A multilayer neural network for image demosaicking,” in Proc. IEEE International Conference on Image Processing, 2014, pp. 1852–1856.
 [34] M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, “Deep joint demosaicking and denoising,” ACM Transactions on Graphics, vol. 35, no. 6, p. 191, 2016.
 [35] R. Tan, K. Zhang, W. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in 2017 IEEE International Conference on Multimedia and Expo (ICME), 2017, pp. 793–798.
 [36] D. S. Tan, W.-Y. Chen, and K.-L. Hua, “DeepDemosaicking: Adaptive image demosaicking via multiple deep fully convolutional networks,” IEEE Transactions on Image Processing, vol. 27, no. 5, pp. 2408–2419, 2018.
 [37] “Low resolution Kodak image dataset,” http://r0k.us/graphics/kodak/, [Online; accessed 11-Jan-2018].
 [38] K. Ma, Z. Duanmu, Q. Wu, Z. Wang, H. Yong, H. Li, and L. Zhang, “Waterloo Exploration Database: New challenges for image quality assessment models,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 1004–1016, 2017.
 [39] “Waterloo Exploration Dataset,” https://ece.uwaterloo.ca/~k29ma/exploration/, [Online; accessed 1-Jun-2018].
 [40] “Modern resolution Kodak image dataset,” http://www.math.purdue.edu/~lucier/PHOTO_CD/BMP_IMAGES/, [Online; accessed 11-Jan-2018].
 [41] X. Zhang and B. A. Wandell, “A spatial extension of CIELAB for digital color-image reproduction,” Journal of the Society for Information Display, vol. 5, no. 1, pp. 61–63, 1997.
 [42] G. Jeon and E. Dubois, “Demosaicking of noisy Bayer-sampled color images with least-squares luma-chroma demultiplexing and noise level estimation,” IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 146–156, 2013, [Online Code: http://www.site.uottawa.ca/~edubois/lslcd_ne/; accessed 27-Feb-2018].
 [43] “D. Khashabi’s list of online demosaicking codes,” http://www.cis.upenn.edu/~danielkh/files/2013_2014_demosaicing/demosaicing.html, [Online; accessed 11-Jan-2018].
 [44] “Tokyo Institute of Technology list of online demosaicking codes,” http://www.ok.sc.e.titech.ac.jp/res/DM/RI.html, [Online; accessed 11-Jan-2018].
 [45] L. Zhang and X. Wu, “Color demosaicking via directional linear minimum mean square-error estimation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2167–2178, 2005, [Online Code: http://www4.comp.polyu.edu.hk/~cslzhang/papers.htm; accessed 27-Feb-2018].
 [46] K. Hirakawa and T. W. Parks, “Joint demosaicing and denoising,” IEEE Transactions on Image Processing, vol. 15, no. 8, pp. 2146–2157, 2006, [Online Code: http://issl.udayton.edu/index.php/software/; accessed 27-Feb-2018].
 [47] D. Menon and G. Calvagno, “Demosaicing based on wavelet analysis of the luminance component,” in Proc. IEEE Int. Conf. Image Processing, vol. 2, Sep. 2007, pp. 181–184, [Online Code: http://www.danielemenon.netsons.org/pub/dbwalc/dbwalc.php; accessed 27-Feb-2018].
 [48] C.-Y. Tsai and K.-T. Song, “Heterogeneity-projection hard-decision color interpolation using spectral-spatial correlation,” IEEE Transactions on Image Processing, vol. 16, no. 1, pp. 78–91, 2007, [Online Code: http://isci.cn.nctu.edu.tw/video/Demo/; accessed 27-Feb-2018].
 [49] P. Getreuer, “Image demosaicking with contour stencils,” Image Processing On Line, vol. 2, pp. 22–34, 2012, [Online Code: http://www.ipol.im/pub/art/2012/gdwcs/; accessed 27-Feb-2018].
 [50] D. Zhou, X. Shen, and W. Dong, “Colour demosaicking with directional filtering and weighting,” IET Image Processing, vol. 6, no. 8, pp. 1084–1092, 2012, [Online Code: https://www.mathworks.com/matlabcentral/fileexchange/39843colourdemosaickingwithdirectionalfilteringandweighting?s_tid=gn_loc_drop; accessed 27-Feb-2018].
 [51] F. Heide, M. Steinberger, Y. T. Tsai, M. Rouf, D. Pająk, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian, J. Kautz, and K. Pulli, “FlexISP: a flexible camera image processing framework,” ACM Transactions on Graphics, vol. 33, no. 6, p. 231, 2014, [Online Code: implemented by https://github.com/TomHeaven/JointDemosaicandDenoisingwithADMM; accessed 27-Feb-2018].
 [52] H. Tan, X. Zeng, S. Lai, Y. Liu, and M. Zhang, “Joint demosaicing and denoising of noisy Bayer images with ADMM,” in Proc. IEEE International Conference on Image Processing, 2017, pp. 2951–2955, [Online Code: https://github.com/TomHeaven/JointDemosaicandDenoisingwithADMM; accessed 27-Feb-2018].
 [53] X. Chen, L. He, G. Jeon, and J. Jeong, “Multidirectional weighted interpolation and refinement method for Bayer pattern CFA demosaicking,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 8, pp. 1271–1282, 2015, [Online Code: https://github.com/skye17/MDWI_demosaicking; accessed 27-Feb-2018].
 [54] T. Klatzer, K. Hammernik, P. Knöbelreiter, and T. Pock, “Joint demosaicing and denoising based on sequential energy minimization,” in Proc. IEEE International Conference on Computational Photography, 2016, [Online Code: https://github.com/VLOGroup/jointdemosaicingdenoisingsem; accessed 27-Feb-2018].