Super-resolved Chromatic Mapping of Snapshot Mosaic Image Sensors via a Texture Sensitive Residual Network

09/05/2019 ∙ by Mehrdad Shoeiby, et al. ∙ CSIRO ∙ Deakin University

This paper introduces a novel method to simultaneously super-resolve and colour-predict images acquired by snapshot mosaic sensors. These sensors allow for spectral images to be acquired using low-power, small form factor, solid-state CMOS sensors that can operate at video frame rates without the need for complex optical setups. Despite their desirable traits, their main drawback stems from the fact that the spatial resolution of the imagery acquired by these sensors is low. Moreover, chromatic mapping in snapshot mosaic sensors is not straightforward since the bands delivered by the sensor tend to be narrow and unevenly distributed across the range in which they operate. We tackle this drawback as applied to chromatic mapping by using a residual channel attention network equipped with a texture sensitive block. Our method significantly outperforms the traditional approach of interpolating the image and, afterwards, applying a colour matching function. This work establishes the state of the art in this domain while also making available to the research community a dataset containing 296 registered stereo multi-spectral/RGB image pairs.


1 Introduction

Figure 1:

(a) Original mosaic images, with (b) one actual pixel of the multi-spectral image vs. four pixels of the RGB Bayer image, encompassing 16 vs. 3 channels (wavelengths) per pixel respectively. (c) and (d) demonstrate the data formation for multi-spectral vs. RGB images. Each colour indicates the respective wavelength. The multi-spectral cube is formed by a one-to-one mapping of each wavelength (sub-pixel) to its respective channel (one of 16), zero-padding the other 15 sub-pixels. The RGB image is debayered (interpolated) to form the respective R-G-B channels. While a one-to-one mapping (with zero-padding) leads to a large number of redundant zero pixels, as opposed to debayering for RGB images, it results in better super-resolution by taking into account the spatial offset of each pixel.

Imaging spectroscopy devices can capture an information-rich representation of a scene, often in terms of tens or hundreds of wavelength-indexed bands. Recent advances in imaging spectroscopy have seen the development of real-time snapshot mosaic image sensors, which are compact in size and exhibit frame rates comparable to current trichromatic cameras [44, 4]. Despite the extensive interest in snapshot mosaic sensors and their potential for multi-spectral imaging, they suffer from an inherent trade-off between spatial and spectral resolution. This is a result of their architecture, where the raw resolution of the detector is distributed across the number of wavelength-indexed bands in the spectral image produced at the output.

As a result, a higher spatial resolution (smaller pixel size) reduces the number of wavelength bins that can fit on each pixel of the image sensor. This creates a constraint for applications where smaller and lighter cameras are needed, for instance on a UAV [8]: shrinking the camera for portability yields a device with lower spatial and spectral resolution. Further, these sensors have promising applications ranging from remote sensing [13, 15] to food monitoring [11], and from astronomy [3] to object detection in autonomous vehicles [37, 27].

Furthermore, in many applications it is useful, or in fact crucial, to obtain an RGB image of the same scene. There is a large body of work in computer vision that can be directly leveraged if we can devise a method that delivers a high-quality, high-resolution RGB image from a multi-spectral sensor. For example, in object detection for autonomous vehicles [8, 27], RGB cameras as well as multi-spectral cameras were deployed. The acquired RGB images are usually registered against their multi-spectral counterparts to obtain 3D information of the scene, or to compensate for the lower spectral information of the multi-spectral images. However, the low spatial/spectral resolution of multi-spectral cameras can render this registration challenging.

Traditionally, the RGB equivalent of a scene can be extracted from the spectral image using a colour matching function (CMF) [34], provided that the wavelength range of the camera covers the complete visible spectrum relatively densely. Given the relatively limited spectral resolution of snapshot mosaic sensors and their uneven spacing over the operating spectral range, the problem of obtaining high resolution RGB images from multi-spectral images of low spatial and spectral resolution is an interesting one that needs to be addressed. Therefore, in this paper, we identify this gap in the scientific literature and propose a single unifying method that simultaneously carries out 1) colour prediction and 2) super-resolution (SR) from the multi-spectral space to the RGB space.

The reason for this gap in the literature is that these devices rely on relatively recent technology and have only just come to market, which has led to limited exposure among researchers and a lack of rich, publicly available datasets. Thus, we not only present a method that can simultaneously super-resolve and colour-predict spectral images acquired by snapshot mosaic sensors, but also introduce a novel stereo registered multi-spectral/RGB dataset. Further, our method is quite general in nature, being applicable not only to mosaic snapshot sensor imagery but also to spectral images delivered by other kinds of cameras.

Contributions

  • We propose a method which exploits the mosaic structure of the images acquired by the snapshot sensor directly, as opposed to demosaicing the images and performing SR and colour prediction sequentially.

  • We introduce a novel algorithm, to our knowledge the first in the literature, to carry out SR and colour-prediction simultaneously from mosaic images, establishing state-of-the-art in the field.

  • We introduce a novel dataset containing 296 registered stereo snapshot mosaic-RGB image pairs.

2 Related work

Due to the lack of suitable datasets, colour prediction has not been extensively studied for multi-spectral images in general. Most of the existing work is carried out with simulated multi-spectral images [26, 25, 24, 16], which were simulated from high resolution hyperspectral images. In addition, the focus of these works, while producing RGB images from simulated multi-spectral images, is to mitigate the structural artifacts introduced by different demosaicing methods. They achieve this, with demonstrated good results, by using forms of interpolation (e.g., linear, polynomial, or low-pass filtering in the frequency domain) to insert additional pixels between the observed spatial/spectral ones. None of the works above attempt to predict RGB from spectrally under-sampled data. The camera we use is an off-the-shelf commercial camera with narrow FWHM. In addition, it covers the blue and red spectra only partially (see Figure 3). Furthermore, many demosaicing methods such as [26, 25] depend on a given mosaic pattern as part of their approach. Also, [26, 25, 24, 16] and most demosaicing algorithms rely on one wavelength channel being more densely sampled than the others, using it as a guide image. A pattern in which each of the 16 pixels/wavelengths appears only once (similar to our camera) would be considered by [16] as severely under-sampled and, as demonstrated experimentally, leads to poor results [16]. Note that a multi-spectral camera covering a broader spectral range by using a larger number of narrow wavelength bins would be very bulky and expensive, whereas the camera used here (Ximea model MQ022HG-IM-SM4x4, 470-620) is compact.

Image super-resolution (SR) has been studied extensively for RGB images. While these algorithms are not ideal for multi-spectral images, they can be exploited to design efficient multi-spectral SR methods. Early approaches to SR were often based upon the rationale that images with higher spatial information have a frequency domain response in which the higher frequency components contribute more than in images with lower spatial information. Hence, such methods [41] utilise the shift and aliasing properties of the Fourier transform to obtain a high-resolution representation of the image. Kim et al. [19] further extended the concept in [41] to take into account noise and spatial blurring present in the input image. In a related development, in [6], Tikhonov regularization was exploited to carry out SR in the frequency domain.

Modern single-image methods are often based upon learning; such example-based single-image SR aims at learning the relationship between low resolution (LR) and high resolution (HR) images by training with LR and HR image pairs. Dong et al. [9] present a deep convolutional network for single-image SR which surpassed the state-of-the-art performance at the time, represented by patch-based methods using sparse coding [45] or anchored neighborhood regression [39]. Kim et al. [17] go deeper with a network based on VGG-net [33]; the network in [17] comprises 20 layers so as to exploit the image context across larger image regions. More recently, thanks to recent benchmarks on example-based single image SR [38, 40, 5], several algorithms were introduced for super-resolving images [23, 10, 2, 1, 14]. These algorithms can be directly applied to multi-spectral images; however, as applied to snapshot mosaic sensors, they take into account neither the spectral correlation of different channels nor the spatial offset of each pixel.

Despite the fact that modern multi-spectral cameras are more adversely affected by resolution constraints than regular RGB cameras, there are not many works specifically on CNN-based multi-spectral SR. Example-based learning methods are limited mainly by the lack of multi-spectral SR benchmarking platforms and the difficulty of accessing suitable SR spectral datasets. For example, [22], which focuses on hyperspectral rather than multi-spectral SR, is one of the few example-based spectral SR methods. The only directly related multi-spectral SR methods [21, 30], to the best of our knowledge, were recently introduced through the PIRM2018 spectral SR challenge [32, 31]. Lahoud et al. [21] used an image completion technique followed by 12 convolutional layers to super-resolve images. The second work, by Shi et al. [30], proposes a deep residual network with channel attention (RCAN) to super-resolve images. The former method [21], unlike the latter [30], involves some image pre-processing and is not an end-to-end CNN implementation. The RCAN network exploited in [30] has also exhibited state-of-the-art performance in the context of RGB image SR [46]. Both of these works take into account spectral correlation, but do not consider the spatial offsets of each wavelength channel.

While all of the above exploit demosaiced (debayered or interpolated) images as their LR/HR pairs, a very recent work by Fu et al. [12] exploits mosaic RGB images directly to super-resolve hyperspectral images using a variational method. This work was preceded by the SR work of Zhou et al. [47], who presented a deep residual network for RGB SR that uses mosaic images. They highlighted the fact that demosaicing, which involves some form of interpolation (such as bicubic), introduces artifacts that can deteriorate SR performance.

3 Proposed Method

As mentioned earlier, the method presented here is quite general in nature. For the sake of generality we view the problem at hand as that of super-resolving and chromatically mapping images with missing or unevenly distributed wavelength bands. To this end, we investigate the structure of the RCAN network in [30] as a baseline for the combined task of image SR and colour prediction, and we propose an additional texture network as a means to re-introduce lost information about these bands in the scene. In addition, we note that the concept of using mosaic images directly can be extended to multi-spectral images, while to the best of our knowledge there is no work reported on mosaic multi-spectral SR. As a result of this treatment, we can also capitalise on the on-sensor spatial arrangement of the wavelength-indexed channels in the mosaic images to improve the SR and colour-prediction performance of our proposed network, instead of using demosaiced imagery as input.

Figure 2: Illustration of the proposed network. The residual group depicted in this figure corresponds to the network of [30]. Channel-wise concatenation is denoted by $\oplus$. The input of the network is uninterpolated data (see Figure 1), and the output is the pseudo-colour image.

3.1 Texture Sensitive Residual Channel Attention Network (TSRCAN)

As depicted in Figure 2, our network consists of an RCAN network and a texture network structure. The RCAN network [46, 30] encompasses three main parts: the head, the body, and the tail. The head of the network carries out feature extraction via two convolutional layers. The body is comprised of a number of sequential residual groups. Each residual group contains a number of residual channel attention blocks, each constituting a residual block which incorporates within it a channel attention (CA) mechanism. The tail of the RCAN network is the reconstruction part, which consists of a single convolutional layer that produces an output with the desired dimensions of the RGB images. The RCAN network, on its own, given the multi-spectral/RGB training pairs ($I_{MS}$, $I_{RGB}$), can do a relatively modest job of super-resolving and colour-predicting the input multi-spectral images. Denoting the RCAN mapping by $f_{RCAN}(\cdot)$, the RCAN network can be written as

$Y_{RCAN} = f_{RCAN}(I_{MS})$.   (1)

However, our network expands the RCAN by introducing a texture sensitive network (TN) on the output of the RCAN. Its structure constitutes two convolutional layers and a pooling layer, followed by a residual block. The residual block comprises two stacks of convolution, batch normalization, and ReLU gating. The whole representation is then upsampled through a deconvolution layer with $C_T$ channels. Each filter in the deconvolution layer represents a texture in the image. The deconvolution layer produces an output of size $W \times H \times C_T$, where $W$ and $H$ are the width and height of the desired RGB images ($I_{RGB}$).

The transformation imposed by the TN network is

$T = f_{TN}(Y_{RCAN})$,   (2)

where $T$ has the dimensions $W \times H \times C_T$. $T$ is then concatenated with the output of the RCAN network ($Y_{RCAN}$), producing a tensor of dimension $W \times H \times (3 + C_T)$. The concatenation operation ($\oplus$) is expressed as

$C = Y_{RCAN} \oplus T$.   (3)

Through a convolutional layer, the concatenated tensor $C$, with $3 + C_T$ channels, is reduced to the 3 channels required to produce RGB images. Let us express the operation of this convolutional layer by the transformer $f_{c}(\cdot)$. The relationship between the input image $I_{MS}$ and the output image $\hat{I}_{RGB}$ can then be expressed using the transfer function $f(\cdot)$ as

$\hat{I}_{RGB} = f(I_{MS}) = f_{c}\big(Y_{RCAN} \oplus f_{TN}(Y_{RCAN})\big)$.   (4)
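For concreteness, the following is a minimal PyTorch sketch of the composition expressed in Eqs. (1)-(4). The module and symbol names (rcan, texture_net, fuse, C_T) are our own, the kernel size of the fusing convolution is an assumption, and the rcan and texture_net arguments merely stand for the blocks described in Sections 3.1 and 3.3.

```python
import torch
import torch.nn as nn

class TSRCAN(nn.Module):
    """Sketch of the texture-sensitive RCAN composition of Eqs. (1)-(4).

    `rcan` is any network mapping the 16-channel zero-padded mosaic input to a
    3-channel RGB estimate at the target resolution (Eq. 1); `texture_net` maps
    that estimate to a C_T-channel texture activation of the same spatial size
    (Eq. 2). Both are placeholders for the blocks described in the text.
    """

    def __init__(self, rcan: nn.Module, texture_net: nn.Module, c_t: int = 64):
        super().__init__()
        self.rcan = rcan                # f_RCAN: 16-channel mosaic cube -> 3-channel RGB
        self.texture_net = texture_net  # f_TN: 3-channel RGB -> C_T-channel texture map
        # Convolution reducing the concatenated (3 + C_T) channels back to RGB (Eq. 4);
        # the 3x3 kernel is an assumption.
        self.fuse = nn.Conv2d(3 + c_t, 3, kernel_size=3, padding=1)

    def forward(self, x_ms: torch.Tensor):
        y_rcan = self.rcan(x_ms)             # Eq. (1): intermediate RGB estimate
        t = self.texture_net(y_rcan)         # Eq. (2): texture activations
        cat = torch.cat([y_rcan, t], dim=1)  # Eq. (3): channel-wise concatenation
        y = self.fuse(cat)                   # Eq. (4): final RGB prediction
        return y, y_rcan                     # y_rcan is also supervised (Eq. 6)
```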

3.2 Loss functions

Figure 3: Representation of the visible wavelength range against the 16 wavelength bins (sub-pixels) of the multi-spectral camera, and the ideal CMF.

In the CNN-based SR literature, a simple $\ell_1$ or $\ell_2$ loss function [46, 30, 21] is usually utilised to train models. An $\ell_1$ function is less sensitive to outliers than an $\ell_2$ function, and our dataset, which uses registered stereo images, is prone to outliers due to inherent registration artifacts. Hence, we choose the smooth-$\ell_1$ [35] PyTorch [28] implementation, which is more stable than a vanilla $\ell_1$ loss. For simplicity of notation, we refer to this function as $\ell_s$, which can be expressed as

$\ell_s(\hat{I}, I) = \frac{1}{n}\sum_{i} z_i$,   (5)

where

$z_i = \begin{cases} 0.5\,(\hat{I}_i - I_i)^2 & \text{if } |\hat{I}_i - I_i| < 1,\\ |\hat{I}_i - I_i| - 0.5 & \text{otherwise,} \end{cases}$

and $i$ indexes the $n$ elements of the predicted image $\hat{I}$ and the ground truth $I$.

In addition, we know through experiments that RCAN by itself is capable of learning SR as well as the colour relationship between input and output images. Hence, the output of RCAN, which is the input of the TN, is also expected to be similar to $I_{RGB}$. Therefore, we choose to minimise the cost function

$\mathcal{L} = \ell_s\big(\hat{I}_{RGB}, I_{RGB}\big) + \ell_s\big(Y_{RCAN}, I_{RGB}\big)$.   (6)
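A minimal sketch of this objective, assuming an equal weighting of the two terms and using the PyTorch SmoothL1Loss mentioned above, is as follows (the function name tsrcan_loss is ours):

```python
import torch.nn as nn

# Sketch of the training objective in Eqs. (5)-(6): the smooth-L1 loss is
# applied both to the final TSRCAN output and to the intermediate RCAN output,
# so that both stages are pulled towards the registered RGB ground truth.
# The unit weighting of the two terms is an assumption.
smooth_l1 = nn.SmoothL1Loss()

def tsrcan_loss(y_pred, y_rcan, y_true):
    return smooth_l1(y_pred, y_true) + smooth_l1(y_rcan, y_true)
```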

Figure 3 displays the wavelength range of our multi-spectral images against the wavelength range of visible light. It also shows the relative amplitude of a CMF for the three channels. It is apparent that our multi-spectral images have incomplete blue and red channels, which is one of the main drivers behind this work. We believe that the above loss function, together with our proposed network, can predict the missing channels and hence improve colour-prediction performance.

3.3 Implementation Details

Now we specify the implementation details of our proposed TSRCAN. The RCAN part of our network has five residual groups. Each residual group contains a number of residual channel attention blocks (RCAB). The channel attention, similar to [46], has a 64-channel input and a 64-weighted-channel output with a reduction factor of 16. All our convolutional layers use the same kernel size. Convolutional layers in the shallow feature extraction and the body have 64 filters, except at the tail of the RCAN, where the channels are reduced to 3.

The TN structure constitutes a convolutional layer, followed by batch normalization, ReLU, max-pooling, and a residual block similar to that of RCAN but without a channel attention mechanism. In fact, the texture network is identical to the first few layers of the ResNet-18 structure, where we remove all layers beyond the first residual block. This is followed by a convolutional layer with $C_T$ channels to obtain a tensor of size $W \times H \times C_T$. After concatenating this tensor with $Y_{RCAN}$, the last layer, a convolutional layer with 3 filters, produces the desired output dimensions of $W \times H \times 3$.
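The following is a sketch of how such a texture network could be assembled from the first layers of a torchvision ResNet-18, under the assumption that the trailing deconvolution is a stride-4 transposed convolution restoring the RGB resolution; the exact hyper-parameters are not specified in the text.

```python
import torch.nn as nn
from torchvision.models import resnet18

def make_texture_net(c_t: int = 64) -> nn.Sequential:
    """Sketch of the texture network (TN): the stem and first residual stage of
    ResNet-18, followed by a transposed convolution that restores the spatial
    size and outputs C_T texture channels. Hyper-parameters are assumptions.
    """
    backbone = resnet18()
    trunk = nn.Sequential(
        backbone.conv1,    # 7x7 conv, stride 2
        backbone.bn1,
        backbone.relu,
        backbone.maxpool,  # stride 2
        backbone.layer1,   # first residual stage (64 channels)
    )
    # The trunk downsamples by 4x; a stride-4 transposed convolution brings the
    # activations back to the RGB resolution with C_T channels.
    upsample = nn.ConvTranspose2d(64, c_t, kernel_size=4, stride=4)
    return nn.Sequential(trunk, upsample)
```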

3.4 Zero padded, uninterpolated data

As explained in the introduction, interpolation of mosaic images gives rise to artifacts. For example, CNN-based SR methods such as VDSR [17] and SRCNN [18], which first interpolate the input LR images up to the scale of the HR images, suffer from these artifacts, losing information and decreasing computational efficiency [46]. Hence, inspired by the procedures in [12, 47], where the authors super-resolved hyperspectral [12] and RGB images [47] using RGB Bayer patterns, we choose not to interpolate the multi-spectral image. Instead, we use the mosaic pattern in the manner presented in Figure 1. The mosaic multi-spectral pattern in Figure 1(c) represents one multi-spectral pixel, which constitutes 16 sub-pixels of 16 wavelength channels. To transform the mosaic multi-spectral input into a format that can be consumed by the network, and to avoid interpolation, we take the following approach. We generate an image with 16 channels and the height and width of the mosaic image. In each channel, the value of the respective sub-pixel is kept at its original spatial location and the other 15 sub-pixel locations are set to zero. This process, shown for a single multi-spectral pixel for ease of illustration, is depicted in Figure 1(c-d).
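A minimal NumPy sketch of this zero-padding scheme is given below; the 4x4 repeating pattern follows Figure 1, while the function and variable names are illustrative.

```python
import numpy as np

def mosaic_to_zero_padded_cube(raw: np.ndarray, pattern: int = 4) -> np.ndarray:
    """Sketch of the zero-padded data formation in Fig. 1(c)-(d): a raw mosaic
    image of size H x W (with a `pattern` x `pattern` filter layout) becomes an
    H x W x pattern^2 cube in which every sub-pixel keeps its original spatial
    position in its own wavelength channel and all other positions are zero.
    """
    h, w = raw.shape
    cube = np.zeros((h, w, pattern * pattern), dtype=raw.dtype)
    for dy in range(pattern):
        for dx in range(pattern):
            channel = dy * pattern + dx
            # Copy only the sub-pixels belonging to this wavelength channel.
            cube[dy::pattern, dx::pattern, channel] = raw[dy::pattern, dx::pattern]
    return cube
```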

4 Experiments

4.1 Dataset description

We carry out our experiments using our 296 registered stereo multi-spectral/RGB image pairs, which were collected from a diverse range of environments. No gamma correction was applied to the images at acquisition time. In addition, since the stereo pairs were captured using cameras with different image mosaic sensors, the exposure time was optimised for each camera individually for optimum image quality. One camera is an RGB camera and the other is a multi-spectral camera covering the 470-620 nm portion of the visible wavelength range. The RGB camera has a CMOS image sensor with a 2x2 mosaic (Bayer) pattern delivering the three RGB channels, whereas the mosaic sensor of the multi-spectral camera has a 4x4 pattern delivering 16 wavelength bands. Hence, the resolution of the RGB images in each axis is twice that of the spectral images. Figure 1 illustrates this resolution relationship between the two filter arrays. The original images were interpolated and converted to grayscale for registration using PWC-Net [36], a state-of-the-art optical flow algorithm. The original multi-spectral and the registered RGB images were then cropped to minimise optical flow artifacts on the border of the images, which also accelerated training. We split our 296 image pairs into 250 pairs for training, 25 for validation and 21 for testing. For each image pair, the multi-spectral image with lower spectral and spatial resolution is referred to as $I_{MS}$ and the registered RGB image with higher spatial resolution is referred to as $I_{RGB}$.

4.2 Analysis of the effect of occlusions

We train our CNN using $I_{MS}$ and its registered pair $I_{RGB}$. However, with every registration there are some artifacts, including wrong registration and occlusions [42]. We hypothesise that, if these artifacts are abundant, they could affect the training process. To check whether errors of this nature could affect training, we take the following approach. We calculate the optical flow from the multi-spectral image to the RGB image and vice versa using the PWC-Net [36] algorithm. A straightforward way to detect erroneous flow and occlusions is to calculate the Euclidean distance between the two optical flows and remove the pixels with errors larger than a threshold [42]. Thereby, we removed pixels with errors larger than 3 pixels and created a mask for each image. We multiplied this mask with $I_{MS}$, $I_{RGB}$, and the output of the respective model ($\hat{I}_{RGB}$). After carrying out several experiments, we did not see a strong correlation between removing the occlusions and improvements in the results. Hence, we believe that either occlusions do not have a significant adverse effect on the training process (which can be attributed to good registration of the images) or substantially more advanced occlusion detection techniques are required to remove their effect. To avoid this topic turning into a research subject of its own, we acknowledge but do not address their effect further in this paper.
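A sketch of such a forward-backward consistency check is shown below; the flow layout (dx in channel 0, dy in channel 1) and the nearest-neighbour warping of the backward flow are our own assumptions, while the 3-pixel threshold follows the text.

```python
import numpy as np

def consistency_mask(flow_fwd: np.ndarray, flow_bwd: np.ndarray, thresh: float = 3.0) -> np.ndarray:
    """Sketch of the occlusion/mis-registration check in Sec. 4.2: pixels where
    the forward and (warped) backward optical flows disagree by more than
    `thresh` pixels are masked out. Both flows are H x W x 2 arrays; a simple
    nearest-neighbour warp is used here for brevity.
    """
    h, w, _ = flow_fwd.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Locations that the forward flow maps each pixel to (rounded to the grid).
    xt = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # Forward flow plus the backward flow sampled at the target location should
    # roughly cancel out for consistently registered pixels.
    residual = flow_fwd + flow_bwd[yt, xt]
    error = np.sqrt((residual ** 2).sum(axis=-1))
    return error <= thresh  # True where the registration is trusted
```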

4.3 Settings

Evaluation metrics:

The 21 test images were super-resolved and colour-predicted and then evaluated using the Peak Signal to Noise Ratio (PSNR), the Structural Similarity Index (SSIM), and the Spectral Information Divergence (SID) per channel. As a reference, as is customary in SR evaluations, we also present results for bicubic up-sampled images that were colour-predicted using an ideal CMF [29]. We compare the results from our TSRCAN network to this conventional method and to the baseline RCAN network.
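For reference, a simple per-channel SID computation in the spirit of [7] could look as follows; treating each RGB channel of the predicted and ground-truth images as a normalised distribution is our assumption about how the per-channel scores in Table 1 were obtained.

```python
import numpy as np

def spectral_information_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Sketch of the SID measure [7]: both inputs (e.g. one RGB channel of the
    prediction and of the ground truth) are flattened, normalised to
    probability distributions and compared with a symmetric relative-entropy
    term.
    """
    p = p.astype(np.float64).ravel()
    q = q.astype(np.float64).ravel()
    p = np.clip(p / (p.sum() + eps), eps, None)
    q = np.clip(q / (q.sum() + eps), eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```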

Training settings:

During training, we performed data augmentation on batches of 10 images drawn from our 250 training images, which included random cropping, random rotation by 90, 180, or 270 degrees, and random horizontal flipping. The batch size is fixed at 10. Our model is trained with the ADAM optimizer [20]. The learning rate is halved every 2500 epochs. To implement our models we used PyTorch [28] and, in particular, the SmoothL1Loss function [35] was used as the main building block of our loss functions. To test our algorithms, we select the models with the best performance on the validation dataset and present the test results for those models.
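A sketch of this training schedule, assuming PyTorch's default ADAM moment parameters and a placeholder initial learning rate (the actual value is not recoverable from the text), is given below; model and tsrcan_loss refer to the earlier sketches.

```python
import torch.optim as optim

def train(model, train_loader, num_epochs, initial_lr=1e-4):
    """Sketch of the optimisation schedule in Sec. 4.3: ADAM with its default
    moment parameters and a learning rate halved every 2500 epochs. The
    initial learning rate is a placeholder; `model` is the TSRCAN sketch above
    and `tsrcan_loss` the loss sketch from Sec. 3.2.
    """
    optimizer = optim.Adam(model.parameters(), lr=initial_lr)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=2500, gamma=0.5)
    for epoch in range(num_epochs):
        for x_ms, y_rgb in train_loader:   # batches of 10 augmented crops
            optimizer.zero_grad()
            y_pred, y_rcan = model(x_ms)
            loss = tsrcan_loss(y_pred, y_rcan, y_rgb)
            loss.backward()
            optimizer.step()
        scheduler.step()                   # halves the learning rate every 2500 epochs
```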

4.4 Baseline RCAN Network vs the conventional method

Table 1 presents the numerical results of our ablation studies. The first two rows compare the performance of the RCAN network (a deep learning approach) against the traditional method of bicubic upsampling followed by a CMF transfer. There are a number of different CMFs available, due to the fact that the combination of light wavelengths that produces a given perceived colour is not unique [34], and there is a degree of subjectivity in drawing a given CMF [34]. Hence, for the conventional method only, we normalise all three R-G-B channels of the ground truth image and of the conventional-method results to the range 0 to 255 before calculating the metrics. In spite of this, considerable improvements can be observed with the trained baseline RCAN method compared to the conventional method.

4.5 Effect of using zero padded data

Figure 4: Qualitative comparison of the results of training RCAN with zero-padded uninterpolated data (left) against training RCAN with uninterpolated data without zero-padding [32] (right), the latter denoted as RCAN* in Table 1.

To assess the effect of zero-padded uninterpolated data we need to compare it with a baseline. We chose to train the RCAN network not only with the uninterpolated data, but also with a data format that does not take into account the spatial location of each pixel. In this case, the multi-spectral mosaic pixel in Figure 1(c) is flattened so that its 16 sub-pixel values form the 16 channels of a single pixel; in other words, we remove the zero-padded sub-pixels in Figure 1(d). In fact, this is the approach taken by the algorithms introduced in [32]. In Table 1, the results obtained using this method are presented as RCAN*. Comparing the results of RCAN with RCAN*, it can be seen that zero-padded uninterpolated data yields an improvement across all metrics, although the improvement in PSNR is minimal. However, looking at Figure 4, it is obvious that the quality of the images produced by RCAN is superior to that of RCAN*. We can reaffirm the notion that, while PSNR remains a decent measure of image quality, it does not provide an accurate measure of the perceptual quality of the image [43] compared to SSIM and SID [7].

Method          | PSNR (dB)     | SSIM           | SID Blue            | SID Green           | SID Red
Bicubic + CMF   | 22.375 (3.40) | 0.779 (0.102)  | 7.29e-05 (3.39e-4)  | 5.72e-05 (1.79e-4)  | 7.43e-05 (1.96e-4)
RCAN*           | 24.78 (4.62)  | 0.814 (0.093)  | 5.53e-05 (4.38e-05) | 4.18e-05 (3.34e-05) | 4.38e-05 (3.90e-05)
RCAN (Ours)     | 24.90 (3.50)  | 0.847 (0.0812) | 4.90e-05 (3.64e-05) | 3.63e-05 (3.25e-05) | 4.147e-05 (4.38e-05)
TSRCAN (Ours)   | 26.02 (4.59)  | 0.855 (0.095)  | 5.74e-05 (6.29e-05) | 3.99e-05 (3.83e-05) | 3.53e-05 (2.81e-05)

Table 1: Mean and standard deviation (in parentheses) of PSNR, SSIM, and per-channel SID obtained using the different models. RCAN* denotes RCAN trained with data without zero padding.
Figure 5: Visual comparison of three methods: the conventional method (white-balanced), RCAN, and TSRCAN, along with the ground truth, under diverse lighting environments. For better visualisation, we show a zoomed-in area of the images in the even rows.
Figure 6: Effect of the TN. For an image from the test set (a) shows the error map at the output of RCAN, (b) shows the error map at the output of TSRCAN, (c) depicts the output image from TSRCAN, and (d) illustrates the activation map at the output of TN.

4.6 Effect of Texture Network (TN)

We trained TSRCAN from scratch. A considerable improvement of 1.1dB in PSNR is observed compared to the baseline RCAN network. SSIM also displays some improvement. Regarding the SID metric, there is an improvement in the red channel compared to the baseline RCAN network, but not in the green or blue channels. The qualitative results in Figure 5 also evidence the superiority of our network relative to the baselines. To explain the overall improvement, we look into the activation map at the output of the TN. Figure 6 presents, for an image from the test set, (a) the error map at the output of RCAN, (b) the error map at the output of TSRCAN, (c) the output image of TSRCAN, and (d) the activation map at the output of the TN. Generally, in textures/pixels where RCAN produces a larger error, the activation map is more "active", leading to a smaller error for TSRCAN in the same areas/pixels.

4.7 Effect of network size

Figure 7 displays PSNR and SSIM results for RCAN networks with different numbers of residual groups. There is no clear correlation between the number of residual groups and the performance of the network. For example, as the number of residual groups increases from 5 to 6 and 7, the PSNR improves only negligibly, and the SSIM deteriorates slightly and then bounces back slightly. For 4 and 3 residual groups, the improvement in PSNR is 0.16dB and 0.62dB respectively. In addition, these results do not correlate with the SSIM results, in which the baseline we use (with 5 residual groups) exhibits approximately the mean of all the RCAN versions. Our choice of five residual groups follows the most recent work on multi-spectral SR based on RCAN, which used 5 residual groups and which we chose to investigate as our baseline [30]. These observations highlight the effectiveness of the texture network module in our network (TSRCAN), which exhibited a 1.1dB improvement in PSNR compared to the baseline RCAN.

Figure 7: Effect of the number of residual groups on the performance of the network in terms of PSNR and SSIM on the test data.

4.8 Spectral Information Divergence (SID)

A close observation of the per-channel SID results in Table 1 shows that the blue channel generally exhibits the largest SID error, followed by the red channel, with the green channel having the lowest SID. This holds except for the results obtained using the conventional method, where we white-balanced the channels before calculating the metrics. These results correlate with the fact that the multi-spectral camera misses a larger portion of the blue wavelength range (see Figure 3) than of the red wavelengths. The green channel exhibits the lowest SID because the multi-spectral camera covers the whole green wavelength range, although sparsely. The reddish appearance of the white-balanced images obtained with the conventional method can also be explained by the fact that the camera misses a larger portion of the blue wavelengths than of the red wavelengths, leading to an exaggerated contribution of the red channel compared to the blue channel.

4.9 Effect of poor lighting conditions

In Figure 5, we include scenarios with diverse lighting conditions. Specifically, the first row of the figure shows an indoor image with poor lighting, which implies an incomplete wavelength spectrum. To elaborate, the camera channels depicted in Figure 3 can be thought of as sampling wavelengths that already do not cover the complete visible range. On top of this, poor lighting conditions result in fewer effective samples. For example, in the indoor image in the first row, most of the wavelength spectrum is likely to be emitted by fluorescent lights (which are known for producing a poor, nonuniform, sparse spectrum), together with some leakage of outdoor light, resulting in a poor spectrum. Hence, there are far fewer useful wavelength samples and the network has a harder time predicting the unknown wavelengths, and hence the R-G-B values of the multi-spectral pixels. Our algorithm produces reasonably good results for this difficult scenario, which indicates that the network is learning to predict the unknown wavelengths well. However, the performance can be improved by expanding the dataset with images taken in a controlled laboratory environment to include more examples of poorly lit conditions. Given that the dataset contains 296 image pairs to train and test our networks, expanding this dataset is a perfectly feasible task. In fact, this is future work that we are planning to carry out.

5 Conclusion

In this paper we proposed a novel deep learning approach that addresses the ill-posed problem of producing RGB images from spectrally and spatially under-sampled multi-spectral images, and which significantly outperforms, quantitatively and qualitatively, conventional methods as well as the state-of-the-art RCAN network. Moreover, the method is quite general in nature and can be applied to multi-spectral images of low spatial and spectral resolution with unevenly spaced or missing channels. Our approach uses a texture sensitive block to enable the network to re-introduce information from missing wavelength bands that may still be implicitly available in the texture of the image. In addition, we have introduced a novel dataset consisting of 296 registered stereo multi-spectral/RGB image pairs.

References

  • [1] N. Ahn, B. Kang, and K. Sohn (2018) Image super-resolution via progressive cascading residual network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. Cited by: §2.
  • [2] Y. Bei, A. Damian, S. Hu, S. Menon, N. Ravi, and C. Rudin (2018) New techniques for preserving global structure and denoising with low information loss in single-image super-resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Vol. 4. Cited by: §2.
  • [3] J. F. Bell, D. Wellington, C. Hardgrove, A. Godber, M. S. Rice, J. R. Johnson, and A. Fraeman (2016) Multispectral imaging of mars from the mars science laboratory mastcam instruments: spectral properties and mineralogic implications along the gale crater traverse. In AAS/Division for Planetary Sciences Meeting Abstracts, Vol. 48. Cited by: §1.
  • [4] M. Bigas, E. Cabruja, J. Forest, and J. Salvi (2006) Review of cmos image sensors. Microelectronics journal 37 (5), pp. 433–451. Cited by: §1.
  • [5] Y. Blau, R. Mechrez, R. Timofte, T. Michaeli, and L. Zelnik-Manor (2018) 2018 PIRM challenge on perceptual image super-resolution. In European Conference on Computer Vision Workshops (ECCVW), Cited by: §2.
  • [6] N. Bose, H. Kim, and H. Valenzuela (1993) Recursive total least squares algorithm for image reconstruction from noisy, undersampled frames. Multidimensional Systems and Signal Processing 4 (3), pp. 253–268. Cited by: §2.
  • [7] C. Chang (1999) Spectral information divergence for hyperspectral image analysis. In Geoscience and Remote Sensing Symposium, 1999. IGARSS’99 Proceedings. IEEE 1999 International, Vol. 1, pp. 509–511. Cited by: §4.5.
  • [8] D. Doering, M. Vizzotto, C. Bredemeier, C. da Costa, R. Henriques, E. Pignaton, and C. Pereira (2016) MDE-based development of a multispectral camera for precision agriculture. IFAC-PapersOnLine 49 (30), pp. 24–29. Cited by: §1, §1.
  • [9] C. Dong, C. C. Loy, K. He, and X. Tang (2016) Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence 38 (2), pp. 295–307. Cited by: §2.
  • [10] Y. Fan, H. Shi, J. Yu, D. Liu, W. Han, H. Yu, Z. Wang, X. Wang, and T. S. Huang (2017) Balanced two-stage residual networks for image super-resolution. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1157–1164. Cited by: §2.
  • [11] Y. Feng and D. Sun (2012) Application of hyperspectral imaging in food safety inspection and control: a review. Critical reviews in food science and nutrition 52 (11), pp. 1039–1058. Cited by: §1.
  • [12] Y. Fu, Y. Zheng, H. Huang, I. Sato, and Y. Sato (2018) Hyperspectral image super-resolution with a mosaic rgb image. IEEE Transactions on Image Processing 27 (11), pp. 5539–5552. Cited by: §2, §3.4.
  • [13] A. F. Goetz (2009) Three decades of hyperspectral remote sensing of the earth: a personal view. Remote Sensing of Environment 113, pp. S5–S16. Cited by: §1.
  • [14] M. Haris, G. Shakhnarovich, and N. Ukita (2018) Deep backprojection networks for super-resolution. In Conference on Computer Vision and Pattern Recognition, Cited by: §2.
  • [15] M. Hasan, X. Jia, A. Robles-Kelly, J. Zhou, and M. R. Pickering (2010) Multi-spectral remote sensing image registration via spatial relationship analysis on sift keypoints. In Geoscience and Remote Sensing Symposium (IGARSS), 2010 IEEE International, pp. 1011–1014. Cited by: §1.
  • [16] S. P. Jaiswal, L. Fang, V. Jakhetiya, J. Pang, K. Mueller, and O. C. Au (2016) Adaptive multispectral demosaicking based on frequency-domain analysis of spectral correlation. IEEE Transactions on Image Processing 26 (2), pp. 953–968. Cited by: §2.
  • [17] J. Kim, J. Kwon Lee, and K. Mu Lee (2016) Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1646–1654. Cited by: §2, §3.4.
  • [18] J. Kim, J. Kwon Lee, and K. Mu Lee (2016) Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1637–1645. Cited by: §3.4.
  • [19] S. Kim, N. K. Bose, and H. Valenzuela (1990) Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Transactions on Acoustics, Speech, and Signal Processing 38 (6), pp. 1013–1027. Cited by: §2.
  • [20] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.3.
  • [21] F. Lahoud, R. Zhou, and S. Süsstrunk (2018) Multi-modal spectral image super-resolution. In European Conference on Computer Vision, pp. 35–50. Cited by: §2, §3.2.
  • [22] Y. Li, J. Hu, X. Zhao, W. Xie, and J. Li (2017) Hyperspectral image super-resolution using deep convolutional neural network. Neurocomputing 266, pp. 29–41. Cited by: §2.
  • [23] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee (2017) Enhanced deep residual networks for single image super-resolution. In The IEEE conference on computer vision and pattern recognition (CVPR) workshops, Vol. 1, pp. 4. Cited by: §2.
  • [24] L. Miao, H. Qi, R. Ramanath, and W. E. Snyder (2006) Binary tree-based generic demosaicking algorithm for multispectral filter arrays. IEEE Transactions on Image Processing 15 (11), pp. 3550–3558. Cited by: §2.
  • [25] Y. Monno, M. Tanaka, and M. Okutomi (2015) N-to-srgb mapping for single-sensor multispectral imaging. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 33–40. Cited by: §2.
  • [26] Y. Monno, S. Kikuchi, M. Tanaka, and M. Okutomi (2015) A practical one-shot multispectral imaging system using a single image sensor. IEEE Transactions on Image Processing 24 (10), pp. 3048–3059. Cited by: §2.
  • [27] M. Najafi, S. T. Namin, and L. Petersson (2013) Classification of natural scene multi spectral images using a new enhanced crf. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3704–3711. Cited by: §1, §1.
  • [28] PyTorch v0.4.0. Note: https://pytorch.org/docs/0.4.0/ Last accessed: 2019-03-22. Cited by: §3.2, §4.3.
  • [29] A. Robles-Kelly and C. P. Huynh (2012) Imaging spectroscopy for scene analysis. Springer Science & Business Media. Cited by: §4.3.
  • [30] Z. Shi, C. Chen, Z. Xiong, D. Liu, and F. Wu (2018) Deep residual attention network for spectral image super-resolution. In European Conference on Computer Vision Workshops (ECCVW), Cited by: §2, Figure 2, §3.1, §3.2, §3, §4.7.
  • [31] M. Shoeiby, A. Robles-Kelly, R. Wei, and R. Timofte (2018) PIRM2018 challenge on spectral image super-resolution: dataset and study. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 0–0. Cited by: §2.
  • [32] M. Shoeiby, A. Robles-Kelly, R. Zhou, F. Lahoud, S. Süsstrunk, Z. Xiong, Z. Shi, C. Chen, D. Liu, Z. Zha, F. Wu, K. Wei, T. Zhang, L. Wang, Y. Fu, Z. Zhong, K. Nagasubramanian, A. K. Singh, A. Singh, S. Sarkar, and G. Baskar (2018) PIRM2018 challenge on spectral image super-resolution: methods and results. In European Conference on Computer Vision Workshops (ECCVW), Cited by: §2, Figure 4, §4.5.
  • [33] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §2.
  • [34] T. Smith and J. Guild (1931) The cie colorimetric standards and their use. Transactions of the optical society 33 (3), pp. 73. Cited by: §1, §4.4.
  • [35] SmoothL1 loss function. Note: https://pytorch.org/docs/stable/nn.html#smoothl1loss Last accessed: 2019-03-22. Cited by: §3.2, §4.3.
  • [36] D. Sun, X. Yang, M. Liu, and J. Kautz (2018) Pwc-net: cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8934–8943. Cited by: §4.1, §4.2.
  • [37] K. Takumi, K. Watanabe, Q. Ha, A. Tejero-De-Pablos, Y. Ushiku, and T. Harada (2017) Multispectral object detection for autonomous vehicles. In Proceedings of the on Thematic Workshops of ACM Multimedia 2017, pp. 35–43. Cited by: §1.
  • [38] R. Timofte, E. Agustsson, L. Van Gool, M. Yang, L. Zhang, B. Lim, S. Son, H. Kim, S. Nah, K. M. Lee, et al. (2017) Ntire 2017 challenge on single image super-resolution: methods and results. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, pp. 1110–1121. Cited by: §2.
  • [39] R. Timofte, V. De Smet, and L. Van Gool (2014) A+: adjusted anchored neighborhood regression for fast super-resolution. In Asian Conference on Computer Vision, pp. 111–126. Cited by: §2.
  • [40] R. Timofte, S. Gu, J. Wu, L. Van Gool, L. Zhang, M. Yang, M. Haris, G. Shakhnarovis, N. Ukita, H. Shijia, et al. (2018) NTIRE 2018 challenge on single image super-resolution: methods and results. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, Cited by: §2.
  • [41] R. Tsai (1984) Multiframe image restoration and registration. Advance Computer Visual and Image Processing 1, pp. 317–339. Cited by: §2.
  • [42] Y. Wang, Y. Yang, Z. Yang, L. Zhao, P. Wang, and W. Xu (2018) Occlusion aware unsupervised learning of optical flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4884–4893. Cited by: §4.2.
  • [43] Z. Wang and A. C. Bovik (2009) Mean squared error: love it or leave it? a new look at signal fidelity measures. IEEE signal processing magazine 26 (1), pp. 98–117. Cited by: §4.5.
  • [44] D. Wu and D. Sun (2013) Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: a review—part i: fundamentals. Innovative Food Science & Emerging Technologies 19, pp. 1–14. Cited by: §1.
  • [45] J. Yang, J. Wright, T. S. Huang, and Y. Ma (2010) Image super-resolution via sparse representation. IEEE transactions on image processing 19 (11), pp. 2861–2873. Cited by: §2.
  • [46] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu (2018) Image super-resolution using very deep residual channel attention networks. In European Conference on Computer Vision, pp. 294–310. Cited by: §2, §3.1, §3.2, §3.3, §3.4.
  • [47] R. Zhou, R. Achanta, and S. Süsstrunk (2018) Deep residual network for joint demosaicing and super-resolution. In Color and Imaging Conference, Vol. 2018, pp. 75–80. Cited by: §2, §3.4.