Interpretable Deep Multimodal Image Super-Resolution

by Iman Marivani et al.

Multimodal image super-resolution (SR) is the reconstruction of a high-resolution image from a low-resolution observation with the aid of another image modality. Whereas existing deep multimodal models do not incorporate domain knowledge about image SR, we present a multimodal deep network design that integrates coupled sparse priors and enables the effective fusion of information from another modality into the reconstruction process. Our method is inspired by a novel iterative algorithm for coupled convolutional sparse coding, yielding a network that is interpretable by design. We apply our model to the super-resolution of near-infrared images guided by RGB images. Experimental results show that our model outperforms state-of-the-art methods.
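The core idea of unfolding an iterative sparse coding algorithm into a network can be illustrated with a minimal sketch. The snippet below is not the authors' architecture; it is a hypothetical ISTA-style unfolding in which the matrices `W`, `U`, `S` and the threshold `theta` stand in for learnable parameters, and the guidance modality is fused into every iteration through its own linear map:

```python
import numpy as np

def soft_threshold(x, theta):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_coupled_sparse_coding(y_lr, y_guide, W, U, S, theta, n_iters=5):
    # Hypothetical unfolded iteration: the low-resolution observation y_lr
    # and the guidance modality y_guide are injected at every layer, so the
    # side information steers the sparse code throughout the reconstruction.
    z = soft_threshold(W @ y_lr + U @ y_guide, theta)   # first layer
    for _ in range(n_iters):
        z = soft_threshold(S @ z + W @ y_lr + U @ y_guide, theta)
    return z

rng = np.random.default_rng(0)
m, n = 16, 32                        # signal / code dimensions (illustrative)
y_lr = rng.standard_normal(m)        # low-resolution observation
y_guide = rng.standard_normal(m)     # guidance modality (e.g. RGB features)
W = 0.1 * rng.standard_normal((n, m))
U = 0.1 * rng.standard_normal((n, m))
S = 0.05 * rng.standard_normal((n, n))
z = unfolded_coupled_sparse_coding(y_lr, y_guide, W, U, S, theta=0.1)
print(z.shape)
```

In a trained model each such iteration becomes a network layer with its own learned weights, which is what makes the resulting architecture interpretable: every layer corresponds to one step of the underlying optimization.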
