Robust image reconstruction from multi-view measurements
We propose a novel method to accurately reconstruct a set of images representing a single scene from few linear multi-view measurements. Each observed image is modeled as the sum of a background image and a foreground image. The background image is common to all observed images but undergoes geometric transformations, as the scene is observed from different viewpoints. In this paper, we assume that these geometric transformations are represented by a few parameters, e.g., translations, rotations, or affine transformations. The foreground images differ from one observed image to another and are used to model possible occlusions of the scene. The proposed reconstruction algorithm jointly estimates the images and the transformation parameters from the available multi-view measurements. The ideal solution of this multi-view imaging problem minimizes a non-convex functional, and the reconstruction technique is an alternating descent method built to minimize this functional. The convergence of the proposed algorithm is studied, and conditions under which the sequence of estimated images and parameters converges to a critical point of the non-convex functional are provided. Finally, the efficiency of the algorithm is demonstrated using numerical simulations for applications such as compressed sensing and super-resolution.
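To make the alternating-descent idea concrete, here is a minimal sketch assuming a simplified setting: measurements y_i = A_i vec(T_{t_i} b + f_i) + noise, with the geometric transformations restricted to integer circular translations and the foregrounds encouraged to be sparse via soft-thresholding. All function and variable names (reconstruct, shift_image, step sizes, candidate shifts) are hypothetical illustrations, not the paper's exact algorithm or notation.

```python
# Hypothetical sketch of an alternating descent for multi-view reconstruction.
# Simplifications: integer circular translations only, fixed step size,
# soft-thresholding on the foregrounds; not the authors' exact method.
import numpy as np

def shift_image(x, t):
    """Apply an integer translation t = (dy, dx) to image x (circular, for simplicity)."""
    return np.roll(np.roll(x, t[0], axis=0), t[1], axis=1)

def reconstruct(measurements, sensing_ops, shape, n_iters=50, step=1e-3, lam=0.1,
                shift_candidates=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
    """Alternate between the background b, the foregrounds f_i, and the shifts t_i.

    measurements: list of vectors y_i = A_i vec(T_{t_i} b + f_i) + noise
    sensing_ops:  list of matrices A_i (each m x N, with N = prod(shape))
    """
    n_views = len(measurements)
    b = np.zeros(shape)
    f = [np.zeros(shape) for _ in range(n_views)]
    t = [(0, 0) for _ in range(n_views)]

    for _ in range(n_iters):
        # Image updates: gradient step on the quadratic data-fit term,
        # plus soft-thresholding on each foreground (sparse occlusions).
        grad_b = np.zeros(shape)
        for i, (y, A) in enumerate(zip(measurements, sensing_ops)):
            residual = A @ (shift_image(b, t[i]) + f[i]).ravel() - y
            g = (A.T @ residual).reshape(shape)
            grad_b += shift_image(g, (-t[i][0], -t[i][1]))  # adjoint of the circular shift
            f[i] = f[i] - step * g
            f[i] = np.sign(f[i]) * np.maximum(np.abs(f[i]) - step * lam, 0.0)
        b -= step * grad_b

        # Transformation updates: local search over a few candidate shifts per view.
        for i, (y, A) in enumerate(zip(measurements, sensing_ops)):
            def cost(shift):
                r = A @ (shift_image(b, shift) + f[i]).ravel() - y
                return 0.5 * np.dot(r, r)
            t[i] = min(((t[i][0] + d[0], t[i][1] + d[1]) for d in shift_candidates), key=cost)

    return b, f, t
```

In this toy version the image variables are updated by a gradient step and the transformation parameters by a local search; the paper instead analyzes an alternating descent on the full non-convex functional, with convergence to a critical point established under stated conditions.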