Convolutional neural network for Fourier ptychography video reconstruction: learning temporal dynamics from spatial ensembles
Convolutional neural networks (CNNs) have achieved tremendous success in solving complex inverse problems, both for independent datasets of input-output pairs from static objects and for sequential datasets from dynamic objects. To learn the underlying temporal statistics, a video sequence is typically required for training, at the cost of increased network complexity and computation. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to learn temporal information in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800x10800-pixel phase image in only 25 seconds, a 50x speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by 6x. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.
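The workflow described above can be sketched in code: a CNN is trained once on patches from the first time frame, with the model-based FPM solver supplying the target phase, and is then applied to every later frame with a single forward pass. The following is a minimal, hypothetical PyTorch sketch; the architecture, tensor shapes, loss, and names such as FPMPhaseNet are illustrative assumptions, not the paper's actual network.

```python
# Hypothetical sketch of the train-on-frame-0, apply-to-all-frames idea.
# Shapes, layer choices, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class FPMPhaseNet(nn.Module):
    """Maps a stack of low-resolution intensity images to a high-resolution phase patch."""
    def __init__(self, n_leds: int = 16, upsample: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_leds, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Upsample to the high-resolution grid, then predict one phase channel.
            nn.Upsample(scale_factor=upsample, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_on_first_frame(model, low_res_stack, target_phase, epochs=100):
    """low_res_stack: (N, n_leds, h, w) intensity patches from frame 0;
    target_phase: (N, 1, upsample*h, upsample*w) phase patches from the
    model-based FPM solver, used as ground truth for this frame only."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(low_res_stack), target_phase)
        loss.backward()
        opt.step()
    return model
```

After this one-time training step, each subsequent time frame is reconstructed by a single forward pass of the trained network, which is the mechanism behind the reported reductions in both computation time and per-frame image count.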