
Improving Generalization of Deep Networks for Inverse Reconstruction of Image Sequences

by Sandesh Ghimire et al.
Rochester Institute of Technology

Deep learning networks have shown state-of-the-art performance in many image reconstruction problems. However, it is not well understood what properties of representation and learning may improve the generalization ability of the network. In this paper, we propose that the generalization ability of an encoder-decoder network for inverse reconstruction can be improved in two ways. First, drawing from analytical learning theory, we theoretically show that a stochastic latent space will improve the ability of a network to generalize to test data outside the training distribution. Second, following the information bottleneck principle, we show that a latent representation minimally informative of the input data will help a network generalize to unseen input variations that are irrelevant to the output reconstruction. Therefore, we present a sequence image reconstruction network optimized by a variational approximation of the information bottleneck principle with a stochastic latent space. In the application setting of reconstructing the sequence of cardiac transmembrane potential from body-surface potential, we assess the two types of generalization abilities of the presented network against its deterministic counterpart. The results demonstrate that the generalization ability of an inverse reconstruction network can be improved by stochasticity as well as the information bottleneck.
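The two ingredients the abstract describes, a stochastic latent space with reparameterized sampling and an information-bottleneck penalty on the latent code, can be sketched as a training objective. The snippet below is a minimal numpy illustration, not the authors' implementation: all weights, shapes, and function names are hypothetical, and the decoder is a single linear map standing in for the sequence decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Stochastic encoder: map each input to a diagonal-Gaussian latent
    # distribution q(z|x) parameterized by mean and log-variance.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps (reparameterization trick), so the
    # sampling step stays differentiable with respect to mu and sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)): the variational bound on I(x; z) that
    # penalizes latent codes carrying information about the input.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def ib_loss(x, y, W_mu, W_logvar, W_dec, beta, rng):
    # Reconstruction error plus beta-weighted compression term:
    # beta trades output fidelity against minimality of the latent code.
    mu, logvar = encode(x, W_mu, W_logvar)
    z = reparameterize(mu, logvar, rng)
    y_hat = z @ W_dec
    recon = np.mean(np.sum((y - y_hat) ** 2, axis=1))
    kl = np.mean(kl_to_standard_normal(mu, logvar))
    return recon + beta * kl

# Toy shapes: 4 samples, 6-dim input, 3-dim latent, 6-dim output.
x = rng.standard_normal((4, 6))
y = rng.standard_normal((4, 6))
W_mu = 0.1 * rng.standard_normal((6, 3))
W_logvar = 0.01 * rng.standard_normal((6, 3))
W_dec = 0.1 * rng.standard_normal((3, 6))
loss = ib_loss(x, y, W_mu, W_logvar, W_dec, beta=1e-2, rng=rng)
```

The deterministic counterpart compared in the paper corresponds to dropping the sampling step (using `mu` directly) and setting `beta = 0`, which removes both the stochasticity and the bottleneck pressure on the latent space.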

