
Improving Generalization of Deep Networks for Inverse Reconstruction of Image Sequences

03/05/2019
by Sandesh Ghimire, et al.
Rochester Institute of Technology

Deep learning networks have shown state-of-the-art performance in many image reconstruction problems. However, it is not well understood which properties of representation and learning may improve a network's generalization ability. In this paper, we propose that the generalization ability of an encoder-decoder network for inverse reconstruction can be improved in two ways. First, drawing from analytical learning theory, we theoretically show that a stochastic latent space will improve the ability of a network to generalize to test data outside the training distribution. Second, following the information bottleneck principle, we show that a latent representation minimally informative of the input data will help a network generalize to unseen input variations that are irrelevant to the output reconstruction. Therefore, we present a sequence image reconstruction network optimized by a variational approximation of the information bottleneck principle with a stochastic latent space. In the application setting of reconstructing the sequence of cardiac transmembrane potential from body-surface potential, we assess the two types of generalization abilities of the presented network against its deterministic counterpart. The results demonstrate that the generalization ability of an inverse reconstruction network can be improved by both stochasticity and the information bottleneck.
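The objective the abstract describes, a reconstruction term plus a compression penalty on a stochastic latent code, can be sketched with the standard variational information bottleneck (VIB). Below is a minimal PyTorch sketch, not the paper's implementation: the architecture, layer sizes, dimensions, and the beta weight are illustrative assumptions, and the paper's network operates on image sequences rather than the flat vectors used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticEncoderDecoder(nn.Module):
    """Encoder-decoder with a stochastic latent space (hypothetical sketch).

    The encoder outputs the mean and log-variance of q(z|x); a latent code z
    is sampled via the reparameterization trick and decoded into the output.
    """

    def __init__(self, in_dim, latent_dim, out_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vib_loss(y_hat, y, mu, logvar, beta=1e-3):
    # Reconstruction term: fit the target output y.
    recon = F.mse_loss(y_hat, y)
    # Compression term: KL(q(z|x) || N(0, I)) upper-bounds the mutual
    # information I(X; Z), keeping z minimally informative of the input.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Illustrative usage; all dimensions are arbitrary placeholders.
model = StochasticEncoderDecoder(in_dim=120, latent_dim=16, out_dim=512)
x = torch.randn(8, 120)   # batch of input measurements
y = torch.randn(8, 512)   # batch of target reconstructions
y_hat, mu, logvar = model(x)
loss = vib_loss(y_hat, y, mu, logvar)
```

The beta hyperparameter trades off reconstruction fidelity against compression of the latent code; larger values push the representation to discard input variations that do not help predict the output, which is the second generalization mechanism the abstract argues for.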
