Model-based disentanglement of lens occlusions
With lens occlusions, naive image-to-image networks fail to learn an accurate source-to-target mapping, due to the partial entanglement of the scene and occlusion domains. We propose an unsupervised model-based disentanglement training, which learns to disentangle the scene from the lens occlusion and can regress the occlusion model parameters from the target database. The experiments demonstrate that our method handles varying types of occlusions (raindrops, dirt, watermarks, etc.) and generates highly realistic translations, qualitatively and quantitatively outperforming the state of the art on multiple datasets.
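As a toy illustration of the idea of a parametric occlusion model, the sketch below composites an occluder over a clean scene with an opacity parameter and then recovers that parameter by least squares. All names and the specific blending model are hypothetical assumptions for illustration, not the paper's actual architecture or training procedure.

```python
import numpy as np

# Hypothetical forward occlusion model (assumption, not the paper's code):
# target = scene * (1 - alpha * mask) + occluder * alpha * mask
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(8, 8))       # clean scene image
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0                             # occlusion footprint on the lens
alpha_true = 0.7                                 # occlusion opacity (model parameter)
occluder = 0.2                                   # occluder intensity (e.g. dirt)

# Synthesize the occluded target via the forward model
target = scene * (1 - alpha_true * mask) + occluder * alpha_true * mask

# Given scene, mask, and occluder intensity, regress alpha in closed form:
# the residual (target - scene) is linear in alpha, so least squares recovers it
residual = target - scene
basis = (occluder - scene) * mask
alpha_est = (basis * residual).sum() / (basis * basis).sum()
```

In this linear toy setting the regression recovers the opacity exactly; the paper's setting is harder because the clean scene is unobserved, which is what the unsupervised disentanglement training addresses.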