Janine Thoma



  • Sliced Wasserstein Generative Models

    In generative modeling, the Wasserstein distance (WD) has emerged as a useful metric to measure the discrepancy between generated and real data distributions. Unfortunately, it is challenging to approximate the WD of high-dimensional distributions. In contrast, the sliced Wasserstein distance (SWD) factorizes high-dimensional distributions into multiple one-dimensional marginal distributions and is thus easier to approximate. In this paper, we introduce novel approximations of the primal and dual SWD. Instead of using a large number of random projections, as is done by conventional SWD approximation methods, we propose to approximate SWDs with a small number of parameterized orthogonal projections in an end-to-end deep learning fashion. As concrete applications of our SWD approximations, we design two types of differentiable SWD blocks to equip modern generative frameworks: Auto-Encoders (AE) and Generative Adversarial Networks (GAN). In the experiments, we not only show the superiority of the proposed generative models on standard image synthesis benchmarks, but also demonstrate state-of-the-art performance on challenging high-resolution image and video generation in an unsupervised manner.

    04/10/2019 ∙ by Jiqing Wu, et al.
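    The conventional approximation the abstract contrasts against can be sketched in a few lines: project both sample sets onto many random unit directions, where the one-dimensional Wasserstein distance reduces to comparing sorted samples. This is a minimal NumPy sketch of that baseline (here with the Wasserstein-1 ground cost), not the paper's learned orthogonal-projection method.

    ```python
    import numpy as np

    def sliced_wasserstein(X, Y, n_projections=100, seed=0):
        """Monte Carlo estimate of the sliced Wasserstein-1 distance.

        X, Y: (n, d) arrays of samples from the two distributions.
        Each random direction yields a 1D problem, solvable by sorting.
        """
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Draw random directions uniformly on the unit sphere.
        theta = rng.normal(size=(n_projections, d))
        theta /= np.linalg.norm(theta, axis=1, keepdims=True)
        # Project both sample sets: shape (n_projections, n).
        X_proj = theta @ X.T
        Y_proj = theta @ Y.T
        # 1D W1 between equal-size samples = mean |sorted differences|.
        X_proj.sort(axis=1)
        Y_proj.sort(axis=1)
        return np.mean(np.abs(X_proj - Y_proj))
    ```

    Replacing the many random `theta` directions with a few learned, orthogonal ones is, in essence, what the paper's differentiable SWD blocks do.
    
    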

  • Image-based Navigation using Visual Features and Map

    Building on progress in feature representations for image retrieval, image-based localization has seen a surge of research interest. Image-based localization has the advantage of being inexpensive and efficient, often avoiding the use of 3D metric maps altogether. That said, the large number of reference images needed to effectively support localization in a scene nonetheless calls for them to be organized in some kind of map structure. The problem of localization often arises as part of a navigation process. We are, therefore, interested in summarizing the reference images as a set of landmarks which meet the requirements for image-based navigation. A contribution of the paper is to formulate such a set of requirements for the two sub-tasks involved: map construction and self-localization. These requirements are then exploited for compact map representation and accurate self-localization, using the framework of a network flow problem. During this process, we formulate the map construction and self-localization problems as convex quadratic and second-order cone programs, respectively. We evaluate our methods on publicly available indoor and outdoor datasets, where they significantly outperform existing methods.

    12/10/2018 ∙ by Janine Thoma, et al.
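    To make the self-localization sub-task concrete, here is a deliberately simplified retrieval step: match a query image's global descriptor against the landmark descriptors by cosine similarity. This is a generic stand-in for illustration only; the paper's actual formulation solves a second-order cone program over a network-flow structure, which is not reproduced here.

    ```python
    import numpy as np

    def localize(query_desc, landmark_descs):
        """Return the index of the landmark image whose descriptor is
        most similar to the query, by cosine similarity.

        query_desc:     (d,) global descriptor of the query image.
        landmark_descs: (m, d) descriptors of the m landmark images.
        """
        q = query_desc / np.linalg.norm(query_desc)
        L = landmark_descs / np.linalg.norm(landmark_descs, axis=1, keepdims=True)
        # Cosine similarity of each landmark to the query; best match wins.
        return int(np.argmax(L @ q))
    ```

    A compact map in this setting is simply a small, well-chosen set of `landmark_descs` that still covers the scene, which is what the map-construction program optimizes.
    
    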

  • Energy-relaxed Wasserstein GANs (EnergyWGAN): Towards More Stable and High Resolution Image Generation

    Recently, generative adversarial networks (GANs) have had a great impact on a broad range of applications, including low-resolution (LR) image synthesis. However, they suffer from unstable training, especially as image resolution increases. To overcome this bottleneck, this paper generalizes the state-of-the-art Wasserstein GANs (WGANs) to an energy-relaxed objective which enables more stable and higher-resolution image generation. The benefits of this generalization can be summarized in three main points. Firstly, the proposed EnergyWGAN objective guarantees a valid symmetric divergence serving as a more rigorous and meaningful quantitative measure. Secondly, EnergyWGAN is capable of searching a more faithful solution space than the original WGANs without fixing a specific k-Lipschitz constraint. Finally, the proposed EnergyWGAN offers a natural way of stacking GANs for high-resolution image generation. In our experiments we not only demonstrate the stable training ability of the proposed EnergyWGAN and its better image generation results on standard benchmark datasets, but also show its advantages over state-of-the-art GANs on a real-world high-resolution image dataset.

    12/04/2017 ∙ by Jiqing Wu, et al.
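    For context, the objective that EnergyWGAN relaxes is the standard WGAN critic loss: the critic approximates the Wasserstein-1 distance by maximizing the gap between its scores on real and fake samples, subject to a 1-Lipschitz constraint enforced separately (e.g. by weight clipping). A minimal NumPy sketch of that baseline follows; the energy-relaxed variant itself is not reproduced here.

    ```python
    import numpy as np

    def wgan_critic_loss(critic, real, fake):
        """Standard WGAN critic objective, written as a loss to minimize:
        mean critic score on fake samples minus mean score on real ones.
        Minimizing this drives the critic to score real samples higher,
        approximating the Wasserstein-1 distance (up to the Lipschitz
        constraint, which must be enforced elsewhere)."""
        return np.mean(critic(fake)) - np.mean(critic(real))
    ```

    EnergyWGAN replaces the fixed k-Lipschitz constraint in this setup with an energy-relaxed objective, which is what the abstract credits for the more stable training.
    
    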