MaGIC: Multi-modality Guided Image Completion

05/19/2023
by Yongsheng Yu, et al.

Vanilla image completion approaches are sensitive to large missing regions, since the limited available reference information constrains plausible generation. To mitigate this, existing methods incorporate extra cues as guidance for image completion. Despite improvements, these approaches are often restricted to a single modality (e.g., segmentation or sketch maps), which limits their scalability to multi-modal guidance for more plausible completion. In this paper, we propose a novel, simple yet effective method for Multi-modal Guided Image Completion, dubbed MaGIC, which not only supports a wide range of single modalities as guidance (e.g., text, canny edge, sketch, segmentation, reference image, depth, and pose), but also adapts to arbitrarily customized combinations of these modalities (i.e., arbitrary multi-modality) for image completion. To build MaGIC, we first introduce a modality-specific conditional U-Net (MCU-Net) that injects a single-modal signal into a U-Net denoiser for single-modal guided image completion. We then devise a consistent modality blending (CMB) method that leverages the modality signals encoded in multiple learned MCU-Nets through gradient guidance in latent space. Because CMB is training-free, it avoids the cumbersome joint re-training of different modalities, which is what gives MaGIC its exceptional flexibility in accommodating new modalities for completion. Experiments show the superiority of MaGIC over the state of the art and its generalization to various completion tasks, including in/out-painting and local editing. Our project with code and models is available at yeates.github.io/MaGIC-Page/.
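
The abstract describes CMB only at a high level. As a rough illustration, the following is a minimal, hypothetical PyTorch sketch of gradient-guided blending over several frozen single-modality denoisers. It is not the authors' implementation: the function names (cmb_step, ToyMCUNet), the specific consistency loss, and the uniform blend weights are all assumptions made for this example.

import torch
import torch.nn as nn

def cmb_step(unets, z_t, t, cond_feats, weights, guidance_scale=1.0):
    # Consistent modality blending (CMB) sketch, training-free: blend the
    # noise predictions of several single-modality MCU-Nets and nudge the
    # latent z_t with a gradient that keeps the blend consistent with each.
    z_t = z_t.detach().requires_grad_(True)
    eps_preds = [unet(z_t, t, cond) for unet, cond in zip(unets, cond_feats)]
    eps_blend = sum(w * e for w, e in zip(weights, eps_preds))
    # Hypothetical consistency term: penalize disagreement between the
    # blended prediction and every per-modality prediction.
    loss = sum(w * (eps_blend - e).pow(2).mean()
               for w, e in zip(weights, eps_preds))
    grad = torch.autograd.grad(loss, z_t)[0]
    # Gradient guidance in latent space; the MCU-Net weights stay frozen.
    return eps_blend.detach(), (z_t - guidance_scale * grad).detach()

class ToyMCUNet(nn.Module):
    # Stand-in for an MCU-Net: a denoiser that also sees a modality signal.
    def forward(self, z, t, cond):
        return 0.9 * z + 0.1 * cond

unets = [ToyMCUNet(), ToyMCUNet()]          # e.g., depth- and sketch-guided
z = torch.randn(1, 4, 8, 8)                 # noisy latent at timestep t
conds = [torch.randn_like(z), torch.randn_like(z)]
eps, z_guided = cmb_step(unets, z, t=500, cond_feats=conds, weights=[0.5, 0.5])
print(eps.shape, z_guided.shape)

The training-free property rests on updating only the latent z_t at each denoising step, so new modalities can be mixed in without any joint re-training.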


