Multi-view Cross-Modality MR Image Translation for Vestibular Schwannoma and Cochlea Segmentation

03/27/2023
by Bogyeong Kang, et al.

In this work, we propose a multi-view image translation framework that translates contrast-enhanced T1 (ceT1) MR images into high-resolution T2 (hrT2) MR images for unsupervised vestibular schwannoma and cochlea segmentation. We adopt two image translation models in parallel, one with a pixel-level consistency constraint and the other with a patch-level contrastive constraint. In this way, we can augment pseudo-hrT2 images that reflect different perspectives, which ultimately leads to a high-performing segmentation model. Our experimental results on the CrossMoDA challenge show that the proposed method achieves improved performance on vestibular schwannoma and cochlea segmentation.
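The abstract does not detail the two constraints, but a common instantiation is an L1 cycle-consistency term for the pixel-level constraint and an InfoNCE loss over matched feature patches for the patch-level contrastive constraint (in the style of CycleGAN and CUT, respectively). The sketch below is a minimal illustration under those assumptions, using PyTorch with hypothetical generators G_ab and G_ba; it is not the authors' implementation.

    # Minimal sketch (assumptions: PyTorch; G_ab (ceT1 -> hrT2) and G_ba
    # (hrT2 -> ceT1) are hypothetical generator modules, not the authors' code).
    import torch
    import torch.nn.functional as F

    def pixel_consistency_loss(G_ab, G_ba, ceT1):
        """Pixel-level consistency: translating ceT1 -> pseudo-hrT2 -> ceT1
        should reproduce the input (L1 cycle-consistency style)."""
        pseudo_hrT2 = G_ab(ceT1)
        recon_ceT1 = G_ba(pseudo_hrT2)
        return F.l1_loss(recon_ceT1, ceT1)

    def patch_contrastive_loss(feat_src, feat_tgt, tau=0.07):
        """Patch-level contrastive (InfoNCE-style) loss: the feature of each
        translated patch is pulled toward the feature of the corresponding
        source patch (positive) and pushed away from the other patches
        (negatives). feat_src, feat_tgt: (num_patches, dim) tensors."""
        feat_src = F.normalize(feat_src, dim=1)
        feat_tgt = F.normalize(feat_tgt, dim=1)
        logits = feat_tgt @ feat_src.t() / tau   # (P, P) similarity matrix
        labels = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, labels)

Training two translators in parallel, one per constraint, would then yield two complementary sets of pseudo-hrT2 images that can be pooled as augmented training data for the downstream segmentation network.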
