Fork or Fail: Cycle-Consistent Training with Many-to-One Mappings

12/14/2020
by Qipeng Guo et al.

Cycle-consistent training is widely used for jointly learning a forward and inverse mapping between two domains of interest without the cumbersome requirement of collecting matched pairs within each domain. In this regard, the implicit assumption is that there exists (at least approximately) a ground-truth bijection such that a given input from either domain can be accurately reconstructed from successive application of the respective mappings. But in many applications no such bijection can be expected to exist, and large reconstruction errors can compromise the success of cycle-consistent training. As one important instance of this limitation, we consider practically-relevant situations where there exists a many-to-one, or surjective, mapping between domains. To address this regime, we develop a conditional variational autoencoder (CVAE) approach that can be viewed as converting surjective mappings to implicit bijections, whereby reconstruction errors in both directions can be minimized and, as a natural byproduct, realistic output diversity can be obtained in the one-to-many direction. As theoretical motivation, we analyze a simplified scenario whereby minima of the proposed CVAE-based energy function align with the recovery of ground-truth surjective mappings. On the empirical side, we consider a synthetic image dataset with known ground truth, as well as a real-world application involving natural language generation from knowledge graphs and vice versa, a prototypical surjective case. For the latter, our CVAE pipeline can capture such many-to-one mappings during cycle training while promoting textual diversity for graph-to-text tasks. Our code is available at github.com/QipengGuo/CycleGT.
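The core idea of converting a surjective mapping into an implicit bijection can be illustrated with a toy example (a sketch of the concept only, not the paper's actual CVAE pipeline). Here the forward map f(x) = |x| is two-to-one; a deterministic inverse necessarily incurs a large cycle-reconstruction error, whereas an inverse conditioned on a latent code z that carries the information destroyed by f (the sign, which the CVAE's encoder would infer from x during training) reconstructs every input exactly:

```python
import numpy as np

# Toy surjective forward map f: R -> R>=0, f(x) = |x| (two-to-one).
def f(x):
    return np.abs(x)

# A deterministic inverse must commit to one preimage branch, so the
# cycle x -> f(x) -> g(f(x)) fails for roughly half the inputs.
def g_deterministic(y):
    return y  # always picks the positive branch

# An "implicit bijection": condition the inverse on a latent code z
# holding the information lost by f (here, the sign). This is the role
# the CVAE plays -- its encoder infers z from x at training time.
def g_latent(y, z):
    return np.where(z >= 0, y, -y)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

err_det = np.mean((g_deterministic(f(x)) - x) ** 2)  # large: ~2 for N(0,1)
z = np.sign(x)                                       # the lost bit
err_lat = np.mean((g_latent(f(x), z) - x) ** 2)      # exactly 0

print(err_det, err_lat)
```

Sampling z at test time instead of inferring it yields diverse outputs in the one-to-many direction, which is the mechanism behind the textual diversity reported for graph-to-text generation.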

