
Conservative Wasserstein Training for Pose Estimation

11/03/2019
by   Xiaofeng Liu, et al.

This paper targets tasks with discrete and periodic class labels (e.g., pose/orientation estimation) in the context of deep learning. The commonly used cross-entropy and regression losses are not well matched to this problem: the former ignores the periodic nature of the labels and the inter-class similarity, while the latter assumes the labels are continuous values. We propose to incorporate inter-class correlations in a Wasserstein training framework by pre-defining the ground metric (i.e., as the arc length of a circle) or learning it adaptively. From an optimization perspective, we extend the ground metric to linear, convex, or concave increasing functions of the arc length. We also propose to construct conservative target labels that model inlier and outlier noise with a wrapped unimodal-uniform mixture distribution. Unlike the one-hot setting, the conservative label makes the computation of the Wasserstein distance more challenging. We systematically derive practical closed-form solutions of the Wasserstein distance for pose data with either one-hot or conservative target labels. We evaluate our method on head, body, vehicle, and 3D object pose benchmarks with exhaustive ablation studies. The Wasserstein loss obtains superior performance over current methods, especially with a convex mapping function for the ground metric, conservative labels, and the closed-form solution.
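To make the abstract's ingredients concrete, here is a minimal NumPy sketch (not the authors' implementation; all function names and the `sigma`/`eps` parameters are illustrative assumptions). It builds the arc-length ground metric over discrete pose bins, evaluates the closed-form Wasserstein loss against a one-hot target (the Wasserstein distance to a Dirac target reduces to the expected ground cost to the target bin), and constructs a conservative label as a wrapped discretized-Gaussian/uniform mixture:

```python
import numpy as np

def arc_length_metric(n_bins):
    # Ground metric: shortest arc length between pose bins on a unit circle.
    idx = np.arange(n_bins)
    diff = np.abs(idx[:, None] - idx[None, :])
    hops = np.minimum(diff, n_bins - diff)        # wrap-around distance in bins
    return hops * (2 * np.pi / n_bins)            # convert bin hops to arc length

def wasserstein_onehot(pred, target_bin, ground, power=1.0):
    # Closed form vs. a one-hot target: the expected (possibly convex/concave-
    # mapped, via `power`) ground cost from each predicted bin to the target.
    return float(np.sum(pred * ground[target_bin] ** power))

def conservative_label(n_bins, target_bin, sigma=1.0, eps=0.1):
    # Wrapped unimodal (discretized Gaussian) distribution around the target,
    # mixed with a uniform component to model outlier noise.
    idx = np.arange(n_bins)
    diff = np.abs(idx - target_bin)
    hops = np.minimum(diff, n_bins - diff)
    g = np.exp(-0.5 * (hops / sigma) ** 2)
    g /= g.sum()
    return (1.0 - eps) * g + eps / n_bins
```

For example, a uniform prediction over 4 bins against a one-hot target at bin 0 gives an average arc-length cost of pi/2. A convex mapping corresponds to `power > 1`, which penalizes large angular errors more heavily than a linear ground metric.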

