Pose Embeddings: A Deep Architecture for Learning to Match Human Poses

07/01/2015
by Greg Mori, et al.

We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.
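The abstract describes learning the embedding with a triplet-based distance criterion: an anchor image should be mapped closer to an image of a similar pose than to an image of a dissimilar pose, by some margin. As a rough illustration only, and not the paper's actual architecture, hyperparameters, or training setup, a minimal PyTorch sketch of such a triplet criterion might look like the following; the network layers, embedding size, and margin are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseEmbeddingNet(nn.Module):
    """Hypothetical convolutional encoder mapping a person image to an embedding vector."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.backbone(x).flatten(1)
        # L2-normalize so Euclidean distances between embeddings are comparable
        return F.normalize(self.fc(h), dim=1)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet criterion: the anchor should lie closer to the similar-pose image
    (positive) than to the dissimilar-pose image (negative) by at least `margin`."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Usage sketch: images with matching poses form (anchor, positive) pairs;
# an image with a different pose supplies the negative.
net = PoseEmbeddingNet()
a, p, n = (torch.randn(8, 3, 128, 128) for _ in range(3))
loss = triplet_loss(net(a), net(p), net(n))
loss.backward()
```

Once trained, such an embedding lets pose similarity be measured directly as distance between embedding vectors, which is what enables pose matching and retrieval without an explicit joint-estimation step.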


