Adversarial Synthesis of Human Pose from Text

05/01/2020
by   Yifei Zhang, et al.

This work introduces the novel task of synthesizing human pose from text. To solve it, we propose a model based on a conditional generative adversarial network (GAN), designed to generate 2D human poses conditioned on human-written text descriptions. The model is trained and evaluated on the COCO dataset, which contains images of complex everyday scenes. Qualitative and quantitative results show that the model synthesizes plausible poses matching the given text, i.e., poses consistent with the given semantic features, especially for actions with distinctive poses. We also show that the model outperforms a vanilla GAN.
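The conditional setup described above can be sketched as follows. This is a minimal NumPy stand-in, not the authors' implementation: it assumes COCO's 17-keypoint 2D format, a fixed-size sentence embedding as the condition, and hypothetical dimensions throughout. The generator consumes noise concatenated with the text embedding; the discriminator scores a pose paired with the same embedding, which is what makes the GAN conditional.

```python
import numpy as np

N_KEYPOINTS = 17           # COCO 2D keypoint format: (x, y) per joint
POSE_DIM = N_KEYPOINTS * 2
NOISE_DIM = 64             # assumed noise size (illustrative)
TEXT_DIM = 128             # assumed sentence-embedding size (illustrative)

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Small random-initialized fully connected network."""
    return [(rng.standard_normal((a, b)) * 0.02, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass with ReLU on hidden layers, linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(0.0, x)
    return x

# Generator: (noise + text embedding) -> 34-dim pose vector
G = mlp_params([NOISE_DIM + TEXT_DIM, 256, 256, POSE_DIM])
# Discriminator: (pose + text embedding) -> real/fake logit
D = mlp_params([POSE_DIM + TEXT_DIM, 256, 1])

def generate_pose(text_emb):
    """Sample one 2D pose conditioned on a text embedding."""
    z = rng.standard_normal(NOISE_DIM)
    pose = np.tanh(forward(G, np.concatenate([z, text_emb])))  # coords in [-1, 1]
    return pose.reshape(N_KEYPOINTS, 2)

def score(pose, text_emb):
    """Discriminator logit for a (pose, text) pair."""
    return forward(D, np.concatenate([pose.ravel(), text_emb]))[0]

text_emb = rng.standard_normal(TEXT_DIM)   # stand-in for an encoded caption
pose = generate_pose(text_emb)
logit = score(pose, text_emb)
```

In training, the discriminator would be pushed to score real (pose, caption) pairs above generated ones, so the generator must match the semantics of the caption, not just produce a plausible skeleton.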


Related research

- 07/30/2018 — Pose Guided Human Video Generation
  Due to the emergence of Generative Adversarial Networks, video synthesis...
- 12/13/2021 — Hallucinating Pose-Compatible Scenes
  What does human pose tell us about a scene? We propose a task to answer ...
- 10/08/2016 — Learning What and Where to Draw
  Generative Adversarial Networks (GANs) have recently demonstrated the ca...
- 07/05/2021 — Towards Better Adversarial Synthesis of Human Images from Text
  This paper proposes an approach that generates multiple 3D human meshes ...
- 09/15/2023 — PoseFix: Correcting 3D Human Poses with Natural Language
  Automatically producing instructions to modify one's posture could open ...
- 05/26/2018 — Human Action Generation with Generative Adversarial Networks
  Inspired by the recent advances in generative models, we introduce a hum...
- 08/14/2023 — A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis
  The synthesis of human motion has traditionally been addressed through t...
