
H-GAN: the power of GANs in your Hands

by   Sergiu Oprea, et al.

We present HandGAN (H-GAN), a cycle-consistent adversarial learning approach implementing multi-scale perceptual discriminators. It is designed to translate synthetic images of hands to the real domain. Synthetic hands provide complete ground-truth annotations, yet they are not representative of the target distribution of real-world data. We strive to provide the perfect blend of a realistic hand appearance with synthetic annotations. Relying on image-to-image translation, we improve the appearance of synthetic hands to approximate the statistical distribution underlying a collection of real images of hands. H-GAN tackles not only cross-domain tone mapping but also structural differences in localized areas such as shading discontinuities. Results are evaluated both qualitatively and quantitatively, improving on previous works. Furthermore, we successfully apply the generated images to a hand classification task.
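To make the cycle-consistent translation objective concrete, here is a minimal PyTorch sketch of the generator-side losses used in CycleGAN-style synthetic-to-real translation. The toy networks, loss weight, and function names are illustrative assumptions, not the paper's actual architecture (which uses multi-scale perceptual discriminators and full-size generators):

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy translator: a few conv layers standing in for a full generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy patch discriminator; H-GAN itself uses multi-scale perceptual ones."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def generator_losses(G_s2r, G_r2s, D_real, synth, lambda_cyc=10.0):
    """Adversarial + cycle-consistency losses for the synthetic->real generator.

    lambda_cyc is an illustrative weight, not a value from the paper.
    """
    fake_real = G_s2r(synth)        # synthetic hand -> "real" domain
    rec_synth = G_r2s(fake_real)    # translate back: the cycle
    adv = torch.mean((D_real(fake_real) - 1.0) ** 2)  # LSGAN-style term
    cyc = torch.mean(torch.abs(rec_synth - synth))    # L1 cycle loss
    return adv + lambda_cyc * cyc

# Usage with random stand-in images in [-1, 1]:
synth = torch.rand(2, 3, 32, 32) * 2 - 1
loss = generator_losses(TinyGenerator(), TinyGenerator(),
                        TinyDiscriminator(), synth)
```

The cycle term is what lets training proceed without paired synthetic/real images: the generator is penalized whenever translating to the real domain and back fails to recover the original synthetic hand.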




Cartoon-to-real: An Approach to Translate Cartoon to Realistic Images using GAN

We propose a method to translate cartoon images to real world images usi...

Sem-GAN: Semantically-Consistent Image-to-Image Translation

Unpaired image-to-image translation is the problem of mapping an image i...

Twin-GAN -- Unpaired Cross-Domain Image Translation with Weight-Sharing GANs

We present a framework for translating unlabeled images from one domain ...

Automatic Feature Highlighting in Noisy RES Data With CycleGAN

Radio echo sounding (RES) is a common technique used in subsurface glaci...

Cross-Domain Conditional Generative Adversarial Networks for Stereoscopic Hyperrealism in Surgical Training

Phantoms for surgical training are able to mimic cutting and suturing pr...

Code Repositories


Reference code for the paper "HandGAN - The power of GANs in your Hands" (IJCNN 2021)
