Artificial (or) Fake Human Face Generator using Generative Adversarial Network (GAN) Machine Learning Model

10/05/2022
by Mohana, et al.

Graphics algorithms for high-quality image rendering are highly involved processes, since layout, components, and light transport must be explicitly simulated. While existing algorithms excel at this task, creating and formatting virtual environments is a costly and time-consuming process. There is therefore an opportunity to automate this labor-intensive work by leveraging recent developments in computer vision. Recent progress in deep generative models, especially GANs, has spurred much interest in the computer vision community in synthesizing realistic images. GANs combine backpropagation with a competitive process involving a pair of networks, a Generative Network G and a Discriminative Network D, in which G generates artificial images and D classifies them as real or artificial. As training proceeds, G learns to generate increasingly realistic images that confuse D [1]. In this work, a convolutional architecture based on GANs, specifically a Deep Convolutional Generative Adversarial Network (DCGAN), has been implemented to train a generative model that can produce good-quality images of human faces at scale. The CelebFaces Attributes (CelebA) dataset has been used to train the DCGAN model. The Structural Similarity Index (SSIM), which measures the structural and contextual similarity of two images, has been used for quantitative evaluation of the trained DCGAN model. The obtained results show that the quality of the generated images is quite similar to that of the high-quality images in the CelebA dataset.
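
The adversarial setup described in the abstract can be illustrated with a short sketch. The following is a minimal DCGAN generator/discriminator pair and one training step in PyTorch; the layer sizes, the 64x64 output resolution, and the loss formulation are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal DCGAN sketch in PyTorch (illustrative; architecture and
# hyperparameters are assumptions, not the paper's exact setup).
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the random noise vector fed to G

class Generator(nn.Module):
    """Maps a latent vector z to a 64x64 RGB image via transposed convolutions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Classifies a 64x64 RGB image as real (1) or generated (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)

def train_step(G, D, real_images, opt_G, opt_D, device):
    """One adversarial update: D learns to separate real from fake images,
    then G learns to fool D."""
    criterion = nn.BCELoss()
    batch = real_images.size(0)
    z = torch.randn(batch, LATENT_DIM, 1, 1, device=device)
    fake_images = G(z)

    # Update D: push D(real) toward 1 and D(fake) toward 0.
    opt_D.zero_grad()
    loss_D = (criterion(D(real_images), torch.ones(batch, device=device)) +
              criterion(D(fake_images.detach()), torch.zeros(batch, device=device)))
    loss_D.backward()
    opt_D.step()

    # Update G: push D(G(z)) toward 1 (non-saturating generator loss).
    opt_G.zero_grad()
    loss_G = criterion(D(fake_images), torch.ones(batch, device=device))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```

Note that the generated batch is detached before the discriminator update so that gradients from D's loss do not flow into G; this is the standard way of alternating the two optimizers.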
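The SSIM-based evaluation mentioned above could be computed along the following lines with scikit-image. The pairing of generated and reference CelebA images and the averaging over pairs are assumptions about the protocol, not the paper's exact procedure.

```python
# Sketch of an SSIM evaluation over generated vs. real face images
# (pairing and averaging are illustrative assumptions).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(generated, references):
    """Average SSIM between corresponding generated and real face images.

    Both inputs are expected as uint8 RGB arrays of identical shape,
    e.g. (N, 64, 64, 3).
    """
    scores = []
    for fake, real in zip(generated, references):
        scores.append(ssim(fake, real, channel_axis=-1, data_range=255))
    return float(np.mean(scores))
```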
