EmoGen: Quantifiable Emotion Generation and Analysis for Experimental Psychology
3D facial modelling and animation in computer vision and graphics traditionally require either a digital artist's skill or complex pipelines with objective-function-based solvers that fit models to motion capture. This inaccessibility of quality modelling to non-experts is an impediment to effective quantitative study of facial stimuli in experimental psychology. The EmoGen methodology we present in this paper addresses this problem by democratising facial modelling technology. EmoGen is a robust and configurable framework that lets anyone author arbitrary, quantifiable facial expressions in 3D through a user-guided genetic algorithm search. Beyond sample generation, our methodology includes techniques to analyse distributions of these expressions in a principled way. This paper covers the technical aspects of expression generation: our production-quality facial blendshape model, automatic correction of implausible facial configurations in the absence of an artist's supervision, and the genetic algorithm implementation employed in the model space search. Further, we provide a comparative evaluation of ways to quantify generated facial expressions in the blendshape and geometric domains, comparing them both theoretically and empirically. The purpose of this analysis is (1) to define a similarity cost function used to simulate the model space search when assessing the genetic algorithm's convergence and parameter dependence, and (2) to inform best practices for data distribution analysis in experimental psychology.
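To make the search procedure concrete, the following is a minimal, hypothetical sketch of a genetic algorithm operating on blendshape weight vectors. All names and parameters (channel count, population size, mutation rate) are illustrative assumptions, not the paper's implementation; the user-guided selection step is simulated here by a squared-L2 similarity cost to a target expression, standing in for the similarity cost function the abstract describes.

```python
import random

# Illustrative sketch (not the paper's code): a genetic algorithm
# searching a blendshape weight space. Each individual is a vector of
# blendshape weights in [0, 1]; user selections are simulated by a
# similarity cost to a fixed target expression.

N_SHAPES = 8        # number of blendshape channels (assumed)
POP_SIZE = 20
GENERATIONS = 50
MUT_RATE = 0.1      # per-gene mutation probability (assumed)

def cost(weights, target):
    """Similarity cost: squared L2 distance in blendshape space."""
    return sum((w - t) ** 2 for w, t in zip(weights, target))

def crossover(a, b):
    """Uniform crossover of two weight vectors."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(weights):
    """Gaussian mutation, clamped to the valid [0, 1] weight range."""
    return [min(1.0, max(0.0, w + random.gauss(0.0, 0.1)))
            if random.random() < MUT_RATE else w
            for w in weights]

def evolve(target, seed=0):
    """Run truncation selection + crossover + mutation; return the best."""
    random.seed(seed)
    pop = [[random.random() for _ in range(N_SHAPES)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda w: cost(w, target))
        parents = pop[:POP_SIZE // 2]   # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return min(pop, key=lambda w: cost(w, target))

if __name__ == "__main__":
    target = [0.7, 0.2, 0.0, 0.9, 0.1, 0.5, 0.3, 0.8]
    best = evolve(target)
    print(cost(best, target))  # small residual cost after convergence
```

In the actual EmoGen workflow the scalar cost is replaced by the user's choices between candidate expressions; the simulated cost above is the kind of stand-in the abstract describes for assessing convergence and parameter dependence offline.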