Recognition and Synthesis of Object Transport Motion
Deep learning typically requires vast numbers of training examples to be used successfully. Motion capture data, however, is expensive to generate, requiring specialist equipment and actors to perform the prescribed motions, so motion capture datasets tend to be relatively small. Motion capture data nevertheless provides a rich source of information that is becoming increasingly useful in a wide variety of applications, from gesture recognition in human-robot interaction to data-driven animation. This project illustrates how deep convolutional networks can be used, alongside specialized data augmentation techniques, on a small motion capture dataset to learn detailed information from sequences of a specific type of motion (object transport). The project then shows how these same augmentation techniques can be scaled up for use in the more complex task of motion synthesis. Building on recent developments in Generative Adversarial Networks (GANs), specifically the Wasserstein GAN, the project outlines a model that successfully generates lifelike object transportation motions, with the generated samples displaying varying styles and transport strategies.
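The abstract names the Wasserstein GAN as the basis of the synthesis model but gives no architectural detail. As a rough illustration of that component only, the sketch below shows a standard WGAN training step (a critic trained with weight clipping and RMSprop, several critic updates per generator update) applied to fixed-length joint-trajectory tensors. The network layouts, sequence length, joint count, and hyperparameters are assumptions made for illustration and are not taken from the paper.

```python
# Minimal WGAN training-step sketch for motion sequences (illustrative only;
# shapes, layers, and hyperparameters are assumptions, not the paper's model).
import torch
import torch.nn as nn

SEQ_LEN, N_JOINTS, LATENT_DIM = 64, 21 * 3, 128  # assumed dimensions

# Placeholder 1-D convolutional critic and generator operating over time.
critic = nn.Sequential(
    nn.Conv1d(N_JOINTS, 128, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Conv1d(128, 256, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(256 * (SEQ_LEN // 4), 1),  # unbounded scalar score, no sigmoid
)
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256 * (SEQ_LEN // 4)),
    nn.Unflatten(1, (256, SEQ_LEN // 4)),
    nn.ReLU(),
    nn.ConvTranspose1d(256, 128, kernel_size=4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose1d(128, N_JOINTS, kernel_size=4, stride=2, padding=1),
)

opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

def train_step(real_motion, n_critic=5, clip=0.01):
    """real_motion: (batch, N_JOINTS, SEQ_LEN) tensor of joint trajectories."""
    batch = real_motion.size(0)
    # Train the critic several times per generator update (standard WGAN schedule).
    for _ in range(n_critic):
        z = torch.randn(batch, LATENT_DIM)
        fake = generator(z).detach()
        # Wasserstein critic objective: score real motion higher than generated motion.
        loss_c = critic(fake).mean() - critic(real_motion).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        # Weight clipping enforces the Lipschitz constraint (original WGAN formulation).
        for p in critic.parameters():
            p.data.clamp_(-clip, clip)
    # Generator update: maximise the critic's score on generated motion.
    z = torch.randn(batch, LATENT_DIM)
    loss_g = -critic(generator(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_c.item(), loss_g.item()
```

The same loop structure would apply if the paper's model instead uses a gradient penalty rather than weight clipping; only the Lipschitz-enforcement step would change.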