Designing, Playing, and Performing with a Vision-based Mouth Interface

10/07/2020
by Michael J. Lyons et al.

The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a headworn miniature camera and computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
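The abstract describes a pipeline from extracted mouth-shape parameters to MIDI control changes. As a rough illustration only (the function name and scaling are assumptions, not the authors' implementation), the final mapping stage might look like this: normalize the mouth-opening aspect ratio and quantize it to the 0–127 range of a MIDI control-change value.

```python
# Hypothetical sketch of a gesture-to-MIDI mapping stage, in the spirit of
# the Mouthesizer. The name, parameters, and scaling are assumptions.

def mouth_to_cc(height: float, width: float, max_ratio: float = 1.0) -> int:
    """Map the mouth-opening aspect ratio (height / width) to a MIDI
    control-change value in 0..127, clamping the ratio at max_ratio."""
    if width <= 0:
        return 0  # degenerate measurement: treat as a closed mouth
    ratio = min(height / width, max_ratio) / max_ratio
    return round(ratio * 127)

# A half-open mouth (ratio 0.5) maps to a mid-range controller value.
print(mouth_to_cc(2.0, 4.0))  # prints 64
```

In a live system this value would be sent as a MIDI control change each video frame; smoothing (e.g. a short moving average) is typically needed to avoid audible stepping.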


10/07/2020

Sonification of Facial Actions for Musical Expression

The central role of the face in social interaction and non-verbal commun...
02/10/2010

Vision Based Game Development Using Human Computer Interaction

A Human Computer Interface (HCI) System for playing games is designed he...
07/27/2021

Ghostfinger: a novel platform for fully computational fingertip controllers

We present Ghostfinger, a technology for highly dynamic up/down fingerti...
09/14/2018

Mugeetion: Musical Interface Using Facial Gesture and Emotion

People feel emotions when listening to music. However, emotions are not ...
10/04/2020

Problems and Prospects for Intimate Musical Control of Computers

In this paper we describe our efforts towards the development of live pe...
10/04/2020

Facial gesture interfaces for expression and communication

Considerable effort has been devoted to the automatic extraction of info...
02/18/2019

In a Silent Way: Communication Between AI and Improvising Musicians Beyond Sound

Collaboration is built on trust, and establishing trust with a creative ...