Designing, Playing, and Performing with a Vision-based Mouth Interface

by Michael J. Lyons, et al.

The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a headworn miniature camera and a computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
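The pipeline the abstract describes (mouth-opening shape parameters mapped to MIDI control changes) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the binary-mask input, and the bounding-box shape measure are assumptions made for illustration; a real system would obtain the mask from a computer-vision segmentation of the head-mounted camera image and send the values over a MIDI connection.

```python
# Hedged sketch: map the width and height of a segmented mouth opening
# to 7-bit MIDI control-change values (0..127).

def mouth_shape_to_midi_cc(mask, frame_w, frame_h):
    """Return (cc_width, cc_height) in 0..127 from a binary mouth mask.

    `mask` is a list of rows of 0/1 pixels; in a real system it would come
    from a vision algorithm segmenting the mouth opening in each frame.
    """
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return 0, 0  # mouth closed or not detected
    width = max(xs) - min(xs) + 1    # horizontal extent of the opening
    height = max(ys) - min(ys) + 1   # vertical extent of the opening
    # Normalize each extent by the frame size into the MIDI CC range.
    cc_w = min(127, round(127 * width / frame_w))
    cc_h = min(127, round(127 * height / frame_h))
    return cc_w, cc_h
```

For example, an opening spanning half the frame width and a quarter of its height yields CC values of roughly 64 and 32, which could then drive, say, filter cutoff and resonance on a synthesizer.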




Related papers:

Sonification of Facial Actions for Musical Expression

The central role of the face in social interaction and non-verbal commun...

Vision Based Game Development Using Human Computer Interaction

A Human Computer Interface (HCI) System for playing games is designed he...

Ghostfinger: a novel platform for fully computational fingertip controllers

We present Ghostfinger, a technology for highly dynamic up/down fingerti...

Mugeetion: Musical Interface Using Facial Gesture and Emotion

People feel emotions when listening to music. However, emotions are not ...

Problems and Prospects for Intimate Musical Control of Computers

In this paper we describe our efforts towards the development of live pe...

Facial gesture interfaces for expression and communication

Considerable effort has been devoted to the automatic extraction of info...

In a Silent Way: Communication Between AI and Improvising Musicians Beyond Sound

Collaboration is built on trust, and establishing trust with a creative ...