Designing, Playing, and Performing with a Vision-based Mouth Interface

10/07/2020
by Michael J. Lyons, et al.

The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a head-worn miniature camera and computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
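
The pipeline the abstract describes (mouth-opening shape parameters mapped to MIDI control changes) is straightforward to sketch. The following is a minimal illustration, not the authors' implementation: the vision front end is stubbed out with a hypothetical extract_mouth_shape() function, the mido library is assumed for MIDI output, and the controller numbers are arbitrary choices rather than values from the paper.

    # Minimal sketch of a mouth-shape -> MIDI control-change mapping,
    # in the spirit of the Mouthesizer (not the authors' actual code).
    # Assumes a vision step produces normalized mouth-opening
    # parameters in [0.0, 1.0]; extract_mouth_shape() below is a stub.

    import time
    import mido

    def extract_mouth_shape():
        """Hypothetical stand-in for the vision front end.

        A real implementation would grab a frame from the head-worn
        camera and measure the mouth opening; here we return a fixed
        (height, width) pair so the sketch runs anywhere.
        """
        return 0.5, 0.3  # normalized mouth height, width

    def to_cc(value):
        """Clamp a normalized parameter to the 7-bit MIDI CC range."""
        return max(0, min(127, int(value * 127)))

    port = mido.open_output()  # default MIDI output port

    try:
        while True:
            height, width = extract_mouth_shape()
            # Map mouth height and width to two continuous controllers;
            # CC 20 and 21 are arbitrary, not taken from the paper.
            port.send(mido.Message('control_change', control=20, value=to_cc(height)))
            port.send(mido.Message('control_change', control=21, value=to_cc(width)))
            time.sleep(1 / 30)  # roughly video frame rate
    except KeyboardInterrupt:
        port.close()

Routing the two controllers to, say, filter cutoff and resonance on a synthesizer reproduces the kind of gesture-to-sound mapping the paper reports experimenting with.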

Related research

research · 10/07/2020
Sonification of Facial Actions for Musical Expression
The central role of the face in social interaction and non-verbal commun...

research · 02/10/2010
Vision Based Game Development Using Human Computer Interaction
A Human Computer Interface (HCI) System for playing games is designed he...

research · 07/27/2021
Ghostfinger: a novel platform for fully computational fingertip controllers
We present Ghostfinger, a technology for highly dynamic up/down fingerti...

research · 09/14/2018
Mugeetion: Musical Interface Using Facial Gesture and Emotion
People feel emotions when listening to music. However, emotions are not ...

research · 10/04/2020
Problems and Prospects for Intimate Musical Control of Computers
In this paper we describe our efforts towards the development of live pe...

research · 03/02/2023
AI as mediator between composers, sound designers, and creative media producers
Musical professionals who produce material for non-musical stakeholders ...

research · 09/21/2023
Variational Quantum Harmonizer: Generating Chord Progressions and Other Sonification Methods with the VQE Algorithm
This work investigates a case study of using physical-based sonification...
