
Speaker identification from the sound of the human breath

12/01/2017
by Wenbo Zhao et al.
Carnegie Mellon University

This paper examines the speaker identification potential of breath sounds in continuous speech. Speech is largely produced during exhalation. In order to replenish air in the lungs, speakers must periodically inhale. When inhalation occurs in the midst of continuous speech, it is generally through the mouth. Intra-speech breathing behavior has been the subject of much study, including the patterns, cadence, and variations in energy levels. However, an often ignored characteristic is the sound produced during the inhalation phase of this cycle. Intra-speech inhalation is rapid and energetic, performed with open mouth and glottis, effectively exposing the entire vocal tract to enable maximum intake of air. This results in vocal tract resonances evoked by turbulence that are characteristic of the speaker's speech-producing apparatus. Consequently, the sounds of inhalation are expected to carry information about the speaker's identity. Moreover, unlike other spoken sounds which are subject to active control, inhalation sounds are generally more natural and less affected by voluntary influences. The goal of this paper is to demonstrate that breath sounds are indeed bio-signatures that can be used to identify speakers. We show that these sounds by themselves can yield remarkably accurate speaker recognition with appropriate feature representations and classification frameworks.
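
The abstract does not spell out which feature representations or classification frameworks are used, but the pipeline it implies (isolate intra-speech inhalation sounds, compute spectral features, match against per-speaker models) can be sketched as follows. The MFCC features, the per-speaker Gaussian mixture models, and every function and parameter name below are illustrative assumptions for this sketch, not the authors' implementation.

# Minimal sketch, assuming isolated breath-sound wav files per speaker.
# MFCC features and per-speaker GMMs are illustrative choices only,
# not the method described in the paper.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def breath_features(wav_path, sr=16000, n_mfcc=20):
    # Load one inhalation segment and return its per-frame MFCCs.
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # shape: (frames, n_mfcc)

def train_speaker_models(train_segments, n_components=8):
    # train_segments: dict mapping speaker id -> list of breath-only wav paths.
    models = {}
    for speaker, paths in train_segments.items():
        frames = np.vstack([breath_features(p) for p in paths])
        models[speaker] = GaussianMixture(
            n_components=n_components, covariance_type="diag").fit(frames)
    return models

def identify(models, wav_path):
    # Score a test breath segment against every speaker model and return
    # the speaker with the highest average frame log-likelihood.
    frames = breath_features(wav_path)
    return max(models, key=lambda spk: models[spk].score(frames))

A GMM per speaker is the classic text-independent speaker-identification baseline; the same breath-frame features could equally feed a discriminative classifier.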

Related research
05/20/2021

Speaker disentanglement in video-to-speech conversion

The task of video-to-speech aims to translate silent video of lip moveme...
07/14/2021

Localization Based Sequential Grouping for Continuous Speech Separation

This study investigates robust speaker localization for continuous spee...
06/10/2021

Improving multi-speaker TTS prosody variance with a residual encoder and normalizing flows

Text-to-speech systems recently achieved almost indistinguishable qualit...
12/06/2018

Pitch-synchronous DCT features: A pilot study on speaker identification

We propose a new feature, namely, pitch-synchronous discrete cosine trans...
05/22/2020

Identify Speakers in Cocktail Parties with End-to-End Attention

In scenarios where multiple speakers talk at the same time, it is import...
07/31/2019

Quantifying Cochlear Implant Users' Ability for Speaker Identification using CI Auditory Stimuli

Speaker recognition is a biometric modality that uses underlying speech ...
11/15/2017

Human and Machine Speaker Recognition Based on Short Trivial Events

Trivial events are ubiquitous in human-to-human conversations, e.g., cou...