Detection and Analysis of Human Emotions through Voice and Speech Pattern Processing

by Poorna Banerjee Dasgupta, et al.

The ability to modulate vocal sounds and generate speech is one of the features that set humans apart from other living beings. The human voice can be characterized by several attributes, such as pitch, timbre, loudness, and vocal tone. Humans have often been observed to express their emotions by varying these vocal attributes during speech. Hence, deducing human emotions through voice and speech analysis is practically feasible and could be beneficial for improving human conversational and persuasion skills. This paper presents an algorithmic approach to the detection and analysis of human emotions through voice and speech processing. The proposed approach is designed for integration into future artificial intelligence systems to improve human-computer interaction.
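The abstract names pitch and loudness among the vocal attributes that vary with emotion. As an illustrative sketch (not the paper's algorithm), the snippet below estimates these two attributes from a single audio frame: loudness as root-mean-square amplitude, and pitch via a simple autocorrelation peak search. The frame here is a synthetic 200 Hz tone standing in for a recorded speech frame; function names and parameter ranges are assumptions for illustration.

```python
import numpy as np

def rms_loudness(frame):
    """Root-mean-square amplitude, a common simple proxy for loudness."""
    return float(np.sqrt(np.mean(frame ** 2)))

def autocorr_pitch(frame, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (pitch) via the autocorrelation peak.

    fmin/fmax bound the search to a typical human speech pitch range
    (assumed values for illustration).
    """
    frame = frame - np.mean(frame)
    # Keep only non-negative lags of the full autocorrelation.
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)  # smallest lag (highest pitch) considered
    hi = int(sample_rate / fmin)  # largest lag (lowest pitch) considered
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

# Synthetic stand-in for one speech frame: a 200 Hz sine at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
frame = 0.5 * np.sin(2 * np.pi * 200.0 * t)
print(autocorr_pitch(frame, sr))   # close to 200.0 Hz
print(rms_loudness(frame))         # close to 0.354 (0.5 / sqrt(2))
```

In an emotion-analysis pipeline, such per-frame attribute estimates would be tracked over an utterance and their variation fed to a classifier; that downstream step is where the paper's proposed algorithm would operate.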



