Analysis of constant-Q filterbank based representations for speech emotion recognition

11/29/2022
by Premjeet Singh, et al.

This work analyzes constant-Q filterbank-based time-frequency representations for speech emotion recognition (SER). The constant-Q filterbank provides a non-linear spectro-temporal representation with higher frequency resolution at low frequencies. Our investigation reveals how this increased low-frequency resolution benefits SER. A comparative time-domain analysis between short-term mel-frequency spectral coefficients (MFSCs) and constant-Q filterbank-based features, namely the constant-Q transform (CQT) and the continuous wavelet transform (CWT), reveals that constant-Q representations provide higher time-invariance at low frequencies. This yields increased robustness against emotion-irrelevant temporal variations in pitch, especially for low-arousal emotions. The corresponding frequency-domain analysis over different emotion classes shows better resolution of pitch harmonics in constant-Q-based time-frequency representations than in MFSCs. These advantages of constant-Q representations are further consolidated by SER performance in an extensive evaluation of the features over four publicly available databases, with six advanced deep neural network architectures as back-end classifiers. Our findings point to the suitability and potential of constant-Q features for SER.
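To make the comparison concrete, the following is a minimal sketch (not the authors' exact pipeline) of how the two front-end representations discussed above can be extracted with librosa: a constant-Q transform with geometrically spaced bins, and a mel-filterbank spectrogram underlying MFSCs. The file name "speech.wav" and all parameter values (hop length, number of bins, fmin, number of mel bands) are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import librosa

# Hypothetical emotional speech utterance, resampled to 16 kHz.
y, sr = librosa.load("speech.wav", sr=16000)

# Constant-Q transform: bins spaced geometrically with a constant Q-factor,
# giving finer frequency resolution at low frequencies (the pitch region).
C = librosa.cqt(y, sr=sr, hop_length=256,
                fmin=librosa.note_to_hz("C1"),  # ~32.7 Hz lower edge (assumed)
                n_bins=96, bins_per_octave=12)
cqt_db = librosa.amplitude_to_db(np.abs(C), ref=np.max)

# Mel-frequency spectral coefficients (MFSC): log energies of a mel
# filterbank applied to a short-term power spectrogram.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                   hop_length=256, n_mels=64)
mfsc = librosa.power_to_db(S, ref=np.max)

# Both are (frequency bins x frames) arrays that can serve as input
# "images" to a deep neural network back-end classifier for SER.
print(cqt_db.shape, mfsc.shape)
```

The design point the sketch illustrates is the frequency axis: the linear-resolution STFT behind the mel spectrogram allocates the same bandwidth everywhere, whereas the CQT concentrates bins at low frequencies, which is where pitch harmonics of emotional speech lie.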

