Speech Emotion Recognition via an Attentive Time-Frequency Neural Network

10/22/2022
by Cheng Lu, et al.

The spectrogram is commonly used as the input feature of deep neural networks to learn high-level time-frequency patterns of speech signals for speech emotion recognition (SER). Generally, different emotions correspond to specific energy activations both within frequency bands and across time frames on the spectrogram, which indicates that the frequency and time domains are both essential for representing emotion in SER. However, recent spectrogram-based works mainly focus on modeling long-term dependencies in the time domain, which leads to two issues: (1) they neglect the emotion-related correlations within the frequency domain during time-frequency joint learning; (2) they fail to capture the specific frequency bands associated with emotions. To address these issues, we propose an attentive time-frequency neural network (ATFNN) for SER, comprising a time-frequency neural network (TFNN) and time-frequency attention. Specifically, for the first issue, we design a TFNN with a frequency-domain encoder (F-Encoder) based on the Transformer encoder and a time-domain encoder (T-Encoder) based on the Bidirectional Long Short-Term Memory (Bi-LSTM) network. The F-Encoder and T-Encoder model the correlations within frequency bands and time frames, respectively, and are embedded into a time-frequency joint learning strategy to obtain the time-frequency patterns of speech emotions. Moreover, to handle the second issue, we adopt time-frequency attention with a frequency-attention network (F-Attention) and a time-attention network (T-Attention) that focus on the emotion-related frequency-band and time-frame ranges, enhancing the discriminability of speech emotion features.
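The core idea of the time-frequency attention can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation: the vectors `w_f` and `w_t` are stand-ins for the learned parameters of F-Attention and T-Attention, and the random matrix replaces a real log-mel spectrogram. Each attention network scores one axis of the spectrogram (frequency bands or time frames) and re-weights it, emphasising emotion-salient regions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
F, T = 64, 100                       # frequency bins, time frames

spec = rng.random((F, T))            # toy stand-in for a log-mel spectrogram

# F-Attention (sketch): score each frequency band from its profile over time.
w_f = rng.standard_normal(T)         # hypothetical learned projection
alpha_f = softmax(spec @ w_f)        # (F,) weights over frequency bands

# T-Attention (sketch): score each time frame from its spectral profile.
w_t = rng.standard_normal(F)         # hypothetical learned projection
alpha_t = softmax(spec.T @ w_t)      # (T,) weights over time frames

# Re-weight the spectrogram along both axes before further encoding.
spec_attn = (alpha_f[:, None] * spec) * alpha_t[None, :]
```

In the full ATFNN, the attended features would then pass through the F-Encoder (Transformer encoder over frequency bands) and T-Encoder (Bi-LSTM over time frames) for joint learning; here only the attention re-weighting step is shown.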


Related research:

- 08/28/2023: Time-Frequency Transformer: A Novel Time Frequency Joint Learning Method for Speech Emotion Recognition. "In this paper, we propose a novel time-frequency joint learning method f..."
- 11/29/2022: Analysis of constant-Q filterbank based representations for speech emotion recognition. "This work analyzes the constant-Q filterbank-based time-frequency repres..."
- 03/01/2023: Extending DNN-based Multiplicative Masking to Deep Subband Filtering for Improved Dereverberation. "In this paper, we present a scheme for extending deep neural network-bas..."
- 08/07/2019: Pitch-Synchronous Single Frequency Filtering Spectrogram for Speech Emotion Recognition. "Convolutional neural networks (CNN) are widely used for speech emotion r..."
- 06/19/2018: EmotionX-DLC: Self-Attentive BiLSTM for Detecting Sequential Emotions in Dialogue. "In this paper, we propose a self-attentive bidirectional long short-term..."
- 06/12/2023: Multi-View Frequency-Attention Alternative to CNN Frontends for Automatic Speech Recognition. "Convolutional frontends are a typical choice for Transformer-based autom..."
- 11/06/2020: Deep Learning-based Cattle Activity Classification Using Joint Time-frequency Data Representation. "Automated cattle activity classification allows herders to continuously ..."
