Time-Frequency Localization Using Deep Convolutional Maxout Neural Network in Persian Speech Recognition

08/09/2021
by Arash Dehghani, et al.

In this paper, a CNN-based structure for time-frequency localization of information is proposed for Persian speech recognition. Research has shown that the spectro-temporal plasticity of the receptive fields of some neurons in the mammalian primary auditory cortex and midbrain provides localization facilities that improve recognition performance. Over the past few years, much work has been done to localize time-frequency information in ASR systems, exploiting the spatial or temporal invariance properties of methods such as HMMs, TDNNs, CNNs, and LSTM-RNNs. However, most of these models have a large number of parameters and are challenging to train. To this end, we present a structure called the Time-Frequency Convolutional Maxout Neural Network (TFCMNN), in which parallel time-domain and frequency-domain 1D-CMNNs are applied simultaneously and independently to the spectrogram, and their outputs are then concatenated and fed jointly to a fully connected Maxout network for classification. To improve the performance of this structure, we use recently developed methods such as dropout, maxout, and weight normalization. Two sets of experiments were designed and run on the FARSDAT dataset to evaluate the performance of this model against conventional 1D-CMNN models. According to the experimental results, the average recognition score of the TFCMNN models is about 1.6% higher than that of the conventional models. In addition, the average training time of the TFCMNN models is about 17 hours shorter than that of the conventional models. Therefore, consistent with results reported in other studies, time-frequency localization in ASR systems increases system accuracy and speeds up training.
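To make the architecture concrete, below is a minimal PyTorch sketch of the TFCMNN idea described in the abstract: two independent 1D convolutional maxout streams, one along the time axis and one along the frequency axis of a spectrogram patch, whose outputs are concatenated and passed to a fully connected maxout classifier with dropout and weight normalization. This is not the authors' implementation; the layer widths, kernel sizes, number of maxout pieces, input shape (40 frequency bins × 11 frames), and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm


class MaxoutConv1d(nn.Module):
    """Weight-normalized 1D convolution followed by a maxout over `pieces` feature maps."""
    def __init__(self, in_ch, out_ch, kernel, pieces=2):
        super().__init__()
        self.pieces = pieces
        self.conv = weight_norm(nn.Conv1d(in_ch, out_ch * pieces, kernel, padding=kernel // 2))

    def forward(self, x):                          # x: (batch, in_ch, length)
        y = self.conv(x)                           # (batch, out_ch * pieces, length)
        b, cp, l = y.shape
        return y.view(b, cp // self.pieces, self.pieces, l).max(dim=2).values


class MaxoutLinear(nn.Module):
    """Weight-normalized fully connected maxout layer."""
    def __init__(self, in_f, out_f, pieces=2):
        super().__init__()
        self.pieces = pieces
        self.fc = weight_norm(nn.Linear(in_f, out_f * pieces))

    def forward(self, x):                          # x: (batch, in_f)
        y = self.fc(x)                             # (batch, out_f * pieces)
        return y.view(x.size(0), -1, self.pieces).max(dim=2).values


class TFCMNN(nn.Module):
    def __init__(self, n_freq_bins=40, n_frames=11, n_classes=30, hidden=512):
        super().__init__()
        # Time-domain stream: frequency bins act as channels, convolution runs along time.
        self.time_stream = nn.Sequential(
            MaxoutConv1d(n_freq_bins, 64, kernel=5),
            MaxoutConv1d(64, 64, kernel=3),
        )
        # Frequency-domain stream: frames act as channels, convolution runs along frequency.
        self.freq_stream = nn.Sequential(
            MaxoutConv1d(n_frames, 64, kernel=5),
            MaxoutConv1d(64, 64, kernel=3),
        )
        joint_dim = 64 * n_frames + 64 * n_freq_bins
        # Joint fully connected maxout classifier with dropout.
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),
            MaxoutLinear(joint_dim, hidden),
            nn.Dropout(0.5),
            MaxoutLinear(hidden, n_classes),
        )

    def forward(self, spec):                       # spec: (batch, n_freq_bins, n_frames)
        t = self.time_stream(spec)                 # convolve along the time axis
        f = self.freq_stream(spec.transpose(1, 2)) # convolve along the frequency axis
        joint = torch.cat([t.flatten(1), f.flatten(1)], dim=1)
        return self.classifier(joint)


# Example: a batch of 8 spectrogram patches with 40 frequency bins and 11 frames.
logits = TFCMNN()(torch.randn(8, 40, 11))
print(logits.shape)                                # torch.Size([8, 30])
```

The 40×11 patch size and the 30 output classes are placeholders; in practice they would be set to match the feature extraction and phone set used for FARSDAT.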
