1 Introduction
Reinforcement Learning (RL) follows the principles of behaviourist psychology: an agent learns in much the same way as a child learns to perform a new task. RL has seen repeated successes in the past [1, 2]; however, these successes were mostly limited to low-dimensional problems. In recent years, deep learning has significantly advanced the field of RL, with the use of deep learning algorithms within RL giving rise to the field of “deep reinforcement learning”. Deep learning enables RL to operate in high-dimensional state and action spaces, so that it can now be used for complex decision-making problems [3].
Deep RL algorithms have been applied to video and image processing domains ranging from video games [4, 5] to indoor navigation [6]. Very few studies have explored the promise of deep RL in audio processing, particularly in speech processing [7]. In this paper, we focus on this under-researched topic. Specifically, we conduct a case study of the feasibility of deep RL for automatic speech command classification.
A major challenge of deep RL is that it often requires a prohibitively large amount of training time and data to reach reasonable accuracy, making it inapplicable in real-world settings [8]. Leveraging humans to provide demonstrations (known as learning from demonstration, LfD) in RL has recently gained traction as a possible way of speeding up deep RL [9, 10, 11]. In LfD, the actions demonstrated by the human are considered the ground truth labels for a given input game/image frame. The agent closely imitates the demonstrator's policy at the start and later learns to surpass the demonstrator [8]. However, LfD poses a distinct challenge: the agent often has to acquire skills from only a few demonstrations and interactions because of the time and expense of acquiring them [12]. Therefore, LfD is generally not scalable, especially for high-dimensional problems.
Pre-training the underlying deep neural network is another approach to speed up training in deep RL. It enables the RL agent to learn better features, which leads to better performance without changing the policy learning strategies.
2 Related Work
Deep RL often requires prohibitively large amounts of training time and data to achieve reasonable performance, which makes it unsuitable for real-world applications. Pre-training in deep RL is useful to speed up the training process and to reduce the amount of data required [16].
The authors in [17] use sparse variational dropout regularisation for pre-training in RL and show that pre-training allows an RL algorithm to learn optimal policies for high-dimensional continuous control problems in a practical time frame. In [18], the authors combine Deep Belief Networks (DBNs) with RL to take advantage of the unsupervised pre-training phase in DBNs and then use the DBN as the starting point for a neural network function approximator. The authors in [19] show that pre-training deep networks to predict state dynamics leads to faster reinforcement learning.
Although we could not find studies using pre-training of RL for audio, some studies have used pre-training in speech research for Deep Learning (DL) models. Thomas et al. [14] utilised pre-training for Deep Neural Networks (DNNs) and achieved excellent results for speech recognition using only one hour of transcribed training data. Some studies (e.g., [20]) also achieved promising results for cross-lingual acoustic data using pre-training in deep neural networks. In contrast to these studies, we use pre-training for speech-based systems in a deep RL setting.
3 Methodology
Feature learning and policy learning are the two main sub-tasks of deep RL [21]. To investigate pre-training in a deep RL setting, we propose a model for speech command recognition, whose details are explained below.
3.1 Pre-Training
Understanding the impact of pre-training on the performance of RL is the primary aim of this study. Using the Speech Commands dataset, we trained a conventional supervised DNN model, and the model parameters were used to initialise the policy network (see Section 3.3) of the deep RL agent. We refer to this process as pre-training. Pre-training helps the model to converge quickly and helps improve the accuracy of inference for unseen data during the RL execution.
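As a rough illustration, the pre-training step described above could be implemented as a conventional supervised training run whose final weights seed the policy network. The sketch below assumes a Keras model `policy_net` with the architecture of Section 4.3 and uses the pre-training hyper-parameters reported there (SGD, learning rate 0.001, 10 epochs, batch size 8, 10% validation split); the variable names `pretrain_x`/`pretrain_y` are placeholders of ours, not from the paper.

```python
# A minimal sketch of the pre-training step, assuming a Keras policy network
# `policy_net` (architecture of Section 4.3); `pretrain_x`/`pretrain_y` denote
# the 10% split of the Speech Commands data reserved for pre-training.
import tensorflow as tf

def pretrain(policy_net, pretrain_x, pretrain_y):
    """Supervised training whose final weights initialise the RL policy network."""
    policy_net.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
    policy_net.fit(pretrain_x, pretrain_y,
                   epochs=10, batch_size=8, validation_split=0.1)
    return policy_net.get_weights()

# Before RL training starts, the policy network is initialised with these weights:
# policy_net.set_weights(pretrained_weights)
```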
3.2 Deep Reinforcement Learning Framework

The reinforcement learning framework mainly consists of two entities, the “agent” and the “environment”. The action decided by the agent is executed on the environment, which in turn notifies the agent of the reward and the next state. In this work, we focus on deep RL, in which a DNN inside the agent module decides the action to take from the observed state, as illustrated in Figure 1. We modelled this problem as a Markov decision process (MDP) [22], which can be considered as a tuple $(S, A, P, R)$, where $S$ is the state space, $A$ is the action space, $P$ is the state transition function, and $R$ is the reward function. Since the core goal of this problem is classification, we modelled the MDP such that the predicted classes serve as the actions $a \in A$, and the states $s \in S$ are the features of each audio segment in a batch of size $N$. An action decision is carried out by the RL agent, which receives a reward $r_t$ given by the following reward function:

$$r_t = \begin{cases} 1, & \text{if } a_t = g \\ -1, & \text{otherwise} \end{cases} \qquad (1)$$

where $g$ is the ground truth label of the specific speech utterance. We modelled the probability of actions using the following equation:
$$P(a_i \mid s_t) = \mathrm{softmax}(W h + b)_i, \qquad a_t = \arg\max_i P(a_i \mid s_t) \qquad (2)$$

where $i$ is the class index of the maximum probability, $g$ is the ground truth label of the specific speech utterance, and $W$ and $b$ are the weight and bias values of the output layer; $h$ is the output from the previous hidden layer.
The target of the RL agent is to maximise the expected return by learning the following policy:

$$\pi^{*} = \arg\max_{\pi} \, \mathbb{E}_{\pi}\!\left[ R_t \mid s_t \right] \qquad (3)$$

where $\pi$ is the policy of the agent and $R_t$ is the expected reward return at state $s_t$. To update the policy, we utilise the policy network, described in Section 3.3.
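For concreteness, the MDP above can be viewed as a simple classification environment: each step presents one utterance's features as the state, the agent's predicted class is the action, and the reward follows Equation 1. The sketch below is our own illustration of such an environment, not the authors' implementation; the class name and interface are assumptions, while the batch of 50 utterances per episode follows Section 3.3.2.

```python
# Hypothetical sketch of the classification "environment" implied by the MDP above.
# States are MFCC feature matrices, actions are predicted class indices, and the
# reward follows Equation (1).
import numpy as np

class SpeechCommandEnv:
    def __init__(self, features, labels, batch_size=50):
        self.features = features          # array of MFCC matrices, one per utterance
        self.labels = labels              # integer class labels (ground truth g)
        self.batch_size = batch_size      # N = 50 utterances per episode (Section 3.3.2)

    def reset(self):
        # Randomly select a batch of utterances as the state space for this episode.
        idx = np.random.choice(len(self.features), self.batch_size, replace=False)
        self.batch_x, self.batch_y, self.t = self.features[idx], self.labels[idx], 0
        return self.batch_x[self.t]

    def step(self, action):
        # Reward of Equation (1): +1 for a correct prediction, -1 otherwise.
        reward = 1.0 if action == self.batch_y[self.t] else -1.0
        self.t += 1
        done = self.t >= self.batch_size  # episode ends after batch_size steps
        next_state = self.batch_x[self.t] if not done else None
        return next_state, reward, done
```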
3.3 Policy Network
The policy network consists of a speech command recognition model, as shown in Figure 2.

The policy network learns to generate a definite output for a particular input in an RL algorithm. In this work, the policy network takes speech features as the input state and recognises the spoken command. For this, we use a deep network consisting of convolutional (CNN) and Long Short-Term Memory (LSTM) layers. Our choice of CNN-LSTM is motivated by its ability to learn both temporal and frequency components of speech signals.
An LSTM cell in recurrent neural networks (RNNs) is a memory unit for learning the temporal structure of sequential data.
3.3.1 Trainable Model

To calculate the loss, we created a separate network by stacking the loss function on top of the outputs of the policy network and the “target model”, as shown in Figure 3. The target model has the same architecture as the policy network and updates its weights from the policy network once every 200 episodes. This target model is used to infer the target values.
3.3.2 REINFORCE Algorithm
The REINFORCE algorithm is used to approximate the gradient to maximise the objective function mentioned in Equation 3.
Algorithm 1 describes the algorithmic steps followed throughout the RL action prediction process, where $E$ indicates the maximum number of episodes to run (10,000 in our experiments). At the beginning of each episode, a subset of the initial dataset ($N = 50$) is selected randomly as the state space $S$. Here, $s_t$ is the state at instant $t$, $a_t$ is the predicted action for $s_t$ at instant $t$, $r_t$ is the reward obtained by executing the predicted action $a_t$, and $d$ is a boolean flag indicating the end of an episode, which is reached when $t$ equals the step size (50). The arrays $\mathbf{S}$, $\mathbf{A}$, and $\mathbf{R}$ collect the values of $s_t$, $a_t$, and $r_t$ for each step and are consumed by the policy model's training method described in Algorithm 2. Training is carried out at the end of each episode, and the “target model” updates its weights from the policy network after every 200 episodes.
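A hedged sketch of how Algorithms 1 and 2 could be realised is given below. `policy_net` is a Keras softmax classifier such as the policy network of Section 4.3, and `env` follows the environment interface sketched in Section 3.2; the greedy action choice, the undiscounted return-to-go, and the function names are our assumptions, since the paper does not list these details.

```python
# Sketch of Algorithms 1 and 2: one episode roll-out followed by a REINFORCE
# policy-gradient update.
import numpy as np
import tensorflow as tf

def run_episode(env, policy_net):
    """Roll out one episode, collecting states, actions and rewards (Algorithm 1)."""
    states, actions, rewards = [], [], []
    state, done = env.reset(), False
    while not done:
        probs = policy_net(state[np.newaxis, ...], training=False).numpy()[0]
        action = int(np.argmax(probs))                 # predicted class = action (Eq. 2)
        next_state, reward, done = env.step(action)
        states.append(state); actions.append(action); rewards.append(reward)
        state = next_state
    return np.array(states), np.array(actions), np.array(rewards)

def reinforce_update(policy_net, optimizer, states, actions, rewards):
    """One REINFORCE step (Algorithm 2): maximise sum_t log pi(a_t|s_t) * R_t (Eq. 3)."""
    returns = np.cumsum(rewards[::-1])[::-1].astype(np.float32)   # return-to-go per step
    with tf.GradientTape() as tape:
        probs = policy_net(states, training=True)
        acts = tf.constant(actions, dtype=tf.int32)
        idx = tf.stack([tf.range(tf.shape(acts)[0]), acts], axis=1)
        log_probs = tf.math.log(tf.gather_nd(probs, idx) + 1e-8)
        loss = -tf.reduce_mean(log_probs * returns)               # negative policy-gradient objective
    grads = tape.gradient(loss, policy_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy_net.trainable_variables))
    return float(loss)
```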
4 Experimental Setup
4.1 Dataset
To evaluate the proposed framework, we used the publicly available Speech Commands dataset [26], which contains utterances of 30 command keywords spoken by 2,618 speakers. Each utterance is a one-second file with a sampling rate of 16 kHz. The dataset contains two subsets of command keywords, namely “main commands” and “sub commands”. Table 1 shows the distribution of the 30 keywords between the two subsets.
Subset | Commands
---|---
Main Commands | one, two, three, four, five, six, seven, eight, nine, down, go, left, no, off, on, right, stop, up, yes, zero
Sub Commands | bed, bird, cat, dog, happy, house, Marvin, Sheila, tree, wow
Only 10% of the Speech Commands dataset was set aside for the pre-training step; the remaining 90% was used by the RL environment.
4.2 Feature Extraction
We use Mel Frequency Cepstral Coefficients (MFCCs) to represent the speech signal. MFCCs are widely used features in speech and audio analysis [25, 27]. We extract 40 MFCCs from the Mel-spectrograms with a frame length of 2,048 and a hop length of 512 using Librosa [28].
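For illustration, the extraction step above corresponds roughly to the following Librosa call; the file path is a placeholder of ours for a single Speech Commands WAV file.

```python
# Illustrative MFCC extraction matching the settings described above.
import librosa

path = "left_example.wav"                                 # placeholder path to one utterance
y, sr = librosa.load(path, sr=None)                       # keep the native 16 kHz sampling rate
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40,
                            n_fft=2048, hop_length=512)   # shape: (40, n_frames)
```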


4.3 Model Recipe
We use the TensorFlow library to implement the policy network, which is a combination of CNN and LSTM layers. The initial layers are 1D convolution layers wrapped in TimeDistributed wrappers with filter sizes of 16 and 8, respectively, followed by a max-pooling layer. The feature maps are then passed to an LSTM layer of 50 cells for learning the temporal features. A dropout layer with a rate of 0.3 is used for regularisation. Finally, three fully connected layers of 512, 256, and 64 units, respectively, are added before the softmax layer.
The input to the model is a matrix of $n \times m$, where $n$ is the number of MFCCs (40) and $m$ is the number of frames (87) in the MFCC spectrum. We use stochastic gradient descent as the optimiser. The pre-training steps were carried out with stochastic gradient descent as the optimiser with a learning rate of 0.001. The model was trained for 10 epochs with a batch size of 8 and a 10% validation split.
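A minimal Keras sketch of this architecture is given below, interpreting "filter sizes of 16 and 8" as the numbers of filters; the kernel sizes, the activation functions, and the reshaping of the 40 x 87 MFCC matrix into a (frames, coefficients, 1) tensor for the TimeDistributed wrapper are our assumptions.

```python
# Sketch of the CNN-LSTM policy network described above (not the authors' exact code).
import tensorflow as tf
from tensorflow.keras import layers, models

N_MFCC, N_FRAMES = 40, 87                                          # 40 MFCCs x 87 frames

def build_policy_network(n_classes: int) -> tf.keras.Model:
    inputs = layers.Input(shape=(N_FRAMES, N_MFCC, 1))
    x = layers.TimeDistributed(layers.Conv1D(16, 3, activation="relu"))(inputs)
    x = layers.TimeDistributed(layers.Conv1D(8, 3, activation="relu"))(x)
    x = layers.TimeDistributed(layers.MaxPooling1D(2))(x)
    x = layers.TimeDistributed(layers.Flatten())(x)                 # (batch, frames, features)
    x = layers.LSTM(50)(x)                                          # temporal modelling
    x = layers.Dropout(0.3)(x)                                      # regularisation
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)      # action probabilities
    return models.Model(inputs, outputs)

policy_net = build_policy_network(n_classes=30)                     # e.g., the 30-class task
```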
The “target model” does not update its weights during the training phase; instead, it copies the weights from the policy network after every 200 episodes. The “loss” tensor in the trainable model takes the outputs from the “target model” and the policy network as inputs, then calculates the loss at the end of each episode. This loss is minimised using the Adam optimiser, which adjusts the weights of the policy network towards the optimum.
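Concretely, the periodic synchronisation described above can be realised as a simple weight copy; the 200-episode interval comes from the text, while the function and variable names below are ours.

```python
# Periodic target-model synchronisation with the policy network.
def maybe_sync_target(policy_net, target_net, episode, interval=200):
    """Copy the policy network's weights into the target model every `interval` episodes."""
    if episode % interval == 0:
        target_net.set_weights(policy_net.get_weights())
```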
The accuracy of each episode, $A_e$, is calculated as

$$A_e = \frac{n_c}{n_s} \times 100\% \qquad (4)$$

where $n_c$ is the number of correct predictions and $n_s$ is the total number of steps per episode. We use $n_s = 50$.
5 Results
To benchmark the RL accuracy, we train a DNN with the same model configuration as that of our policy network. We use 80% of the data for training and 20% for testing, with stochastic gradient descent as the optimiser and a batch size of 32. We present the comparison results in Table 2.
Classes | Binary | 20 Classes | 30 Classes
---|---|---|---
Accuracy (%) | | |
Experiments were carried out to identify the impact of pre-training on the training time and accuracy of the RL agent. Three subsets of the Speech Commands dataset were selected, namely “binary”, “20 class”, and “30 class”. The binary subset contains only the speech commands “left” and “right”. The 20-class and 30-class subsets contain the “main” commands and the combination of the “main” and “sub” commands of the Speech Commands dataset, respectively.
We perform experiments using the proposed deep RL model on each subset and report the results in Table 3. Table 3 provides the mean accuracy over the initial 200 episodes with and without pre-training. We observe that for all classification subsets, RL without pre-training achieves considerably lower accuracy over the initial 200 episodes, whereas with pre-training the same number of episodes yields significantly higher accuracy. This shows that pre-training allows us to reduce the training time significantly.
Table 3 also shows the mean accuracy of the last 5 episodes after 10,000 episodes for the “with” and “without” pre-training scenarios. Pre-trained RL after 10,000 episodes surpasses the benchmark results in every experiment reported in Table 2. The improvement columns show the increase in accuracy of the “with pre-training” scenario with respect to the “without pre-training” scenario. Each improvement is significant, which further strengthens our finding that pre-training can reduce the training time for deep RL.
# Classes | Initial 200 episodes (without) | Initial 200 episodes (with) | Improvement | After 10,000 episodes (without) | After 10,000 episodes (with) | Improvement
---|---|---|---|---|---|---
2 | 60.13 | 81.24 | 21.11 | 80 | 100 | 20
20 | 7.43 | 52.11 | 44.68 | 25.71 | 87.76 | 62.04
30 | 6.05 | 41.92 | 35.87 | 26.12 | 79.59 | 53.47
To further demonstrate the improvement in training time, the accuracy of each episode is plotted against the episode number in Figure 4. One can observe that pre-training increases the overall accuracy in each of the three experiments. Moreover, when the rate of change of accuracy within the initial 2,000 episodes is examined, it can be seen that it is higher in all the pre-trained experiments. This indicates that the number of episodes needed to achieve a given accuracy is reduced by pre-training; hence, the training efficiency is improved.
A lower standard deviation indicates higher consistency. The standard deviation of the accuracy is plotted against the episode number in Figure 5, where it can be observed that the standard deviation decreases more rapidly in all the pre-trained experiments. This suggests that pre-training makes the predictions consistent earlier in training.
6 Conclusions
In this paper, we propose the use of pre-training in deep reinforcement learning for speech command recognition. The introduced framework uses pre-training for feature learning in a reinforcement learning problem. The feature knowledge learned through pre-training is then used by policy learning during the RL execution to achieve higher accuracy within a reduced time. We evaluate the proposed RL model using the Speech Commands dataset for three classification scenarios: binary (two speech commands), 20-class, and 30-class tasks. The results show that pre-training improves the time-efficiency of RL, helping to achieve considerably better results within a significantly smaller number of episodes compared to RL without pre-training.
References
- [1] S. Singh, D. Litman, M. Kearns, and M. Walker, “Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system,” Journal of Artificial Intelligence Research, vol. 16, pp. 105–133, 2002.
- [2] G. Tesauro, “Temporal difference learning and TD-Gammon,” Communications of the ACM, vol. 38, no. 3, pp. 58–68, 1995.
- [3] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, “A brief survey of deep reinforcement learning,” arXiv preprint arXiv:1708.05866, 2017.
- [4] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., “Mastering the game of go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, p. 484, 2016.
- [5] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, p. 529, 2015.
- [6] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi, “Target-driven visual navigation in indoor scenes using deep reinforcement learning,” in Proceedings ICRA. IEEE, 2017, pp. 3357–3364.
- [7] S. Latif, R. Rana, S. Khalifa, R. Jurdak, J. Qadir, and B. W. Schuller, “Deep representation learning in speech processing: Challenges, recent advances, and future trends,” arXiv preprint arXiv:2001.00378, 2020.
- [8] G. V. Cruz Jr, Y. Du, and M. E. Taylor, “Pre-training neural networks with human demonstrations for deep reinforcement learning,” arXiv preprint arXiv:1709.04083, 2017.
- [9] O. Vinyals, T. Ewalds, S. Bartunov, P. Georgiev, A. S. Vezhnevets, M. Yeo, A. Makhzani, H. Küttler, J. Agapiou, J. Schrittwieser et al., “StarCraft II: A new challenge for reinforcement learning,” arXiv preprint arXiv:1708.04782, 2017.
- [10] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, I. Osband et al., “Deep q-learning from demonstrations,” in Proceedings AAAI, 2018.
- [11] V. Kurin, S. Nowozin, K. Hofmann, L. Beyer, and B. Leibe, “The Atari grand challenge dataset,” arXiv preprint arXiv:1705.10998, 2017.
- [12] S. Calinon, “Learning from demonstration (programming by demonstration),” Encyclopedia of Robotics, pp. 1–8, 2018.
- [13] D. Yu, L. Deng, and G. Dahl, “Roles of pre-training and fine-tuning in context-dependent dbn-hmms for real-world speech recognition,” in Proc. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2010.
- [14] S. Thomas, M. L. Seltzer, K. Church, and H. Hermansky, “Deep neural network features and semi-supervised training for low resource speech recognition,” in 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013, pp. 6704–6708.
- [15] Y. Liu and K. Kirchhoff, “Graph-based semi-supervised acoustic modeling in dnn-based speech recognition,” in 2014 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2014, pp. 177–182.
- [16] X. Zhang and H. Ma, “Pretraining deep actor-critic reinforcement learning algorithms with expert demonstrations,” arXiv preprint arXiv:1801.10459, 2018.
- [17] T. Blau, L. Ott, and F. Ramos, “Improving reinforcement learning pre-training with variational dropout,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 4115–4122.
- [18] F. Abtahi and I. Fasel, “Deep belief nets as function approximators for reinforcement learning,” in Proceedings Workshops AAAI, 2011.
- [19] C. W. Anderson, M. Lee, and D. L. Elliott, “Faster reinforcement learning after pretraining deep networks to predict state dynamics,” in Proceedings IJCNN. IEEE, 2015, pp. 1–7.
- [20] D. Imseng, B. Potard, P. Motlicek, A. Nanchen, and H. Bourlard, “Exploiting un-transcribed foreign data for speech recognition in well-resourced languages,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 2322–2326.
- [21] G. V. d. l. Cruz Jr, Y. Du, and M. E. Taylor, “Pre-training neural networks with human demonstrations for deep reinforcement learning,” arXiv preprint arXiv:1709.04083, 2019.
- [22] J. R. Tetreault and D. J. Litman, “A reinforcement learning approach to evaluating state representations in spoken dialogue systems,” Speech Communication, vol. 50, no. 8-9, pp. 683–696, 2008.
- [23] S. Latif, M. Usman, R. Rana, and J. Qadir, “Phonocardiographic sensing using deep learning for abnormal heartbeat detection,” IEEE Sensors Journal, vol. 18, no. 22, pp. 9393–9400, 2018.
- [24] S. Latif, R. Rana, S. Khalifa, R. Jurdak, J. Epps, and B. W. Schuller, “Multi-task semi-supervised adversarial autoencoding for speech emotion recognition,” IEEE Transactions on Affective Computing, 2020.
- [25] S. Latif, R. Rana, S. Khalifa, R. Jurdak, and J. Epps, “Direct modelling of speech emotion from raw speech,” in Proc. Interspeech 2019, 2019, pp. 3920–3924. [Online]. Available: http://dx.doi.org/10.21437/Interspeech.2019-3252
- [26] P. Warden, “Speech commands: A dataset for limited-vocabulary speech recognition,” arXiv preprint arXiv:1804.03209, 2018.
- [27] S. Davis and P. Mermelstein, “Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences,” IEEE transactions on acoustics, speech, and signal processing, vol. 28, no. 4, pp. 357–366, 1980.
- [28] B. McFee, C. Raffel, D. Liang, D. P. Ellis, M. McVicar, E. Battenberg, and O. Nieto, “librosa: Audio and music signal analysis in python,” in Proceedings of the 14th python in science conference, vol. 8, 2015.