Utterance-level Permutation Invariant Training with Latency-controlled BLSTM for Single-channel Multi-talker Speech Separation

12/25/2019 · by Lu Huang, et al. · TOM.COM Corporation · Tsinghua University

Utterance-level permutation invariant training (uPIT) has achieved promising progress on single-channel multi-talker speech separation task. Long short-term memory (LSTM) and bidirectional LSTM (BLSTM) are widely used as the separation networks of uPIT, i.e. uPIT-LSTM and uPIT-BLSTM. uPIT-LSTM has lower latency but worse performance, while uPIT-BLSTM has better performance but higher latency. In this paper, we propose using latency-controlled BLSTM (LC-BLSTM) during inference to fulfill low-latency and good-performance speech separation. To find a better training strategy for BLSTM-based separation network, chunk-level PIT (cPIT) and uPIT are compared. The experimental results show that uPIT outperforms cPIT when LC-BLSTM is used during inference. It is also found that the inter-chunk speaker tracing (ST) can further improve the separation performance of uPIT-LC-BLSTM. Evaluated on the WSJ0 two-talker mixed-speech separation task, the absolute gap of signal-to-distortion ratio (SDR) between uPIT-BLSTM and uPIT-LC-BLSTM is reduced to within 0.7 dB.




1 Introduction

Many advancements have been observed in monaural multi-talker speech separation [29, 25, 14, 16, 6, 21, 35, 17, 20], known as the cocktail party problem [13], which is meaningful to many practical applications, such as human-machine interaction and automatic meeting transcription. With the development of deep learning [18], many innovations have been proposed, such as deep clustering [14, 16], the deep attractor network [6], the time-domain audio separation network [21, 20] and permutation invariant training (PIT) [35, 17].

Deep clustering [14, 16] projects the time-frequency (TF) units into an embedding space and uses a clustering algorithm to generate a partition of the TF units, under the assumption that each TF bin belongs to only one speaker. However, separation in the embedding space may not be optimal.

The deep attractor network [6] also learns a high-dimensional representation of the mixed speech, with attractor points in the embedding space that attract all the TF units corresponding to the target speaker. However, estimating the attractor points has a high computational cost.

PIT [35] is an end-to-end speech separation method, which gives an elegant solution to the training label permutation problem [6, 35]. It is extended to utterance-level PIT (uPIT) [17] with an utterance-level cost function to further improve the performance. Because uPIT is simple and performs well, it has drawn a lot of attention [21, 20, 34, 24, 5, 2, 27, 26, 3, 32]. LSTM [15, 8, 9] and BLSTM [11, 10] are widely used for uPIT to exploit utterance-level long time dependency. Although uPIT-BLSTM outperforms uPIT-LSTM, its inference latency is as long as the utterance itself, which hampers its application in many scenarios.

To reduce the latency of BLSTM-based acoustic models on automatic speech recognition (ASR) tasks, the context-sensitive chunk (CSC) [4], i.e. a chunk with appended contextual frames, was proposed for both training and decoding. In [36], CSC-BLSTM is extended to latency-controlled BLSTM (LC-BLSTM), which directly carries over the left contextual information from the previous chunk of the same utterance to reduce the computational cost and improve the recognition accuracy.

In this paper, inspired by LC-BLSTM-based acoustic models on ASR tasks, we propose uPIT-LC-BLSTM for low-latency speech separation: during inference, an utterance is split into non-overlapping chunks with appended future contextual frames, which reduces the latency from utterance-level to chunk-level. Chunk-level PIT (cPIT) training of BLSTM is also proposed, but preliminary experiments indicate that cPIT is inferior to uPIT. uPIT-LC-BLSTM propagates the BLSTM's forward hidden states across chunks, which helps keep the speaker assignment consistent across chunks. In addition, an inter-chunk speaker tracing (ST) algorithm is proposed to further improve the performance of uPIT-LC-BLSTM. Experiments on the WSJ0 two-talker mixed-speech separation task show that uPIT-LC-BLSTM with ST loses only a little performance compared to uPIT-BLSTM.

The paper starts by briefly describing prior work in Section 2. The cPIT, uPIT-LC-BLSTM and speaker tracing algorithm are described in Section 3. The experimental setup and results are discussed in Section 4. Section 5 presents the conclusions.

2 Prior Work

2.1 Monaural Speech Separation

The goal of single-channel multi-talker speech separation is to separate the individual source signals from the mixed audio. Let us denote the $S$ source signals as $x_s(t), s = 1, \dots, S$; the microphone receives the mixed audio $y(t) = \sum_{s=1}^{S} x_s(t)$. The separation is often carried out in the time-frequency (TF) domain, where the task is to reconstruct the short-time Fourier transform (STFT) of each individual source signal. The STFT of the mixed signal is $Y(t,f) = \sum_{s=1}^{S} X_s(t,f)$, where $Y(t,f)$ is the TF unit at frame $t$ and frequency $f$.

The STFT reconstruction of each source can be done by estimating masks $\hat{M}_s$. We use the phase sensitive mask (PSM) here: $M_s(t,f) = \frac{|X_s(t,f)|}{|Y(t,f)|} \cos(\theta_y(t,f) - \theta_s(t,f))$, where $|Y|$ and $\theta_y$ are the magnitude and phase of $Y$ respectively, and $\theta_s$ is the phase of $X_s$. With an estimated mask $\hat{M}_s$ and the mixed STFT, the STFT of source $s$ is $\hat{X}_s(t,f) = \hat{M}_s(t,f)\,|Y(t,f)|\,e^{j\theta_y(t,f)}$, where $j$ is the imaginary unit.
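As a concrete sketch of the PSM and the reconstruction above, the following NumPy snippet computes the mask and applies it to the mixture STFT (the function names and the small floor on the mixture magnitude are ours, added to avoid division by zero):

```python
import numpy as np

def phase_sensitive_mask(source_stft, mixture_stft, eps=1e-8):
    """PSM: |X_s| / |Y| * cos(theta_y - theta_s), computed element-wise."""
    mag_ratio = np.abs(source_stft) / np.maximum(np.abs(mixture_stft), eps)
    phase_diff = np.angle(mixture_stft) - np.angle(source_stft)
    return mag_ratio * np.cos(phase_diff)

def apply_mask(mask, mixture_stft):
    """Reconstruct a source STFT: M_s * |Y| * exp(j * theta_y)."""
    return mask * np.abs(mixture_stft) * np.exp(1j * np.angle(mixture_stft))
```

When only one speaker is active, the source equals the mixture, the PSM is 1 everywhere, and applying it returns the mixture unchanged.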

The straightforward deep-learning-based mask separation method is to use a neural network to estimate the masks for the $S$ source signals and then minimize the mean square error (MSE) between the estimated and target magnitudes. For the PSM, the cost function is:

$$J = \frac{1}{B} \sum_{s=1}^{S} \left\| \hat{M}_s \odot |Y| - |X_s| \odot \cos(\theta_y - \theta_s) \right\|_F^2,$$

where $B = S \times T \times F$ is the total number of TF units, $\odot$ is the element-wise product and $\|\cdot\|_F$ is the Frobenius norm.

Figure 1: The architecture of cPIT, whose main idea is to split an utterance into chunks. Each main chunk has $N_c$ frames, with $N_l$ left and $N_r$ right appended contextual frames. For the first/last chunk of each utterance, no left/right contextual frames are appended. The appended frames are only used to provide contextual information and do not generate error signals during training. LC-BLSTM does not need left contextual frames.

2.2 Utterance-level Permutation Invariant Training

The cost function mentioned above works for some simple cases. For example, when an a priori convention can be learned, we can force the speaker with higher energy (or the male speaker) to be the first output, and the one with lower energy (or the female speaker) to be the second output. However, when the energy difference is small or the two speakers have the same gender, a problem named label permutation [6, 35] arises, where the permutation of the two output streams is unknown.

PIT [35] eliminates the label permutation problem, but it faces another problem named speaker tracing, which is solved by extending PIT with an utterance-level cost function, i.e. uPIT [17], to force the frames of the same speaker into the same output stream. The cost function of uPIT is:

$$J_{\text{uPIT}} = \frac{1}{B} \sum_{s=1}^{S} \left\| \hat{M}_{\varphi^*(s)} \odot |Y| - |X_s| \odot \cos(\theta_y - \theta_s) \right\|_F^2,$$

where $\varphi^*$ is the permutation that minimizes the separation error:

$$\varphi^* = \arg\min_{\varphi \in \mathcal{P}} \sum_{s=1}^{S} \left\| \hat{M}_{\varphi(s)} \odot |Y| - |X_s| \odot \cos(\theta_y - \theta_s) \right\|_F^2,$$

where $\mathcal{P}$ is the set of all permutations on $\{1, \dots, S\}$. As illustrated in the area surrounded by dotted lines in Figure 1, PIT computes the MSE between estimated and target magnitudes under all possible permutations, and the minimum error is used for back propagation.
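The permutation search at the heart of (u)PIT can be sketched as follows. Here `est_mags` and `ref_mags` stand for the per-speaker estimated and target (phase-sensitive) magnitudes; the names are ours, and the exhaustive search is only practical for small speaker counts:

```python
import numpy as np
from itertools import permutations

def upit_loss(est_mags, ref_mags):
    """Evaluate the MSE under every output-to-source permutation over the
    whole utterance and keep the minimum (the uPIT criterion)."""
    S = len(est_mags)
    best_err, best_perm = np.inf, None
    for perm in permutations(range(S)):
        # pair output stream perm[s] with reference source s
        err = sum(np.mean((est_mags[perm[s]] - ref_mags[s]) ** 2)
                  for s in range(S))
        if err < best_err:
            best_err, best_perm = err, perm
    return best_err, best_perm
```

In a real system the minimum error would be the training loss; here the chosen permutation is returned as well, since it is what speaker tracing later relies on.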


2.3 Low-latency BLSTM

BLSTM is often used in uPIT-based speech separation systems for its capacity to model long time dependency in both forward and backward directions [17, 24, 34, 5, 2, 27, 26, 3], but its latency is as long as the utterance. Since BLSTM is one of the state-of-the-art acoustic models on ASR tasks [11, 10, 4, 36, 33, 31, 30, 7, 19, 12], there has been related work addressing the latency problem [4, 36, 23].

In [4], a context-sensitive chunk (CSC) with left and right contextual frames to initialize the forward and backward LSTMs is used for both training and decoding, which reduces the decoding latency from utterance-level to chunk-level. CSC-BLSTM is extended to LC-BLSTM by directly carrying over the left contextual information from the previous chunk of the same utterance into the current chunk [36]; the latency is then determined by the number of right contextual frames, which users can adjust to trade off performance against latency.

3 Proposed Methods

3.1 Chunk-level PIT

As illustrated in Figure 1, the proposed cPIT splits an utterance into context-sensitive chunks, where the main chunks (without contextual frames) do not overlap. Since the lengths of the chunks are very close (no longer than $N_c + N_l + N_r$ frames), zero padding is rarely needed during training, so training can be sped up significantly compared to uPIT. Besides, we evaluate whether cPIT is beneficial for chunk-level inference.
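The chunk splitting described above can be sketched as follows (a hypothetical helper of our own; `nc`, `nl` and `nr` denote the main-chunk length and the numbers of left/right contextual frames, truncated at the utterance boundaries as the figure caption describes):

```python
def split_into_chunks(frames, nc, nl, nr):
    """Split an utterance (a sequence of T frames) into context-sensitive
    chunks: non-overlapping main chunks of nc frames, each padded with up
    to nl left and nr right contextual frames."""
    chunks = []
    T = len(frames)
    for start in range(0, T, nc):
        end = min(start + nc, T)
        left = max(start - nl, 0)
        right = min(end + nr, T)
        # store the chunk plus the main chunk's offsets within it, so the
        # contextual frames can be discarded after the network runs
        chunks.append((frames[left:right], start - left, end - left))
    return chunks
```

Only the main-chunk frames generate error signals during training (or outputs during inference); the contextual frames merely warm up the recurrent states.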


3.2 Chunk-level Inference

Inference can be done at the utterance level or the chunk level. If we simply infer at the chunk level, i.e. use CSC-BLSTM, the output streams of the main chunks in the same utterance are spliced to compose utterance-level separated results. However, the permutation may change across neighboring chunks. For instance, in the two-speaker case, the output permutation may be 1-1 (the first output stream corresponds to the first speaker) and 2-2 in the previous chunk, while it changes to 1-2 (the first output stream corresponds to the second speaker) and 2-1 in the current chunk. If the output streams of these two chunks are simply spliced, the separated speech suffers from a speaker inconsistency problem.

The first proposed method to alleviate the problem is to replace CSC-BLSTM with LC-BLSTM. The only difference between them is that LC-BLSTM directly copies the forward hidden states from the previous chunk and does not need left contextual frames, while CSC-BLSTM uses left contextual frames to initialize the forward LSTM. Both need right contextual frames to initialize the backward LSTM. Using LC-BLSTM has two advantages. Firstly, the computational cost is reduced by removing the left initialization operation. Secondly, it keeps the forward hidden states continuous across neighboring chunks, which is beneficial for modeling a broader left context and to some extent alleviates the speaker inconsistency problem.
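A minimal sketch of this inference scheme, with generic single-frame recurrences standing in for real (B)LSTM layers (the helper and its signature are our own illustration, not the paper's implementation): the forward state persists across main chunks, while the backward state is re-initialized on the right contextual frames of every chunk.

```python
def lc_inference(frames, nc, nr, fwd_step, bwd_step):
    """LC-BLSTM-style chunked inference. fwd_step/bwd_step are
    single-frame recurrences: (state, frame) -> (new_state, output);
    a state of None means "uninitialized"."""
    T = len(frames)
    outputs = []
    fwd_state = None  # carried over across chunks, never reset
    for start in range(0, T, nc):
        end = min(start + nc, T)
        # forward pass over the main chunk only; state persists
        fwd_out = []
        for t in range(start, end):
            fwd_state, o = fwd_step(fwd_state, frames[t])
            fwd_out.append(o)
        # backward pass: warm up on nr right contextual frames,
        # then cover the main chunk in reverse
        bwd_state, bwd_out = None, []
        for t in range(min(end + nr, T) - 1, start - 1, -1):
            bwd_state, o = bwd_step(bwd_state, frames[t])
            if t < end:
                bwd_out.append(o)
        bwd_out.reverse()
        outputs.extend(f + b for f, b in zip(fwd_out, bwd_out))
    return outputs
```

Because `fwd_state` is never reset, the forward direction sees the full left history of the utterance, exactly the property that keeps the speaker assignment more stable across chunk borders.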

With the model trained at the chunk level or the utterance level, the cPIT-LC-BLSTM or uPIT-LC-BLSTM method is obtained. These and other denotations are listed in Table 1.

Denotation      Model   Training Strategy     Inferring Method
uPIT-LSTM       LSTM    utterance-level PIT   utterance-level
uPIT-BLSTM      BLSTM   utterance-level PIT   utterance-level
uPIT-CSC-BLSTM  BLSTM   utterance-level PIT   chunk-level (CSC)
uPIT-LC-BLSTM   BLSTM   utterance-level PIT   chunk-level (LC)
cPIT-BLSTM      BLSTM   chunk-level PIT       utterance-level
cPIT-CSC-BLSTM  BLSTM   chunk-level PIT       chunk-level (CSC)
cPIT-LC-BLSTM   BLSTM   chunk-level PIT       chunk-level (LC)
Table 1: Denotations used in the rest of the paper.

3.3 Inter-chunk Speaker Tracing

In [35], there is a huge performance gap between default assign (without ST) and optimal assign (assuming that all speakers are correctly traced), which can be reduced with ST algorithms.

In this paper, a simple ST algorithm is adopted that exploits the overlapping frames between two neighboring chunks. Let us denote $O_1^p$ and $O_2^p$ as the two output streams over the overlapping frames in the previous chunk, and $O_1^c$ and $O_2^c$ as those in the current chunk. We compute the pairwise MSEs as PIT does:

$$J_{\text{same}} = \text{MSE}(O_1^p, O_1^c) + \text{MSE}(O_2^p, O_2^c), \qquad J_{\text{swap}} = \text{MSE}(O_1^p, O_2^c) + \text{MSE}(O_2^p, O_1^c).$$

If $\gamma \cdot J_{\text{swap}} < J_{\text{same}}$, we consider that a change of output permutation has occurred, where $\gamma$ is a penalty factor set to 2.0 by default. There are two reasons to set $\gamma$ to 2.0 instead of 1.0. Firstly, we believe that the probability of the permutation changing is smaller than that of it staying the same, especially when LC-BLSTM is used. Secondly, it adds robustness to the system: for example, if both speakers are silent in the overlapping frames, the two output streams are almost identical, and setting $\gamma$ to 1.0 may lead to a false detection of a permutation change.
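The tracing decision can be sketched as follows (function and variable names are ours; a change is declared only when the swapped pairing beats the same pairing by the penalty factor):

```python
import numpy as np

def permutation_changed(prev_ov, cur_ov, gamma=2.0):
    """Inter-chunk speaker tracing on the overlapping frames.
    prev_ov / cur_ov: pairs [O1, O2] of the two output streams over the
    overlap in the previous and current chunk."""
    mse = lambda a, b: np.mean((np.asarray(a) - np.asarray(b)) ** 2)
    j_same = mse(prev_ov[0], cur_ov[0]) + mse(prev_ov[1], cur_ov[1])
    j_swap = mse(prev_ov[0], cur_ov[1]) + mse(prev_ov[1], cur_ov[0])
    # gamma > 1 biases the decision toward "no change", adding robustness
    # when both streams are nearly identical (e.g. silence)
    return gamma * j_swap < j_same
```

When a change is detected, the two output streams of the current chunk are simply swapped before splicing.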

PIT Model Average M-F F-F M-M
Mixtures 0.06 0.06 0.07 0.06
uPIT-LSTM [17] 7.0 - - -
uPIT-BLSTM [17] 9.4 - - -
Our uPIT-LSTM 7.16 9.02 3.80 5.77
Our uPIT-BLSTM 9.46 10.90 7.61 8.11
Table 2: SDR improvements (dB) for original mixtures and uPIT-(B)LSTM baselines. M/F stands for male/female.

4 Experiments and Results

4.1 Experimental Setup

The dataset is the same as the two-talker mixed dataset in [14, 16, 21, 35, 17], except that the sample rate is 16 kHz. It is generated by mixing utterances from the WSJ0 corpus at signal-to-noise ratios uniformly chosen between 0 dB and 5 dB, and has 20k, 5k and 3k mixtures for training, validation and testing respectively. The 30-hour training set and the 10-hour validation set are generated from si_tr_s using 49 male and 51 female speakers. The 5-hour testing set is generated from si_dt_05 and si_et_05 using 16 speakers.

The input to the model is the magnitude of the mixture's STFT, extracted with a 32 ms frame size and 16 ms shift, giving 257 frequency sub-bands. The PIT model has a fully-connected layer, 3 (B)LSTM layers and two output layers. The dimension of an LSTM cell is 640, so each BLSTM layer has 1280 units. We use ReLU [22] as the activation function of the two output layers, and the two output masks have the same dimension as the input. The input mixed magnitude is multiplied by the two masks respectively to get two separated magnitudes, and then the phase of the mixed speech and the inverse STFT are used to get the separated audios. Signal-to-distortion ratio (SDR) [28] is used to evaluate the separation performance.

TensorFlow [1] is used to build the systems. The validation set is only used for tuning the learning rate: the learning rate is multiplied by 0.7 whenever the loss on the validation set increases. The initial learning rate is 0.0005. Dropout is applied to the BLSTM layers with a rate of 0.5. For faster evaluation, all models are trained for 32 epochs. When training at the utterance level, each minibatch contains 10 random utterances; at the chunk level, each minibatch contains 100 random chunks.

4.2 uPIT Baselines

Table 2 presents the SDR improvements of the baseline uPIT-(B)LSTM models. uPIT-BLSTM is clearly far better than uPIT-LSTM. It is also noticeable that same-gender separation is more difficult, especially female-female separation. Although our model is smaller than that in [17] and trained for fewer epochs, the obtained results are comparable with the baseline results in [17].

4.3 cPIT vs. uPIT

Method SDR Abs. Gap
cPIT-CSC-BLSTM 8.00 -1.46
cPIT-CSC-BLSTM + ST 8.72 -0.74
cPIT-LC-BLSTM 8.61 -0.85
cPIT-LC-BLSTM + ST 8.71 -0.75
cPIT-BLSTM 8.73 -0.73
uPIT-CSC-BLSTM 8.09 -1.37
uPIT-CSC-BLSTM + ST 9.10 -0.36
uPIT-LC-BLSTM 8.98 -0.48
uPIT-LC-BLSTM + ST 9.16 -0.30
uPIT-BLSTM 9.46 -
Table 3: Average SDR improvements (dB) for BLSTM trained with cPIT or uPIT. Speaker tracing (ST) is used to improve the performance of CSC-BLSTM and LC-BLSTM. The absolute gap (Abs. Gap) is compared to uPIT-BLSTM.
Figure 2: The permutation changing problem is alleviated by LC-BLSTM. The mixed spectrogram is shown in the first row and the clean spectrograms of the two speakers in the last row. The second, third and fourth rows are the separated spectrograms using uPIT-CSC-BLSTM, uPIT-LC-BLSTM and uPIT-BLSTM respectively. The vertical red lines are the chunk borders. In the second row, the speakers exchange in the last chunk when using CSC-BLSTM; this is solved in the third row when using LC-BLSTM.

As described before, the model for inference can be trained at the utterance level or the chunk level. We trained one BLSTM at the chunk level and compared it with the baseline BLSTM trained at the utterance level; the SDR results are presented in Table 3. Here, we consider four inferring methods: cPIT-CSC-BLSTM, uPIT-CSC-BLSTM, cPIT-BLSTM and uPIT-BLSTM. Generally, the model trained at the utterance level performs better.


4.4 CSC-BLSTM vs. LC-BLSTM

Here we compare the two chunk-level inferring methods: CSC-BLSTM and LC-BLSTM. As shown in Table 3, LC-BLSTM outperforms CSC-BLSTM significantly, with improvements of 0.61 dB for the model trained at the chunk level and 0.89 dB for the model trained at the utterance level. Besides, uPIT-LC-BLSTM significantly outperforms cPIT-LC-BLSTM.

To show that LC-BLSTM helps alleviate the speaker inconsistency problem, an example is given in Figure 2. There is a change of permutation in the last chunk when using uPIT-CSC-BLSTM, while the spectrograms separated by uPIT-LC-BLSTM and uPIT-BLSTM are quite similar.

4.5 Inter-chunk Speaker Tracing

As illustrated in Table 3, ST can further improve the performance of both CSC-BLSTM and LC-BLSTM. For the model trained at the chunk level, ST improves the cPIT-CSC-BLSTM and cPIT-LC-BLSTM by 0.72 dB and 0.1 dB respectively. For the model trained at the utterance level, ST improves the uPIT-CSC-BLSTM and uPIT-LC-BLSTM by 1.01 dB and 0.18 dB respectively, where the improvements are more obvious.

Finally, uPIT-LC-BLSTM with ST achieves the best results of chunk-level inference, which is slightly worse than that of uPIT-BLSTM with a gap 0.3 dB, but is significantly better than that of uPIT-LSTM with a gain of 2.0 dB.

4.6 Trade-off between Latency and Performance

Method                N_r    SDR    Abs. Gap   Latency (ms)
uPIT-LC-BLSTM           0    8.76   -0.70      0
uPIT-LC-BLSTM + ST     10    8.81   -0.65      160
                       25    9.02   -0.44      400
                       35    9.07   -0.39      560
                       50    9.16   -0.30      800
                      100    9.26   -0.20      1600
uPIT-BLSTM              -    9.46   -          utterance-level
uPIT-LSTM               -    7.16   -2.30      0
Table 4: Average SDR improvements (dB) and latency (defined as $N_r \times 16$ ms, as in [36]) for uPIT-LC-BLSTM.

The latency of the above chunk configuration ($N_r = 50$) is $50 \times 16$ ms $= 800$ ms (latency is defined as $N_r \times 16$ ms, as in [36]), which is quite high for low-latency applications. Here, we keep $N_c$ and $N_l$ fixed (note that $N_l$ is useless for LC-BLSTM) and change the value of $N_r$ to evaluate the performance under different latencies; the results are shown in Table 4.

Generally, the SDR decreases as $N_r$ decreases. Note that when $N_r = 0$, we cannot perform ST for LC-BLSTM, since there are no overlapping frames. Even when $N_r$ is 0, uPIT-LC-BLSTM still outperforms uPIT-LSTM with a gain of 1.6 dB, and has a gap of only 0.7 dB compared to uPIT-BLSTM.

5 Conclusions

In this paper, we explored uPIT-LC-BLSTM on the single-channel multi-talker speech separation task to reduce the latency of uPIT-BLSTM from utterance-level to chunk-level. To reduce the SDR gap between uPIT-LC-BLSTM and uPIT-BLSTM, inter-chunk speaker tracing was proposed to further alleviate the permutation changing problem across neighboring chunks. Besides, a trade-off between inference latency and separation performance can be obtained according to the actual demand by setting the number of right contextual frames. In the future, we plan to combine uPIT-LC-BLSTM with a cross-entropy criterion for direct multi-talker speech recognition [34, 24, 5, 2, 27].


This work is partially supported by the National Natural Science Foundation of China (Nos. 11590774, 11590770).


  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. (2016) Tensorflow: a system for large-scale machine learning. In OSDI, Vol. 16, pp. 265–283. Cited by: §4.1.
  • [2] X. Chang, Y. Qian, and D. Yu (2018) Adaptive permutation invariant training with auxiliary information for monaural multi-talker speech recognition. In Proc. ICASSP, pp. 5974–5978. Cited by: §1, §2.3, §5.
  • [3] X. Chang, Y. Qian, K. Yu, and S. Watanabe (2019) End-to-end monaural multi-speaker asr system without pretraining. In Proc. ICASSP (Accepted), Cited by: §1, §2.3.
  • [4] K. Chen and Q. Huo (2016) Training deep bidirectional lstm acoustic model for lvcsr by a context-sensitive-chunk bptt approach. IEEE/ACM Transactions on Audio, Speech, and Language Processing 24 (7), pp. 1185–1193. Cited by: §1, §2.3, §2.3.
  • [5] Z. Chen, J. Droppo, J. Li, and W. Xiong (2018) Progressive joint modeling in unsupervised single-channel overlapped speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing 26 (1), pp. 184–196. Cited by: §1, §2.3, §5.
  • [6] Z. Chen, Y. Luo, and N. Mesgarani (2017) Deep attractor network for single-microphone speaker separation. In Proc. ICASSP, pp. 246–250. Cited by: §1, §1, §1, §2.2.
  • [7] G. Cheng, D. Povey, L. Huang, J. Xu, S. Khudanpur, and Y. Yan (2018) Output-gate projected gated recurrent unit for speech recognition. In Proc. Interspeech, pp. 1793–1797. Cited by: §2.3.
  • [8] F. A. Gers, J. Schmidhuber, and F. Cummins (2000) Learning to forget: continual prediction with lstm. Neural Computation 12 (10), pp. 2451–2471. Cited by: §1.
  • [9] F. A. Gers, N. N. Schraudolph, and J. Schmidhuber (2002) Learning precise timing with lstm recurrent networks. Journal of machine learning research 3 (Aug), pp. 115–143. Cited by: §1.
  • [10] A. Graves, A. Mohamed, and G. Hinton (2013) Speech recognition with deep recurrent neural networks. In Proc. ICASSP, pp. 6645–6649. Cited by: §1, §2.3.
  • [11] A. Graves and J. Schmidhuber (2005) Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks 18 (5-6), pp. 602–610. Cited by: §1, §2.3.
  • [12] K. Han, A. Chandrashekaran, J. Kim, and I. Lane (2018) Densely connected networks for conversational speech recognition. In Proc. Interspeech, pp. 796–800. Cited by: §2.3.
  • [13] S. Haykin and Z. Chen (2005) The cocktail party problem. Neural computation 17 (9), pp. 1875–1902. Cited by: §1.
  • [14] J. R. Hershey, Z. Chen, J. Le Roux, and S. Watanabe (2016) Deep clustering: discriminative embeddings for segmentation and separation. In Proc. ICASSP, pp. 31–35. Cited by: §1, §1, §4.1.
  • [15] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §1.
  • [16] Y. Isik, J. Le Roux, Z. Chen, S. Watanabe, and J. R. Hershey (2016) Single-channel multi-speaker separation using deep clustering. In Proc. Interspeech, pp. 545–549. Cited by: §1, §1, §4.1.
  • [17] M. Kolbæk, D. Yu, Z. Tan, and J. Jensen (2017) Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing 25 (10), pp. 1901–1913. Cited by: §1, §1, §2.2, §2.3, Table 2, §4.1, §4.2.
  • [18] Y. LeCun, Y. Bengio, and G. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436. Cited by: §1.
  • [19] W. Li, G. Cheng, F. Ge, P. Zhang, and Y. Yan (2018) Investigation on the combination of batch normalization and dropout in blstm-based acoustic modeling for asr. In Proc. Interspeech, pp. 2888–2892. Cited by: §2.3.
  • [20] Y. Luo and N. Mesgarani (2018) Real-time single-channel dereverberation and separation with time-domain audio separation network. In Proc. Interspeech, pp. 342–346. Cited by: §1, §1.
  • [21] Y. Luo and N. Mesgarani (2018) Tasnet: time-domain audio separation network for real-time, single-channel speech separation. In Proc. ICASSP, pp. 696–700. Cited by: §1, §1, §4.1.
  • [22] V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning, pp. 807–814. Cited by: §4.1.
  • [23] V. Peddinti, Y. Wang, D. Povey, and S. Khudanpur (2018) Low latency acoustic modeling using temporal convolution and lstms. IEEE Signal Processing Letters 25 (3), pp. 373–377. Cited by: §2.3.
  • [24] Y. Qian, X. Chang, and D. Yu (2018) Single-channel multi-talker speech recognition with permutation invariant training. Speech Communication 104, pp. 1–11. Cited by: §1, §2.3, §5.
  • [25] M. N. Schmidt and R. K. Olsson (2006) Single-channel speech separation using sparse non-negative matrix factorization. In Ninth International Conference on Spoken Language Processing, Cited by: §1.
  • [26] H. Seki, T. Hori, S. Watanabe, J. Le Roux, and J. R. Hershey (2018) A purely end-to-end system for multi-speaker speech recognition. In Proc. the 56th Annual Meeting of the Association for Computational Linguistics, Vol. 1, pp. 2620–2630. Cited by: §1, §2.3.
  • [27] T. Tan, Y. Qian, and D. Yu (2018) Knowledge transfer in permutation invariant training for single-channel multi-talker speech recognition. In Proc. ICASSP, pp. 5340–5344. Cited by: §1, §2.3, §5.
  • [28] E. Vincent, R. Gribonval, and C. Févotte (2006) Performance measurement in blind audio source separation. IEEE transactions on audio, speech, and language processing 14 (4), pp. 1462–1469. Cited by: §4.1.
  • [29] D. Wang and G. J. Brown (2006) Computational auditory scene analysis: principles, algorithms, and applications. Wiley-IEEE press. Cited by: §1.
  • [30] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, and G. Zweig (2016) Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256. Cited by: §2.3.
  • [31] W. Xiong, L. Wu, F. Alleva, J. Droppo, X. Huang, and A. Stolcke (2018) The microsoft 2017 conversational speech recognition system. In Proc. ICASSP, pp. 5934–5938. Cited by: §2.3.
  • [32] C. Xu, W. Rao, X. Xiao, E. S. Chng, and H. Li (2018) Single channel speech separation with constrained utterance level permutation invariant training using grid lstm. In Proc. ICASSP, pp. 6–10. Cited by: §1.
  • [33] S. Xue and Z. Yan (2017) Improving latency-controlled blstm acoustic models for online speech recognition. In Proc. ICASSP, pp. 5714–5718. Cited by: §2.3.
  • [34] D. Yu, X. Chang, and Y. Qian (2017) Recognizing multi-talker speech with permutation invariant training. In Proc. Interspeech, pp. 2456–2460. Cited by: §1, §2.3, §5.
  • [35] D. Yu, M. Kolbæk, Z. Tan, and J. Jensen (2017) Permutation invariant training of deep models for speaker-independent multi-talker speech separation. In Proc. ICASSP, pp. 241–245. Cited by: §1, §1, §2.2, §2.2, §3.3, §4.1.
  • [36] Y. Zhang, G. Chen, D. Yu, K. Yao, S. Khudanpur, and J. Glass (2016) Highway long short-term memory rnns for distant speech recognition. In Proc. ICASSP, pp. 5755–5759. Cited by: §1, §2.3, §2.3, §4.6, Table 4.