Nearest Neighbor Search-Based Bitwise Source Separation Using Discriminant Winner-Take-All Hashing

08/26/2019 ∙ by Sunwoo Kim, et al. ∙ Indiana University Bloomington

We propose an iteration-free source separation algorithm based on Winner-Take-All (WTA) hash codes, a faster yet accurate alternative to a complex machine learning model for single-channel source separation in a resource-constrained environment. We first generate random permutations with WTA hashing to encode the shape of the multidimensional audio spectrum into a reduced bitstring representation. A nearest neighbor search using the hash codes of an incoming noisy spectrum as the query string yields the closest matches among the hashed mixture spectra. Using the indices of the matching frames, we obtain the corresponding ideal binary mask vectors for denoising. Since both the training data and the search operation are bitwise, the procedure can be done efficiently in hardware implementations. Experimental results show that the WTA hash codes are discriminant and provide an affordable dictionary search mechanism that leads to competitive performance compared to a comprehensive model and oracle masking.


1 Introduction

Numerous data-driven approaches to the source separation problem have attained substantial improvements in the denoising quality of the enhanced sound. Nonnegative matrix factorization (NMF)-based solutions have shown good performance, providing not only dimensionality reduction but also an intuitive notion for audio signals [23, 25]. Though lightweight and effective, NMF algorithms require the number of latent variables to be pre-specified as a hyperparameter. Recently, deep learning approaches have become popular in this domain as well [26]. Fully connected models trained on large sets of mixed spectra have been able to learn complex mappings to their corresponding ideal binary mask (IBM) targets [24, 8]. In addition, recurrent neural networks, which can remember information from previous time frames using hidden states and gating techniques, have further improved denoising performance [15]. Convolutional neural networks [9] and even WaveNets [17] have been successfully explored for source separation as well.

In the proposed method, we formulate the source separation problem as a nearest neighbor search: for a given test mixture spectrum, we find the nearest mixture spectrum, and consequently its corresponding IBM vector, in the training set. We expedite this tedious search process with a hashing scheme that produces binary codes enabling a bitwise search operation. To this end, we choose the winner-take-all (WTA) hashing algorithm [27], which has been successfully used to speed up complex computer vision tasks [4]. Our proposed method provides an affordable, iteration-free, and hardware-friendly bitwise solution to the nearest neighbor search problem, which finds the source separation solution within the raw training mixture spectra by using the test mixture spectrum as a query. One weakness is the requirement for a large dictionary, but we mitigate it by reducing the dictionary's size with hashing.

2 Related Work

2.1 Manifold preserving source separation

Instead of learning complex models that can generalize well, the data themselves can act as a more expressive representation. In sparse topic modeling-based source separation [19], clean source spectra are set as the overcomplete bases of source-specific dictionaries. An incoming mixture spectrum is then decomposed into sparse activations of those predefined basis vectors. This procedure preserves the manifold of the sources, thereby providing a more natural reconstruction of the sources. Nonetheless, manifold preservation necessitates a large source dictionary to extract close estimates. A less computationally intensive formulation can be found in the hierarchical latent variable model, where additional latent variables weed out redundant dictionary items during analysis while still retaining the same expressiveness of the data [13].

A source separation system can also employ the nearest neighbor (NN) search outside of the topic modeling or NMF context. For example, a vocal separation method was proposed in [6], where the median of the nearest neighbors of each mixture frame estimates the background music. However, this approach is an unsupervised algorithm that cannot take advantage of available training data.

In the manifold preserving source separation context, WTA hashing was explored as a fast and low-cost surrogate for searching relevant source candidates in the sparse encoding step [12]. The clean training spectra and source estimates were transformed into binary hash codes, which allowed for an efficient bitwise search to reduce the size of the dictionary. One limitation is the inability of the hash codes to fully reflect the original error function, cross entropy; hence, for a guaranteed performance, a full EM procedure is still required on the reduced dictionary.

To resolve this issue, a fully bitwise voting-based solution was proposed in [10]. Again applying WTA hashing to the source dictionaries, the algorithm counted the number of matches directly between the hashed mixture and each dictionary to represent the similarity of the mixture to the individual sources. This made explicit source estimates unnecessary and, as an additional benefit, could be performed as a single-shot E-step. However, the procedure relies heavily on W-disjoint orthogonality [18], a quality preserved by WTA codes. Furthermore, the separation quality remains suboptimal due to the randomness in the hash function. We extend this line of work and propose a better-performing, fully bitwise, and iteration-free algorithm that does not assume W-disjoint orthogonality.

2.2 Winner-Take-All Hashing

In image classification [4] and audio source separation tasks [12, 10], WTA hashing has shown its potential in approximate NN searches. The core property of WTA is that rank orders of multiple dimensions can preserve the shape of the input vector. To this end, WTA approximates the exponentially complex rank order metric by repeatedly sub-sampling K randomly permuted dimensions and recording the position of the winner among those K. As each repetition adds more ordering information, the accumulated winner indices form a hash code that holds partial rank orders.

WTA is formulated as follows. Let X be a set of N data samples in a D-dimensional feature space. Let θ be a vector of permuted indices, i.e., a random reordering of {1, 2, …, D}. The WTA procedure generates a set of L such permutations, Θ = {θ^(1), …, θ^(L)}. The l-th permutation selects K elements from an input vector x, and the position of the maximum among those K elements forms the l-th integer hash code y_l ∈ {1, …, K}. Repeating this process for all L permutations yields L integers per sample, which form the hash codes Y. For example, a permutation that picks K dimensions of x produces a single integer recording which of those K positions holds the largest value; because only comparisons are involved, the code depends on the rank order of the selected dimensions, not on their magnitudes.
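As a concrete illustration, the encoding above can be sketched in a few lines of NumPy. This is a minimal, hypothetical implementation (the function and variable names are ours, not the paper's), using 0-based winner positions for convenience:

```python
import numpy as np

def wta_hash(X, perms, K):
    """WTA-hash the columns of X.

    X     : (D, N) array of N samples in D dimensions.
    perms : (L, D) array; each row is a random permutation of 0..D-1.
    K     : how many leading permuted dimensions to compare.

    Returns an (L, N) integer code matrix; entry (l, n) is the position
    (0..K-1) of the winner among the K selected dimensions of sample n.
    """
    L = perms.shape[0]
    codes = np.empty((L, X.shape[1]), dtype=np.int64)
    for l in range(L):
        # Select K permuted dimensions and record where the maximum sits.
        codes[l] = np.argmax(X[perms[l, :K]], axis=0)
    return codes

rng = np.random.default_rng(0)
D, N, L, K = 8, 6, 4, 3
X = rng.standard_normal((D, N))
perms = np.stack([rng.permutation(D) for _ in range(L)])
Y = wta_hash(X, perms, K)  # (4, 6) matrix of integers in {0, 1, 2}
# Rank orders are invariant to monotone transforms of the input:
assert np.array_equal(wta_hash(3.0 * X + 1.0, perms, K), Y)
```

The final assertion illustrates the rank-order property discussed above: any monotonically increasing transform of the input leaves the winner positions, and hence the codes, unchanged.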

There are several benefits to employing WTA. First, it nonlinearly transforms the data samples into binary features that preserve the rank correlation of the original representation. Also, as with locality sensitive hashing [7], the encodings of similar data points have a higher probability of collision. Furthermore, the Hamming metric can expedite comparisons and the similarity search. Lastly, the partial rank order statistics encoded in the bitstrings share the benefits of rank correlation measures, such as robustness to additive noise.

2.3 Kernel-based source separation

(a) Original
(b) WTA hash codes
Figure 1: Self-affinity matrices of original time-frequency bins and WTA hash codes.

Although we rely on a randomly generated permutation table, we want it to lead to discriminant binary embeddings that preserve the pairwise similarity among the original data samples. Finding embeddings that preserve semantic similarity is a popular goal in many disciplines. In natural language processing, Word2Vec [14] and GloVe [16] use pairwise metric learning to retrieve a distributed contextual representation that retains complex syntactic and semantic relationships within documents. Another model that trains on similarity information is the Siamese network, which learns to discriminate between a pair of examples [2].

Utilizing similarity information has been explored in the source separation community by posing denoising as a segmentation problem in the time-frequency plane, with the assumption that affinities between time-frequency regions of the spectrogram can condense complex auditory features together [1]. Inspired by studies of perceptual grouping [3], in [1] local affinity matrices were constructed out of cues specific to speech. Then, spectral clustering segments a weighted combination of the similarity matrices to unmix speech mixtures. On the other hand, deep clustering learns a neural network encoder that produces discriminant spectrogram embeddings, whose objective is to approximate the ideal pairwise affinity matrix induced from the IBM [11].

Our proposed method also learns a transformation function as in deep clustering, but in the form of a WTA hash function, which still approximates the ground-truth affinity matrix in the binary embedding space. Figure 1 illustrates this self-affinity preserving quality of the WTA hash codes. Also, our method predicts an IBM vector per frame rather than attempting to segment spectrogram bins.

3 WTA Hashing for NN Source Separation

3.1 NN search-based source separation

We stay consistent with manifold preservation by maintaining the excessively many training examples as the search space and finding only the nearest neighbors to infer the mask. We assume that if two mixture frames are similar, the sources in the mixtures, as well as their IBMs, must also be similar. We also assume that the average of the IBMs of the nearest neighbors is a good estimate of the ideal ratio mask (IRM).

Let X be the feature vectors from the N frames of the training mixture examples. As our training examples are mixture signals of the sources of interest, N can be a potentially very large number, as it grows with the number of sources. Out of many potential choices, we are interested in short-time Fourier transform (STFT) magnitudes and mel-spectra for feature extraction. For example, if X is from the STFT of the training mixture signals, the feature dimensionality equals the number of subbands in each spectrum, while for mel-spectra it is the much smaller number of mel filters. We also prepare the corresponding IBM matrix M, whose dimension matches that of the STFT. For a feature vector x of an incoming test mixture frame, our goal is to estimate a denoising mask m and recover the source by masking the mixture spectrum with m, where the masked spectrum should contain the full complex Fourier coefficients.

1: Input: a test mixture vector x and the dictionary X
2: Output: a denoising mask vector m
3: Initialize an empty neighbor set H
4: for n = 1 to N do
5:     if H is not yet full or S(x, x_n) exceeds the smallest similarity in H then
6:         Replace the farthest neighbor index in H with n
7:         Update the smallest similarity in H
8: return m, the average of the IBM vectors of M indexed by H
Algorithm 1 NN source separation

Algorithm 1 describes the NN source separation procedure. We use the notation S(·, ·) for the affinity function, e.g., the cosine similarity. For each frame x in the mixture signal, we find the closest frames in the dictionary (lines 4 to 7), which form the neighborhood set H. Using them, we find the corresponding IBM vectors in M and take their average (line 8).

Complexity: The search procedure is non-iterative but requires a linear scan over all N frames in X for every query frame. This procedure is restrictive since N needs to be large for good source separation. In the next section, we apply WTA hashing to convert X into integer codes and perform the search in a bitwise fashion.
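The procedure of Algorithm 1 can be sketched compactly as follows. This is our own illustrative rewrite (names and shapes are assumptions): instead of maintaining a running neighbor set, it scores every dictionary frame with the cosine similarity and takes a top-K selection, which is equivalent in outcome to lines 3 to 8.

```python
import numpy as np

def nn_mask(x, X, M, K=5):
    """Estimate a denoising mask for a query frame x (Algorithm 1 sketch).

    x : (D,)   test mixture feature vector.
    X : (D, N) dictionary of training mixture feature vectors.
    M : (F, N) ideal binary masks paired with the dictionary frames.
    K : number of nearest neighbors to average over.
    """
    # Cosine similarity between the query and every dictionary frame.
    sims = (X.T @ x) / (np.linalg.norm(X, axis=0) * np.linalg.norm(x) + 1e-12)
    neighbors = np.argsort(sims)[-K:]  # indices of the K closest frames
    # Averaging the neighbors' IBM vectors approximates the IRM.
    return M[:, neighbors].mean(axis=1)

rng = np.random.default_rng(1)
D, F, N = 16, 16, 200
X = np.abs(rng.standard_normal((D, N)))       # stand-in magnitude spectra
M = (rng.random((F, N)) > 0.5).astype(float)  # stand-in IBM dictionary
m = nn_mask(X[:, 0], X, M, K=5)               # mask estimate in [0, 1]
```

Because the returned mask is a mean of binary vectors, each of its entries lies in [0, 1], matching the soft IRM interpretation in the text.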

Figure 2: The NN-based source separation process using WTA hash codes.

3.2 NN-based source separation on WTA hash codes

We can expedite Algorithm 1 using hashed spectra and the Hamming similarity between them. To this end, we first generate the random permutations Θ, which are used to convert X and x into the hash codes Y and y, respectively. With these as the new feature representations, we apply Algorithm 1, but this time with the Hamming similarity as the similarity function, which counts the number of positions at which the two integer code strings match. The other parts of the algorithm are the same. Figure 2 describes the source separation process using NN searches on hash codes.

Complexity: Since the same Algorithm 1 is used, the time complexity is still linear in the number of dictionary frames. Nonetheless, the procedure is significantly accelerated since the binarized feature vectors allow the Hamming similarity calculation to be done through bitwise AND operations. In addition, the spatial complexity reduces significantly: each element of X requires 64 bits (double precision), whereas each integer code in Y requires only enough bits to distinguish K values, and each frame is represented by L such codes rather than its full spectrum.

Some degradation in performance is expected due to quantization error. Theoretically, as L grows the hash codes closely approximate the full rank order metric, but there is a mismatch between the full rank order metric and the choice of similarity function in the original feature space, e.g., the cosine similarity. Hence, increasing L does not always guarantee the best result. Another problem with this approach is the randomness in generating Θ, whose quality as a hash function fluctuates. A more consistent result can be achieved by repeating the procedure and averaging results from different permutation tables, although the repetition multiplies the size of Y.
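One way to realize the bitwise Hamming similarity mentioned above is to pack each integer code as a one-hot field of K bits, so that two frames' codes can be compared with a single AND followed by a population count. The packing layout below is our own illustration, not necessarily the paper's exact bit layout:

```python
def pack(codes):
    """Turn a list of integer WTA codes (each in 0..K-1) into one-hot
    K-bit fields; matching codes AND together to a single set bit."""
    return [1 << int(c) for c in codes]

def hamming_similarity(a, b):
    """Count matching codes: AND the one-hot fields and popcount the result."""
    return sum(bin(x & y).count("1") for x, y in zip(a, b))

y1 = pack([0, 2, 1, 3])
y2 = pack([0, 2, 2, 3])
s = hamming_similarity(y1, y2)  # 3 of the 4 codes match
```

Since a bitwise AND of two one-hot fields is nonzero only when the winners agree, the popcount of the AND equals the number of matching codes, which is exactly the Hamming similarity used in place of the cosine similarity.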

Table 1: Speech denoising performance of the proposed WTA source separation model compared with oracle and plain NN procedures.

Noise           SDR                   SIR                   SAR                  STOI
types   Oracle    NN   WTA    Oracle    NN   WTA    Oracle    NN   WTA    Oracle   NN  WTA
1        17.90 11.96 11.77     27.01 21.18 20.42     18.51 12.55 12.45      0.96 0.82 0.81
2        12.77  6.58  5.34     22.48 18.48 17.00     13.28  6.93  5.74      0.94 0.74 0.71
3        21.02 17.27 17.01     30.87 29.63 29.80     21.54 17.59 17.30      0.99 0.95 0.95
4        16.41 10.75 10.06     28.86 21.65 21.85     16.67 11.16 10.42      0.94 0.83 0.82
5        19.23 13.52 12.77     26.71 22.61 22.11     20.26 14.19 13.36      0.97 0.88 0.87
6        17.21 11.32 11.28     27.10 19.50 19.25     17.69 12.12 12.01      0.97 0.85 0.84
7        14.01  7.56  7.10     23.85 17.41 16.33     14.53  8.13  7.64      0.96 0.78 0.77
8        16.69 12.46 11.37     28.65 28.57 29.15     16.99 12.58 11.25      0.94 0.87 0.86
9        15.26 10.43  9.74     24.92 23.38 23.42     15.77 10.68  9.74      0.93 0.79 0.77
10       11.94  6.77  5.56     20.98 18.32 17.49     12.56  7.16  5.74      0.91 0.68 0.65

Table 2: A comparison of different models.

Systems          SDR    SIR    SAR
Oracle         16.24  26.14  16.78
NN on mel      10.86  20.07  11.31
NN-WTA         10.20  21.68  10.56
KL-NMF (Male)  10.23    -      -
USM (Male)     10.41    -      -
WTA on E-step   7.47  10.03   9.35

4 Experiments

4.1 Experimental setups

For the experiment, we randomly subsample 16 speakers for training and 10 speakers for testing from the TIMIT corpus with a gender balance, where each speaker has ten short recordings of various utterances at a 16 kHz sampling rate. Each utterance is mixed with ten different non-stationary noise sources at 0 dB signal-to-noise ratio (SNR), namely birds, casino, cicadas, computer keyboard, eating chips, frogs, jungle, machine guns, motorcycles, and ocean [5]. For each noise type, we have 1,600 training utterances, consisting of approximately 15,000 frames, to build our mixture dictionary, and ten query utterances. We apply a short-time Fourier transform (STFT) with a Hann window of size 1024 and a hop size of 512, and transform the spectra into mel-spectrograms. For evaluation of the final results, we used signal-to-distortion ratio (SDR), signal-to-interference ratio (SIR), signal-to-artifact ratio (SAR) [22], and short-time objective intelligibility (STOI) [21].
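As a sketch of the front end, the STFT settings above map onto SciPy as follows (the mel projection, omitted here, would multiply the magnitudes by a mel filterbank; the input signal is a random stand-in):

```python
import numpy as np
from scipy.signal import stft

fs = 16000                                         # 16 kHz sampling rate
x = np.random.default_rng(2).standard_normal(fs)   # 1 s stand-in signal
# Hann window of length 1024 with a hop of 512 (noverlap = 1024 - 512).
f, t, Z = stft(x, fs=fs, window="hann", nperseg=1024, noverlap=512)
mag = np.abs(Z)  # magnitude spectra: 513 subbands per frame
```

With a 1024-point window, each frame has 513 frequency subbands, which is the feature dimensionality the STFT-based dictionary would use.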

We compare three systems that we implement and three others from the literature:


  • Oracle IRM: We apply the ground-truth IRM to the test signal to calculate the performance bound of the source separation task.

  • NN on the original spectra: For each given test spectrum, we find the best matches from the dictionary using the cosine similarity metric (Algorithm 1). Hashing-based techniques try to catch up with this performance.

  • NN on the WTA hash codes: Performs NN separation using the Hamming similarity on the hash codes (Section 3.2).

  • KL-NMF: An NMF-based model that learns noise- and speaker-specific dictionaries. It is fully supervised, while our NN models are specific only to noise types. We rely on the experimental results reported in [20].

  • Universal speech model (USM): USM is another NMF-based fully supervised model that uses 20 speaker-specific dictionaries, but only a few of them are activated during the test time [20].

  • Bitwise E-step using WTA: Another variant that uses the WTA process to replace the posterior estimation (E-step) in topic modeling [10]. It is based on a large speech dictionary from 32 randomly chosen training speakers, while the noise type is known.

4.2 Experimental results and discussion


  • WTA parameters: We explore the sub-sampling size K and the neighborhood size for our WTA source separation algorithm while fixing the other parameters. K is the number of samples compared to find the winner in each permutation. A larger K can exploit the distribution of the input vector to some degree, but a K that is too large is detrimental, as it breaks down the locality assumption. The neighborhood size is the number of nearest neighbor frames we search for in the dictionary. A larger neighborhood provides more examples for the IRM estimation. However, more neighbors do not always correlate with better source separation performance, similarly to the NN classification case. Furthermore, these parameters define the computational complexity of the system, as discussed in Section 3. Figure 3 (a) illustrates a grid search over them for the WTA-based NN source separation. For each combination, we perform separation on all ten noise types and take the average. The best combination gives a peak average performance of 10.20 dB.

    We present a closer look at the best parameters in Table 1. The results show the source separation performance of the oracle, NN, and WTA over all ten noises. The performance of WTA catches up to that of NN on all metrics and obtains higher SIR values for certain noise types. For noise type 8, the WTA method even outperforms the oracle in terms of SIR. This shows that the NN procedure works well in the hashed space, which is expected from the property of WTA that rank correlation measures are preserved under the Hamming metric. Some decrease in performance is shown, however, which is expected since the hashing procedure incurs a quantization error.

Spectrogram format: Some time-frequency bins in an STFT spectrogram, especially in the high frequencies, carry minuscule values. The mel-spectrogram, on the other hand, is on a logarithmic frequency scale with lower resolution in the high frequencies. Therefore, we expect WTA hashing on mel-spectra to be based on comparisons involving more low-frequency bins, making the winner indices more representative. Furthermore, mel-spectra have much smaller dimensionality, a property that makes it easier for WTA hashing to preserve locality. The described effect is illustrated in Figure 3 (b). Note that for certain noises, WTA on an STFT spectrogram performs much worse, and even returns a negative SDR value.

Figure 3: (a) Grid search results of the WTA-based NN separation over its parameters in terms of SDR. (b) Comparison of source separation on STFT and mel spectra.

Comparison with other dictionary-based methods: Table 2 shows the results of the proposed method along with those of other dictionary-based methods that use KL-NMF, USM, and the bitwise posterior estimation using WTA (WTA on E-step). Although the methods used different speaker sets, they were all tested on the same noise types in similar scenarios. KL-NMF and USM learn 10 basis vectors from the speakers, although KL-NMF assumes the speaker identity is known and USM does not. Both of them are trained only on male speakers. WTA on E-step uses gender-balanced 64 speakers' spectra as its large dictionary. Our proposed WTA method achieves either much higher or at least similar performance against all the mentioned methods. Additionally, NN-WTA is iteration-free. Although it requires a linear scan for the nearest neighbor search, the scan can be done efficiently with bitwise operations and avoids accruing additional run time.

Compared against KL-NMF, a real-valued model that assumes the speaker identity to be known, our method on unseen test speakers does slightly worse (10.20 versus 10.23). Another NMF-based approach, USM, utilizes a much larger dictionary and block sparsity as a regularizer, and performs slightly better than NN-WTA (10.20 versus 10.41). Finally, we compare the two fully bitwise models, NN-WTA and WTA on E-step. As the proposed NN-WTA is free from EM-based estimation, it benefits directly from the abundance of the data and outperforms the topic modeling-based bitwise model by a large margin (10.20 versus 7.47).

It should be noted that NN-WTA operates in the fully bitwise domain, yet it still competes with the other real-valued NMF-based models. Compared against plain NN on mel-spectra, our method incurs some penalty from the hashing process; however, the loss is minimal and acceptable given the massive reduction in computational cost from using bitwise operations. Furthermore, the hashing-based approach is far more practical to deploy in extreme environments where resources are strictly constrained.

5 Conclusion

In this paper, we proposed a fully bitwise nearest neighbor-based algorithm for source separation. The WTA source separation model generates permutations and transforms the dictionary and the noisy signal into a hashed feature space in which the original rank correlation is preserved under the Hamming metric. This not only compresses the data into short bit strings, but also allows the Hamming distance to be computed with bitwise operations, further reducing computation. Hence, the nearest neighbors from which the IRM is estimated can be found efficiently. With good parameters, NN-WTA performs well on the speech denoising task, reaching SDR values above 10 dB. A future direction is to minimize the quantization error introduced by hashing, for example by comparing the self-affinity matrices of the hashed and original representations to learn better hashed representations.

References

  • [1] F. R. Bach and M. I. Jordan (2006) Learning spectral clustering, with application to speech separation. Journal of Machine Learning Research 7 (Oct), pp. 1963–2001. Cited by: §2.3.
  • [2] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah (1994) Signature verification using a "Siamese" time delay neural network. In Advances in neural information processing systems, pp. 737–744. Cited by: §2.3.
  • [3] M. Cooke and D. P. Ellis (2001) The auditory organization of speech and other sources in listeners and computational models. Speech communication 35 (3-4), pp. 141–177. Cited by: §2.3.
  • [4] T. Dean, M. A. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, and J. Yagnik (2013) Fast, accurate detection of 100,000 object classes on a single machine. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1814–1821. Cited by: §1, §2.2.
  • [5] Z. Duan, G. J. Mysore, and P. Smaragdis (2012) Online PLCA for real-time semi-supervised source separation. In International Conference on Latent Variable Analysis and Signal Separation, pp. 34–41. Cited by: §4.1.
  • [6] D. FitzGerald (2012) Vocal separation using nearest neighbours and median filtering. Cited by: §2.1.
  • [7] A. Gionis, P. Indyk, R. Motwani, et al. (1999) Similarity search in high dimensions via hashing. In Vldb, Vol. 99, pp. 518–529. Cited by: §2.2.
  • [8] E. M. Grais, M. U. Sen, and H. Erdogan (2014) Deep neural networks for single channel source separation. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 3734–3738. Cited by: §1.
  • [9] E. M. Grais and M. D. Plumbley (2017) Single channel audio source separation using convolutional denoising autoencoders. In 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 1265–1269. Cited by: §1.
  • [10] L. Guo and M. Kim (2018) Bitwise source separation on hashed spectra: an efficient posterior estimation scheme using partial rank order metrics. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 761–765. Cited by: §2.1, §2.2, 6th item.
  • [11] J. R. Hershey, Z. Chen, J. Le Roux, and S. Watanabe (2016) Deep clustering: discriminative embeddings for segmentation and separation. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 31–35. Cited by: §2.3.
  • [12] M. Kim, P. Smaragdis, and G. J. Mysore (2015) Efficient manifold preserving audio source separation using locality sensitive hashing. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 479–483. Cited by: §2.1, §2.2.
  • [13] M. Kim and P. Smaragdis (2013) Manifold preserving hierarchical topic models for quantization and approximation. In International Conference on Machine Learning, pp. 1373–1381. Cited by: §2.1.
  • [14] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §2.3.
  • [15] S. I. Mimilakis, K. Drossos, J. F. Santos, G. Schuller, T. Virtanen, and Y. Bengio (2018) Monaural singing voice separation with skip-filtering connections and recurrent inference of time-frequency mask. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 721–725. Cited by: §1.
  • [16] J. Pennington, R. Socher, and C. Manning (2014) Glove: global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543. Cited by: §2.3.
  • [17] D. Rethage, J. Pons, and X. Serra (2018) A wavenet for speech denoising. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5069–5073. Cited by: §1.
  • [18] S. Rickard and O. Yilmaz (2002) On the approximate w-disjoint orthogonality of speech. In 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1, pp. I–529. Cited by: §2.1.
  • [19] P. Smaragdis, M. Shashanka, and B. Raj (2009) A sparse non-parametric approach for single channel separation of known sounds. In Advances in neural information processing systems, pp. 1705–1713. Cited by: §2.1.
  • [20] D. L. Sun and G. J. Mysore (2013) Universal speech models for speaker independent single channel source separation. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 141–145. Cited by: 4th item, 5th item.
  • [21] C. H. Taal, R. C. Hendriks, R. Heusdens, and J. Jensen (2010) A short-time objective intelligibility measure for time-frequency weighted noisy speech. In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4214–4217. Cited by: §4.1.
  • [22] E. Vincent, R. Gribonval, and C. Févotte (2006) Performance measurement in blind audio source separation. IEEE transactions on audio, speech, and language processing 14 (4), pp. 1462–1469. Cited by: §4.1.
  • [23] T. Virtanen (2007) Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria. IEEE transactions on audio, speech, and language processing 15 (3), pp. 1066–1074. Cited by: §1.
  • [24] Y. Wang and D. Wang (2013) Towards scaling up classification-based speech separation. IEEE Transactions on Audio, Speech, and Language Processing 21 (7), pp. 1381–1390. Cited by: §1.
  • [25] K. W. Wilson, B. Raj, P. Smaragdis, and A. Divakaran (2008-03) Speech denoising using nonnegative matrix factorization with priors. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Vol. , pp. 4029–4032. External Links: Document, ISSN 1520-6149 Cited by: §1.
  • [26] Y. Xu, J. Du, L.-R. Dai, and C.-H. Lee (2014) An experimental study on speech enhancement based on deep neural networks. IEEE Signal processing letters 21 (1), pp. 65–68. Cited by: §1.
  • [27] J. Yagnik, D. Strelow, D. A. Ross, and R. Lin (2011) The power of comparative reasoning. In 2011 International Conference on Computer Vision, pp. 2431–2438. Cited by: §1.