1 Introduction
For head-mounted assistive listening devices (e.g., hearing aids, cochlear implants), algorithms that use the microphone signals from both the left and the right hearing device are effective at improving speech intelligibility, as the spatial information captured by all microphones can be exploited [1, 2]. Besides reducing undesired sources and limiting speech distortion, another important objective of binaural speech enhancement algorithms is to preserve the listener's perception of the acoustic scene, in order to exploit the binaural hearing advantage [3] and to reduce confusions due to a mismatch between acoustic and visual information.
To achieve binaural noise reduction with binaural cue preservation, two main concepts have been developed.
In the first concept, a common real-valued spectro-temporal gain is applied to the reference microphone signals in the left and the right hearing device [4, 5, 6, 7, 8, 9, 10], ensuring perfect preservation of the instantaneous binaural cues but inevitably introducing speech distortion.
The second concept, which is considered in this paper, is to apply a complex-valued filter to all available microphone signals on the left and the right hearing device using binaural extensions of spatial filtering techniques [11, 12, 13, 14, 15, 16, 17, 18, 19].
While the well-known binaural minimum variance distortionless response (BMVDR) beamformer [1] preserves the binaural cues (i.e., the interaural level difference (ILD) and interaural time difference (ITD)) of one desired source, the binaural linearly constrained minimum variance (BLCMV) beamformer [15] is also able to preserve the binaural cues of interfering sources by imposing interference scaling constraints for these sources. It should be noted that the BMVDR and BLCMV beamformers require an estimate of the correlation matrix to be minimized and estimates of the relative transfer functions (RTFs) of the desired and interfering sources. The performance of these beamformers may deteriorate significantly in the case of estimation errors. Such estimation errors occur if only short temporal observation intervals are available for estimation, e.g., in dynamic spatial scenarios with moving sources or head movement.

In this paper, we first derive optimal values for the interference scaling parameters in the BLCMV beamformer based on the BMVDR beamformer with RTF preservation (BMVDR-RTF) [13, 14] for an arbitrary number of interfering sources. Secondly, since these values are optimal in the sense of noise reduction but not robust against RTF estimation errors in practice, we propose to apply an upper and a lower threshold to them. We evaluate the performance of the BMVDR beamformer and the BLCMV beamformer with the two different interference scaling parameters using measured impulse responses from hearing aids in a cafeteria [20] for several temporal observation intervals. The results show that even rather short temporal observation intervals lead to sufficient noise reduction performance and that the proposed thresholds on the optimal interference scaling parameters can significantly reduce binaural cue errors.
2 Configuration and Notation
Consider the binaural hearing device configuration in Fig. 1, consisting of a microphone array with $M_L$ microphones on the left and $M_R$ microphones on the right hearing device, i.e., $M = M_L + M_R$ microphones in total. For an acoustic scenario with one desired source, $I$ interfering sources and incoherent background noise, the $m$-th microphone signal of the left hearing device can be written in the frequency domain as
$$Y_{L,m}(\omega) = X_{L,m}(\omega) + \sum_{i=1}^{I} N_{L,m}^{i}(\omega) + U_{L,m}(\omega), \qquad (1)$$
with $X_{L,m}$ the desired speech component, $N_{L,m}^{i}$ the $i$-th interference component and $U_{L,m}$ the background noise component (e.g., diffuse noise) in the $m$-th microphone signal. The $m$-th microphone signal of the right hearing device $Y_{R,m}$ is defined similarly. For conciseness we will omit the frequency variable $\omega$ in the remainder of the paper. We define the $M$-dimensional stacked signal vector $\mathbf{y}$ as
$$\mathbf{y} = \left[Y_{L,1}, \ldots, Y_{L,M_L}, Y_{R,1}, \ldots, Y_{R,M_R}\right]^T, \qquad (2)$$
where $(\cdot)^T$ denotes the transpose, which can be written as
$$\mathbf{y} = \mathbf{x} + \sum_{i=1}^{I} \mathbf{n}_i + \mathbf{u} = \mathbf{x} + \mathbf{v}, \qquad (3)$$
where $\mathbf{x}$, $\mathbf{n}_i$ and $\mathbf{u}$ are defined similarly as $\mathbf{y}$ in (2) and $\mathbf{v} = \sum_{i=1}^{I}\mathbf{n}_i + \mathbf{u}$ denotes the overall undesired component, i.e., interference plus background noise components. For the coherent desired source $S$ and the coherent interfering sources $S_i$, with $i \in \{1, \ldots, I\}$, the vectors $\mathbf{x}$ and $\mathbf{n}_i$ can be written as
$$\mathbf{x} = \mathbf{a}\,S, \qquad \mathbf{n}_i = \mathbf{b}_i\,S_i, \qquad (4)$$
with $\mathbf{a}$ and $\mathbf{b}_i$ the acoustic transfer functions (ATFs) between all microphones and the desired and the $i$-th interfering source, respectively. Without loss of generality, we choose the first microphones on the left and the right hearing device as reference microphones, i.e.,
$$Y_L = \mathbf{e}_L^T \mathbf{y}, \qquad Y_R = \mathbf{e}_R^T \mathbf{y}, \qquad (5)$$
where $\mathbf{e}_L$ and $\mathbf{e}_R$ are $M$-dimensional selection vectors with one element equal to $1$ and the other elements equal to $0$, i.e., $\mathbf{e}_L(1) = 1$ and $\mathbf{e}_R(M_L+1) = 1$. The correlation matrices of the background noise component, the desired speech component, the $i$-th interference component and all interference components are defined as
$$\mathbf{R}_u = \mathcal{E}\{\mathbf{u}\mathbf{u}^H\}, \qquad \mathbf{R}_x = \mathcal{E}\{\mathbf{x}\mathbf{x}^H\} = P_s\,\mathbf{a}\mathbf{a}^H, \qquad (6)$$
$$\mathbf{R}_{n_i} = \mathcal{E}\{\mathbf{n}_i\mathbf{n}_i^H\} = P_i\,\mathbf{b}_i\mathbf{b}_i^H, \qquad \mathbf{R}_n = \sum_{i=1}^{I}\mathbf{R}_{n_i}, \qquad (7)$$
where $\mathcal{E}\{\cdot\}$ denotes the expectation operator, $(\cdot)^H$ denotes the conjugate transpose and $P_s$ and $P_i$ denote the power spectral density (PSD) of the desired source and the $i$-th interfering source, respectively. Assuming statistical independence between the components in (1), the correlation matrix of the microphone signals can be written as
$$\mathbf{R}_y = \mathbf{R}_x + \mathbf{R}_n + \mathbf{R}_u = \mathbf{R}_x + \mathbf{R}_v, \qquad (8)$$
with $\mathbf{R}_v = \mathbf{R}_n + \mathbf{R}_u$ the correlation matrix of the overall undesired component.
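To make the signal model concrete, the following numpy sketch builds the rank-1 source correlation matrices and the microphone correlation matrix of (6)-(8) for hypothetical ATFs and PSDs (the array size, number of interferers and all values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M, I = 4, 2  # illustrative: 4 microphones in total, 2 interfering sources

# Hypothetical ATF vectors of the desired source (a) and interferers (b_i)
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)
B = rng.standard_normal((M, I)) + 1j * rng.standard_normal((M, I))

P_s = 1.0                   # PSD of the desired source
P_i = np.array([0.5, 0.3])  # PSDs of the interfering sources

# Rank-1 correlation matrices of the coherent sources
R_x = P_s * np.outer(a, a.conj())
R_n = sum(P_i[i] * np.outer(B[:, i], B[:, i].conj()) for i in range(I))
R_u = 0.1 * np.eye(M)       # spatially white stand-in for the background noise

# Correlation matrices of the undesired component and the microphone signals
R_v = R_n + R_u
R_y = R_x + R_v
```

Under the statistical-independence assumption, the microphone correlation matrix is simply the sum of the component matrices, which the sketch reproduces directly.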
The output signal at the left hearing device is obtained by filtering the microphone signals with the $M$-dimensional filter $\mathbf{w}_L$, i.e.,
$$Z_L = \mathbf{w}_L^H \mathbf{y}. \qquad (9)$$
The output signal at the right hearing aid $Z_R = \mathbf{w}_R^H \mathbf{y}$ is similarly defined. Furthermore, we define the $2M$-dimensional stacked filter vector $\mathbf{w}$ as
$$\mathbf{w} = \left[\mathbf{w}_L^T \; \mathbf{w}_R^T\right]^T. \qquad (10)$$
The RTF vectors of the desired and the interfering sources are defined by relating the ATF vectors to the ATF of the reference microphone on the left and the right hearing device, i.e.,
$$\mathbf{h}_L = \frac{\mathbf{a}}{\mathbf{e}_L^T\mathbf{a}}, \quad \mathbf{h}_R = \frac{\mathbf{a}}{\mathbf{e}_R^T\mathbf{a}}, \quad \mathbf{h}_{i,L} = \frac{\mathbf{b}_i}{\mathbf{e}_L^T\mathbf{b}_i}, \quad \mathbf{h}_{i,R} = \frac{\mathbf{b}_i}{\mathbf{e}_R^T\mathbf{b}_i}. \qquad (11)$$
The $M \times I$-dimensional matrices $\mathbf{H}_L$ and $\mathbf{H}_R$ containing the RTF vectors of all interfering sources are defined as
$$\mathbf{H}_L = \left[\mathbf{h}_{1,L}, \ldots, \mathbf{h}_{I,L}\right], \qquad \mathbf{H}_R = \left[\mathbf{h}_{1,R}, \ldots, \mathbf{h}_{I,R}\right]. \qquad (12)$$
The binaural input and output signal-to-noise ratio (SNR) is defined as the ratio of the average input and output PSDs of the desired speech component and the background noise component, i.e.,
$$\mathrm{SNR}^{\mathrm{in}} = \frac{\mathbf{e}_L^T\mathbf{R}_x\mathbf{e}_L + \mathbf{e}_R^T\mathbf{R}_x\mathbf{e}_R}{\mathbf{e}_L^T\mathbf{R}_u\mathbf{e}_L + \mathbf{e}_R^T\mathbf{R}_u\mathbf{e}_R}, \qquad \mathrm{SNR}^{\mathrm{out}} = \frac{\mathbf{w}_L^H\mathbf{R}_x\mathbf{w}_L + \mathbf{w}_R^H\mathbf{R}_x\mathbf{w}_R}{\mathbf{w}_L^H\mathbf{R}_u\mathbf{w}_L + \mathbf{w}_R^H\mathbf{R}_u\mathbf{w}_R}. \qquad (13)$$
The binaural input and output signal-to-interference ratio (SIR) is defined as the ratio of the average input and output PSDs of the desired speech component and the interference components, i.e.,
$$\mathrm{SIR}^{\mathrm{in}} = \frac{\mathbf{e}_L^T\mathbf{R}_x\mathbf{e}_L + \mathbf{e}_R^T\mathbf{R}_x\mathbf{e}_R}{\mathbf{e}_L^T\mathbf{R}_n\mathbf{e}_L + \mathbf{e}_R^T\mathbf{R}_n\mathbf{e}_R}, \qquad \mathrm{SIR}^{\mathrm{out}} = \frac{\mathbf{w}_L^H\mathbf{R}_x\mathbf{w}_L + \mathbf{w}_R^H\mathbf{R}_x\mathbf{w}_R}{\mathbf{w}_L^H\mathbf{R}_n\mathbf{w}_L + \mathbf{w}_R^H\mathbf{R}_n\mathbf{w}_R}. \qquad (14)$$
The binaural input and output signal-to-interference-plus-noise ratio (SINR) is defined as the ratio of the average input and output PSDs of the desired speech component and the overall undesired component, i.e.,
$$\mathrm{SINR}^{\mathrm{in}} = \frac{\mathbf{e}_L^T\mathbf{R}_x\mathbf{e}_L + \mathbf{e}_R^T\mathbf{R}_x\mathbf{e}_R}{\mathbf{e}_L^T\mathbf{R}_v\mathbf{e}_L + \mathbf{e}_R^T\mathbf{R}_v\mathbf{e}_R}, \qquad \mathrm{SINR}^{\mathrm{out}} = \frac{\mathbf{w}_L^H\mathbf{R}_x\mathbf{w}_L + \mathbf{w}_R^H\mathbf{R}_x\mathbf{w}_R}{\mathbf{w}_L^H\mathbf{R}_v\mathbf{w}_L + \mathbf{w}_R^H\mathbf{R}_v\mathbf{w}_R}. \qquad (15)$$
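The three ratios share the same structure and differ only in the correlation matrix in the denominator. A small numpy helper (the function name `binaural_ratio` is our own, hypothetical choice) that evaluates the input and output ratios for given filters and selection vectors might look as follows:

```python
import numpy as np

def binaural_ratio(R_sig, R_und, w_L, w_R, e_L, e_R):
    """Binaural input/output PSD ratio: passing the noise, interference,
    or interference-plus-noise correlation matrix as R_und yields the
    SNR, SIR, or SINR, respectively."""
    ratio_in = ((e_L.conj() @ R_sig @ e_L + e_R.conj() @ R_sig @ e_R)
                / (e_L.conj() @ R_und @ e_L + e_R.conj() @ R_und @ e_R)).real
    ratio_out = ((w_L.conj() @ R_sig @ w_L + w_R.conj() @ R_sig @ w_R)
                 / (w_L.conj() @ R_und @ w_L + w_R.conj() @ R_und @ w_R)).real
    return ratio_in, ratio_out
```

Choosing the filters equal to the selection vectors reproduces the input ratio, since the unprocessed reference signals are then passed straight through.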
3 Binaural noise reduction algorithms
In Sections 3.1 and 3.2 we briefly review the BMVDR beamformer [1, 2, 12] and the BLCMV beamformer [15]. Based on the optimality of the BMVDR-RTF beamformer [14] in optimizing the SINR (or SNR) while preserving the binaural cues of all sources, in Section 3.3 we derive optimal values for the interference scaling parameters in the BLCMV beamformer in the case of an arbitrary number of interfering sources. Furthermore, in order to achieve robust binaural cue preservation in the case of estimation errors of the correlation matrices and the RTF vectors (Section 3.4), we propose to threshold these interference scaling parameters.
3.1 BMVDR beamformer
The BMVDR beamformer aims at minimizing the output PSD in both hearing devices, while preserving the desired speech component in the reference microphone signals. The corresponding constrained optimization problem is given by
$$\min_{\mathbf{w}} \; \mathbf{w}^H \boldsymbol{\mathcal{R}}\, \mathbf{w} \quad \text{subject to the constraint set } \mathcal{C}, \qquad (16)$$
with
$$\boldsymbol{\mathcal{R}} = \begin{bmatrix} \mathbf{R} & \mathbf{0} \\ \mathbf{0} & \mathbf{R} \end{bmatrix}, \qquad (17)$$
with $\mathbf{R}$ either equal to the correlation matrix of the microphone signals $\mathbf{R}_y$, the correlation matrix of the overall undesired component $\mathbf{R}_v$ or the correlation matrix of the background noise component $\mathbf{R}_u$. The constraint set $\mathcal{C}$ in (16) is given by
$$\mathbf{h}_L^H \mathbf{w}_L = 1, \qquad \mathbf{h}_R^H \mathbf{w}_R = 1, \qquad (18)$$
requiring the RTF vectors $\mathbf{h}_L$ and $\mathbf{h}_R$ of the desired source. The solution to the optimization problem in (16) using the constraint set in (18) is equal to [1, 12, 21]
$$\mathbf{w}_L = \frac{\mathbf{R}^{-1}\mathbf{h}_L}{\mathbf{h}_L^H\mathbf{R}^{-1}\mathbf{h}_L}, \qquad \mathbf{w}_R = \frac{\mathbf{R}^{-1}\mathbf{h}_R}{\mathbf{h}_R^H\mathbf{R}^{-1}\mathbf{h}_R}. \qquad (19)$$
From a theoretical point of view, in the case of perfectly estimated quantities (i.e., correlation matrices and RTF vectors), using $\mathbf{R}_y$ or $\mathbf{R}_v$ in (19) is optimal in the SINR sense, whereas using $\mathbf{R}_u$ in (19) is optimal in the SNR sense. While the BMVDR beamformer preserves the binaural cues of the desired source, its major drawback is the distortion of the binaural cues of the interfering sources (and background noise), such that all sources are perceived as coming from the direction of the desired source. In practice, it should also be realized that using $\mathbf{R}_y$ may lead to target cancellation in the case of RTF estimation errors of the desired source [21] and that $\mathbf{R}_v$ is not straightforward to estimate.
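The closed-form solution in (19) lends itself to a direct implementation. The sketch below (function name assumed, not from the paper) computes both filters from a given correlation matrix and the two RTF vectors:

```python
import numpy as np

def bmvdr(R, h_L, h_R):
    """BMVDR filters for the left and right hearing device:
    w = R^{-1} h / (h^H R^{-1} h), one distortionless constraint per ear."""
    def mvdr(h):
        Rinv_h = np.linalg.solve(R, h)  # R^{-1} h without explicit inversion
        return Rinv_h / (h.conj() @ Rinv_h)
    return mvdr(h_L), mvdr(h_R)
```

By construction the distortionless constraints are satisfied exactly for any Hermitian positive-definite correlation matrix, i.e., the desired-source response at each reference microphone equals one.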
3.2 BLCMV beamformer
In order to also take binaural cue preservation of the interfering sources into account, as well as to control the amount of interference suppression, it has been proposed in [15] to add interference scaling constraints to the BMVDR beamformer, leading to the BLCMV beamformer. This corresponds to the constrained optimization problem in (16) with the constraint set
$$\mathbf{h}_L^H \mathbf{w}_L = 1, \quad \mathbf{h}_R^H \mathbf{w}_R = 1, \quad \mathbf{h}_{i,L}^H \mathbf{w}_L = \eta_{i,L}, \quad \mathbf{h}_{i,R}^H \mathbf{w}_R = \eta_{i,R}, \quad i \in \{1, \ldots, I\}, \qquad (20)$$
requiring the RTF vectors of the desired source and all interfering sources. The $I$-dimensional vectors $\boldsymbol{\eta}_L$ and $\boldsymbol{\eta}_R$ contain the interference scaling parameters $\eta_{i,L}$ and $\eta_{i,R}$, which control the suppression and the binaural cue preservation of the interfering sources. The BLCMV beamformer is given by
$$\mathbf{w}_L = \mathbf{R}^{-1}\mathbf{C}_L\left(\mathbf{C}_L^H\mathbf{R}^{-1}\mathbf{C}_L\right)^{-1}\mathbf{f}_L, \qquad \mathbf{w}_R = \mathbf{R}^{-1}\mathbf{C}_R\left(\mathbf{C}_R^H\mathbf{R}^{-1}\mathbf{C}_R\right)^{-1}\mathbf{f}_R, \qquad (21)$$
with $\mathbf{C}_L = [\mathbf{h}_L \;\mathbf{H}_L]$, $\mathbf{f}_L = [1 \;\boldsymbol{\eta}_L^T]^T$, and $\mathbf{C}_R$, $\mathbf{f}_R$ defined analogously. Setting $\eta_{i,L} = \eta_{i,R}$ ensures binaural cue preservation of the $i$-th interfering source, while the absolute values of $\eta_{i,L}$ and $\eta_{i,R}$ directly determine the SIR improvement for the $i$-th interfering source. From a theoretical point of view, in the case of perfectly estimated quantities (i.e., correlation matrices and RTF vectors), setting $\eta_{i,L} = \eta_{i,R} = 0$ in the BLCMV beamformer is optimal in the SIR sense, but not necessarily in the SINR or SNR sense. Moreover, in contrast to the BMVDR beamformer, the choice of the correlation matrix $\mathbf{R}$ has no impact on the SINR, SNR and SIR improvement and the binaural cue preservation, as these are completely determined by the interference scaling parameters. In practice, in the case of estimation errors the choice of the correlation matrix will obviously have an influence on the performance of the BLCMV beamformer (cf. Section 4).
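For one ear, the BLCMV filter is a standard LCMV solution with the desired-source RTF and the interferer RTFs stacked into a constraint matrix. A minimal numpy sketch (function name assumed; the constraint values are collected in a response vector `[1, eta_1, ..., eta_I]`):

```python
import numpy as np

def blcmv(R, h, H_int, eta):
    """LCMV filter for one ear: minimize w^H R w subject to C^H w = f,
    with C = [h, H_int] and f = [1, eta_1, ..., eta_I]."""
    C = np.column_stack([h, H_int])
    f = np.concatenate(([1.0], eta))
    Rinv_C = np.linalg.solve(R, C)  # R^{-1} C
    return Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, f)
```

The solution satisfies all constraints exactly: the desired-source response equals one and the response to the $i$-th interferer equals its scaling parameter.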
3.3 Interference scaling parameters
As an extension of the method presented in [22] to an arbitrary number of interfering sources, in this section we propose a method to determine the interference scaling parameters that maximize the SINR or the SNR while preserving the binaural cues of the interfering sources. To this end, we will use the BMVDR beamformer with RTF preservation [14], denoted as BMVDR-RTF beamformer, which is a special case of the BLCMV beamformer. In the BMVDR-RTF beamformer the constraints related to the interfering sources only control the binaural cue preservation, while the amount of interference suppression is not specified, i.e.,
$$\mathbf{h}_{i,L}^H \mathbf{w}_L - \mathbf{h}_{i,R}^H \mathbf{w}_R = 0, \quad i \in \{1, \ldots, I\}, \qquad (22)$$
leading to the constraint set
$$\mathbf{h}_L^H \mathbf{w}_L = 1, \qquad \mathbf{h}_R^H \mathbf{w}_R = 1, \qquad \mathbf{h}_{i,L}^H \mathbf{w}_L - \mathbf{h}_{i,R}^H \mathbf{w}_R = 0, \quad i \in \{1, \ldots, I\}. \qquad (23)$$
The BMVDR-RTF beamformer is given by [14]
$$\mathbf{w}_{\mathrm{RTF}} = \boldsymbol{\mathcal{R}}^{-1}\tilde{\mathbf{C}}\left(\tilde{\mathbf{C}}^H\boldsymbol{\mathcal{R}}^{-1}\tilde{\mathbf{C}}\right)^{-1}\tilde{\mathbf{f}}, \qquad \tilde{\mathbf{C}} = \begin{bmatrix} \mathbf{h}_L & \mathbf{0} & \mathbf{H}_L \\ \mathbf{0} & \mathbf{h}_R & -\mathbf{H}_R \end{bmatrix}, \qquad \tilde{\mathbf{f}} = [1, 1, 0, \ldots, 0]^T, \qquad (24)$$
and either maximizes the SINR ($\mathbf{R} = \mathbf{R}_y$ or $\mathbf{R} = \mathbf{R}_v$) or the SNR ($\mathbf{R} = \mathbf{R}_u$), while preserving the binaural cues of all sources.
Hence, the optimal interference scaling parameters for the BLCMV beamformer (in the SINR or SNR sense) can be determined as
$$\eta_{i,L}^{\mathrm{opt}} = \mathbf{h}_{i,L}^H\mathbf{w}_{\mathrm{RTF},L}, \qquad \eta_{i,R}^{\mathrm{opt}} = \mathbf{h}_{i,R}^H\mathbf{w}_{\mathrm{RTF},R}. \qquad (25)$$
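The construction of (24)-(25) can be sketched with numpy as follows; all RTF vectors and the correlation matrix are random stand-ins, and the array size and number of interferers are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
M, I = 4, 2  # illustrative: 4 microphones in total, 2 interfering sources

def c_rand(*shape):
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

h_L, h_R = c_rand(M), c_rand(M)        # RTF vectors of the desired source
h_L[0], h_R[2] = 1.0, 1.0              # normalized at the reference microphones
H_L, H_R = c_rand(M, I), c_rand(M, I)  # RTF matrices of the interfering sources
A = c_rand(M, M)
R = A @ A.conj().T + M * np.eye(M)     # Hermitian positive-definite stand-in

# Block-diagonal 2M x 2M matrix and the BMVDR-RTF constraint matrix:
# two distortionless constraints plus I RTF-preservation constraints
Rb = np.block([[R, np.zeros((M, M))], [np.zeros((M, M)), R]])
C = np.block([[h_L[:, None], np.zeros((M, 1)), H_L],
              [np.zeros((M, 1)), h_R[:, None], -H_R]])
f = np.concatenate(([1.0, 1.0], np.zeros(I)))
Rbinv_C = np.linalg.solve(Rb, C)
w = Rbinv_C @ np.linalg.solve(C.conj().T @ Rbinv_C, f)
w_L, w_R = w[:M], w[M:]

# Optimal interference scaling parameters read off from the BMVDR-RTF filter
eta_L = H_L.conj().T @ w_L
eta_R = H_R.conj().T @ w_R
```

Because of the RTF-preservation constraints, the left and right scaling parameters coincide, so plugging them into the BLCMV beamformer preserves the binaural cues of the interferers.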
However, using the optimal interference scaling parameters may lead to problems in practice due to estimation errors of the correlation matrices and RTF vectors. More in particular, in the case of SINR maximization, the corresponding interference scaling parameters may be rather small, leading to decreased binaural cue preservation performance (cf. simulations in Section 4). On the other hand, in the case of SNR maximization, the corresponding interference scaling parameters may be rather large, depending on the position of the interfering source, leading to an unsatisfactory SINR improvement. Hence, we propose to enforce an upper and a lower threshold $\eta_{\max}$ and $\eta_{\min}$ on the magnitude of the optimal interference scaling parameters, i.e.,
$$\bar{\eta}_{i,L} = \min\!\left(\max\!\left(\left|\eta_{i,L}^{\mathrm{opt}}\right|,\, \eta_{\min}\right),\, \eta_{\max}\right) e^{\,j\angle\eta_{i,L}^{\mathrm{opt}}}, \qquad (26)$$
and similarly for $\bar{\eta}_{i,R}$. The thresholds $\eta_{\min}$ and $\eta_{\max}$ have been experimentally obtained, limiting the theoretically possible SIR improvement for each interfering source.
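One way to realize the thresholding, clamping the magnitude of each optimal parameter while keeping its phase, is sketched below (the paper's specific threshold values are not reproduced here):

```python
import numpy as np

def threshold_eta(eta_opt, eta_min, eta_max):
    """Clamp |eta| to [eta_min, eta_max], keeping the phase of each
    optimal interference scaling parameter."""
    mag = np.clip(np.abs(eta_opt), eta_min, eta_max)
    return mag * np.exp(1j * np.angle(eta_opt))
```

Parameters that are already inside the allowed range are left untouched; only very small (aggressive suppression, poor cue preservation) and very large (little suppression) values are pulled back to the thresholds.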
3.4 Estimation of correlation matrices and RTFs
All considered binaural beamformers require an estimate of the RTF vectors $\mathbf{h}_L$ and $\mathbf{h}_R$ of the desired source (cf. (11)). In addition, the BLCMV and BMVDR-RTF beamformers require an estimate of the RTF vectors $\mathbf{h}_{i,L}$ and $\mathbf{h}_{i,R}$ of each interfering source. In this paper, we estimate these RTFs using the covariance whitening approach [23, 24], which is based on the generalized eigenvalue decomposition (GEVD) of the speech-plus-noise correlation matrix $\mathbf{R}_{x+u}$ and the background noise correlation matrix $\mathbf{R}_u$, or the GEVD of the interference-plus-noise correlation matrix $\mathbf{R}_{n_i+u}$ and $\mathbf{R}_u$. While $\mathbf{R}_u$ can be estimated exploiting the assumed stationarity of the background noise, estimating $\mathbf{R}_{x+u}$ and $\mathbf{R}_{n_i+u}$ from the available mixture is not straightforward. Due to limited source activity and possible spatial changes of the acoustic scenario, the temporal observation interval that is available in practice for estimating these correlation matrices is typically limited. We assume that the correlation matrix $\mathbf{R}_{x+u}$ can be estimated from an observation interval consisting of $F$ frames where only the desired source and the background noise are active, i.e.,
$$\hat{\mathbf{R}}_{x+u} = \frac{1}{F}\sum_{f=1}^{F}\mathbf{y}(f)\,\mathbf{y}^H(f), \qquad (27)$$
where $f$ is the frame index. Similarly, we assume that the correlation matrix $\mathbf{R}_{n_i+u}$ can be estimated from an observation interval of $F$ frames where only the $i$-th interfering source and the background noise are active.
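A numpy-only sketch of one common formulation of the covariance-whitening RTF estimator [23, 24]: whiten the speech-plus-noise matrix with a Cholesky factor of the noise matrix, take the principal eigenvector, de-whiten it, and normalize to the reference microphone:

```python
import numpy as np

def rtf_covariance_whitening(R_xu, R_u, ref=0):
    """Estimate the RTF vector of a single coherent source from the
    speech-plus-noise and noise-only correlation matrices."""
    L = np.linalg.cholesky(R_u)       # R_u = L L^H
    Linv = np.linalg.inv(L)
    Rw = Linv @ R_xu @ Linv.conj().T  # whitened matrix: rank-1 term plus identity
    _, vecs = np.linalg.eigh(Rw)      # eigenvalues in ascending order
    u = vecs[:, -1]                   # principal eigenvector, proportional to L^{-1} a
    h = L @ u                         # de-whitened: proportional to the ATF
    return h / h[ref]                 # RTF w.r.t. the reference microphone
```

For an exact rank-1 source model the estimate recovers the true RTF; with sample estimates from $F$ frames as in (27), the accuracy degrades for shorter observation intervals, which is exactly the effect studied in Section 4.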
4 Experimental Results
In this section, we experimentally investigate the effect of the temporal observation interval on the performance of the BMVDR beamformer and the BLCMV beamformer using either the optimal interference scaling parameters $\eta^{\mathrm{opt}}$ or the proposed thresholded interference scaling parameters $\bar{\eta}$ (cf. Section 3.3).
We consider three different acoustic scenarios comprising one desired source, one or two interfering sources and diffuse background noise (cf. Table 1 for the source positions).
The desired source was a male German speaker, the first interfering source was a male Dutch speaker and the second interfering source was a male English speaker.
The desired speech and interference components were generated by convolving the desired and interfering source signals with measured impulse responses of binaural behind-the-ear hearing aids mounted on a dummy head in a cafeteria [20]. For the background noise we used real ambient noise recorded in the same cafeteria with the same setup. All signals start with a noise-only segment, followed by a segment in which all sources are active. The broadband input SNR and the SIRs were set to fixed values.
The noise correlation matrix $\mathbf{R}_u$ was estimated using the noise-only segment. To estimate the correlation matrices $\mathbf{R}_{x+u}$ and $\mathbf{R}_{n_i+u}$, we considered temporal observation intervals of different lengths; the algorithm had access to the respective mixtures. The RTF vectors of the desired source and the interfering source(s) were then calculated based on these estimated correlation matrices (cf. Section 3.4). Please note that shorter temporal observation intervals correspond to larger estimation errors.
The microphone signals were processed using a weighted overlap-add framework with a block length of 256 samples, 50% overlap and a square-root Hann window.
The BMVDR and BLCMV beamformers were calculated using three different correlation matrices, i.e., $\mathbf{R}_y$ (maximizing the SINR, with possible target cancellation), $\mathbf{R}_v$ (maximizing the SINR) and $\mathbf{R}_u$ (maximizing the SNR).
The filters were used as fixed filters over the whole signal.
As performance measures we used the binaural SINR improvement and the binaural cue errors, i.e., the ILD and ITD errors, which were calculated using a model of binaural auditory processing [25].
All performance measures were averaged over all frequencies and all acoustic scenarios.
Figure 2 depicts the SINR improvement for different lengths of the temporal observation interval and for different correlation matrices, while Figure 3 depicts the binaural cue errors of the first interfering source for the same temporal observation intervals.

First, it can be observed that when using $\mathbf{R}_y$ or $\mathbf{R}_v$ the SINR improvement is generally larger than when using $\mathbf{R}_u$. This is expected, because using the noise correlation matrix maximizes the SNR and not the SINR. Second, when using $\mathbf{R}_y$ or $\mathbf{R}_v$, an apparent difference can be seen for small observation intervals. Small observation intervals lead to larger estimation errors for the correlation matrices, and hence also for the RTF vectors, such that the drop in SINR improvement observed when using $\mathbf{R}_y$ can probably be attributed to target cancellation. For longer observation intervals, and hence smaller estimation errors, the difference between using $\mathbf{R}_y$ and $\mathbf{R}_v$ is smaller. As expected, the SINR improvement of the BLCMV beamformer using the thresholded interference scaling parameters $\bar{\eta}$ is smaller than for the BLCMV beamformer using the optimal interference scaling parameters $\eta^{\mathrm{opt}}$. However, looking at the binaural cue errors, using $\bar{\eta}$ in the BLCMV beamformer leads to much better binaural cue preservation, while using $\eta^{\mathrm{opt}}$ leads to similar binaural cue errors as for the BMVDR beamformer. This difference is especially visible for the ITD error at small observation intervals and is also confirmed by informal listening tests. Third, when using $\mathbf{R}_u$, the BLCMV beamformer outperforms the BMVDR beamformer for longer observation intervals because of the additional constraints. Additionally, using $\eta^{\mathrm{opt}}$ in the BLCMV beamformer apparently leads to marginally better SINR improvement in this case. Because $\mathbf{R}_v$ is in practice very hard to estimate accurately, we recommend using $\mathbf{R}_u$ when short observation intervals are required (e.g., in dynamic acoustic scenarios) and using $\bar{\eta}$ in the BLCMV beamformer to prevent binaural cue errors.

Table 1: Positions of the desired and interfering sources for the three acoustic scenarios.
5 Conclusions
In this paper, we proposed optimal values for the interference scaling parameters in the BLCMV beamformer for an arbitrary number of interfering sources based on the BMVDR-RTF beamformer. We showed how to set these parameters in practice such that robust performance can be achieved in the case of estimation errors. Evaluation results in a complex acoustic scenario showed that even short temporal observation intervals for estimating the required correlation matrices and RTF vectors are sufficient to achieve decent noise reduction performance and binaural cue preservation.
References
 [1] S. Doclo, W. Kellermann, S. Makino, and S.E. Nordholm, “Multichannel Signal Enhancement Algorithms for Assisted Listening Devices: Exploiting spatial diversity using multiple microphones,” IEEE Signal Processing Magazine, vol. 32, no. 2, pp. 18–30, Mar. 2015.
 [2] S. Doclo, S. Gannot, D. Marquardt, and E. Hadad, “Binaural Speech Processing with Application to Hearing Devices,” in Audio Source Separation and Speech Enhancement, chapter 18. Wiley, 2018.
 [3] A. W. Bronkhorst and R. Plomp, “The effect of headinduced interaural time and level differences on speech intelligibility in noise,” The Journal of the Acoustical Society of America, vol. 83, no. 4, pp. 1508–1516, 1988.
 [4] T. Lotter and P. Vary, “Dual-channel speech enhancement by superdirective beamforming,” EURASIP Journal on Applied Signal Processing, vol. 2006, pp. 1–14, 2006.
 [5] G. Grimm, V. Hohmann, and B. Kollmeier, “Increase and Subjective Evaluation of Feedback Stability in Hearing Aids by a Binaural Coherence-based Noise Reduction Scheme,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 7, pp. 1408–1419, Sep. 2009.
 [6] A. H. Kamkar-Parsi and M. Bouchard, “Improved Noise Power Spectrum Density Estimation for Binaural Hearing Aids Operating in a Diffuse Noise Field Environment,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 4, pp. 521–533, May 2009.
 [7] A. H. Kamkar-Parsi and M. Bouchard, “Instantaneous Binaural Target PSD Estimation for Hearing Aid Noise Reduction in Complex Acoustic Environments,” IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 4, pp. 1141–1154, Apr. 2011.
 [8] K. Reindl, Y. Zheng, A. Schwarz, S. Meier, R. Maas, A. Sehr, and W. Kellermann, “A stereophonic acoustic signal extraction scheme for noisy and reverberant environments,” Computer Speech and Language, vol. 27, no. 3, pp. 726–745, 2013.
 [9] R. Baumgärtel, M. Krawczyk-Becker, D. Marquardt, C. Völker, H. Hu, T. Herzke, G. Coleman, K. Adiloğlu, S. M. A. Ernst, T. Gerkmann, S. Doclo, B. Kollmeier, V. Hohmann, and M. Dietz, “Comparing binaural signal processing strategies I: Instrumental evaluation,” Trends in Hearing, vol. 19, pp. 1–16, 2015.
 [10] D. Marquardt and S. Doclo, “Noise power spectral density estimation for binaural noise reduction exploiting direction of arrival estimates,” in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz NY, USA, Oct. 2017, pp. 234–238.
 [11] R. Aichner, H. Buchner, M. Zourub, and W. Kellermann, “Multichannel source separation preserving spatial information,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Honolulu HI, USA, Apr. 2007, pp. 5–8.
 [12] B. Cornelis, S. Doclo, T. Van den Bogaert, J. Wouters, and M. Moonen, “Theoretical analysis of binaural multimicrophone noise reduction techniques,” IEEE Transactions on Audio, Speech and Language Processing, vol. 18, no. 2, pp. 342–355, Feb. 2010.
 [13] D. Marquardt, E. Hadad, S. Gannot, and S. Doclo, “Theoretical Analysis of Linearly Constrained Multichannel Wiener Filtering Algorithms for Combined Noise Reduction and Binaural Cue Preservation in Binaural Hearing Aids,” IEEE/ACM Trans. on Audio, Speech, and Language Processing, vol. 23, no. 12, pp. 2384–2397, Dec. 2015.
 [14] E. Hadad, D. Marquardt, S. Doclo, and S. Gannot, “Theoretical Analysis of Binaural Transfer Function MVDR Beamformers with Interference Cue Preservation Constraints,” IEEE/ACM Trans. Audio, Speech and Language Proc., vol. 23, no. 12, pp. 2449–2464, Dec. 2015.
 [15] E. Hadad, S. Doclo, and S. Gannot, “The Binaural LCMV Beamformer and its Performance Analysis,” IEEE/ACM Trans. on Audio, Speech, and Language Proc., vol. 24, no. 3, pp. 543–558, 2016.
 [16] E. Hadad, D. Marquardt, W. Pu, S. Gannot, S. Doclo, Z.-Q. Luo, I. Merks, and T. Zhang, “Comparison of two binaural beamforming approaches for hearing aids,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, USA, Mar. 2017, pp. 236–240.
 [17] A. I. Koutrouvelis, R. C. Hendriks, R. Heusdens, and J. Jensen, “Relaxed binaural LCMV beamforming,” IEEE/ACM Trans. on Audio, Speech and Language Processing, vol. 25, no. 1, pp. 137–152, Jan. 2017.
 [18] W. Pu, J. Xiao, T. Zhang, and Z.-Q. Luo, “A penalized inequality-constrained minimum variance beamformer with applications in hearing aids,” in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz NY, USA, Oct. 2017, pp. 175–179.
 [19] D. Marquardt and S. Doclo, “Interaural Coherence Preservation for Binaural Noise Reduction Using Partial Noise Estimation and Spectral Postfiltering,” IEEE/ACM Trans. on Audio, Speech and Language Processing, vol. 26, no. 7, pp. 1257–1270, 2018.
 [20] H. Kayser, S. Ewert, J. Anemüller, T. Rohdenburg, V. Hohmann, and B. Kollmeier, “Database of Multichannel In-Ear and Behind-The-Ear Head-Related and Binaural Room Impulse Responses,” EURASIP Journal on Advances in Signal Processing, vol. 2009, 10 pages, 2009.
 [21] B. D. Van Veen and K. M. Buckley, “Beamforming: a versatile approach to spatial filtering,” IEEE ASSP Magazine, vol. 5, no. 2, pp. 4–24, Apr. 1988.
 [22] D. Marquardt, E. Hadad, S. Gannot, and S. Doclo, “Optimal binaural LCMV beamformers for combined noise reduction and binaural cue preservation,” in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC), Juan-les-Pins, France, Sep. 2014, pp. 288–292.

 [23] S. Markovich, S. Gannot, and I. Cohen, “Multichannel eigenspace beamforming in a reverberant noisy environment with multiple interfering speech signals,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 6, pp. 1071–1086, Aug. 2009.
 [24] S. Markovich-Golan and S. Gannot, “Performance analysis of the covariance subtraction method for relative transfer function estimation and comparison to the covariance whitening method,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, Apr. 2015, pp. 544–548.
 [25] M. Dietz, S. D. Ewert, and V. Hohmann, “Auditory model based direction estimation of concurrent speakers from binaural signals,” Speech Communication, vol. 53, pp. 592–605, 2011.