With the rapid proliferation of wireless earbuds (100 million AirPods sold in 2020 (26)), more people than ever are taking calls on-the-go. While these systems offer unprecedented convenience, their mobility raises an important technical challenge: environmental noise (e.g., street sounds, people talking) can interfere and make it harder to understand the speaker. We therefore seek to enhance the speaker’s voice and suppress background sounds using speech captured across the two earbuds.
Source separation of acoustic signals is a long-standing problem, and the conventional approach for decades has been to perform beamforming using multiple microphones. Signal processing-based beamformers are computationally lightweight and can encode spatial information, but they do not effectively capture acoustic cues (Van Veen and Buckley, 1988; Krim and Viberg, 1996; Chhetri et al., 2018). Recent work has shown that deep neural networks can encode both spatial and acoustic information and hence achieve superior source separation, with gains of several dB over signal processing baselines (Subakan et al., 2021; Luo and Mesgarani, 2019). However, these neural networks are computationally expensive. None of the existing binaural (i.e., two-microphone) neural networks meet the end-to-end latency required for telephony applications, nor have they been evaluated with real earbud data. Commercial end-to-end systems, like Krisp (78), use neural networks on a cloud server for single-channel speech enhancement, with implications for cost and privacy.
We present the first mobile system that uses neural networks to achieve real-time speech enhancement from binaural wireless earbuds. Our key insight is to treat wireless earbuds as a binaural microphone array, and exploit the specific geometry – two well-separated microphones behind a proximal source – to devise a specialized neural network for high quality speaker separation. In contrast to using multiple microphones on the same earbud to perform beamforming, as is common in Apple AirPods (2) and other hearing aids, we use microphones across the left and right earbuds, increasing the distance between the two microphones and thus the spatial resolution.
To realize this vision, we need to address three key technical challenges to deliver a functioning, practical system:
Today’s wireless earbuds only support one channel of microphone up-link to the phone. AirPods and similar devices upload microphone output from only a single earbud at a time. To achieve binaural speaker separation, we need to design and build novel earbud hardware that can synchronously transmit audio data from both the earbuds, and maintain tight synchronization over long periods of time.
Binaural speech enhancement networks have high computational requirements, and have not been demonstrated on mobile devices with data from wireless earbuds. Reducing the network size naively often leads to unpleasant artifacts. Thus, we also need to optimize the neural networks to run in real-time on smart devices that have a limited computational capability compared to cloud GPUs. Further, we need to meet the end-to-end latency requirements for telephony applications and ensure that the resulting audio output has a high quality from a user experience perspective.
Prior binaural speech enhancement networks are trained and tested on synthetic data and have not been shown to generalize to real data. Building an end-to-end system, however, requires a network that generalizes to in-the-wild use.
To achieve this system, we make three technical contributions spanning earable hardware and neural networks.
Synchronized binaural earables. We designed a binaural wireless earbud system (Fig. 2) capable of streaming two time-synchronized microphone audio streams to a mobile device. This is one of the first systems of its kind, and we expect our open-source earbud hardware and firmware to be of wider interest as a research and development platform. Existing earable platforms such as eSense (Kawsar et al., 2018) do not support time-synchronized audio transmission from two earbuds to a mobile device. We designed our DIY hardware using open-source eCAD software, outsourced fabrication and assembly (50 units), and 3D printed the enclosures.
Lightweight cascaded neural network. We introduce a lightweight neural network that uses binaural input from wireless earbuds to isolate the target speaker. To achieve real-time operation, we start with the Conv-TasNet source separation network (Luo and Mesgarani, 2019) and redesign it so that 90% of the computed network activations from the previous time step are re-used for each new audio segment (see §3.2). While these optimizations make the network real-time, they also introduce artifacts in the audio output (e.g., crackling, static). Interestingly, these artifacts have little effect on traditional metrics, like Signal-to-Distortion Ratio (SDR), but have a noticeable effect on subjective listening scores (see §5.2); they are, however, often visible in a frequency representation of the audio. To address this, we combine our mobile temporal model with a real-time spectrogram-based frequency masking neural network. We show that combining the two networks into a lightweight cascaded network reduces artifacts and further improves audio quality.
Network training for in-the-wild generalization. Training the network in a supervised way requires clean ground truth speech samples as training targets. These are difficult to obtain in fully natural settings since the ground truth speech is corrupted with background noise and voices. Training a network that generalizes to in-the-wild scenarios also requires the training data to mimic the dynamics of real speech, including reverb, voice resonance, and microphone response, as closely as possible. Synthetically rendered spatial data is the easiest type of data to obtain but the most different from real recordings, while real speakers wearing the headset in an anechoic chamber provide the best ground-truth training targets but are the most costly to obtain. Synthetic data can also simulate reverb and multi-path that are not captured in an anechoic chamber. Our training methodology uses large amounts of synthetic data simulated in software, small amounts of hardware data with a speaker embedded in a foam mannequin head, and small amounts of data from human speakers wearing the earbuds in an anechoic chamber (see §4) to create a neural network that generalizes to users and multi-path environments not in the training data.
We combine our wireless earbuds and neural network to create ClearBuds, an end-to-end system capable of (1) source separation for the intended speaker in noisy environments, (2) attenuation and/or elimination of both background noises and external human voices, and (3) real-time, on-device processing on a commodity mobile phone paired to the two earbuds. Our results show that:
Our binaural wireless earbuds can stream audio to a phone with a synchronization error below 64 μs and operate continuously on a coin cell battery for 40 hours.
Our system outperforms Apple AirPods Pro by 5.23, 8.61, and 6.94 dB for the tasks of separating the target voice from background noise, background voices, and a combination of background noise and voices respectively.
Our network has a runtime of 21.4 ms on an iPhone 12, and the entire ClearBuds system operates in real-time with an end-to-end latency of 109 ms. For telephony applications, an ear-to-mouth latency of less than 200 ms is required for a good user experience (58).
In-the-wild evaluation with eight users in various indoor and outdoor scenarios shows that our system generalizes to previously unseen participants and multipath environments.
In a user study with 37 participants who collectively spent over 15.4 hours and rated a total of 1041 in-the-wild audio samples, our cascaded network achieved higher mean opinion scores and noise suppression ratings than both the input speech and a lightweight Conv-TasNet.
We believe that this paper bridges state-of-the-art deep learning for blind audio source separation and in-ear mobile systems. The ability to perform background noise suppression and speech separation could positively impact millions of people who use earbuds to take calls on-the-go. By open-sourcing the hardware and collected datasets, our work may help kickstart future research among mobile system and machine learning researchers to design algorithms around wireless earbud data.
2. Related Work
Endfire beamforming configurations remain popular on consumer mobile phones and earbuds (20; 2; 15; 43). While recent advances in neural networks have shown promising results, none have been demonstrated with wireless earbuds. By creating a wireless network between two earbuds, we demonstrate that our real-time, two-channel neural network can outperform current real-time speech enhancement approaches for wireless earbuds.
Beamforming techniques. Since signal-processing based beamforming is computationally lightweight, these techniques are deployed on commercial devices such as smart speakers (16), mobile phones (20), and earbud devices like Apple AirPods (2). However, the performance of beamforming is limited by the geometry of the microphones and the distance between them (Van Veen and Buckley, 1988; InvenSense, 2013). The form factor of devices like AirPods restricts both the number of microphones on a single earbud and the available distance between them, limiting the gain of the beamformer. While beamforming across two earbuds could provide better performance in principle, current wireless architectures are limited to streaming from a single earbud at a time (Telephony and Group, 2020). Furthermore, adaptive beamformers such as MVDR (Frost, 1972), while showing promise with relatively few interfering sources, are sensitive to sensor placement tolerance and steering (Zhang and Wang, 2017; Brandstein, 2001). Finally, beamforming leverages spatial or spectral cues only and does not use acoustic cues (e.g., structure in human speech) and perceptual differences to discriminate sources, information that machine learning methods leverage successfully.
Single-channel deep speech enhancement. Many deep learning techniques operate on spectrograms to separate the human voice from background noise (Xu et al., 2015; Mohammadiha et al., 2013; Duan et al.; Nikzad et al., 2020; Choi et al., 2019; Weninger et al., 2015; Fu et al., 2019; Soni et al.). However, recent works instead operate directly on time-domain signals (Luo and Mesgarani, 2019; Germain et al., 2018; Pascual et al., 2017; Defossez et al., 2020; Macartney and Weyde, 2018), yielding performance improvements over spectrogram approaches. Commercial noise suppression software like Krisp (78) and Google Meet (42) have successfully deployed single-channel models in real-time on mobile phones and desktop computers, but processing is performed on the cloud. (Fedorov et al., 2020) achieves low-power speech enhancement using a long short-term memory (LSTM) network, but it targets single-channel enhancement rather than multichannel source separation. Further, single-channel models cannot effectively capture spatial information and fail to isolate the intended speaker when there are multiple speakers (see Fig. 3).
Multi-channel source separation and speech enhancement. Multi-channel methods have been shown to perform better than their single-channel source separation counterparts (Yoshioka et al., 2018; Chen et al., 2018; Zhang and Wang, 2017; Gu et al., 2020; Tzirakis et al., 2021; Jenrungrot et al., 2020). Binaural methods have also been used for source separation (Sun et al.; Han et al., 2020; Li et al., 2011; Reindl et al., 2010) and localization (van Hoesel et al., 2008; Lyon, 1983; Kock, 1950); (Han et al., 2020) reduces the look-ahead time in the network to make it causal in behavior, but has not been demonstrated to run on a mobile device. Our method improves on existing binaural methods by combining a time-domain neural network with a spectrogram-based frequency masking network, and by optimizing both to enable real-time processing on a phone. Recent works such as (Tan et al., 2019, 2021; Shankar et al., 2020) use multiple microphones on a smartphone for speech enhancement. However, none of them demonstrates evaluation with real data, where artifacts introduced by network optimizations can affect user experience. In contrast, we demonstrate the first system that achieves real-time speech enhancement using microphones on two wireless earbuds. Further, since the distance between the earbuds is larger than the distance between microphones on a typical mobile phone, we can attain a better baseline than a mobile phone implementation, while also retaining the ability to speak hands-free. More recent works tackle real-time directional hearing using eye trackers and wearable headsets. For example, (Wang et al., AAAI 2022) uses a hybrid network that combines signal processing with neural networks, but shows that their technique performs poorly in binaural scenarios (i.e., two microphones) and requires four or more microphones.
In contrast, we focus on the problem of speech enhancement and create the first real-time end-to-end hardware-software neural-network based system using wireless synchronized earbuds.
Earbud computing and platforms. There has been recent interest in earbud computing (Ma et al., 2021b; Kawsar et al., 2018; Min et al., 2018; Powar and Beresford, 2019; Yang and Choudhury, 2021) to address applications in health monitoring (Chan et al., 2019; Bui et al., 2021; Chan et al., 2022), activity tracking (Ma et al., 2021a) and sensor fusion with EEG signals (Ceolini et al., 2020). The eSense platform (Kawsar et al., 2018; Min et al., 2018) has enabled research in sensing applications with earables. OpenMHA (Pavlovic, Caslav et al., 2018; Herzke et al., 2017) is an open signal processing software platform for hearing aid research. Neither of these platforms supports time-synchronized audio transmission from two earbuds, which is a critical requirement for achieving speech enhancement in binaural settings. In contrast, we created open-source wireless earbud hardware that supports synchronized wireless transmission from the two earbuds.
3. ClearBuds Design
We first introduce our lightweight neural network architecture. We then describe system design of our hardware platform and our synchronization algorithm. We open-source our mechanical, firmware, application, and network designs at our project website: https://clearbuds.cs.washington.edu.
3.1. Problem Formulation
Suppose we have a two-channel microphone array with one microphone on each ear of the wearer. The target voice produces a signal s in the presence of some background noise bg and other non-target speakers s_i. There may also be multi-path reflections and reverberation r that we would like to reduce; the recorded mixture is thus x = s + bg + r + Σ_i s_i. Our goal is to recover the target speaker's signal s while suppressing the background, reverberation, and other speakers. We must also do so in real time, meaning that a mixture sample received at time t must be processed and output by the network before t + L for some defined latency L. We refer to the non-target speakers as "background voices". These background voices may be at any location in the scene, including very close to the target speaker, and their angle can change with time and motion.
3.2. Neural Network Architecture Motivation
Our network needs to run in real-time on a mobile device with minimal latency. This is challenging for several reasons. First, the processing device has far less compute capacity than the cloud GPUs these networks typically run on. Second, the network must separate non-speech noise as well as unwanted speech; to do this, it must learn both spatial cues and human voice characteristics. Finally, the resulting output should maximize quality from a human listening perspective while minimizing any artifacts the network might introduce.
Our network, which we call ClearBuds-Net or CB-Net, is a cascaded model that operates in both time and frequency domains. The full network architecture is illustrated in Fig. 4 and contains two main sub-components: A dual-channel time domain network called CB-Conv-TasNet, and a frequency based network called CB-UNet.
The first component of our separation method is a time-domain network based on a multi-channel extension of Conv-TasNet (Luo and Mesgarani, 2019). This network operates in the waveform domain with a Temporal Convolution Network (TCN) structure, lending itself to a causal implementation with intermediate layer caching (Paine et al., 2016). We use depthwise separable convolutions (Howard et al., 2017) to further reduce the number of parameters and make the design real-time. We call this network CB-Conv-TasNet since it is an optimized version of the original Conv-TasNet.
A key feature of the time domain approach is that it can easily capture spatial cues in the network. In our application, the desired source is always physically between two microphones, thus the voice signal will reach the microphones roughly at the same time. In contrast, background or other speakers are typically not temporally aligned and will reach one microphone earlier or later. By feeding two time synchronized channels into the neural network, this spatial alignment of the sources can be learned from time differences in the signal. This is similar to a delay-and-sum beamforming effect, except the sum is replaced with a deep network.
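This alignment effect can be seen in a small numpy sketch (illustrative only: white noise stands in for the sources, and the interaural delay is a hypothetical value, not a measurement from our hardware):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4096, 8                    # signal length and a hypothetical interaural delay (samples)
target = rng.standard_normal(n)   # centered source: arrives at both mics simultaneously
interf = rng.standard_normal(n)   # off-axis source: arrives d samples later on one side

left = target + np.concatenate([np.zeros(d), interf[:-d]])
right = target + interf

summed = 0.5 * (left + right)     # broadside sum; CB-Net replaces this fixed sum with a network
interf_resid = summed - target    # the centered target passes through unchanged
```

Averaging the two channels preserves the time-aligned target at full power while attenuating the delayed interferer; the learned network exploits the same cue but with far more selectivity than a fixed sum.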
The output of our lightweight CB-Conv-TasNet often contains audible artifacts (e.g., crackling, static) that degrade the listening experience. Interestingly, these artifacts have little effect on traditional metrics, like Signal-to-Distortion Ratio (SDR), but have a noticeable effect on subjective listening scores (see §5.2). These artifacts are often visible in a frequency representation of the audio: Fig. 5 shows how CB-Conv-TasNet alone contains noticeable artifacts when compared to the ground truth. To address this, we cascade a lightweight causal UNet (Ronneberger et al., 2015) that operates on the mel-scale spectrogram of the input audio. This network, which we call CB-UNet, produces a binary mask that is applied to the output of CB-Conv-TasNet. The combined output, shown in Fig. 5 as CB-Net, reduces these artifacts. The mean opinion scores in our evaluation show the strength of the cascaded CB-Net when compared to the time-domain component alone.
3.3. Neural Network Detailed Description
The input to the network is a binaural mixture x. The first step is an encoder that transforms the mixture x into a latent representation with a 1D convolution, followed by a ReLU layer. The encoder's outputs are then fed into a temporal convolution network that consists of stacks of 1-D convolutions with increasing dilation factors. We use 14 convolution layers with dilation factors of 1, 2, 4, ..., 64, repeated twice, with a ReLU nonlinearity and skip connection after each convolution. The encoder output is multiplied with the output of the temporal conv-net before being fed through a fully connected decoder layer, which transforms the output back into the waveform domain.
In a real-world implementation, we do not have access to the full waveform, but only to packets of data at a time. Furthermore, we must process these packets with limited access to future input samples. Given a 15.625 kHz sampling rate, we choose to process packets of W = 350 samples at a time (22.4 ms), which is our window size. We also use 2W, or 700 samples, of lookahead (44.8 ms) and 1.5 s of past samples. Since we have no padding in the temporal convolution net, the network starts with this large temporal context and outputs exactly W samples, corresponding to the desired output for our input packet of W samples. When we receive the next packet of size W, all intermediate activations from the encoder and temporal conv-net can be shifted over by W samples and re-used; any divisor of W would also work as the shift. Re-using intermediate outputs from previous packets saves over 90% of the compute time for a new packet in our network.
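The activation re-use above follows from the time invariance of causal dilated convolutions: per packet, each layer only needs its cached receptive-field history plus the new samples. A minimal numpy sketch (toy weights and a small three-layer stack, not our actual network) shows that streaming with cached activations matches a full recompute exactly:

```python
import numpy as np

K, DILATIONS, W = 3, (1, 2, 4), 350   # toy kernel size / dilations; W matches the 350-sample packet

def causal_conv(x, w, d):
    """y[t] = sum_k w[k] * x[t - k*d], with zero history before t = 0."""
    pad = (len(w) - 1) * d
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[k] * xp[pad + t - k * d] for k in range(len(w)))
                     for t in range(len(x))])

def full_forward(x, weights):
    """Reference: run the whole stack over the entire waveform at once."""
    for w, d in zip(weights, DILATIONS):
        x = np.maximum(causal_conv(x, w, d), 0.0)   # ReLU is pointwise, so streaming-safe
    return x

class StreamingStack:
    """Processes W-sample packets, caching each layer's receptive-field history."""
    def __init__(self, weights):
        self.weights = weights
        self.hist = [np.zeros((K - 1) * d) for d in DILATIONS]

    def push(self, packet):
        x = packet
        for i, (w, d) in enumerate(zip(self.weights, DILATIONS)):
            pad = (K - 1) * d
            ctx = np.concatenate([self.hist[i], x])   # cached past inputs + new samples
            y = np.array([sum(w[k] * ctx[pad + t - k * d] for k in range(len(w)))
                          for t in range(len(x))])
            self.hist[i] = ctx[-pad:]                 # shift the cache for the next packet
            x = np.maximum(y, 0.0)
        return x

rng = np.random.default_rng(1)
weights = [rng.standard_normal(K) for _ in DILATIONS]
sig = rng.standard_normal(4 * W)
stream = StreamingStack(weights)
streamed = np.concatenate([stream.push(sig[i * W:(i + 1) * W]) for i in range(4)])
```

Per packet, only the W new output positions are computed at each layer; everything to their left is served from the cache, which is where the compute savings come from.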
The frequency domain network is a mono-channel network that outputs a binary mask for each time-frequency bin. The input is the sum of the binaural left and right channels, which is equivalent to a broadside beamformer. We first run an STFT with mel-scale output, zero-padding each window on the edges. The network input is a spectrogram whose receptive field covers both past context and lookahead: to maintain the causality requirement, we use the same lookahead strategy as the time-domain network, allowing 2W samples of lookahead for a target packet of W samples. The UNet architecture contains 4 downsampling and upsampling layers, starting with 64 channels and doubling the number of channels at each subsequent layer. The downsampling layers contain a depthwise separable convolution followed by a max pooling, and the upsampling layers contain a depthwise separable convolution followed by a transposed convolution for upsampling. The output passes through a sigmoid, which is then thresholded to return a binary mask in {0, 1}. When outputting a spectrogram mask, we predict a mask over the entire input even though we only need the output for a specific slice of W samples. Further optimizations could be made by caching intermediate outputs or only computing the mask for the target samples; however, CB-UNet's run-time is so small compared to the rest of the network that these optimizations were not necessary.
3.3.3. Combining the Outputs
At each time step, the output of CB-Conv-TasNet is an audio waveform, and the output of CB-UNet is a spectrogram mask M. We run the same Fourier transform on the buffered Conv-TasNet outputs to produce a spectrogram S. Our output can then be computed by applying the mask and inverting the transform, i.e., ISTFT(M ⊙ S). Our empirical results show that this gives the best results compared to other methods such as ratio masking.
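A toy illustration of this combination step follows; it is a simplification, not the exact transform in CB-Net (we use non-overlapping FFT frames so inversion is trivial, whereas the real system uses a mel-scale STFT, and the mask here is a stand-in for CB-UNet's output):

```python
import numpy as np

N = 256  # frame length; hop == window here, so the inverse transform is exact

def stft(x):
    """Non-overlapping-frame spectrogram: (num_frames, N // 2 + 1) complex bins."""
    return np.fft.rfft(x[: len(x) // N * N].reshape(-1, N), axis=1)

def istft(S):
    """Invert the non-overlapping-frame spectrogram back to a waveform."""
    return np.fft.irfft(S, n=N, axis=1).reshape(-1)

rng = np.random.default_rng(0)
y_tasnet = rng.standard_normal(4 * N)                 # stand-in for the time-domain output
S = stft(y_tasnet)                                    # spectrogram of the buffered output
M = (np.abs(S) > np.median(np.abs(S))).astype(float)  # stand-in for CB-UNet's binary mask
out = istft(M * S)                                    # masked spectrogram back to a waveform
```

The key property is that a binary mask simply zeroes out time-frequency bins of the already-separated waveform, which is what suppresses the residual crackling artifacts.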
CB-Conv-TasNet is trained with an L1 loss over the waveform along with a multi-resolution spectrogram loss. Formally, provided y is our target speaker and ŷ is the output from the network, our loss is:

L(ŷ, y) = ‖ŷ − y‖₁ + Σ_m ( L_sc^(m)(ŷ, y) + L_mag^(m)(ŷ, y) ),

where, for each STFT resolution m, L_sc(ŷ, y) = ‖ |STFT(y)| − |STFT(ŷ)| ‖_F / ‖ |STFT(y)| ‖_F and L_mag(ŷ, y) = ‖ log|STFT(y)| − log|STFT(ŷ)| ‖₁. Here |STFT(·)| denotes the magnitude of the short-time Fourier transform, and ‖·‖_F denotes the Frobenius norm. L_sc and L_mag represent the spectral convergence and magnitude losses, which gave better results than an L1 loss alone.
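A numpy sketch of this loss follows (the FFT sizes and hops are illustrative placeholders, not the resolutions used in our training):

```python
import numpy as np

def mag_stft(x, n_fft, hop):
    """Magnitude STFT with a Hann window (one row per frame)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def mr_stft_loss(y_hat, y, resolutions=((512, 128), (1024, 256), (2048, 512))):
    """L1 waveform loss plus spectral-convergence and log-magnitude terms per resolution."""
    eps = 1e-8
    loss = np.mean(np.abs(y_hat - y))                                   # waveform L1
    for n_fft, hop in resolutions:
        S, S_hat = mag_stft(y, n_fft, hop), mag_stft(y_hat, n_fft, hop)
        sc = np.linalg.norm(S - S_hat) / (np.linalg.norm(S) + eps)      # spectral convergence
        mag = np.mean(np.abs(np.log(S + eps) - np.log(S_hat + eps)))    # magnitude loss
        loss += sc + mag
    return float(loss)
```

Averaging over several FFT resolutions penalizes both broadband distortion and narrowband artifacts, which is why this term complements the waveform L1.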
For training CB-UNet, for each time-frequency bin, the training target M is 1 if the target voice is the dominant component, and 0 otherwise. Formally, with target speech s and mixture x, M(t, f) = 1 if |STFT(s)(t, f)| ≥ |STFT(x − s)(t, f)|, and 0 otherwise. The network is then trained with the binary cross entropy of the output compared to the target mask.
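A minimal sketch of the mask target and training criterion (toy 2×2 magnitudes, hypothetical values chosen so the target dominates the first frequency bin):

```python
import numpy as np

def ibm(target_mag, interf_mag):
    """Ideal binary mask: 1 wherever the target dominates a time-frequency bin."""
    return (target_mag >= interf_mag).astype(np.float64)

def bce(pred, mask, eps=1e-7):
    """Binary cross entropy between a predicted soft mask and the target mask."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(mask * np.log(pred) + (1 - mask) * np.log(1 - pred))))

# Toy 2x2 (time x frequency) magnitudes: the target dominates the first frequency bin.
t_mag = np.array([[5.0, 0.2], [4.0, 0.1]])
i_mag = np.array([[1.0, 3.0], [0.5, 2.0]])
M = ibm(t_mag, i_mag)
```

A perfect prediction drives the BCE toward zero, while an inverted mask is heavily penalized, which is the gradient signal CB-UNet trains on.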
3.3.5. Hyperparameters and Training Details
We use the Adam optimizer (Kingma and Ba, 2014) for training the network. The network was trained on a single Nvidia TITAN Xp GPU. Because of the small size of the network, training could be completed within a single day and generally required 50 epochs to reach convergence. As an additional data augmentation step, we make the following perturbations to the data: high-shelf and low-shelf gains are randomly added using the sox library.
3.4. Synchronized wireless earbuds
We seek to capture speech from the target speaker's mouth, which sits on the sagittal plane roughly equidistant from the ears. Given an ear-to-ear spacing of 17.5 cm, effectively isolating this central plane requires distance precision on the order of a few centimeters. An interaural time difference of 100 μs corresponds to a source at most 3.43 cm off this central plane, so we target a synchronization accuracy under 100 μs.
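The budget above follows directly from the speed of sound; a quick check of the numbers:

```python
SPEED_OF_SOUND = 343.0   # m/s at room temperature
SYNC_BUDGET_S = 100e-6   # target synchronization accuracy (100 us)

# A 100 us interaural time difference corresponds to a path-length difference of
# 343 m/s * 100 us = 3.43 cm, so tighter sync pins the target to the sagittal plane.
max_offset_cm = SPEED_OF_SOUND * SYNC_BUDGET_S * 100
```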
Our custom hardware design contains a pulse-density modulated (PDM) microphone (Invensense ICS-41350) and a Bluetooth Low Energy (BLE) microcontroller (Nordic nRF52840). For future research applications, an ultra low-power accelerometer (Bosch BMA400), a 1 Gbit NAND flash for local data collection (Winbond W25N01GVZEIG), and support for a speaker and an additional microphone are included. The system is powered by a CR2032 coin cell battery and programmed via SWD over a Micro-USB connector. Each ClearBud has an integrated PDM microphone set to a clock frequency of 2 MHz. With an internal PDM decimation ratio of 64, this provides a sampling frequency of 31.25 kHz. As most HD voice applications and wideband codecs are limited to 16 kHz (C. and M.H., 2009), we decimate further in firmware by a factor of 2, giving a final sampling frequency of 15.625 kHz.
Two 16-bit, 180-sample Pulse-Code Modulation (PCM) buffers are used in a round-robin fashion: one is filled with incoming PCM data while the other is processed. The DMA is responsible for both clocking in the PDM data and converting it into PCM; one buffer is always connected to the DMA, while the other is freed for the rest of the data pipeline. When the buffer connected to the DMA fills, the buffers switch roles: we begin processing data on the newly freed buffer and connect the other buffer back to the DMA. With this design we always have a continuous PCM stream to operate on. Both ClearBuds transmit the PCM microphone data to a mobile phone for input into our neural network. To maximize throughput, we use the highest Bluetooth rate and packet size supported by iOS: 2 Mbps and 182 bytes, respectively. We design a lightweight wireless protocol where the first 2 bytes carry a monotonically increasing sequence number and the remaining 180 bytes carry 16-bit PCM audio samples. The sequence number allows the phone to zero-pad the PCM data in the occasional event that a packet is dropped, either over-the-air or by the radio hardware. This zero-padding keeps the left and right microphone data aligned on the host side during poor radio performance or environmental interference.
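The host-side gap-filling logic can be sketched in a few lines (a simplified Python model of the protocol, not our Swift implementation; the function name and tuple format are illustrative):

```python
def reassemble(packets, payload_bytes=180, seq_mod=65536):
    """Zero-fill dropped packets so the left/right PCM streams stay aligned.

    `packets` is a list of (sequence_number, payload) tuples in arrival order,
    mirroring the 2-byte sequence number + 180-byte PCM payload packet format.
    """
    out, expected = bytearray(), None
    for seq, payload in packets:
        if expected is not None:
            missing = (seq - expected) % seq_mod   # 16-bit sequence numbers wrap
            out.extend(b"\x00" * (payload_bytes * missing))  # silence for lost packets
        out.extend(payload)
        expected = (seq + 1) % seq_mod
    return bytes(out)
```

Inserting silence rather than dropping the gap keeps sample indices of the two streams in lockstep, which the binaural network depends on.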
The hardware schematic and layout for ClearBuds were designed using the open source eCAD tool KiCad. A 2-layer flexible printed circuit was fabricated and assembled by PCBWay. The 3D printed enclosures were designed using AutoDesk Fusion 360 and printed with a Phrozen Sonic Mini using a liquid resin fabrication process. The MEMS microphone sits behind the lid on the earbud's outer surface. A single button on the enclosure turns the earbuds on and off.
3.4.2. Microphone synchronization
Three components are necessary for maintaining microphone synchronization: (1) as each earbud has its own local clock source, we need to establish a common clock between them so that they share the same reference of time; (2) a synchronized startup so that each earbud starts recording from its microphone at exactly the same time; and (3) a rate encoding scheme that keeps the earbuds' sampling rates matched to each other.
In our system, each earbud has its own 32 MHz clock source with a total ±20 ppm frequency tolerance budget; in the worst case, the earbuds drift apart by 2.4 ms each minute. We use Nordic's TimeSlot API (59), which grants us access to the underlying radio hardware between Bluetooth transmissions. This provides a transport for transmitting and receiving accurate time sync beacons (77). Each ClearBud keeps a free-running 16 MHz hardware timer with a maximum value of 800,000, overflowing and wrapping around at a rate of 20 Hz. One ClearBud is assigned as the timing master while the other synchronizes its free-running timer to the master's. The primary ClearBud (timing master) transmits time sync packets at a rate of 200 Hz. These packets contain the value of the free-running timer at the time of the radio packet transmission. When the secondary ClearBud receives such a packet, it adds or subtracts an offset to its own free-running timer to maintain a common clock.
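The drift budget and offset correction can be sketched as follows (a simplified model in Python; the firmware operates on raw timer registers, and the function name here is illustrative):

```python
PPM = 40e-6            # worst-case relative drift: +/-20 ppm per crystal
WRAP = 800_000         # free-running 16 MHz timer wraps at 800,000 (20 Hz)

# Without correction, two +/-20 ppm crystals can diverge by 40 ppm relative,
# i.e. 40e-6 * 60 s = 2.4 ms of drift per minute.
drift_ms_per_min = PPM * 60 * 1e3

def common_clock_offset(master_timer, local_timer, wrap=WRAP):
    """Offset the secondary adds (mod wrap) to its timer to match the master's clock."""
    return (master_timer - local_timer) % wrap
```

The modulo keeps the offset valid even when the beacon arrives just as one timer wraps around.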
Once each ClearBud is connected to the mobile phone, the phone sends a START command to both ClearBuds over BLE. Each ClearBud contains firmware which arms a programmable peripheral interconnect (PPI) to launch the PDM bus once the 16 MHz free-running timer wraps around at 800,000. Using this method, we bypass the CPU and trigger a synchronized startup entirely at the hardware layer. One caveat is that the mobile phone could write to one ClearBud right before its clock wraps around at 800,000, and to the other ClearBud right after; with a clock that wraps around at 20 Hz, this mismatched startup would cause an alignment error of 50 ms. To correct for this, each ClearBud reports its common clock timer value to the phone once it has received the START command. The phone can then remove the first 781 audio samples (781 samples / 15.625 kHz ≈ 50 ms) if one ClearBud started streaming 50 ms before the other.
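The 781-sample correction follows from the timer and audio clock rates; a quick check of the arithmetic:

```python
TIMER_HZ = 16_000_000   # free-running timer frequency
WRAP = 800_000          # timer wrap value (20 Hz wrap rate)
FS = 15_625             # audio sampling rate (Hz)

wrap_period_s = WRAP / TIMER_HZ              # 0.05 s: worst-case startup mismatch
samples_to_trim = int(wrap_period_s * FS)    # 781.25 truncates to 781 samples
```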
The final component in keeping the audio streams aligned is a rate encoding scheme between the ClearBuds. With the time sync beacons from the primary ClearBud, the secondary ClearBud has access to both its local clock and the common clock (the primary ClearBud's local clock). From these two clocks, the secondary ClearBud can identify how much faster or slower its PDM clock is running relative to the primary's. We note that with a 2 MHz PDM clock and a PDM decimation ratio of 64, each audio sample occupies 32 μs. The secondary ClearBud then adds or removes a sample from its PDM buffer every time the difference between the clocks exceeds a multiple of 32 μs. By doing this, the secondary ClearBud ensures that its PDM buffer starts filling at the same time as the primary ClearBud's PDM buffer, within a tolerance of 32 μs.
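The rate-matching rule reduces to integer division of the accumulated clock skew by the per-sample period (a simplified Python model; the function name is illustrative and the firmware applies corrections incrementally rather than in a batch):

```python
SAMPLE_US = 32.0   # at a 2 MHz PDM clock and 64x decimation, one sample spans 32 us

def samples_to_adjust(clock_diff_us):
    """Samples to insert (+) or drop (-) so the secondary buffer tracks the primary."""
    return int(clock_diff_us / SAMPLE_US)   # truncates toward zero

# With 40 ppm worst-case relative drift, one minute accumulates 2400 us of skew,
# which this scheme absorbs as 75 single-sample corrections.
corrections_per_min = samples_to_adjust(40e-6 * 60 * 1e6)
```

Spreading the corrections one sample at a time keeps each adjustment inaudible while bounding the residual misalignment to 32 μs.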
4. Training methodology
As noted in §1, training the network in a supervised way requires clean ground-truth speech samples as training targets, which are difficult to obtain in fully natural settings where speech is corrupted by background noise and voices. Generalizing to in-the-wild scenarios also requires training data that mimics the dynamics of real speech, including reverb, voice resonance, and microphone response, as closely as possible. Synthetically rendered spatial data is the easiest to obtain but the most different from real recordings; real speakers wearing the headset in an anechoic chamber provide the best ground-truth training targets but are the most costly to obtain; and synthetic data can simulate reverb and multipath that an anechoic chamber cannot capture. We therefore adopt a hybrid training methodology where we first train on a large amount of synthetic data and then fine-tune on real data recorded with our hardware. Our training method is based on the commonly used mix-and-separate framework (Zhao et al., 2018), where clean speech and noise samples are recorded separately and combined randomly to form noisy mixtures. Our results show that a network trained this way generalizes to naturally recorded noisy data in real-world environments.
Synthetic data. This type of data is the easiest to obtain, since a wide variety of voice types and physical setups can be generated instantly. Many machine learning baselines, e.g., (Luo et al., 2020; Jenrungrot et al., 2020; Tzirakis et al., 2021), only train and evaluate on synthetic data generated in this manner. To generate the synthetic dataset, we create multi-speaker recordings in simulated environments with reverb and background noises. All voices come from the VCTK dataset (Veaux et al., 2016) (110 unique speakers with over 44 hours), and background sounds come from the WHAM! dataset (Wichern et al., 2019), with 58 hours of recordings from a variety of noise environments such as a restaurant, crowd, and music.
To synthesize a single example, we create a 3-second mixture as follows: two virtual microphones are placed 17.5 cm apart, the average distance between human ears (Risoud et al., 2018). The target speaker's voice is placed at the center between the two virtual microphones, and a second voice is placed at a random distance and angle from the array. A randomly chosen background noise is also placed in the scene. We then simulate room impulse responses (RIRs) for a randomly sized room using the image source method implemented in the pyroomacoustics library (Allen and Berkley, 1979; Scheibler et al.). The room is rectangular with sides randomly chosen between 5 and 20 meters, and the RT60 values are randomly chosen between 0 and 1 second. All signals are convolved with the RIRs and rendered to the two-channel microphone array. The volume of the background is randomly chosen so that the input signal-to-distortion ratio is roughly between -5 and 5 dB. For training, we use 10,000 mixtures generated in this manner.
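The SDR-controlled mixing step can be sketched with numpy (white noise stands in for the rendered speech and background; the function name is illustrative and the real pipeline mixes RIR-convolved binaural signals):

```python
import numpy as np

def mix_at_sdr(speech, noise, sdr_db):
    """Scale `noise` so the mixture has the requested input SDR (in dB)."""
    p_s, p_n = np.mean(speech ** 2), np.mean(noise ** 2)
    scale = np.sqrt(p_s / (p_n * 10 ** (sdr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(3 * 15_625)   # 3 s at the ClearBuds sampling rate
noise = rng.standard_normal(3 * 15_625)
target_sdr = rng.uniform(-5, 5)            # input SDR drawn roughly from [-5, 5] dB
mixture = mix_at_sdr(speech, noise, target_sdr)
```

Controlling the input SDR this way ensures the network sees a uniform range of mixture difficulties rather than whatever ratio the raw recordings happen to have.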
Hardware data. While a large amount of synthetic data can be easily rendered to train the network, it does not capture characteristics of the physical hardware such as the microphone response and imperfections in the time-of-arrival. To address this, we also train on a set of voice samples recorded with our earbuds. We set up a foam mannequin head with an artificial mouth speaker (Sony SRS-XB12) that plays VCTK samples as the spoken ground truth. For background voice recordings, the speaker is placed at varying locations within a one-meter radius of the foam head. Physically recorded background noise is provided by the binaural version of the WHAM! dataset (Wichern et al., 2019), which was recorded in real environments using a binaural mannequin like ours. We record 2 hours each of clean speech and background voices, from which 2,000 random mixtures are created for training.
Human data. The spoken hardware data above still lacks natural voice resonance, since the speech is played out of an electronic speaker. Furthermore, background sounds recorded by a mannequin wearing earbuds miss some of the physical filtering of the human body. To better capture the desired output of real scenarios, we collect a ground-truth speech dataset in an anechoic chamber with human speakers (5 male, 4 female) and a noise dataset in real environments with human listeners. For the voice data, each human speaker wore our ClearBuds prototypes and uttered 15 minutes of text from Project Gutenberg in the anechoic chamber. The purpose of this anechoic data is to provide clean training targets for the network, modelling the resonance of human speakers wearing our hardware. For the real-world noise dataset, individuals wore ClearBuds and recorded various noisy scenarios such as washing dishes, loud indoor/outdoor restaurants, and busy traffic intersections. 2,000 random mixtures of clean voice and recorded noise were generated for this dataset.
Our network is jointly trained using all these datasets. Note that testing and evaluation is done outside the anechoic chamber.
5. Experiments and Results
We first compare our end-to-end system performance against a commercial wireless earbud system. We then present in-the-wild evaluation of our system. Next, we compare numerical results against various speech enhancement baselines. Finally, we present system-level evaluations. Our work is approved by the IRB.
5.1. Comparison with Beamforming Earbuds
We evaluate our end-to-end system against the Apple AirPods Pro headset connected to an iPhone 12 Pro in a repeatable physical setup. In our evaluation, as is typical, there is no overlap between training and test datasets.
Procedure. We use the popular scale-invariant signal-to-distortion ratio (SI-SDR) metric (Roux et al., 2018). While SI-SDR provides a repeatable metric used in the acoustics community, it requires a clean, sample-aligned ground truth (target voice) as the basis for evaluation. We therefore create a repeatable soundscape for our test setup in which a sample-aligned ground truth can be obtained. A foam mannequin head with a speaker (Sony SRS-XB12) inserted into its artificial mouth played one hundred VCTK samples, with identities and samples unseen in the training set. The mannequin wore ClearBuds and AirPods Pro in subsequent experiments, so the outputs of the two systems could be directly compared. Ambient environmental sound (from the WHAM! dataset) was played via four monitors (PreSonus Eris E3.5) positioned to fill a 3 meter by 4 meter room, and a background voice (also VCTK) was played from a monitor positioned 0.4 meters to the right of the head. All speakers were driven through a common USB interface (PreSonus 1810c), ensuring the same time-alignment and loudness between the two test conditions. Since Apple AirPods Pro beamforming cannot be toggled on and off, we cannot calculate an SI-SDR increase (SI-SDRi) and therefore report output SI-SDR. To establish the ground-truth voice against which to calculate SI-SDR, we record clean target voice through each headset. Ambient noise SNR ranged between 0 dB and 16 dB with respect to the target voice; qualitatively, this sounded like a second person speaking loudly in a noisy bar or cafe. Finally, background voice SNR ranged between 6 dB and 12 dB, qualitatively sounding like a person speaking from a meter or two away.
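SI-SDR itself is straightforward to compute. A minimal numpy version, following the Le Roux et al. definition (the function name is ours):

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: project the (zero-mean) estimate onto
    the reference, then compare target energy to residual energy."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference          # component explained by the reference
    residual = estimate - target        # everything else (noise + distortion)
    return 10 * np.log10(np.sum(target ** 2) / np.sum(residual ** 2))
```

Because of the projection, rescaling the estimate does not change the score, which is what makes the metric scale-invariant.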
Results. We report output SI-SDR from the two systems in Fig. 9. To calculate output SI-SDR, we align individual one second chunks and take the logarithmic mean across 250 chunks. We find that ClearBuds achieves higher output SI-SDR across all test conditions when compared to the beamforming utilized by the Apple AirPods Pro. For a qualitative comparison of AirPods Pro versus ClearBuds performance with human speakers, see video: https://clearbuds.cs.washington.edu/videos/airpods_comparison.mp4.
5.2. In-the-Wild Evaluation
We perform in-the-wild evaluations in indoor and outdoor scenarios, as well as with users not in the training data. The procedure and results are described in the following sections.
In-the-wild experiments. Eight individuals (four male, four female, mean age 25) with a variety of accents wore a pair of ClearBuds and read excerpts from Project Gutenberg (51) while in four noisy environments: a coffee shop, a noisy intersection, an outdoor plaza, and a classroom (see Fig. 10). The environments featured ringing phones, cross-talk from other people, ambient music, a crying baby, opening/closing doors, driving vehicles, and street noise, amongst other common sounds. These experiments were uncontrolled in that the background voices and noise were naturally occurring sounds that are typical to these real-world scenarios and were mobile.
Evaluation procedure. In-the-wild evaluation precludes access to a clean, sample-aligned ground truth with which to compute SI-SDR. Instead, the common (and expensive) procedure is to perform a user study and compute the mean opinion score. Since this is a time-consuming process, prior works on binaural networks, e.g., (Luo et al., 2020; Tan et al., 2019; Jenrungrot et al., 2020), avoid in-the-wild evaluation. Since our goal is to design and evaluate an in-ear system in real scenarios, we recruit thirty-seven participants (11 female, 26 male, mean age 29) for a user study. Each participant listened to between 6 and 11 in-the-wild audio samples (avg. 9.38 samples, each between 10–60 seconds). Each speech sample was processed and presented three ways: (1) the original input, (2) CB-Conv-TasNet, and (3) CB-Net, yielding a total of 1,041 rating samples.
Participants were encouraged to use audio equipment they would typically use for a call. Fourteen used earbuds, thirteen used computer speakers, seven used headphones, and three used phone speakers. The study took about 25 minutes per participant. As is typical with noise suppression systems, participants were asked to give ratings in two categories: the intrusiveness of the noise and the overall quality (mean opinion score, MOS):
Noise suppression: How INTRUSIVE/NOTICEABLE were the BACKGROUND sounds? 1 - Very intrusive, 2 - Somewhat intrusive, 3 - Noticeable, but not intrusive, 4 - Slightly noticeable, 5 - Not noticeable
Overall MOS: If this were a phone call with another person, How was your OVERALL experience? 1 - Bad, 2 - Poor, 3 - Fair, 4 - Good, 5 - Excellent
Results. Fig. 11 shows the noise intrusiveness and MOS values for the original microphone, CB-Conv-TasNet, and CB-Net. As expected, applying CB-Conv-TasNet to the original audio suppressed noise dramatically, increasing the opinion score from 2.02 (slightly better than 2 - Somewhat intrusive) to 3.28 (between 3 - Noticeable, but not intrusive and 4 - Slightly noticeable) (p<0.01). The light-touch, spectrogram-masking clean-up method featured in CB-Net increased the noise suppression opinion score significantly (p<0.001) to 3.77, indicating the method did indeed further suppress perceptually annoying noise artifacts. Importantly, this step also increased overall MOS. While users only slightly preferred (p<0.05) CB-Conv-TasNet (2.67) to the original input (2.49), due to artifacts introduced, they significantly (p<0.001) preferred our CB-Net (3.10), an increase of 0.61 opinion score points over the input. For context, in the flagship ICASSP 2021 Deep Noise Suppression Challenge (Reddy et al., 2021), with state-of-the-art, real-time algorithms run on a quad-core desktop CPU, the winning submission increased MOS by 0.57 (28) from the input.
Table 1. SI-SDR increase (SI-SDRi) and output PESQ for the target voice mixed with background noise (BG), a background voice (BV), or both (BV + BG).

| Method | SI-SDRi (BG) | SI-SDRi (BV) | SI-SDRi (BV + BG) | PESQ (BG) | PESQ (BV) | PESQ (BV + BG) |
| CB-Conv-TasNet Single Mic | 6.15 | 0.13 | 2.34 | 1.82 | 1.84 | 1.53 |
| DTLN (Westhausen and Meyer, 2020) | 7.02 | 0.06 | 2.13 | 2.08 | 1.95 | 1.67 |
| Causal Demucs (Defossez et al., 2020) | 6.62 | -0.03 | 2.11 | 1.80 | 1.88 | 1.43 |
| Ideal Ratio Mask (IRM, oracle) | 11.41 | 11.53 | 12.04 | 2.53 | 3.00 | 2.44 |
| Ideal Binary Mask (IBM, oracle) | 9.97 | 11.05 | 10.85 | 2.30 | 2.90 | 2.21 |
Note that in our in-the-wild experiments, the background noise and voices were not static. The speakers themselves can also be mobile (see Fig. 12). Our network was able to adaptively remove the background noise and achieve speech enhancement with mobility.
5.3. Benchmarking our Neural Network
The conventional evaluation in the machine learning and acoustics community is to evaluate models and techniques on synthetic data against baselines. For completeness, we compare our method against a variety of speech enhancement baselines using the synthetic dataset. For evaluation, an additional 1,000 mixtures of 3 seconds each were generated such that there was no overlap in speaker identities or samples between the train and test splits.
Evaluation Procedure. For comparisons to other baseline methods, we use the popular SI-SDR and PESQ metrics. Unlike the AirPods experiment, where the original noisy mixture could not be recorded since AirPods beamforming cannot be toggled off, here we compute SI-SDR of the ground truth relative to both the input noisy mixture and the network output. When reporting the increase from the input SI-SDR to the output SI-SDR, we use the SI-SDR improvement (SI-SDRi).
For a deep learning baseline in the waveform domain, we choose the causal Demucs model (Defossez et al., 2020). This is a single channel method which was recently shown to outperform many other deep learning baselines and runs real-time on a laptop CPU. We also compare with Dual-signal Transformation LSTM Network (DTLN) (Westhausen and Meyer, 2020). This method also runs on a laptop or mobile phone in real-time. To compare with spectrogram based methods, we use the oracle baselines, ideal ratio mask (IRM) and ideal binary mask (IBM) (Stöter et al., 2018; Wang, 2005), that use the ground truth voice to calculate the best possible result that can be obtained by masking a noisy spectrogram.
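The oracle masks are computed directly from the ground-truth spectrograms. A minimal numpy sketch (our own toy STFT and function names, not the evaluation code used in the paper):

```python
import numpy as np

def stft_mag(x, n_fft=512, hop=128):
    """Magnitude spectrogram via a Hann-windowed numpy STFT."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def oracle_masks(speech, noise):
    """Ideal ratio mask (soft) and ideal binary mask (hard) from
    ground-truth speech/noise magnitudes; applying them to the noisy
    spectrogram gives the oracle upper bounds in Table 1."""
    s, n = stft_mag(speech), stft_mag(noise)
    irm = s / (s + n + 1e-8)          # soft mask in [0, 1]
    ibm = (s > n).astype(float)       # hard 0/1 mask
    return irm, ibm
```

Because the masks use the ground truth, they bound what any spectrogram-masking method can achieve.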
As an ablation study, we report results with each individual component of the network, CB-Conv-TasNet and CB-UNet. We also show results when the multi-channel part of our network, CB-Conv-TasNet, only has access to one microphone, labeled as CB-Conv-TasNet Single Mic. This explicitly shows the advantage of using two microphones. There are only a few deep learning methods that tackle binaural speech separation for mobile processing, and the most relevant ones, such as (Tan et al., 2019) and (Han et al., 2020), do not have publicly available code to test against.
Results. As shown in Table 1, our binaural method is comparable to the best possible results that can be obtained by a spectrogram masking method (IBM, IRM). We also show an improvement over waveform based deep learning methods that only use a single microphone input. In particular, the improvement is greatest when there are two speakers present (Target Voice + Background Voice). This is because single channel methods can only rely on voice characteristics, whereas our network also uses spatial cues to separate the speaker of interest. Although CB-Net shows similar or slightly worse performance than CB-Conv-TasNet on these metrics, subjective evaluation on in-the-wild hardware data shows that human listeners rate CB-Net far higher (see §5.2).
Examples of the synthetic dataset, outputs from all the methods and qualitative comparisons against Krisp (78), a commercial noise suppression system, can be found linked from our project website: https://clearbuds.cs.washington.edu.
5.3.1. Additional neural network evaluations
We numerically evaluate various aspects of the design by changing the angle of the background voice, the reverberation in the environment, and the microphone separation.
Angle of background voice. The ability of our network to separate the target voice from a background voice is based on utilizing the time difference of arrival to the binaural microphones. Because we only have two microphones, this ability is limited when the background voice is in the front-back plane of the speaker. In this case, the background voice will arrive at each microphone simultaneously, and there will be no spatial cues to separate the two voices. To illustrate this effect, we graph the separation performance as a function of the angle of the background voice in Fig. 13.
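The geometry behind this limitation can be stated in one line: the interaural time difference depends only on the sine of the azimuth from the sagittal plane, so mirrored front/back positions produce identical delays (constants as in our synthetic setup; the function name is ours):

```python
import numpy as np

def tdoa_us(azimuth_deg, d=0.175, c=343.0):
    """Interaural time difference (microseconds) for a far-field source;
    azimuth is measured from the sagittal (front-back) plane."""
    return 1e6 * d * np.sin(np.radians(azimuth_deg)) / c
```

A source at 30 degrees and its front-back mirror at 150 degrees yield the same delay, so the two-microphone array cannot distinguish them; at 0 degrees (on the sagittal plane, like the target speaker) the spatial cue vanishes entirely.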
Multipath and reverberant environments. While our in-the-wild experiments show the performance in various indoor and outdoor environments, we also benchmark our system in different reverberant conditions, including ones more reverberant than seen during training. Synthetic mixtures are generated using the pyroomacoustics library with RT60 values randomly chosen between 0 and 4 s. We generate 200 examples and plot SI-SDRi against RT60 in Fig. 13. Our method shows only a slight decrease in performance as the reverberation of the environment increases. Because the target speaker is physically close to the microphone array, our setup is generally less affected by reverberation than other source separation problems where the target speaker may be further away.
Separation between microphones. Our in-the-wild evaluation across 8 participants showed generalization across facial features. Here, we benchmark our method against different head sizes, where the distance between the microphones may differ. We generate 200 synthetic samples in which the distance between the microphones is randomly chosen between 10 and 25 cm. Because the target speaker is in the middle of the microphone array, the target signal arrives at both mics simultaneously regardless of the microphone distance. Fig. 13 shows little change in performance, even with microphone distances greatly different from those used during training.
5.4. System Evaluation
Synchronization. To evaluate synchronization, we place both ClearBuds roughly equidistant from a speaker. A click tone is played every 15 seconds for 5 minutes and recorded on both ClearBuds with time sync disabled and enabled. We calculate the sample error for each recorded click offline and convert it into a time error at a sampling rate of 15.625 kHz. Fig. 14(a) shows the synchronization results across a five minute interval. With time sync enabled, the sample error never exceeds 1 sample at 15.625 kHz, or 64 µs. Fig. 14(b) also shows the CDF of the timing error across experiments of 5 minutes each, conducted with other Bluetooth devices in the environment, with and without time synchronization.
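Converting a sample error to a time error is a one-line calculation at the 15.625 kHz rate (variable names are ours):

```python
FS = 15625  # Hz, ClearBuds audio sampling rate

def sample_error_to_us(n_samples):
    """Convert a synchronization error in samples to microseconds."""
    return 1e6 * n_samples / FS

worst_case_us = sample_error_to_us(1)  # 1-sample error -> 64 microseconds
```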
Run-time and end-to-end latency. Mouth-to-ear delay is defined as the time it takes from speech to exit the speaker’s mouth and reach the listener’s ear on the other end of the call. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) G.114 recommendation regarding mouth-to-ear delay indicates that most users are “very satisfied” as long as the latency does not exceed 200 ms (58). In our end-to-end system, we targeted a one-way latency of 100 ms prior to uplink, leaving up to 100 ms of network delay to move an IP packet from the source to the destination.
With a 180-sample PCM buffer being filled at 31.25 kHz, there is a 5.76 ms delay before the samples reach the BLE stack. Once these samples reach the radio hardware, there is a worst-case additional latency of 7.5 ms as defined by the minimum BLE connection interval supported by Bluetooth 5.0 (3). At the time of writing, the latest iOS supports a minimum BLE connection interval of 15 ms. After the samples reach the mobile phone, we wait 67.2 ms to receive enough samples to run a forward pass of our network. Our network has a run-time of 21.4 ms on an iPhone 12 Pro (see Table 2). The number of FLOPs is computed over each packet of 350 samples. Together, we have a latency of 109 ms, leaving 91 ms for one-way network delay (RTT = 182 ms).
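The latency budget above can be checked arithmetically (numbers restated from this section; variable names are ours):

```python
# Mouth-to-ear latency budget prior to uplink.
FS_STREAM = 31250        # Hz, per-earbud PCM stream rate
BUFFER_SAMPLES = 180

buffering_ms = 1000 * BUFFER_SAMPLES / FS_STREAM  # 5.76 ms before BLE stack
ble_interval_ms = 15.0   # minimum BLE connection interval on current iOS
chunk_wait_ms = 67.2     # wait for enough samples for one forward pass
inference_ms = 21.4      # network run-time on iPhone 12 Pro

total_ms = buffering_ms + ble_interval_ms + chunk_wait_ms + inference_ms
network_budget_ms = 200 - total_ms  # ITU-T G.114 "very satisfied" bound
```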
Power analysis. CB-Net uses an order of magnitude fewer FLOPs per second than Conv-TasNet on the smartphone, significantly reducing the computational load and corresponding power consumption. We also measure the power consumption of the ClearBuds hardware. We measure current consumption by powering our system through its Micro-USB port with a DC power supply set to 3 V, which goes through the same power path as our coin cell battery. While continuously wirelessly streaming microphone data, we measure an average current draw of 5 mA. With the CR2032's nominal capacity of 210 mAh, this translates to approximately 42 hours of operation. Table 3 shows a breakdown of the system's power consumption by component. The accelerometer (BMA400) and flash (W25N01GVZEIG) are omitted as they are power gated during streaming.
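The battery-life estimate follows directly from the measured draw (a back-of-the-envelope restatement of the numbers above, not an additional measurement):

```python
CAPACITY_MAH = 210   # CR2032 nominal capacity
CURRENT_MA = 5       # measured average current while streaming
SUPPLY_V = 3.0       # DC supply voltage on the coin-cell power path

runtime_hours = CAPACITY_MAH / CURRENT_MA   # expected streaming runtime
streaming_power_mw = SUPPLY_V * CURRENT_MA  # total draw while streaming
```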
| iPhone 12 Pro | 155.5 ms | 17.5 ms | 21.4 ms |
6. Limitations & Future work
The first limitation is that the user must be wearing both wireless earbuds to benefit from our binaural noise suppression network. Second, with only two microphones, background voices can remain in the uplink channel if the voice is within a few degrees of the target speaker's sagittal plane (see Fig. 13). The underlying assumption of our network is that the mouth is centered between the user's ears, though as seen in Fig. 13 and our in-the-wild evaluation, some variance is permissible.
Future work could integrate two microphones in each earbud, so that each earbud could beamform toward the user’s mouth prior to processing in the neural network. We also had to develop a custom wireless audio protocol to stream two microphones to a single phone. While this prevents this architecture from being deployed on today’s commodity wireless earbuds, adoption may be imminent as Bluetooth 5.2 shows promise with the introduction of Multi-Stream Audio and Audio Broadcast (27).
Our network could also be deployed on other multi-microphone mobile or resource-constrained edge systems such as smart watches, augmented reality glasses, or smart speakers to allow for enhanced voice control or telephony in noisy environments. The ClearBuds hardware and firmware could be leveraged to produce wireless, synchronized microphone arrays for telephony, acoustic activity recognition, or swarm robot localization and control.
7. Conclusion
Real-time speech enhancement has been an open research challenge for decades. The recent proliferation of wireless earbuds and neural network architectures provides an opportunity to bridge the two and create new capabilities. We present ClearBuds, the first deep learning based system to achieve real-time speech enhancement with binaural wireless earbuds. At its core are a new open-source wireless earbud design capable of operating as a synchronized binaural microphone array and a lightweight cascaded neural network. In-the-wild experiments show that ClearBuds can achieve background noise suppression, background speech removal, and speaker separation using wireless earbuds.
Acknowledgments. This research is funded by the UW Reality Lab, Moore Inventor Fellow award #10617 and the researchers are also funded by the National Science Foundation. We thank our shepherd, Youngki Lee, and the anonymous reviewers for their feedback on our submission.
| Component | Power |
| BLE SoC (nRF52840) | 12.02 mW |
| Microphone (ICS-41350) | 0.77 mW |
| Ideal Diode (LM66100DCKT) | 0.27 mW |
| Buck Efficiency Loss (MAX38640) | 1.75 mW |
References
- (1979) Image method for efficiently simulating small-room acoustics. The Journal of the Acoustical Society of America 65 (4), pp. 943–950.
- Apple AirPods. https://www.apple.com/airpods/.
- (2016) Bluetooth core specification v5.0.
- (2001) Microphone arrays: signal processing techniques and applications. Springer Science & Business Media.
- (2021) EBP: an ear-worn device for frequent and comfortable blood pressure monitoring. Commun. ACM 64 (8), pp. 118–125.
- (2009) ITU-T coders for wideband, superwideband, and fullband speech communication.
- (2020) Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception. NeuroImage 223, pp. 117282.
- (2022) Performing tympanometry using smartphones. Communications Medicine.
- (2019) Detecting middle ear fluid using smartphones. Science Translational Medicine 11, pp. eaav1102.
- (2018) Multi-channel overlapped speech recognition with location guided speech extraction network. pp. 558–565.
- Multichannel audio front-end for far-field automatic speech recognition. In 2018 EUSIPCO, pp. 1527–1531.
- (2019) Phase-aware speech enhancement with deep complex U-Net.
- (2020) Real time speech enhancement in the waveform domain.
- Speech enhancement by online non-negative spectrogram decomposition in non-stationary noise environments. INTERSPEECH 2012, pp. 594–597.
- (2020) Earbuds that put sound first. https://en-de.sennheiser.com/newsroom/earbuds-that-put-sound-first.
- Echo (3rd gen). Amazon. https://www.amazon.com/all-new-echo/dp/b07nftvp7p.
- (2020) TinyLSTMs: efficient neural speech enhancement for hearing aids. Interspeech 2020.
- (1972) An algorithm for linearly constrained adaptive array processing. Proceedings of the IEEE 60 (8), pp. 926–935.
- MetricGAN: generative adversarial networks based black-box metric scores optimization for speech enhancement.
- (2014) Galaxy S5 explained: audio. https://news.samsung.com/global/galaxy-s5-explained-audio.
- Speech denoising with deep feature losses.
- (2020) Enhancing end-to-end multi-channel speech separation via spatial feature learning. arXiv preprint arXiv:2003.03927.
- (2020) Real-time binaural speech separation with preserved spatial cues.
- (2017) Open signal processing software platform for hearing aid research (openMHA).
- MobileNets: efficient convolutional neural networks for mobile vision applications.
- AppleInsider: Apple AirPods, Beats dominated audio wearable market in 2020. https://appleinsider.com/articles/21/03/30/apple-airpods-beats-dominated-audio-wearable-market-in-2020.
- Bluetooth SIG: LE Audio FAQs. https://www.bluetooth.com/media/le-audio/le-audio-faqs.
- Deep Noise Suppression Challenge, INTERSPEECH 2021. https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-interspeech-2021/.
- (2013) Microphone array beamforming. Technical Report AN-1140-00, InvenSense Inc., San Jose, CA.
- (2020) The cone of silence: speech separation by localization.
- (2018) Earables for personal-scale behavior analytics. IEEE Pervasive Computing 17 (3), pp. 83–89.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (1950) Binaural localization and masking. The Journal of the Acoustical Society of America 22 (6), pp. 801–804.
- (1996) Two decades of array signal processing research: the parametric approach. IEEE Signal Processing Magazine 13 (4), pp. 67–94.
- (2011) Two-stage binaural speech enhancement with Wiener filter for high-quality speech communication. Speech Communication 53 (5), pp. 677–689.
- (2020) End-to-end microphone permutation and number invariant multi-channel speech separation.
- (2019) Conv-TasNet: surpassing ideal time–frequency magnitude masking for speech separation. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
- (1983) A computational model of binaural localization and separation. pp. 1148–1151.
- (2021) OESense: employing occlusion effect for in-ear human sensing. In MobiSys '21, pp. 175–187.
- (2018) Improved speech enhancement with the Wave-U-Net.
- Google Meet. meet.google.com.
- Microphone array beamforming. https://invensense.tdk.com/wp-content/uploads/2015/02/microphone-array-beamforming.pdf.
- (2018) Exploring audio and kinetic sensing on earable devices. WearSys '18, pp. 5–10.
- (2013) Supervised and unsupervised speech enhancement using nonnegative matrix factorization. IEEE Transactions on Audio, Speech, and Language Processing 21 (10), pp. 2140–2151.
- (2020) Deep residual-dense lattice network for speech enhancement.
- (2016) Fast WaveNet generation algorithm.
- (2017) SEGAN: speech enhancement generative adversarial network.
- (2018) Open portable platform for hearing aid research. The Journal of the Acoustical Society of America 143 (3), pp. 1738–1738.
- (2019) A data sharing platform for earables research. In EarComp '19, pp. 30–35.
- Project Gutenberg. https://www.gutenberg.org/. Accessed: 2021-12-20.
- (2021) ICASSP 2021 deep noise suppression challenge. pp. 6623–6627.
- (2010) Speech enhancement for binaural hearing aids based on blind source separation. pp. 1–6.
- (2018) Sound source localization. European Annals of Otorhinolaryngology, Head and Neck Diseases 135 (4), pp. 259–264.
- (2015) U-Net: convolutional networks for biomedical image segmentation.
- (2018) SDR - half-baked or well done? CoRR abs/1811.02508.
- Pyroomacoustics: a Python package for audio room simulation and array processing algorithms. pp. 351–355.
- (2003) Series G: transmission systems and media, digital systems and networks. ITU-T Rec. G.114.
- (2015) Setting up the Timeslot API. https://devzone.nordicsemi.com/nordic/short-range-guides/b/software-development-kit/posts/setting-up-the-timeslot-api.
- Efficient two-microphone speech enhancement using basic recurrent neural network cell for hearing and hearing aids. The Journal of the Acoustical Society of America 148, pp. 389–400.
- Time-frequency masking-based speech enhancement using generative adversarial network.
- (2018) The 2018 signal separation evaluation campaign.
- (2021) Attention is all you need in speech separation.
- A deep learning based binaural speech enhancement approach with spatial cues preservation. pp. 5766–5770.
- (2019) Real-time speech enhancement using an efficient convolutional recurrent network for dual-microphone mobile phones in close-talk scenarios. In ICASSP 2019, pp. 5751–5755.
- (2021) Deep learning based real-time speech enhancement for dual-microphone mobile phones. IEEE/ACM Transactions on Audio, Speech, and Language Processing, pp. 1–1.
- (2020) Hands-Free Profile: Bluetooth profile specification. Technical Report v1.8, Bluetooth SIG.
- (2021) Multi-channel speech enhancement using graph neural networks.
- (2008) Binaural speech unmasking and localization in noise with bilateral cochlear implants using envelope and fine-timing based strategies. The Journal of the Acoustical Society of America 123 (4), pp. 2249–2263.
- (1988) Beamforming: a versatile approach to spatial filtering. IEEE ASSP Magazine 5 (2), pp. 4–24.
- (2016) Superseded - CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit.
- (2022) Hybrid neural networks for on-device directional hearing. AAAI 2022.
- (2005) On ideal binary mask as the computational goal of auditory scene analysis. In Speech Separation by Humans and Machines, pp. 181–197.
- (2015) Speech enhancement with LSTM recurrent neural networks and its application to noise-robust ASR. In LVA/ICA 2015, Vol. 8.
- (2020) Dual-signal transformation LSTM network for real-time noise suppression. arXiv.
- (2019) WHAM!: extending speech separation to noisy environments. arXiv preprint arXiv:1907.01160.
- (2016) Wireless timer synchronization among nRF5 devices. https://devzone.nordicsemi.com/nordic/short-range-guides/b/bluetooth-low-energy/posts/wireless-timer-synchronization-among-nrf5-devices.
- Krisp. www.krisp.ai.
- (2015) A regression approach to speech enhancement based on deep neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (1), pp. 7–19.
- (2021) Personalizing head related transfer functions for earables. SIGCOMM '21, pp. 137–150.
- (2018) Multi-microphone neural speech separation for far-field multi-talker speech recognition. pp. 5739–5743.
- (2017) Deep learning based binaural speech separation in reverberant environments. IEEE/ACM Transactions on Audio, Speech, and Language Processing 25 (5).
- (2018) The sound of pixels.