GPVAD: Towards noise robust voice activity detection via weakly supervised sound event detection
Traditional voice activity detection (VAD) methods work well in clean and controlled scenarios, but their performance degrades severely in real-world applications. One possible bottleneck for such supervised VAD training is its requirement for clean training data and frame-level labels. In contrast, we propose the GPVAD framework, which can be easily trained from noisy data in a weakly supervised fashion, requiring only clip-level labels. We propose two GPVAD models: a full model (GPV-F), which outputs all possible sound events, and a binary model (GPV-B), which only distinguishes speech from noise. We evaluate the two GPVAD models and a CRNN-based standard VAD model (VAD-C) on three evaluation protocols (clean, synthetic noise, and real-world). Results show that GPV-F achieves competitive performance in clean and noisy scenarios compared to the traditional VAD-C. Notably, in real-world evaluation, GPV-F largely outperforms VAD-C on both frame-level and segment-level metrics. Despite its much lower data requirements, the naive binary clip-level GPV-B model still achieves performance comparable to VAD-C in real-world scenarios.
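To illustrate the weakly supervised setup the abstract describes, the following is a minimal PyTorch sketch: a small CRNN emits frame-level event probabilities, which are aggregated to a clip-level prediction by a linear-softmax pooling function so that only clip-level labels are needed for the loss. All architecture details here (layer sizes, pooling choice, number of events) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CRNNTagger(nn.Module):
    def __init__(self, n_mels: int = 64, n_events: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),  # pool frequency, preserve time resolution
        )
        self.rnn = nn.GRU(32 * (n_mels // 4), 64,
                          batch_first=True, bidirectional=True)
        self.frame_out = nn.Linear(128, n_events)

    def forward(self, mel: torch.Tensor):
        # mel: (batch, time, n_mels)
        x = self.cnn(mel.unsqueeze(1))                 # (B, C, T, F')
        b, c, t, f = x.shape
        x, _ = self.rnn(x.permute(0, 2, 1, 3).reshape(b, t, c * f))
        frame_prob = torch.sigmoid(self.frame_out(x))  # (B, T, n_events)
        # Linear-softmax pooling: frames with higher probability
        # contribute more weight to the clip-level score.
        clip_prob = (frame_prob ** 2).sum(1) / frame_prob.sum(1).clamp(min=1e-7)
        return frame_prob, clip_prob

model = CRNNTagger()
mel = torch.randn(4, 500, 64)                      # 4 clips of 500 frames
clip_labels = torch.randint(0, 2, (4, 2)).float()  # clip-level targets only
frame_prob, clip_prob = model(mel)
loss = nn.functional.binary_cross_entropy(clip_prob, clip_labels)
loss.backward()
```

At inference, the frame-level probabilities (never directly supervised during training) can be thresholded per frame to obtain VAD decisions, which is what allows a model trained only on clip labels to serve as a voice activity detector.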