A Two-student Learning Framework for Mixed Supervised Target Sound Detection
Target sound detection (TSD) aims to detect a target sound in mixture audio given reference information. Previous work shows that good detection performance relies on fully-annotated data; however, collecting fully-annotated data is labor-intensive. We therefore consider TSD with mixed supervision, which learns novel categories (the target domain) using weak annotations with the help of full annotations of existing base categories (the source domain). We propose a novel two-student learning framework containing two mutually helping student models (s_student and w_student) that learn from the fully- and weakly-annotated datasets, respectively. Specifically, we first propose a frame-level knowledge distillation strategy to transfer class-agnostic knowledge from s_student to w_student. We then design a pseudo-supervised (PS) training scheme to transfer knowledge from w_student back to s_student. Lastly, we propose an adversarial training strategy that aligns the data distributions of the source and target domains. To evaluate our method, we build three TSD datasets based on UrbanSound and AudioSet. Experimental results show that our method offers an improvement of about 8% in event-based F-score.
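The two knowledge-transfer steps above can be illustrated with a minimal sketch. This is not the paper's implementation: the MSE distillation loss, the 0.5 binarization threshold, and all function names here are assumptions chosen for illustration.

```python
import numpy as np

def frame_kd_loss(s_frame_probs, w_frame_probs):
    """Frame-level knowledge distillation: w_student is trained to match
    the class-agnostic frame-level activity predicted by s_student.
    MSE is an illustrative choice; the paper may use another divergence."""
    return float(np.mean((s_frame_probs - w_frame_probs) ** 2))

def pseudo_labels(w_frame_probs, threshold=0.5):
    """Pseudo-supervised (PS) step: binarize w_student's frame-level
    predictions on weakly-annotated clips to obtain frame-level pseudo
    labels for training s_student. The threshold is an assumption."""
    return (np.asarray(w_frame_probs) >= threshold).astype(np.float32)

def ps_loss(s_frame_probs, targets, eps=1e-7):
    """Binary cross-entropy of s_student's predictions against the
    pseudo labels produced by w_student."""
    p = np.clip(s_frame_probs, eps, 1.0 - eps)
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))
```

In a full training loop, these two losses would be combined with the supervised losses of each student and an adversarial domain-alignment term.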