Environmental sound analysis with mixup based multitask learning and cross-task fusion
Environmental sound analysis is attracting increasing attention. Within this domain, acoustic scene classification and acoustic event classification are two closely related tasks. In this letter, a two-stage method is proposed for these tasks. In the first stage, a mixup-based multi-task learning (MTL) solution is proposed that handles both tasks with a single convolutional neural network. The MTL model is trained on artificial multi-label samples, which are mixed up from existing single-task datasets. The resulting multi-task model can effectively recognize both acoustic scenes and acoustic events. Compared with alternatives such as re-annotation or synthesis, the mixup-based MTL approach is low-cost, flexible and effective. In the second stage, the MTL model is converted into a single-task model and fine-tuned on the original dataset of the specific task. By carefully controlling which layers are frozen, the task-specific high-level features are fused and the performance on the single classification task is further improved. The proposed method confirms the complementary nature of acoustic scene and acoustic event classification. Finally, enhanced by ensemble learning, accuracies of 84.5% on the TUT Acoustic Scenes 2017 dataset and 77.5% on the ESC-50 dataset are achieved.
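The sketch below illustrates the first-stage idea under stated assumptions: a scene clip and an event clip drawn from two single-task datasets are linearly mixed, and the mixed sample carries weighted targets for a scene head and an event head sharing one convolutional trunk. This is not the authors' implementation; the spectrogram shapes, the Beta distribution for the mixing coefficient, the class counts, and the `TwoHeadCNN` architecture are illustrative assumptions.

```python
# Minimal sketch of mixup-based multi-task sample construction and a
# two-head CNN, assuming PyTorch; all shapes and hyperparameters are
# illustrative, not the paper's actual configuration.
import numpy as np
import torch
import torch.nn as nn

NUM_SCENES, NUM_EVENTS = 15, 50   # e.g. TUT 2017 scenes, ESC-50 events

def mixup_cross_task(scene_spec, scene_label, event_spec, event_label, alpha=0.2):
    """Mix a scene spectrogram with an event spectrogram.

    Returns the mixed input plus soft targets for both task heads,
    weighted by lambda ~ Beta(alpha, alpha).
    """
    lam = np.random.beta(alpha, alpha)
    mixed = lam * scene_spec + (1.0 - lam) * event_spec
    scene_target = torch.zeros(NUM_SCENES)
    scene_target[scene_label] = lam           # scene head weighted by lambda
    event_target = torch.zeros(NUM_EVENTS)
    event_target[event_label] = 1.0 - lam     # event head weighted by 1 - lambda
    return mixed, scene_target, event_target

class TwoHeadCNN(nn.Module):
    """Shared convolutional trunk with separate scene and event heads."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.scene_head = nn.Linear(64, NUM_SCENES)
        self.event_head = nn.Linear(64, NUM_EVENTS)

    def forward(self, x):
        h = self.trunk(x)
        return self.scene_head(h), self.event_head(h)

# Toy usage: one mixed sample, joint loss over both heads.
scene = torch.randn(1, 64, 128)   # (channels, mel bins, frames) -- illustrative
event = torch.randn(1, 64, 128)
x, y_scene, y_event = mixup_cross_task(scene, 3, event, 17)

model = TwoHeadCNN()
logits_s, logits_e = model(x.unsqueeze(0))
loss = nn.functional.cross_entropy(logits_s, y_scene.unsqueeze(0)) + \
       nn.functional.cross_entropy(logits_e, y_event.unsqueeze(0))
loss.backward()
```

For the second stage described in the abstract, one plausible reading is that the shared trunk trained this way is kept (partially frozen) while only one head and the upper layers are fine-tuned on the original single-task dataset, so that features learned from the other task are fused into the target task.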