City classification from multiple real-world sound scenes
The majority of sound scene analysis work focuses on one of two clearly defined tasks: acoustic scene classification or sound event detection. Whilst this separation of tasks is useful for problem definition, it ignores some subtleties of the real world, in particular how humans vary in how they describe a scene. Some will describe the weather and features within it, others will use a holistic descriptor like 'park', and others still will use unique identifiers such as cities or names. In this paper, we undertake the task of automatic city classification, asking whether we can recognize a city from a set of sound scenes. In this problem each city has recordings from multiple scenes. We test a series of methods for this novel task and show that whilst a simple convolutional neural network (CNN) can achieve an accuracy of 50%, which is less than the acoustic scene classification task baseline in the DCASE 2018 ASC challenge (on the same data), with a simple adaptation to the class labels to use paired city labels with grouped scenes, accuracy increases to 52%. We then formulate the problem in a multi-task learning framework and achieve an accuracy of 56%, outperforming the aforementioned approaches.
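The multi-task formulation above can be sketched as a shared CNN trunk with separate classification heads for city and scene labels, trained with a joint loss. The architecture below is a minimal illustrative sketch, not the paper's actual network: the layer sizes, class counts, and input dimensions are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskCNN(nn.Module):
    """Hypothetical multi-task sketch: a shared convolutional trunk over
    spectrogram patches, with one head predicting the city and one the scene."""

    def __init__(self, n_cities: int = 10, n_scenes: int = 10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        self.city_head = nn.Linear(32, n_cities)
        self.scene_head = nn.Linear(32, n_scenes)

    def forward(self, x):
        h = self.trunk(x)
        return self.city_head(h), self.scene_head(h)

model = MultiTaskCNN()
x = torch.randn(4, 1, 40, 128)  # batch of 4 single-channel spectrogram patches
city_logits, scene_logits = model(x)

# Joint training objective: sum of cross-entropies over both label sets
# (random labels here, purely to show the loss computation).
city_y = torch.randint(0, 10, (4,))
scene_y = torch.randint(0, 10, (4,))
loss = F.cross_entropy(city_logits, city_y) + F.cross_entropy(scene_logits, scene_y)
```

Sharing the trunk lets the scene labels act as an auxiliary signal when learning city-discriminative features, which is one plausible reading of why the multi-task setup outperforms the single-task CNN.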