A simple model for detection of rare sound events

08/20/2018
by Weiran Wang, et al.

We propose a simple recurrent model for detecting rare sound events when the time boundaries of events are available for training. Our model optimizes the combination of an utterance-level loss, which classifies whether an event occurs in an utterance, and a frame-level loss, which classifies whether each frame corresponds to the event when it does occur. The two losses make use of a shared vectorial representation of the event, and are connected by an attention mechanism. We demonstrate our model on Task 2 of the DCASE 2017 challenge, and achieve competitive performance.
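The abstract describes a recurrent encoder whose frame representations feed both losses, tied together by a shared event vector and attention. The following is a minimal NumPy sketch of that structure, not the authors' implementation: the parameter names (`W_in`, `W_rec`, `event_vec`, `w_frame`), dimensions, and the exact form of the attention and pooling are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, F, H = 20, 40, 16  # frames, feature dim, hidden dim (illustrative sizes)

# Hypothetical parameters (random here; learned in practice).
W_in = rng.normal(scale=0.1, size=(H, F))    # input projection
W_rec = rng.normal(scale=0.1, size=(H, H))   # recurrent weights
event_vec = rng.normal(scale=0.1, size=H)    # shared event representation
w_frame = rng.normal(scale=0.1, size=H)      # frame-level classifier weights

x = rng.normal(size=(T, F))                  # one utterance of frame features

# Simple recurrent encoder: h_t = tanh(W_in x_t + W_rec h_{t-1}).
h = np.zeros((T, H))
prev = np.zeros(H)
for t in range(T):
    prev = np.tanh(W_in @ x[t] + W_rec @ prev)
    h[t] = prev

# Attention over frames, scored against the shared event vector.
scores = h @ event_vec
alpha = softmax(scores)                      # attention weights over frames

# Utterance-level branch: attention-pooled score -> event present/absent.
utt_prob = sigmoid((alpha * scores).sum())

# Frame-level branch: per-frame event probabilities.
frame_probs = sigmoid(h @ w_frame)

# Combined loss for an utterance that contains the event (label 1),
# with frame labels set to 1 inside the annotated time boundaries.
y_frame = np.zeros(T)
y_frame[5:12] = 1.0                          # example event boundaries
utt_loss = -np.log(utt_prob + 1e-9)
frame_loss = -np.mean(
    y_frame * np.log(frame_probs + 1e-9)
    + (1 - y_frame) * np.log(1 - frame_probs + 1e-9)
)
loss = utt_loss + frame_loss
```

The frame-level loss would only be applied to utterances where the event occurs, matching the abstract; both branches share `event_vec`, which is what links the two objectives.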
