DeepSafety: Multi-level Audio-Text Feature Extraction and Fusion Approach for Violence Detection in Conversations
Natural Language Processing has recently made understanding human interaction easier, leading to improved sentiment analysis and behaviour prediction. However, the choice of words and vocal cues in conversations presents an underexplored, rich source of natural language data for personal safety and crime prevention. When accompanied by audio analysis, it becomes possible to understand the context of a conversation, including the level of tension or rift between people. Building on existing work, we introduce a new information fusion approach that extracts and fuses multi-level features, including verbal, vocal, and textual cues, as heterogeneous sources of information to detect the extent of violent behaviours in conversations. Our multi-level multimodal fusion framework integrates four types of information derived from raw audio signals: embeddings generated from both BERT and bidirectional long short-term memory (Bi-LSTM) models, the output of a 2D CNN applied to Mel-frequency cepstral coefficients (MFCCs), and the output of a dense layer applied to the time-domain audio signal. The resulting embeddings are then passed to a three-layer fully connected network, which serves as the concatenation and fusion step. Our experiments show that combining multi-level features from different modalities achieves better performance than using any single modality, reaching an F1 score of 0.85. We expect the findings derived from our method to provide new approaches for violence detection in conversations.
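To make the described fusion pipeline concrete, the following is a minimal PyTorch sketch of a model that combines the four feature streams named in the abstract: BERT embeddings, Bi-LSTM embeddings over token vectors, a 2D CNN over MFCCs, and a dense layer over the time-domain signal, followed by a three-layer fully connected fusion head. All layer sizes, tensor shapes, and the use of precomputed BERT embeddings are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a multi-level audio-text fusion model (assumed architecture).
import torch
import torch.nn as nn


class FusionViolenceDetector(nn.Module):
    def __init__(self, bert_dim=768, token_dim=300, lstm_hidden=128,
                 waveform_len=16000, fused_dim=256):
        super().__init__()
        # Text branch 1: projection of precomputed BERT sentence embeddings.
        self.bert_proj = nn.Linear(bert_dim, fused_dim)
        # Text branch 2: Bi-LSTM over token embeddings (e.g. word vectors).
        self.bilstm = nn.LSTM(token_dim, lstm_hidden, batch_first=True,
                              bidirectional=True)
        # Audio branch 1: 2D CNN over the MFCC matrix (coefficients x frames).
        self.mfcc_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, fused_dim),
        )
        # Audio branch 2: dense layer over the raw time-domain waveform.
        self.time_dense = nn.Sequential(nn.Linear(waveform_len, fused_dim),
                                        nn.ReLU())
        # Three fully connected layers acting as the concatenation/fusion step.
        concat_dim = fused_dim * 3 + 2 * lstm_hidden
        self.fusion = nn.Sequential(
            nn.Linear(concat_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),  # violence score logit
        )

    def forward(self, bert_emb, token_emb, mfcc, waveform):
        t1 = self.bert_proj(bert_emb)                 # (B, fused_dim)
        _, (h, _) = self.bilstm(token_emb)            # h: (2, B, lstm_hidden)
        t2 = torch.cat([h[0], h[1]], dim=-1)          # (B, 2 * lstm_hidden)
        a1 = self.mfcc_cnn(mfcc.unsqueeze(1))         # (B, fused_dim)
        a2 = self.time_dense(waveform)                # (B, fused_dim)
        return self.fusion(torch.cat([t1, t2, a1, a2], dim=-1))


# Example forward pass with random tensors standing in for real features.
model = FusionViolenceDetector()
score = model(torch.randn(2, 768), torch.randn(2, 30, 300),
              torch.randn(2, 40, 200), torch.randn(2, 16000))
print(score.shape)  # torch.Size([2, 1])
```

In this sketch each branch is projected to a comparable dimensionality before concatenation, so no single modality dominates the fused representation; the actual hyperparameters and training objective would follow the full paper.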