Collaborative Noisy Label Cleaner: Learning Scene-aware Trailers for Multi-modal Highlight Detection in Movies

03/26/2023
by Bei Gan, et al.

Movie highlights stand out from the full screenplay, enable efficient browsing, and play a crucial role on social media platforms. Building on existing efforts, this work makes two observations: (1) Labeling highlights is uncertain across annotators, which leads to inaccurate and time-consuming annotations. (2) Beyond previous supervised or unsupervised settings, existing video corpora such as trailers can be useful, but they are often noisy and incomplete, failing to cover the full highlights. In this work, we study a more practical and promising setting that reformulates highlight detection as "learning with noisy labels". This setting requires no time-consuming manual annotation and can fully exploit existing abundant video corpora. First, starting from movie trailers, we leverage scene segmentation to obtain complete shots, which serve as noisy labels. Then, we propose a Collaborative noisy Label Cleaner (CLC) framework to learn from these noisy highlight moments. CLC consists of two modules: augmented cross-propagation (ACP) and multi-modality cleaning (MMC). The former exploits the closely related audio-visual signals and fuses them to learn unified multi-modal representations. The latter obtains cleaner highlight labels by observing how losses change across modalities. To verify the effectiveness of CLC, we further collect a large-scale highlight dataset named MovieLights. Comprehensive experiments on the MovieLights and YouTube Highlights datasets demonstrate the effectiveness of our approach. Code has been made available at: https://github.com/TencentYoutuResearch/HighlightDetection-CLC
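The MMC idea of cleaning labels by comparing losses across modalities can be illustrated with the common small-loss selection criterion from the learning-with-noisy-labels literature. The sketch below is a simplification under assumed inputs (per-sample visual and audio losses and a `keep_ratio` parameter are illustrative choices, not the paper's exact procedure): samples whose loss is small in both modalities are treated as likely-clean.

```python
import numpy as np

def select_clean_labels(visual_losses, audio_losses, keep_ratio=0.7):
    """Return indices of samples deemed clean under a small-loss criterion.

    A hypothetical sketch of multi-modal loss-based label cleaning:
    a sample is kept only if its loss ranks among the lowest
    `keep_ratio` fraction in BOTH the visual and the audio modality.
    The actual MMC module observes changes in losses over training,
    which this simplification does not model.
    """
    visual_losses = np.asarray(visual_losses)
    audio_losses = np.asarray(audio_losses)
    n_keep = int(len(visual_losses) * keep_ratio)
    # Indices of the n_keep smallest losses per modality.
    v_small = np.argsort(visual_losses)[:n_keep]
    a_small = np.argsort(audio_losses)[:n_keep]
    # Clean set: agreement between modalities.
    return np.intersect1d(v_small, a_small)
```

The selected indices would then be used to reweight or relabel the noisy trailer-derived supervision in subsequent training epochs.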


