A Hybrid Attention Mechanism for Weakly-Supervised Temporal Action Localization

by Ashraful Islam, et al.

Weakly supervised temporal action localization is a challenging vision task due to the absence of ground-truth temporal locations of actions in the training videos. With only video-level supervision during training, most existing methods rely on a Multiple Instance Learning (MIL) framework to predict the start and end frame of each action category in a video. However, existing MIL-based approaches have a major limitation: they capture only the most discriminative frames of an action, ignoring the full extent of an activity. Moreover, these methods cannot model background activity effectively, which plays an important role in localizing foreground activities. In this paper, we present a novel framework named HAM-Net with a hybrid attention mechanism which includes temporal soft, semi-soft and hard attentions to address these issues. Our temporal soft attention module, guided by an auxiliary background class in the classification module, models the background activity by introducing an "action-ness" score for each video snippet. Moreover, our temporal semi-soft and hard attention modules, calculating two attention scores for each video snippet, help to focus on the less discriminative frames of an action to capture the full action boundary. Our proposed approach outperforms recent state-of-the-art methods by at least 2.2 mAP at IoU threshold 0.5 on the THUMOS14 dataset, and by at least 1.3 mAP at IoU threshold 0.75 on the ActivityNet1.2 dataset. Code can be found at: https://github.com/asrafulashiq/hamnet.
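The abstract's three attention scores can be sketched as follows. This is a minimal illustrative example, not the paper's exact recipe: it assumes soft attention is a per-snippet "action-ness" score in [0, 1], that semi-soft attention zeroes out the most discriminative snippets (those above a threshold) while keeping the soft values elsewhere, and that hard attention binarizes the same mask. The `threshold` value and the thresholding rule are assumptions for illustration.

```python
import numpy as np

def hybrid_attention(soft_att, threshold=0.5):
    """Sketch of soft / semi-soft / hard attention over video snippets.

    soft_att: per-snippet "action-ness" scores in [0, 1].
    Snippets at or above `threshold` are treated as the most
    discriminative and are suppressed, forcing the model to attend
    to the less discriminative parts of the action.
    NOTE: threshold-based masking is an assumption for illustration.
    """
    soft_att = np.asarray(soft_att, dtype=float)
    drop_mask = soft_att >= threshold              # most discriminative snippets
    semi_soft = np.where(drop_mask, 0.0, soft_att) # keep soft values elsewhere
    hard = np.where(drop_mask, 0.0, 1.0)           # binary version of the mask
    return soft_att, semi_soft, hard

soft, semi, hard = hybrid_attention([0.9, 0.2, 0.6, 0.4])
# semi suppresses snippets 0 and 2; hard keeps a binary mask over the rest
```

Suppressing high-attention snippets in this way is what pushes the classifier, trained on the masked features, to also score the less discriminative frames, capturing the full action extent rather than only its peak.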





Code Repositories


PyTorch implementation of AAAI 2021 paper: A Hybrid Attention Mechanism for Weakly-Supervised Temporal Action Localization
