Leveraging Local Temporal Information for Multimodal Scene Classification

10/26/2021
by Saurabh Sahu, et al.

Robust video scene classification models should effectively capture both the spatial (pixel-wise) and temporal (frame-wise) characteristics of a video. Transformer models with self-attention, which are designed to produce contextualized representations for individual tokens given a sequence of tokens, are becoming increasingly popular in many computer vision tasks. However, the use of Transformer-based models for video understanding is still relatively unexplored. Moreover, these models fail to exploit the strong temporal relationships between neighboring video frames to obtain potent frame-level representations. In this paper, we propose a novel self-attention block that leverages both local and global temporal relationships between video frames to obtain better contextualized representations for individual frames. This enables the model to understand the video at various granularities. We demonstrate the performance of our models on the task of video categorization on the large-scale YouTube-8M dataset and further analyze the results to showcase the improvements.
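The abstract does not specify the block's architecture, but the local/global idea can be illustrated with a minimal sketch: one attention branch restricted to a fixed temporal window around each frame (local) and one unrestricted branch over all frames (global), with the two views fused per frame. All names here (LocalGlobalAttention, window_size, the concatenate-and-project fusion) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch, NOT the paper's actual block: combines windowed
# (local) and full-sequence (global) self-attention over frame features.
import torch
import torch.nn as nn


class LocalGlobalAttention(nn.Module):
    """Fuses local (windowed) and global self-attention over video frames."""

    def __init__(self, dim: int, num_heads: int = 8, window_size: int = 5):
        super().__init__()
        self.window_size = window_size  # assumed hyperparameter
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, dim) per-frame multimodal features
        t = frames.size(1)
        # Local branch: boolean mask forbids attention to frames farther
        # than window_size steps away (True = position is masked out).
        idx = torch.arange(t, device=frames.device)
        outside = (idx[None, :] - idx[:, None]).abs() > self.window_size
        local_out, _ = self.local_attn(frames, frames, frames, attn_mask=outside)
        # Global branch: unrestricted attention across all frames.
        global_out, _ = self.global_attn(frames, frames, frames)
        # Fuse the two contextualized views and project back to dim.
        return self.fuse(torch.cat([local_out, global_out], dim=-1))


# Example: a batch of 2 clips, 32 frames each, 256-dim features.
block = LocalGlobalAttention(dim=256, num_heads=8, window_size=5)
out = block(torch.randn(2, 32, 256))  # -> shape (2, 32, 256)
```

Concatenating the branches and projecting back is just one plausible fusion; gating or summation would serve the same purpose of letting each frame's representation draw on both its immediate neighbors and the whole clip.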


Related research

03/18/2021 - Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training
The introduction of Transformer model has led to tremendous advancements...

04/18/2023 - SViTT: Temporal Learning of Sparse Video-Text Transformers
Do video-text transformers learn to model temporal relationships across ...

11/20/2020 - ConvTransformer: A Convolutional Transformer Network for Video Frame Synthesis
Deep Convolutional Neural Networks (CNNs) are powerful models that have ...

06/15/2023 - Revealing the Illusion of Joint Multimodal Understanding in VideoQA Models
While VideoQA Transformer models demonstrate competitive performance on ...

06/15/2022 - Structured Video Tokens @ Ego4D PNR Temporal Localization Challenge 2022
This technical report describes the SViT approach for the Ego4D Point of...

03/29/2021 - ViViT: A Video Vision Transformer
We present pure-transformer based models for video classification, drawi...

02/07/2020 - Exploiting Temporal Coherence for Multi-modal Video Categorization
Multimodal ML models can process data in multiple modalities (e.g., vide...
