Attention is all you need for Videos: Self-attention based Video Summarization using Universal Transformers

06/06/2019
by   Manjot Bilkhu, et al.

Video captioning and summarization have become popular in recent years due to advances in sequence modelling, driven by the resurgence of Long Short-Term Memory networks (LSTMs) and the introduction of Gated Recurrent Units (GRUs). Existing architectures extract spatio-temporal features using CNNs and model dependencies with either GRUs or LSTMs augmented by soft attention layers. These attention layers help the decoder attend to the most prominent features and improve on plain recurrent units; however, such models still suffer from the inherent drawbacks of recurrence itself, such as strictly sequential computation and difficulty with long-range dependencies. The introduction of the Transformer model has taken sequence modelling in a new direction. In this project, we implement a Transformer-based model for video captioning, using 3D CNN architectures such as C3D and two-stream I3D for video feature extraction. We also apply dimensionality reduction techniques to keep the overall model size within limits. Finally, we present results on the MSVD and ActivityNet datasets for single and dense video captioning tasks, respectively.
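
To make the described pipeline concrete, below is a minimal PyTorch sketch: pre-extracted 3D CNN clip features are projected to a lower dimension (standing in for the dimensionality reduction step) and fed to an encoder-decoder Transformer that decodes caption tokens. It uses the standard nn.Transformer rather than a Universal Transformer, positional encodings are omitted for brevity, and all names, feature sizes, and hyperparameters (feat_dim, d_model, vocab_size, layer counts) are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    # Hypothetical sketch: project pre-extracted C3D/I3D clip features down to
    # d_model, then run a standard encoder-decoder Transformer over them to
    # decode caption tokens. Sizes and layer counts are illustrative only.
    def __init__(self, feat_dim=4096, d_model=512, vocab_size=10000,
                 nhead=8, num_layers=4):
        super().__init__()
        self.project = nn.Linear(feat_dim, d_model)     # dimensionality reduction
        self.embed = nn.Embedding(vocab_size, d_model)  # caption token embeddings
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)       # per-position token logits

    def forward(self, clip_feats, caption_tokens):
        # clip_feats: (batch, num_clips, feat_dim) features from a 3D CNN
        # caption_tokens: (batch, seq_len) integer word indices
        src = self.project(clip_feats)
        tgt = self.embed(caption_tokens)
        # causal mask so each caption position only attends to earlier tokens
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            caption_tokens.size(1))
        dec = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out(dec)                            # (batch, seq_len, vocab_size)

# Toy usage with random tensors standing in for real C3D/I3D features.
model = VideoCaptioner()
feats = torch.randn(2, 20, 4096)           # 2 videos, 20 clips each
tokens = torch.randint(0, 10000, (2, 12))  # partially decoded captions
logits = model(feats, tokens)
print(logits.shape)                        # torch.Size([2, 12, 10000])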

