Discerning Generic Event Boundaries in Long-Form Wild Videos

by Ayush K. Rai, et al.

Detecting generic, taxonomy-free event boundaries in videos represents a major stride toward holistic video understanding. In this paper we present a technique for generic event boundary detection based on a two-stream inflated 3D convolution architecture, which can learn spatio-temporal features from videos. Our work is inspired by the Generic Event Boundary Detection Challenge (part of the CVPR 2021 Long-Form Video Understanding (LOVEU) Workshop). Throughout the paper we provide an in-depth analysis of the experiments performed, along with an interpretation of the results obtained.
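The "inflated 3D convolution" idea the abstract refers to comes from the I3D family of models: pretrained 2D image filters are expanded into 3D spatio-temporal filters by repeating them along the time axis and rescaling. As a minimal sketch (using NumPy rather than any specific deep-learning framework; the function name is illustrative, not from the paper):

```python
import numpy as np

def inflate_2d_kernel(kernel_2d, time_depth):
    """Inflate a 2D conv kernel of shape (H, W) into a 3D one (T, H, W).

    Following the standard I3D recipe: the 2D weights are repeated
    time_depth times along a new temporal axis and rescaled by
    1/time_depth, so a video of identical frames produces the same
    activations as the original 2D filter did on a single image.
    """
    k3d = np.repeat(kernel_2d[np.newaxis, ...], time_depth, axis=0)
    return k3d / time_depth

# Usage: inflate a 3x3 Sobel-style edge filter to span 4 frames.
k2d = np.array([[1., 0., -1.],
                [2., 0., -2.],
                [1., 0., -1.]])
k3d = inflate_2d_kernel(k2d, time_depth=4)

# Summing the inflated kernel over time recovers the 2D weights,
# which is what makes the pretrained initialization consistent.
assert k3d.shape == (4, 3, 3)
assert np.allclose(k3d.sum(axis=0), k2d)
```

In a two-stream setup, one such inflated network processes RGB frames while a second processes optical flow, and their predictions are fused.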


Generic Event Boundary Detection: A Benchmark for Event Segmentation

This paper presents a novel task together with a new benchmark for detec...

Temporal Perceiver: A General Architecture for Arbitrary Boundary Detection

Generic Boundary Detection (GBD) aims at locating general boundaries tha...

MAE-GEBD: Winning the CVPR'2023 LOVEU-GEBD Challenge

The Generic Event Boundary Detection (GEBD) task aims to build a model f...

Generic Event Boundary Detection in Video with Pyramid Features

Generic event boundary detection (GEBD) aims to split video into chunks ...

Progressive Attention on Multi-Level Dense Difference Maps for Generic Event Boundary Detection

Generic event boundary detection is an important yet challenging task in...

UBoCo: Unsupervised Boundary Contrastive Learning for Generic Event Boundary Detection

Generic Event Boundary Detection (GEBD) is a newly suggested video under...
