Temporal Aggregate Representations for Long Term Video Understanding

06/01/2020
by Fadime Sener et al.

Future prediction requires reasoning from current and past observations, and it raises several fundamental questions. How much past information is necessary? What is a reasonable temporal scale at which to process the past? How much semantic abstraction is required? We address all of these questions with a flexible multi-granular temporal aggregation framework. We show that it is possible to achieve state-of-the-art results in both next-action prediction and dense anticipation using simple techniques such as max pooling and attention. To demonstrate the anticipation capabilities of our model, we conduct experiments on the Breakfast Actions, 50Salads, and EPIC-Kitchens datasets, where we achieve state-of-the-art or comparable results. We also show that our model can be used for temporal video segmentation and action recognition with minimal modifications.
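To make the idea of multi-granular temporal aggregation concrete, here is a minimal sketch, not the authors' implementation: it max-pools frame features over windows of several temporal scales and combines the per-scale summaries with softmax attention. All names (`multi_scale_max_pool`, `attention_aggregate`), the scale choices, and the use of a plain query vector are illustrative assumptions.

```python
import numpy as np

def multi_scale_max_pool(features, scales):
    """Summarize the most recent k frames at each temporal scale via max pooling.
    (Hypothetical sketch; features is a [T, D] array of frame features.)"""
    pooled = []
    for k in scales:
        span = features[-k:]           # most recent k frames
        pooled.append(span.max(axis=0))  # one D-dim summary per scale
    return np.stack(pooled)            # [num_scales, D]

def attention_aggregate(pooled, query):
    """Blend per-scale summaries with softmax attention against a query vector."""
    scores = pooled @ query                  # [num_scales] similarity scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ pooled                  # [D] attention-weighted summary

rng = np.random.default_rng(0)
feats = rng.normal(size=(120, 16))  # 120 observed frames, 16-dim features
pooled = multi_scale_max_pool(feats, scales=[10, 30, 90])
summary = attention_aggregate(pooled, query=rng.normal(size=16))
print(summary.shape)  # (16,)
```

The short windows capture fine-grained recent context while the long window captures coarse history; attention lets the model decide, per query, which granularity matters most.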
