Fast forwarding Egocentric Videos by Listening and Watching

06/12/2018
by   Vinicius S. Furlan, et al.

The remarkable technological advances in well-equipped wearable devices are driving an increasing production of long first-person videos. However, since most of these videos contain long and tedious parts, they are forgotten or never watched. Despite the large number of techniques proposed to fast-forward such videos by highlighting relevant moments, most of them are image-based only and disregard other relevant sensors present in current devices, such as high-definition microphones. In this work, we propose a new approach to fast-forward videos using psychoacoustic metrics extracted from the soundtrack. These metrics estimate the annoyance of a segment, allowing our method to emphasize moments of sound pleasantness. The efficiency of our method is demonstrated through qualitative results and a quantitative evaluation of speed-up and instability.
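As a rough illustration of the idea (not the authors' actual pipeline), the sketch below scores one-second audio segments with a crude RMS loudness measure standing in for the psychoacoustic metrics, then assigns higher playback speed-ups to louder (more "annoying") segments while rescaling so the average speed-up matches a requested target. All function names and parameters here are illustrative assumptions.

import numpy as np

def segment_annoyance(audio, sr, seg_len_s=1.0):
    # Crude per-segment "annoyance" score: RMS loudness of each chunk.
    # This is only a stand-in for the psychoacoustic metrics used in the paper.
    seg = int(seg_len_s * sr)
    n_segs = len(audio) // seg
    scores = np.empty(n_segs)
    for i in range(n_segs):
        chunk = audio[i * seg:(i + 1) * seg]
        scores[i] = np.sqrt(np.mean(chunk ** 2))
    return scores

def per_segment_speedup(scores, target=10.0, low=2.0, high=20.0):
    # Map annoyance to speed-up: pleasant (quiet) segments are played closer to
    # real time, annoying (loud/noisy) segments are skipped faster, and the
    # result is rescaled so the mean speed-up matches the requested target.
    norm = (scores - scores.min()) / (scores.max() - scores.min() + 1e-9)
    speedups = low + norm * (high - low)
    speedups *= target / speedups.mean()
    return np.clip(speedups, 1.0, None)

if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0.0, 30.0, 30 * sr)            # 30 s of toy audio
    audio = 0.1 * np.sin(2 * np.pi * 440.0 * t)    # a quiet, "pleasant" tone
    audio[10 * sr:20 * sr] += 0.5 * np.random.randn(10 * sr)  # a noisy, "annoying" stretch
    print(per_segment_speedup(segment_annoyance(audio, sr)).round(1))

In a full system, the per-segment speed-ups would then drive adaptive frame sampling of the video, with higher rates where the soundtrack is pleasant.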


Related research

Fast-Forward Video Based on Semantic Extraction (08/14/2017)
Thanks to the low operational cost and large storage capacity of smartph...

Towards Semantic Fast-Forward and Stabilized Egocentric Videos (08/14/2017)
The emergence of low-cost personal mobile devices and wearable cameras ...

A gaze driven fast-forward method for first-person videos (06/10/2020)
The growing data sharing and life-logging cultures are driving an unprec...

Making a long story short: A Multi-Importance Semantic for Fast-Forwarding Egocentric Videos (11/09/2017)
The emergence of low-cost, high-quality personal wearable cameras combin...

A Sparse Sampling-based framework for Semantic Fast-Forward of First-Person Videos (09/21/2020)
Technological advances in sensors have paved the way for digital cameras...

Towards Unsupervised Familiar Scene Recognition in Egocentric Videos (05/10/2019)
Nowadays, there is an upsurge of interest in using lifelogging devices. ...

How good Neural Networks interpretation methods really are? A quantitative benchmark (04/05/2023)
Saliency Maps (SMs) have been extensively used to interpret deep learnin...
