Visual Summary of Egocentric Photostreams by Representative Keyframes

05/05/2015
by   Marc Bolaños, et al.

Building a visual summary from an egocentric photostream captured by a lifelogging wearable camera is of high interest for different applications (e.g. memory reinforcement). In this paper, we propose a new summarization method based on keyframe selection that uses visual features extracted by means of a convolutional neural network. Our method applies unsupervised clustering to divide the photostream into events, and then extracts the most relevant keyframe for each event. We validate the quality of the resulting summaries through a blind taste test with a group of 20 people.
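The pipeline described above (per-frame CNN features → unsupervised clustering into events → one representative keyframe per event) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the CNN features are already computed as a matrix of row vectors, uses agglomerative clustering as a stand-in for the paper's event segmentation, and picks the frame nearest each event centroid as the keyframe.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def summarize(features: np.ndarray, n_events: int) -> list[int]:
    """Select one keyframe index per event.

    features : (n_frames, dim) array of per-frame CNN descriptors
    n_events : number of events to segment the photostream into
    """
    # Group frames into events by clustering their visual features.
    labels = AgglomerativeClustering(n_clusters=n_events).fit_predict(features)

    keyframes = []
    for event in range(n_events):
        idx = np.where(labels == event)[0]
        centroid = features[idx].mean(axis=0)
        # The keyframe is the frame closest to the event centroid.
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        keyframes.append(int(idx[np.argmin(dists)]))
    return sorted(keyframes)
```

Note that the real system clusters temporally contiguous lifelog frames; a production version would add a temporal-continuity constraint so that an event cannot be split across distant parts of the day.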

