For Your Eyes Only: Learning to Summarize First-Person Videos

11/24/2017 ∙ by Hsuan-I Ho, et al.

With the increasing amount of video data, it is desirable to highlight or summarize videos of interest for viewing, search, or storage. However, existing summarization approaches are typically trained on third-person videos and do not generalize to first-person ones. Building on deep learning techniques, we propose a unique network architecture that transfers spatiotemporal information across video domains, jointly solving metric-learning-based feature embedding and keyframe selection via a Bidirectional Long Short-Term Memory (BiLSTM) network. We consider a practical semi-supervised setting: training our model requires only fully annotated third-person videos, unlabeled first-person videos, and a small amount of annotated first-person ones. Qualitative and quantitative evaluations confirm that our model performs favorably against baseline and state-of-the-art approaches on first-person video summarization.
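To make the keyframe-selection component concrete, the sketch below shows how a BiLSTM can score each frame of a video for inclusion in a summary: per-frame features are passed through forward and backward LSTM passes, their hidden states are concatenated, and a linear layer with a sigmoid yields a per-frame keyframe probability. This is a minimal NumPy illustration with randomly initialized weights, not the authors' actual architecture; all function and parameter names here are our own, and the paper's metric-learning embedding and cross-domain transfer are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(X, W, U, b, reverse=False):
    """Run one LSTM direction over X (T, d_in); returns hidden states (T, h)."""
    T, _ = X.shape
    h = b.shape[0] // 4          # gates are stacked as [input, forget, cell, output]
    hs = np.zeros((T, h))
    c = np.zeros(h)
    ht = np.zeros(h)
    order = range(T - 1, -1, -1) if reverse else range(T)
    for t in order:
        z = W @ X[t] + U @ ht + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)       # update cell state
        ht = o * np.tanh(c)              # update hidden state
        hs[t] = ht
    return hs

def bilstm_keyframe_scores(X, params):
    """Score each frame in [0, 1]; higher means more likely a keyframe."""
    Wf, Uf, bf, Wb, Ub, bb, w_out, b_out = params
    h_fwd = lstm_pass(X, Wf, Uf, bf)                 # forward pass over time
    h_bwd = lstm_pass(X, Wb, Ub, bb, reverse=True)   # backward pass over time
    H = np.concatenate([h_fwd, h_bwd], axis=1)       # (T, 2h) bidirectional context
    return sigmoid(H @ w_out + b_out)                # per-frame probability

# Toy usage: 6 frames with 8-dim features, hidden size 4, random weights.
rng = np.random.default_rng(0)
T, d, h = 6, 8, 4
X = rng.normal(size=(T, d))
params = (rng.normal(size=(4 * h, d)), rng.normal(size=(4 * h, h)), np.zeros(4 * h),
          rng.normal(size=(4 * h, d)), rng.normal(size=(4 * h, h)), np.zeros(4 * h),
          rng.normal(size=(2 * h,)), 0.0)
scores = bilstm_keyframe_scores(X, params)
```

In a trained model, frames with the highest scores would be selected (e.g. by thresholding or top-k) to form the summary.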

