For Your Eyes Only: Learning to Summarize First-Person Videos
With the growing volume of video data, it is desirable to highlight or summarize videos of interest for viewing, search, or storage. However, existing summarization approaches are typically trained on third-person videos and do not generalize to first-person ones. We propose a deep network architecture that transfers spatiotemporal information across video domains, jointly solving metric-learning-based feature embedding and keyframe selection via a Bidirectional Long Short-Term Memory (BiLSTM). We consider a practical semi-supervised setting: training our model requires only fully annotated third-person videos, unlabeled first-person videos, and a small amount of annotated first-person ones. Qualitative and quantitative evaluations confirm that our model performs favorably against baseline and state-of-the-art approaches on first-person video summarization.
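To make the joint design concrete, below is a minimal PyTorch sketch of the two-headed architecture the abstract describes: a shared BiLSTM over per-frame features feeding both a metric-learning embedding head and a per-frame keyframe scorer. All names, dimensions, and the class `SummarizerSketch` are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SummarizerSketch(nn.Module):
    """Hypothetical sketch: a shared BiLSTM feeds (a) an embedding head
    for metric-learning-based feature embedding and (b) a keyframe
    scorer for summarization. Not the paper's actual code."""

    def __init__(self, feat_dim=1024, hidden=256, embed_dim=128):
        super().__init__()
        # BiLSTM encodes spatiotemporal context across the frame sequence.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        # Head 1: embedding used by a metric-learning loss (e.g., triplet).
        self.embed = nn.Linear(2 * hidden, embed_dim)
        # Head 2: per-frame keyframe score.
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, frames):          # frames: (batch, T, feat_dim)
        h, _ = self.bilstm(frames)      # (batch, T, 2 * hidden)
        emb = F.normalize(self.embed(h), dim=-1)
        keyframe_logits = self.score(h).squeeze(-1)  # (batch, T)
        return emb, keyframe_logits

# Usage: score the frames of a 300-frame clip represented by
# precomputed per-frame CNN features (dimensions assumed).
model = SummarizerSketch()
feats = torch.randn(1, 300, 1024)
emb, logits = model(feats)
keyframes = torch.sigmoid(logits) > 0.5  # boolean keyframe mask
```

In the semi-supervised setting described above, the embedding head could be trained on both domains to align third-person and first-person features, while the keyframe head is supervised by the annotated videos; the exact losses are detailed in the full paper.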