Sensor-Augmented Egocentric-Video Captioning with Dynamic Modal Attention

09/07/2021
by Katsuyuki Nakamura, et al.

Automatically describing video, or video captioning, has been widely studied in the multimedia field. This paper proposes a new task of sensor-augmented egocentric-video captioning, a newly constructed dataset for it called MMAC Captions, and a method that effectively utilizes multi-modal data from video and motion sensors, or inertial measurement units (IMUs). While conventional video captioning has difficulty producing detailed descriptions of human activities because of the limited view of a fixed camera, egocentric vision has greater potential for generating finer-grained descriptions of human activities on the basis of a much closer view. In addition, we utilize wearable-sensor data as auxiliary information to mitigate the inherent problems of egocentric vision: motion blur, self-occlusion, and out-of-camera-range activities. We propose a method that effectively combines the sensor data with the video data through an attention mechanism that dynamically determines which modality requires more attention, taking contextual information into account. Experiments on the MMAC Captions dataset show that using sensor data as supplementary information to the egocentric-video data is beneficial and that the proposed method outperforms strong baselines.
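To make the fusion idea concrete, below is a minimal PyTorch sketch of a context-conditioned modality-attention module of the kind described in the abstract: it scores the video and IMU features against the current decoding context, normalizes the scores with a softmax over the two modalities, and returns a weighted fusion. The class name, dimensions, and projection layout are illustrative assumptions for this sketch, not the paper's exact architecture.

```python
# Minimal sketch of dynamic modal attention over video and IMU features.
# All names and dimensions are assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class DynamicModalAttention(nn.Module):
    """Fuse video and IMU features with attention weights conditioned on the
    decoder context, so the model can lean on whichever modality is more
    reliable at each decoding step."""

    def __init__(self, video_dim: int, sensor_dim: int, context_dim: int, fused_dim: int):
        super().__init__()
        # Project both modalities into a shared fused space.
        self.video_proj = nn.Linear(video_dim, fused_dim)
        self.sensor_proj = nn.Linear(sensor_dim, fused_dim)
        # Score each projected modality feature together with the context.
        self.score = nn.Linear(fused_dim + context_dim, 1)

    def forward(self, video_feat, sensor_feat, context):
        # video_feat: (B, video_dim), sensor_feat: (B, sensor_dim), context: (B, context_dim)
        v = torch.tanh(self.video_proj(video_feat))    # (B, fused_dim)
        s = torch.tanh(self.sensor_proj(sensor_feat))  # (B, fused_dim)
        scores = torch.cat([
            self.score(torch.cat([v, context], dim=-1)),  # video score
            self.score(torch.cat([s, context], dim=-1)),  # sensor score
        ], dim=-1)                                        # (B, 2)
        weights = torch.softmax(scores, dim=-1)           # attention over the two modalities
        fused = weights[:, 0:1] * v + weights[:, 1:2] * s # (B, fused_dim)
        return fused, weights


# Usage example with dummy features for a batch of 4 clips.
if __name__ == "__main__":
    fusion = DynamicModalAttention(video_dim=2048, sensor_dim=256, context_dim=512, fused_dim=512)
    video = torch.randn(4, 2048)
    imu = torch.randn(4, 256)
    ctx = torch.randn(4, 512)
    fused, w = fusion(video, imu, ctx)
    print(fused.shape, w.shape)  # torch.Size([4, 512]) torch.Size([4, 2])
```

Because the weights are recomputed from the context at every step, the captioner can shift attention toward the IMU stream when the video is blurred or the activity falls outside the camera's view, and back toward the video otherwise.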


