Modality Compensation Network: Cross-Modal Adaptation for Action Recognition

01/31/2020
by Sijie Song et al.

With the prevalence of RGB-D cameras, multi-modal video data have become more available for human action recognition. One main challenge for this task lies in how to effectively leverage their complementary information. In this work, we propose a Modality Compensation Network (MCN) that explores the relationships between different modalities and boosts the representations for human action recognition. We regard RGB/optical-flow videos as the source modalities and skeletons as the auxiliary modality. Our goal is to extract more discriminative features from the source modalities with the help of the auxiliary modality. Built on deep Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, our model bridges data from the source and auxiliary modalities with a modality adaptation block to achieve adaptive representation learning, so that the network learns to compensate for the loss of skeletons at test time, and even at training time. We explore multiple adaptation schemes to narrow the distance between the source and auxiliary modal distributions at different levels, according to the alignment of source and auxiliary data in training. Since skeletons are required only in the training phase, our model improves recognition performance using only source data at test time. Experimental results show that MCN outperforms state-of-the-art approaches on four widely used action recognition benchmarks.
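The core adaptation idea described above, narrowing the distance between the source and auxiliary feature distributions, can be illustrated with a simple distribution-alignment loss. The sketch below uses a linear maximum mean discrepancy (MMD) between toy feature batches; the function name, toy data, and choice of linear MMD are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def linear_mmd(source_feats, aux_feats):
    """Linear MMD: squared distance between the mean feature
    vectors of the source and auxiliary modality batches."""
    diff = source_feats.mean(axis=0) - aux_feats.mean(axis=0)
    return float(diff @ diff)

# Toy features: 4 RGB-stream vectors and 4 skeleton-stream vectors
# of dimension 8 (shapes chosen arbitrarily for the sketch).
rng = np.random.default_rng(0)
rgb = rng.normal(0.0, 1.0, size=(4, 8))
skel = rgb + 0.1  # auxiliary features with a constant offset

loss = linear_mmd(rgb, skel)
```

During training, a term like this would be added to the classification loss so that gradients pull the source-modality features toward the skeleton-derived ones; at test time the adaptation branch is dropped and only source data are needed.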


Related research:

- Modality Mixer for Multi-modal Action Recognition (08/24/2022): In multi-modal action recognition, it is important to consider not only ...
- MM-ViT: Multi-Modal Video Transformer for Compressed Video Action Recognition (08/20/2021): This paper presents a pure transformer-based approach, dubbed the Multi-...
- PM-GANs: Discriminative Representation Learning for Action Recognition Using Partial-modalities (04/17/2018): Data of different modalities generally convey complementary but heteroge...
- Hybrid and hierarchical fusion networks: a deep cross-modal learning architecture for action recognition (05/06/2020): Two-stream networks have provided an alternate way of exploiting the spa...
- Coupled Generative Adversarial Network for Continuous Fine-grained Action Segmentation (09/20/2019): We propose a novel conditional GAN (cGAN) model for continuous fine-grai...
- Mutual Modality Learning for Video Action Classification (11/04/2020): The construction of models for video action classification progresses ra...
- Modality Distillation with Multiple Stream Networks for Action Recognition (06/19/2018): Diverse input data modalities can provide complementary cues for several...
