AdaMML: Adaptive Multi-Modal Learning for Efficient Video Recognition

by Rameswar Panda, et al.

Multi-modal learning, which focuses on utilizing various modalities to improve the performance of a model, is widely used in video recognition. While traditional multi-modal learning offers excellent recognition results, its computational expense limits its impact in many real-world applications. In this paper, we propose an adaptive multi-modal learning framework, called AdaMML, that selects on-the-fly the optimal modalities for each segment, conditioned on the input, for efficient video recognition. Specifically, given a video segment, a multi-modal policy network decides which modalities should be used for processing by the recognition model, with the goal of improving both accuracy and efficiency. We efficiently train the policy network jointly with the recognition model using standard back-propagation. Extensive experiments on four challenging and diverse datasets demonstrate that our proposed adaptive approach yields 35% savings in computation compared to the traditional baseline that simply uses all the modalities irrespective of the input, while also achieving consistent improvements in accuracy over state-of-the-art methods.
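The core idea of per-segment modality selection can be sketched in a few lines. The snippet below is a minimal illustrative stand-in, not the paper's implementation: `policy_network` is a hypothetical hard-thresholding stub standing in for AdaMML's learned policy (which the paper trains jointly with the recognition model via back-propagation), and the cost accounting is a crude proxy that counts selected modalities rather than actual GFLOPs.

```python
MODALITIES = ["rgb", "audio", "flow"]

def policy_network(segment_features):
    # Toy stand-in for the learned policy network: score each modality
    # from cheap per-segment features and keep those above a threshold.
    # (AdaMML instead learns these decisions end-to-end; this threshold
    # rule is purely illustrative.)
    return {m: segment_features.get(m, 0.0) > 0.5 for m in MODALITIES}

def recognize(video_segments, classify):
    """Process each segment with only the modalities the policy selects,
    then return the per-segment predictions and the compute saved."""
    votes, cost = [], 0
    for seg in video_segments:
        selected = [m for m, keep in policy_network(seg["features"]).items() if keep]
        cost += len(selected)              # proxy for compute spent
        if selected:
            votes.append(classify(seg, selected))
    total = len(video_segments) * len(MODALITIES)
    return votes, 1.0 - cost / total      # fraction of compute skipped
```

Because segments where the policy drops a modality skip that modality's backbone entirely, the savings scale with how selective the policy is on the input video.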




AR-Net: Adaptive Frame Resolution for Efficient Action Recognition

Action recognition is an open and challenging problem in computer vision...

What Makes Training Multi-Modal Networks Hard?

Consider end-to-end training of a multi-modal vs. a single-modal network...

Noise-Tolerant Learning for Audio-Visual Action Recognition

Recently, video recognition is emerging with the help of multi-modal learning...

Dynamic Network Quantization for Efficient Video Inference

Deep convolutional networks have recently achieved great success in video...

AMSER: Adaptive Multi-modal Sensing for Energy Efficient and Resilient eHealth Systems

eHealth systems deliver critical digital healthcare and wellness services...
