Multi-modal Feature Fusion with Feature Attention for VATEX Captioning Challenge 2020

06/05/2020
by Ke Lin, et al.

This report describes our model for the VATEX Captioning Challenge 2020. First, to gather information from multiple domains, we extract motion, appearance, semantic and audio features. Then we design a feature attention module that attends to different features during decoding. We apply two types of decoders, top-down and X-LAN, and ensemble these models to get the final result. The proposed method outperforms the official baseline by a significant margin. We achieve 76.0 CIDEr and 50.0 CIDEr on the English and Chinese private test sets, respectively, ranking 2nd on both private test leaderboards.
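The feature attention idea described above can be sketched as follows: at each decoding step, each modality feature (motion, appearance, semantic, audio) is scored against the decoder hidden state, and the features are fused by their softmax weights. This is a minimal NumPy illustration under assumed shapes and additive-attention scoring; all names and dimensions here are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array
    e = np.exp(x - np.max(x))
    return e / e.sum()

def feature_attention(features, hidden, W_f, W_h, v):
    """Hypothetical additive feature attention.
    features: (num_modalities, feat_dim) -- one vector per modality
    hidden:   (hid_dim,)                 -- decoder hidden state
    W_f: (att_dim, feat_dim), W_h: (att_dim, hid_dim), v: (att_dim,)
    Returns the weighted fusion of the modality features and the weights.
    """
    scores = np.array([v @ np.tanh(W_f @ f + W_h @ hidden) for f in features])
    weights = softmax(scores)      # one weight per modality, sums to 1
    fused = weights @ features     # (feat_dim,) fused feature
    return fused, weights

# Toy usage: four modality features (e.g. motion, appearance, semantic, audio)
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
h = rng.standard_normal(16)
W_f = rng.standard_normal((12, 8))
W_h = rng.standard_normal((12, 16))
v = rng.standard_normal(12)
fused, w = feature_attention(feats, h, W_f, W_h, v)
```

The fused vector would then be fed to the decoder (top-down or X-LAN) at each time step, letting the model emphasize, say, audio features when generating sound-related words.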
