Non-Autoregressive Video Captioning with Iterative Refinement
Existing state-of-the-art autoregressive video captioning (ARVC) methods generate captions sequentially, which leads to low inference efficiency. Moreover, the word-by-word generation process does not match the human intuition of comprehending video content (i.e., first capturing the salient visual information and then producing a well-organized description), resulting in unsatisfactory caption diversity. To better approximate the human manner of comprehending video content and writing captions, this paper proposes a non-autoregressive video captioning (NAVC) model with iterative refinement. We further propose to exploit external auxiliary scoring information to assist the iterative refinement process, which helps the model locate inappropriate words more accurately. Experimental results on two mainstream benchmarks, i.e., MSVD and MSR-VTT, show that the proposed method generates more felicitous and diverse captions with generally faster decoding, at the cost of at most a 5% drop in caption quality relative to its autoregressive counterpart. In particular, the use of auxiliary scoring information not only improves non-autoregressive performance by a large margin, but also benefits caption diversity.
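To make the decoding procedure concrete, below is a minimal sketch of mask-predict-style iterative refinement, the general technique the abstract describes: all caption positions are predicted in parallel, then the lowest-scoring words are re-masked and regenerated over a few passes. Everything here is a hypothetical stand-in, not the paper's actual implementation: `decoder`, `score_tokens`, `MASK_ID`, and the linear re-masking schedule are assumptions for illustration; the paper's auxiliary scorer would replace the decoder-confidence heuristic in `score_tokens`.

```python
import torch

MASK_ID = 0   # hypothetical id of the [MASK] token
VOCAB = 1000  # hypothetical vocabulary size
T = 12        # fixed target length; NAVC decodes all positions in parallel

def decoder(tokens, video_feats):
    # Stand-in for the parallel caption decoder: returns per-position
    # logits over the vocabulary. A real model would condition on
    # video_feats and the partially masked caption.
    return torch.randn(tokens.size(0), tokens.size(1), VOCAB)

def score_tokens(tokens, probs):
    # Baseline confidence: the decoder's own probability of each kept
    # token. The paper's external auxiliary scorer would augment or
    # replace this signal to flag inappropriate words more accurately.
    return probs.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)

def iterative_refine(video_feats, n_iters=4):
    # Start from a fully masked caption of fixed length T.
    tokens = torch.full((1, T), MASK_ID, dtype=torch.long)
    for it in range(n_iters):
        probs = decoder(tokens, video_feats).softmax(-1)
        tokens = probs.argmax(-1)        # predict every position at once
        conf = score_tokens(tokens, probs)
        # Linear schedule: re-mask the least confident words, fewer on
        # each pass, so later iterations only revisit likely mistakes.
        n_mask = int(T * (n_iters - 1 - it) / n_iters)
        if n_mask == 0:
            break
        remask = conf.topk(n_mask, largest=False).indices
        tokens[0, remask] = MASK_ID
    return tokens
```

Because each pass runs the decoder once over all positions, the number of decoder calls is the (small, fixed) number of refinement iterations rather than the caption length, which is the source of the speedup over word-by-word autoregressive decoding.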