Masked Non-Autoregressive Image Captioning

06/03/2019 · by Junlong Gao, et al.

Existing captioning models often adopt the encoder-decoder architecture, where the decoder uses autoregressive decoding to generate captions, such that each token is generated sequentially given the preceding generated tokens. However, autoregressive decoding results in issues such as sequential error accumulation, slow generation, improper semantics and lack of diversity. Non-autoregressive decoding has been proposed to tackle slow generation in neural machine translation, but it suffers from the multimodality problem due to its indirect modeling of the target distribution. In this paper, we propose masked non-autoregressive decoding to tackle the issues of both autoregressive and non-autoregressive decoding. In masked non-autoregressive decoding, we mask the input sequences with several ratios during training, and during inference we generate captions in parallel in several stages, from a totally masked sequence to a totally non-masked sequence, in a compositional manner. Experimentally, our proposed model preserves semantic content more effectively and generates more diverse captions.




1 Introduction

Image captioning aims at generating natural captions automatically for images, where most recent works adopt the encoder-decoder framework (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2017; Lu et al., 2017). In general, an encoder encodes images into visual features, and a decoder decodes the image features to generate captions. Most models use autoregressive decoders, such as LSTM (Hochreiter and Schmidhuber, 1997), which generate each token conditioned on the sequence of previously generated tokens. This process is not parallelizable, which results in slow generation, and is prone to sequential error accumulation once the preceding tokens are inappropriate during inference (Bengio et al., 2015; Lamb et al., 2016). It is also unable to model the inherent hierarchical structures of natural languages (Manning, 1999), which makes autoregressive decoders heavily favor the frequent n-grams in the training data (Bo et al., 2017). Hence, autoregressive decoders also risk improper semantics and a lack of diversity. Non-autoregressive decoding (Gu et al., 2018) has been proposed for neural machine translation (NMT) to tackle the slow generation of autoregressive decoding, but it inevitably introduces another problem, dubbed the "multimodality problem", due to the indirect modeling of the true target distribution.

In this paper, we propose masked non-autoregressive decoding for image captioning to address the problems of both autoregressive and non-autoregressive decoding. Motivated by the masking strategy in BERT (Devlin et al., 2018), during training we randomly mask the input sequences with certain ratios to train a masked language model that addresses the multimodality problem, and during inference the model generates captions in parallel in several stages, from a totally masked sequence to a totally non-masked sequence, in a compositional manner, which is faster and generates more semantically correct captions than autoregressive decoding. It is a visual-first, then linguistic generation process: the salient visual information is generated in the early stages, and linguistic information assists in forming the final caption via the masked language model. The experimental results suggest that masked non-autoregressive decoding can generate captions with richer semantics and more diversity than autoregressive decoding.

2 Background

2.1 Autoregressive decoding

Given an image I, image captioning aims to generate a token sequence y = (y_1, ..., y_T), y_t ∈ V, where V is the dictionary and T is the sequence length. In image captioning, the standard encoder-decoder architecture encodes an image I to the visual feature v using an encoder, and decodes v to a token sequence y using a decoder. In autoregressive decoding, the decoder generates each token conditioned on the sequence of tokens generated thus far. The decoder predicts a sequence starting with the token [BOS] and ending with the token [EOS]. This corresponds to the word-by-word nature of human language generation and effectively captures the conditional distribution of real captions. Formally, given an image I, the model factors the distribution over possible output sentences y into a chain of conditional probabilities with a left-to-right structure:

p(y | I) = ∏_{t=1}^{T} p(y_t | y_{<t}, v),

where p(y_t | y_{<t}, v) is the probability distribution of the token y_t given the previously generated tokens y_{<t} and the visual feature v. During training, given a ground-truth sequence y* = (y*_1, ..., y*_T), the model parameters are trained to minimize the conditional cross-entropy loss as follows:

L = -∑_{t=1}^{T} log p(y*_t | y*_{<t}, v).
The mainstream autoregressive decoders include recurrent neural networks (RNNs) (Vinyals et al., 2015), masked convolutional neural networks (CNNs) (Aneja et al., 2017) and the transformer (Vaswani et al., 2017). RNNs adopt hidden states to model the temporal state transitions over input tokens, and thus cannot be trained in parallel. Since the entire target sequence is known during training, the preceding target tokens can be used in the calculation of later conditional probabilities and their corresponding losses; CNNs and the transformer can exploit this parallelism during training. However, during inference, at each step a token is generated and then fed into the decoder to predict the next token. Therefore, all autoregressive decoders must remain sequential rather than parallel during inference. Moreover, sequential decoding is prone to copying token patterns from the training data to enhance grammatical accuracy, which easily causes semantic errors and a lack of diversity in the generated captions.

2.2 Non-autoregressive decoding

In order to tackle the slow generation of autoregressive decoders during inference, the non-autoregressive decoder was proposed to produce the whole sentence in parallel in only one step, allowing an order of magnitude lower latency. Non-autoregressive decoding was first proposed by Gu et al. (2018) for NMT. The model has an explicit likelihood function as follows and is trained using an independent cross-entropy loss:

p(y | I) = ∏_{t=1}^{T} p(y_t | v).

However, this naive approach is unable to achieve desirable results, since complete conditional independence results in a poor approximation to the true target distribution, which is dubbed the "multimodality problem". To tackle this issue, Gu et al. utilized "fertility" and several complicated tricks to indirectly model the true target distribution, which is still inferior to autoregressive decoding.
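The multimodality problem can be illustrated with a toy example (the per-position distributions below are invented for illustration): when each position takes its own argmax independently, the output can blend tokens from two different valid captions.

```python
def non_autoregressive_decode(position_dists):
    """One-step decoding: every position takes its own argmax
    independently, mirroring p(y | I) = prod_t p(y_t | v)."""
    return [max(dist, key=dist.get) for dist in position_dists]

# Two plausible captions, "a dog runs" and "two dogs run", share probability
# mass; independent per-position argmax can blend them into an inconsistent mix.
dists = [
    {"a": 0.51, "two": 0.49},
    {"dog": 0.60, "dogs": 0.40},
    {"run": 0.55, "runs": 0.45},  # "run" agrees with "two dogs", not "a dog"
]
```

On this toy input, `non_autoregressive_decode(dists)` returns `['a', 'dog', 'run']`, a mixture of the two modes that neither underlying caption would produce.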

3 Masked non-autoregressive image captioning

In this paper, in order to tackle the problems of autoregressive and non-autoregressive decoding, we introduce a novel image captioning model, Masked Non-autoregressive Image Captioning (MNIC), which generates captions through masked non-autoregressive decoding. MNIC can produce the entire output caption in parallel, with richer semantics, in a constant number of stages that is much smaller than the sequence length.

3.1 Model architecture

The model architecture of MNIC is modified from the traditional transformer (Vaswani et al., 2017), which comprises an encoder and a decoder. In this work, the encoder and decoder are devised as follows.

In image captioning, the input of the encoder is an image, which is low-level semantic data rather than a high-level semantic sequence as in NMT. Therefore, we adopt a different encoder, which first extracts feature maps from CNNs or object features from object detection models, and then maps the dimension of the input visual features to the model dimension of the decoder using a Multi-Layer Perceptron (MLP). As for the decoder, we adopt the original decoder of the transformer with necessary modifications. In order to generate captions non-autoregressively and in parallel, we remove the autoregressive mask in the masked multi-head attention layer of each decoder layer, such that the self-attention layers in the decoder allow each position to attend to all positions.

3.2 Masked non-autoregressive decoding

Figure 1: An overview of the MNIC paradigm under a given masking ratio set. The final caption is generated through several stages from an entirely masked sequence and the visual feature generated by the encoder. In each stage, a masked sequence is fed into the decoder together with the visual feature, and the output sequence is then selectively masked again and fed to the decoder in the next stage until the last one. It is worth noting that the masking ratios of the input masked captions across stages correspond to the masking ratio set used during training. The decoders in blue boxes are identical.

In masked non-autoregressive decoding, during the training process we randomly mask the input sequence with several ratios to train a masked bidirectional language model, and during the inference process the model generates captions in parallel in several stages, from a totally masked sequence to a totally non-masked sequence, in a compositional manner, which is faster and generates more semantically correct captions than autoregressive decoding. Herein, the masked tokens are replaced with the [MASK] token, the ratios form a ratio set, and the masked input sequence ŷ_r is generated by randomly masking the target sequence y with masking ratio r. Therefore, given the visual feature v and the masked input ŷ_r, the masked language model has a likelihood function over the entire token sequence, which can be trained using the cross-entropy loss:

p(y | I) = ∏_{t=1}^{T} p(y_t | ŷ_r, v).
When the masking ratio is 1 (i.e., the input is totally masked), the model is forced to train a fully non-autoregressive decoder, and thus non-autoregressive decoding is a special case of our model. When the masking ratio is small, the model is a bidirectional language model, which is shown in BERT (Devlin et al., 2018) to be more powerful than a unidirectional model.

Here, we use an example to further clarify the training and inference phases. Assume a given sequence length and a set of masking ratios. During training, input sequences masked at each of these ratios, with the masked positions randomly selected, are fed into the model. During inference, as depicted in Fig. 1, first the entirely masked sequence (ratio 1) is fed into the model to generate a coarse caption; then the coarse caption is processed by replacing its less informative tokens with the [MASK] token to produce a masked coarse caption, which is fed into the model to generate a finer caption. Finally, the finer caption is masked again to produce a masked finer caption, which is fed into the model to generate the complete sequence. The distributions of the input data in the training and inference phases are consistent. Therefore, an entire token sequence can be generated in parallel in only a few stages, similar to traditional non-autoregressive decoders but at the cost of several times more computation. However, using masked input sequences trains a bidirectional language model that directly models the true target distribution and thus addresses the multimodality problem of non-autoregressive decoding, in analogy to BERT.

However, there are several differences between MNIC and BERT (Devlin et al., 2018). BERT has only one masking ratio, which is relatively small, in order to train deep bidirectional representations for natural language understanding. In contrast, MNIC is designed to generate complete token sequences from scratch for natural language generation. As such, various masking ratios covering a wide range have been adopted.

We also replace some proportion of tokens with random words rather than the [MASK] token or the ground-truth token. In our experiments, since captions are relatively short, we simply replace one word in each not-totally-masked input sequence with a random word. Using random words can enhance the contextual representation of tokens and improve the robustness of the inference process by introducing noise tokens during training, as the model easily generates wrong tokens in the early stages of inference.
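A minimal sketch of this training-time masking with noise injection; the vocabulary and masking ratio below are placeholders, not the paper's actual settings.

```python
import random

MASK = "[MASK]"

def mask_for_training(tokens, ratio, vocab, rng):
    """Mask a `ratio` fraction of positions at random; if the sequence is
    not totally masked, additionally replace one unmasked token with a
    random vocabulary word to inject noise."""
    out = list(tokens)
    n_mask = int(len(tokens) * ratio)
    masked = set(rng.sample(range(len(tokens)), n_mask))
    for i in masked:
        out[i] = MASK
    unmasked = [i for i in range(len(tokens)) if i not in masked]
    if ratio < 1.0 and unmasked:
        out[rng.choice(unmasked)] = rng.choice(vocab)
    return out
```

One such masked copy is produced per ratio in the ratio set, so each training caption yields several differently corrupted inputs.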

3.3 Analysis

Here we provide more analyses and discussions regarding what the model learns with different ratios for masking sequences and what the model predicts in different stages during inference. We further discuss the intrinsic differences between autoregressive and masked non-autoregressive decoding.

During training, when the input data are masked with a large ratio, it is difficult to force the model to predict complete and well-organized captions given so few clues. One potential reason is that expressions in the training data are protean: the captions are full of twists and turns. Instead, the model learns words that can easily be inferred from the visual features and words of high frequency for the early stages. Therefore, it is difficult to learn a non-autoregressive decoder for tasks whose input data lack high-level semantic organization (e.g., image captioning). By contrast, it could be easier for tasks whose input data are well organized at a high semantic level and whose goal is to generate sequences that merely restate the input (e.g., machine translation). With the input data masked with a smaller ratio, the model is given more clues to reconstruct the entire sequence and is trained as a bidirectional language model, such that it can be expected to generate a grammatically correct sentence.

During inference, the masked non-autoregressive decoding process well reflects what the model learns during training. In the early stages the model tends to generate a caption containing high-frequency tokens (e.g., "a", "on") and salient visual clues of the image (e.g., objects, color nouns and important verbs) with poor language organization, and in later stages the model can generate a semantically and grammatically correct caption by using the trained bidirectional language model to select the most suitable words to connect the sub-sequences on both sides. As illustrated in Fig. 1, the output sequence of the first stage is disordered in terms of grammatical rules, but includes some important keywords such as "two", "ducks", "swimming", "water" and "green", and the later stages select some keywords to gradually form a semantically and grammatically correct caption.

The difference between autoregressive and masked non-autoregressive decoding during inference is that masked non-autoregressive decoding is naturally closer to human language generation. More specifically, human beings first generate the keywords of the visual scene in the brain, then choose other words to connect the different pieces and compose the entire sentence obeying linguistic rules. It is a visual-first, then linguistic generation process: the visual information lays the foundation of the caption, and the linguistic information assists in forming the final caption in a compositional rather than sequential manner, which is better at preserving meaningful semantic information. Moreover, the stages of iterative selection and generation give the inference process a deliberation ability to repeatedly adjust the caption according to the visual feature and generate a complete caption in the last stage. For example, in Fig. 1, the first output caption contains "two ducks and ducks", the second caption modifies this to "two couple of ducks", and the last caption finally adjusts it to "a couple of ducks", since only two ducks exist in the image. Last but not least, masked non-autoregressive decoding generates the entire sentence at once in each stage, and thus the quality of the preceding tokens does not significantly influence that of the later ones, which fundamentally eases the sequential error accumulation of autoregressive decoding; masked non-autoregressive decoding does, however, suffer from error accumulation between stages. By contrast, autoregressive decoding is a left-to-right, word-by-word generation process, so the tokens generated at later steps heavily depend on those of the preceding steps, which is prone to sequential error accumulation once the preceding tokens are inappropriate. Worse still, it has only one chance to generate the entire caption, without the capability to adjust preceding inappropriate tokens.
Therefore, autoregressive decoding is reasonably good at maintaining fluency but struggles to accurately convey the rich salient semantic content of images.

3.4 Inference rules

It is crucial to preserve the most informative tokens, and mask the other positions, in the caption generated by each stage in order to produce the newly masked input sequence. In this paper, we adopt a straightforward approach: tokens that are not in the high-frequency token set, and that are of high probability and not repetitive with the tokens selected thus far, are given high preference for preservation. In Fig. 1, for example, we preserve "two", "ducks", "swimming" and "water" from the first-stage output sequence. Moreover, the generated captions of the last stage are post-processed by choosing tokens that are not repetitive with the previously selected ones.
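A sketch of this preservation rule, assuming per-position probabilities are available from the decoder's softmax (the example tokens and values below are invented):

```python
def positions_to_preserve(tokens, probs, n_keep, high_freq=("a", "on")):
    """Rank positions by probability; keep up to n_keep whose tokens are
    neither in the high-frequency set nor repeats of already-kept tokens.
    All other positions are re-masked for the next stage."""
    order = sorted(range(len(tokens)), key=lambda i: -probs[i])
    kept, seen = [], set()
    for i in order:
        if tokens[i] in high_freq or tokens[i] in seen:
            continue
        kept.append(i)
        seen.add(tokens[i])
        if len(kept) == n_keep:
            break
    return sorted(kept)

tokens = ["two", "ducks", "on", "ducks", "water"]
probs = [0.9, 0.8, 0.95, 0.7, 0.6]
```

On this toy stage output, `positions_to_preserve(tokens, probs, 3)` keeps positions 0, 1 and 4: "on" is filtered out as high-frequency and the second "ducks" as a repeat, so those positions would be re-masked.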

Regarding the determination of caption sequence lengths during inference, we first compute the distribution of caption lengths in the training data, and then draw a random length for each caption from this distribution. Subsequently, that many [MASK] tokens are fed to the model, such that a complete caption can finally be decoded. An alternative is to directly set a fixed sequence length for all images; the model is then automatically forced to generate coarser or finer captions depending on the length, but with similar semantic information.
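The length-sampling rule can be sketched as follows (the training lengths below are placeholders):

```python
import random
from collections import Counter

def sample_caption_length(training_lengths, rng):
    """Draw a caption length from the empirical distribution of lengths
    observed in the training captions."""
    counts = Counter(training_lengths)
    lengths = sorted(counts)
    weights = [counts[n] for n in lengths]
    return rng.choices(lengths, weights=weights, k=1)[0]
```

A caption of the sampled length is then initialized as that many [MASK] tokens and decoded in stages, matching the inference procedure of Section 3.2.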

Moreover, when the output captions of the last stage are masked with the second-largest ratio and fed into the model again to start a new round of stages, the generated captions could in principle become better and finer, since the model starts the new round from more accurate and informative masked sequences. Experimentally, we observe that two rounds are slightly better than one, and the performance with more than two rounds tends to saturate.

4 Experiments

4.1 Dataset and implementation details

To evaluate our captioning model, we use the standard MSCOCO dataset (Lin et al., 2014). We adopt the Karpathy split (Karpathy and Li, 2015), which divides the images into training, validation and offline-testing portions, with several reference captions per image. We preprocess all captions by lowercasing them, truncating captions longer than 16 words, and replacing words that occur only a few times with the [UNK] token. In terms of implementation details, we use the visual features extracted from Faster R-CNN (Ren et al., 2017) as in TopDown (Anderson et al., 2017). Since the decoder stacks several layers, we calculate the cross-entropy losses at multiple layers of the stack and combine them into a weighted-average total loss. In this manner, we force the whole decoder stack to converge toward predicting the target tokens, which speeds up convergence. The Adam optimizer (Kingma and Ba, 2014) is adopted along with warmup (Vaswani et al., 2017) to adjust the learning rate. During inference, the token set of high frequency is {"a", "on"}, which accounts for a considerable proportion of all tokens.

4.2 Experimental settings

The model architectures of AIC, NAIC and MNIC are identical, as they all adopt the transformer decoder. In addition, we conduct an extensive ablation study over different variants: when random words are introduced into the input sequences, the "Noise" tag is ticked, and we explore the performance of different combinations of masking ratios during training and inference. All models use the same hyper-parameters and training strategy, and are evaluated using greedy decoding. To compare with state-of-the-art captioning models, we also report the results of models trained with the cross-entropy loss in their papers. Adaptive (Lu et al., 2017), TopDown (Anderson et al., 2017) and Skeleton (Wang et al., 2017) adopt LSTM as the decoder and use beam search, which generally performs better than greedy decoding. Adaptive and TopDown use different kinds of attention mechanisms; Skeleton decomposes the original caption into a skeleton sentence and attributes using two LSTMs; CompCap (Dai et al., 2018) introduces a novel compositional paradigm to factorize semantics and syntax.

Models B1 B2 B3 B4 MT RG CD SP Latency SpeedUp
Adaptive (Lu et al., 2017) 74.2 58.0 43.9 33.2 26.6 - 108.5 19.5 - -
TopDown (Anderson et al., 2017) 77.2 - - 36.2 27.0 56.4 113.5 20.3 - -
Skeleton (Wang et al., 2017) 74.2 57.7 44.0 33.6 26.8 55.2 107.3 19.6 - -
CompCap (Dai et al., 2018) - - - 25.1 24.3 47.8 86.2 19.9 - -
AIC 74.0 57.3 42.9 31.8 26.9 54.7 106.6 20.2 171ms 1.00
NAIC 72.5 51.4 34.2 22.2 22.6 53.0 79.7 16.7 18ms 9.50
MNIC (R1) 75.4 57.7 42.6 30.9 27.5 55.6 108.1 21.0 61ms 2.80
MNIC (R2) 75.5 57.9 43.0 31.5 27.5 55.7 108.5 21.1 103ms 1.66
Table 1: Performance comparisons with different evaluation metrics in offline testing. The same masking ratio set is used for MNIC during training and inference, where R1 and R2 indicate the first and second round during inference, respectively. The sequence length for all images of MNIC and NAIC is fixed at a value that occupies a large portion of the length distribution. Latency is computed as the time to decode a single sentence without minibatching, and the values are averaged over the whole offline test set. The decoding is implemented on a single NVIDIA GeForce GTX 1080 Ti.

4.3 Experiment results

General comparisons

We compare the performance of different captioning models on the test portion of the Karpathy split of MSCOCO using BLEU 1-4 (B1, B2, B3, B4) (Papineni et al., 2002), METEOR (MT) (Banerjee and Lavie, 2005), ROUGE (RG) (Lin, 2004), CIDEr (CD) (Vedantam et al., 2015) and SPICE (SP) (Anderson et al., 2016). In particular, SPICE focuses on semantic analysis and correlates better with human judgment, while the other metrics favor frequent training n-grams and measure overall sentence fluency, which advantages sequential-decoding-based methods. As shown in Table 1, MNIC with either number of rounds obtains the best results on SP and MT, but is inferior to LSTM-based autoregressive decoders on some other metrics. AIC and TopDown use the same visual features and both adopt autoregressive decoding; however, TopDown is better than AIC on all metrics, which suggests that an LSTM-based decoder may be more suitable than a transformer-based decoder for image captioning. Nevertheless, although MNIC adopts a transformer-based decoder, its SPICE score largely surpasses that of TopDown and AIC, which indicates that the compositional manner of MNIC preserves semantic content more effectively than the sequential manner of TopDown and AIC. In terms of decoding methods, the masked non-autoregressive decoding of MNIC consistently outperforms the non-autoregressive decoding of NAIC, and outperforms the autoregressive decoding of AIC on most metrics except for a slight drop on B4, which demonstrates that MNIC addresses the multimodality problem of non-autoregressive decoding and exhibits the advantage of a bidirectional language model over the unidirectional language model of AIC. Comparing compositional paradigms, MNIC largely outperforms CompCap on all metrics, which indicates that MNIC better balances the semantics and fluency of the generated captions.

When comparing the latency of the models, NAIC achieves a speedup of around a factor of 10 over AIC, while MNIC (R1) and MNIC (R2) achieve factors of 2.80 and 1.66, respectively. It is worth mentioning that NAIC decodes in only one stage, whereas MNIC uses several stages per round for the adopted ratio set. As such, the smaller the number of stages, the more significant the speedup.

No. Training set Inference Set Noise B1 B4 MT RG CD SP
73.0 29.0 26.8 54.0 102.4 20.9
72.8 28.7 26.8 53.8 100.8 20.9
73.0 28.8 26.9 54.0 102.0 20.9
73.1 29.2 27.0 54.4 102.2 21.0
74.0 30.2 27.2 54.8 103.8 21.0
74.0 29.6 26.8 54.6 101.7 20.5
75.2 30.5 26.4 54.8 101.1 20.2
74.5 28.9 25.6 54.1 97.3 19.2
73.5 29.7 26.9 54.4 103.1 21.0
73.7 29.9 26.7 54.6 101.2 20.1
Table 2: Ablation performance in offline testing. Different masking ratio sets are adopted during training and inference. The "Noise" tag is ticked if random words are introduced into input sequences during training. The sequence lengths of MNICs are randomly set using an identical random seed.
Figure 2: Investigations of the influences of different stages and lengths in terms of SP and CD.

Ablation study and analysis

In Table 2, comparing the corresponding rows, MNIC with Noise outperforms MNIC without Noise on all metrics, since introducing random words into the input sequence strengthens the inference process and leads to a significant performance improvement. Regarding the different combinations of masking ratios in Table 2, one configuration achieves the best overall score. To investigate the influence of the smallest masking ratio, we conduct a series of experiments; interestingly, the metric scores first increase, reach an overall peak, and then decrease. The results suggest that a ratio set performs best when its smallest masking ratio is intermediate, while both extremes perform worse. The results could be influenced by the ratio set used for training as well as the ratio set used for inference, so we conduct additional experiments. With the same inference ratio set, one training ratio set outperforms another, indicating that the former learns a better bidirectional language model. With the same training ratio set, one inference ratio set outperforms another, indicating that the former helps to generate sequences of higher quality. The same holds if the smallest ratio in the ratio set is sufficiently large.

Skeleton (Wang et al., 2017) TopDown (Anderson et al., 2017) CompCap (Dai et al., 2018) AIC MNIC
Novel Caption (%) 52.24 45.05 90.48 71.16 82.54
Unique Caption (%) 66.96 61.58 83.86 79.32 91.62
Vocabulary Usage (%) - 7.97 9.18 12.53 11.62
Table 3: Illustration of the diversity of different methods from different aspects, where the scores of Skeleton are taken from (Wang et al., 2017), and those of TopDown (Anderson et al., 2017) and CompCap (Dai et al., 2018) are taken from (Dai et al., 2018). The sequence lengths of MNIC are randomly set.
GT: Cows grazing on the grass in front of a building.
AIC: three cows grazing in a field.
MNIC1: the cows are grazing in the green field.
MNIC2: a group of cows standing on top of a lush green hillside.
GT: A black cat laying on a white lap top.
AIC: a cat laying on top of a computer desk.
MNIC1: a cat is sitting on a computer desk.
MNIC2: a black cat laying on a desk in front of a computer monitor.
GT: A couple of men riding horses down a street with tall buildings.
AIC: a couple of horses down a street.
MNIC1: a couple of men riding horses down a city street.
MNIC2: a couple of men riding on the backs of horses down a street.
Figure 3: Example of ground truth captions, the generated captions of AIC and MNIC using different sequence lengths.

Performance in different stages

In Fig. 2, we report the results of each stage in both rounds of MNIC under the corresponding setting in Table 2, where the output sequence of the last stage of round 1 (1R) is masked again and then fed to the model to start round 2 (2R). The scores within round 1 or round 2 continually improve across stages on both metrics, which indicates that the sequences generated in the early stages are of poor quality but are continually adjusted and improved in the later stages thanks to the bidirectional language model. It is also worth noting that the 2nd stage of round 1 and of round 2 has the same input masking ratio, yet the latter largely outperforms the former. This implies that the sequences generated in the last stage of round 1 inject more important semantic information into the model than those of the first stage of round 1, even though most tokens are masked. The final result of round 2 is only slightly superior to round 1, suggesting that a single round already achieves a good trade-off between performance and computational complexity, which coincides with the observations in Table 1.

Performance in different sequence lengths

In Fig. 2, we compare the performance of different fixed sequence lengths (denoted MNIC (F)) against random sequence lengths (denoted MNIC (R)) and AIC. In general, MNIC (F) outperforms MNIC (R) and AIC in terms of SP and CD. Fig. 2 suggests that the SP score improves continually as the sequence length increases, which demonstrates that when the sequence length is short, MNIC tends to generate a coarse caption containing the most salient semantic content, analogous to AIC; when a long sequence length is set, MNIC generates finer captions containing more semantic information, as illustrated by the qualitative examples in Fig. 3. By contrast, the CD score favors intermediate sequence lengths, since intermediate lengths make it easier to generate syntactically and semantically correct captions.

Diversity study

Since masked non-autoregressive decoding is a visual-first, then linguistic generation process that better preserves semantics, it can be expected, in contrast to autoregressive decoding, to generate more diverse captions. To analyze the diversity of the generated captions, we compute the novel caption percentage, the unique caption percentage and the vocabulary usage, which respectively measure the percentage of captions not seen in the training data, the percentage of captions that are unique among all generated captions, and the percentage of vocabulary words used in the generated captions. In Table 3, MNIC and CompCap clearly achieve better results, which suggests that compositional methods generate more diverse captions than sequential methods such as Skeleton (Wang et al., 2017), TopDown (Anderson et al., 2017) and AIC, even though Skeleton decomposes generation into a skeleton LSTM and an attribute LSTM.
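The three diversity measures can be computed as follows (the toy captions and vocabulary below are invented for illustration):

```python
def diversity_metrics(generated, training_captions, vocab):
    """Return (novel, unique, usage) as fractions: captions unseen in
    training, captions unique among the generated set, and vocabulary
    words that appear in the generated captions."""
    train_set = set(training_captions)
    novel = sum(c not in train_set for c in generated) / len(generated)
    unique = len(set(generated)) / len(generated)
    used = {w for c in generated for w in c.split()}
    usage = len(used & set(vocab)) / len(vocab)
    return novel, unique, usage

generated = ["a cat", "a cat", "two dogs", "blue bird"]
training = ["a cat", "a dog"]
vocab = ["a", "cat", "two", "dogs", "blue", "bird", "green", "red"]
```

On this toy data, `diversity_metrics(generated, training, vocab)` gives (0.5, 0.75, 0.75): half the generated captions are novel, three of the four are unique, and six of the eight vocabulary words are used.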

5 Conclusions

In this paper, we propose a novel decoding method for image captioning. In contrast to typical methods that generate captions using autoregressive or non-autoregressive decoding, the novelty of this paper lies in training a masked language model by masking certain ratios of the input data and generating captions in a compositional instead of sequential manner. As such, the proposed masked non-autoregressive decoding can effectively tackle the issues of sequential error accumulation, slow generation, improper semantics and lack of diversity in autoregressive decoding, as well as the multimodality problem of non-autoregressive decoding. The experimental results provide evidence for the effectiveness and efficiency of the proposed scheme.


  • Vinyals et al. [2015] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition, pages 3156–3164, 2015.
  • Xu et al. [2015] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057, 2015.
  • Anderson et al. [2017] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and vqa. arXiv preprint arXiv:1707.07998, 2017.
  • Lu et al. [2017] Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 6, page 2, 2017.
  • Hochreiter and Schmidhuber [1997] S Hochreiter and J Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
  • Bengio et al. [2015] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages 1171–1179, 2015.
  • Lamb et al. [2016] Alex M Lamb, Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. Professor forcing: A new algorithm for training recurrent networks. In Advances In Neural Information Processing Systems, pages 4601–4609, 2016.
  • Manning [1999] Christopher D. Manning. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
  • Dai et al. [2017] Bo Dai, Dahua Lin, Raquel Urtasun, and Sanja Fidler. Towards diverse and natural image descriptions via a conditional gan. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
  • Gu et al. [2018] Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. Non-autoregressive neural machine translation. In International Conference on Learning Representations, 2018.
  • Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
  • Aneja et al. [2017] Jyoti Aneja, Aditya Deshpande, and Alexander Schwing. Convolutional image captioning. 2017.
  • Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
  • Lin et al. [2014] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
  • Karpathy and Li [2015] Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. In Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
  • Ren et al. [2017] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137–1149, 2017.
  • Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Wang et al. [2017] Yufei Wang, Zhe Lin, Xiaohui Shen, Scott Cohen, and Garrison W. Cottrell. Skeleton key: Image captioning by skeleton-attribute decomposition. In Computer Vision and Pattern Recognition, 2017.
  • Dai et al. [2018] Bo Dai, Sanja Fidler, and Dahua Lin. A neural compositional paradigm for image captioning. In Advances in Neural Information Processing Systems, 2018.
  • Papineni et al. [2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
  • Banerjee and Lavie [2005] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72, 2005.
  • Lin [2004] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out, 2004.
  • Vedantam et al. [2015] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575, 2015.
  • Anderson et al. [2016] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In European Conference on Computer Vision, pages 382–398. Springer, 2016.