Most existing captioning models learn an autoregressive model, either LSTM- or transformer-based, in which explicit control of the generation process is difficult. In particular, the length of the caption is determined only after the end-of-sentence (eos) token is generated; it is hard to know or control the length beforehand. However, length can be an important property of a caption, and a model that allows control of output length provides more options for end users of the captioning system. By controlling the length, we can influence the style and descriptiveness of the caption: short, simple captions vs. longer, more complex and detailed descriptions of the same image.
Previous work includes captioning models that allow control of other aspects of the output. Cornia et al. control the caption by supplying a different set of image regions; Deshpande et al. generate captions controlled by assigned part-of-speech (POS) tags. Length control has been studied in abstractive summarization [11, 8, 17], but to our knowledge not in the context of image captioning.
To control the length of the generated caption, we build our model by borrowing existing ideas from summarization work and injecting length information into the model. To generate captions without an explicit length specification, we add a length prediction module that predicts a suitable length for the input image at hand. We show that the length models can successfully generate captions ranging from 7 up to 28 words long. We also show that the length models perform better than a non-controlled model (even with special decoding methods) when asked to generate long captions.
We consider repurposing existing methods from summarization for captioning. In general, the length is treated as an intermediate variable: P(c|I) = P(l|I) P(c|I, l), where c, I, and l are the caption, image, and length, respectively. We describe how we build P(c|I, l) and P(l|I) below. Note that the following methods can be used in conjunction with any standard captioning model.
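The factorization above amounts to a two-stage decoding pipeline: predict a length from the image, then decode conditioned on it. A minimal sketch follows; `predict_length` and `generate_with_length` are illustrative stubs, not the actual model components.

```python
# Sketch of the factorization P(c|I) = P(l|I) * P(c|I, l):
# first predict a length from the image, then decode conditioned on it.
# predict_length and generate_with_length are illustrative stubs.

def predict_length(image_features):
    # Stub for the length predictor P(l|I); a real model classifies over
    # possible lengths from pooled region features.
    return 9

def generate_with_length(image_features, length):
    # Stub for the length-conditioned decoder P(c|I, l).
    words = ["a", "motorcycle", "parked", "on", "the", "side",
             "of", "a", "road"]
    return words[:length]

def caption(image_features, desired_length=None):
    # If no length is specified, fall back to the predicted one.
    length = desired_length if desired_length is not None else predict_length(image_features)
    return generate_with_length(image_features, length)
```

Either stage can be swapped out independently: the same decoder serves both a user-specified length and a predicted one.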
LenEmb We embed the remaining length (the number of words still to be generated) at each time step into a vector that is the same size as the word embedding. The word embedding of the previous word, with the length embedding added (rather than concatenated, as in prior work), is then fed as the input to the rest of the LSTM model.
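The LenEmb input construction can be sketched with plain Python lists standing in for tensors; the toy embedding tables below are hypothetical, not learned.

```python
# LenEmb input sketch: at step t with desired length l, the embedding of the
# remaining length (l - t) is added element-wise to the previous word's
# embedding. Toy 4-dimensional embeddings stand in for learned tables.

EMB_DIM = 4

def length_embedding(remaining):
    # Hypothetical length-embedding table: one distinct vector per value.
    return [float(remaining)] * EMB_DIM

def word_embedding(word):
    # Hypothetical word-embedding table (keyed by word length, for illustration).
    return [float(len(word))] * EMB_DIM

def lstm_input(prev_word, desired_len, t):
    # Addition (not concatenation) keeps the input size equal to EMB_DIM,
    # so the rest of the LSTM is unchanged.
    w = word_embedding(prev_word)
    l = length_embedding(desired_len - t)
    return [wi + li for wi, li in zip(w, l)]
```

Because the input dimensionality is unchanged, this drops into an existing captioning model without altering the LSTM itself.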
Learning to predict length We add a length prediction module that predicts a length from the image features (in this case the averaged region features), for use when no desired length is provided. We treat this as a classification task and train it with the reference caption lengths.
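A minimal sketch of such a classifier, assuming averaged region features followed by a single linear layer; the weights here are illustrative rather than learned.

```python
# Length-predictor sketch: average the region features, apply a linear
# layer, and take the argmax over length classes.

def mean_pool(region_features):
    # region_features: list of feature vectors, one per image region.
    dim = len(region_features[0])
    return [sum(f[d] for f in region_features) / len(region_features)
            for d in range(dim)]

def predict_length(region_features, weights):
    # weights: one row of linear-layer weights per length class.
    pooled = mean_pool(region_features)
    scores = [sum(w * x for w, x in zip(row, pooled)) for row in weights]
    return max(range(len(scores)), key=scores.__getitem__)
```

In training, the class index is the reference caption's length and the usual cross-entropy loss applies.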
We also implement the Marker model from the summarization literature. The desired length is fed as a special token at the beginning of generation, as the "first word". At training time, the model learns to predict the length at the first step in the same way as any other word (no extra length predictor is needed). At test time, the length token is sampled like any other word if no desired length is specified.
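The sequence construction for the Marker model can be sketched as follows; the `<len:k>` token naming is an assumption for illustration.

```python
# Marker-model sketch: the desired length becomes a special first token, so
# the decoder handles length the same way as ordinary words.

def marker_input_sequence(caption_words):
    # Training: prepend the reference caption's length as the first token.
    return ["<len:%d>" % len(caption_words)] + caption_words

def strip_marker(tokens):
    # Decoding: the first token is the length marker; the caption proper is
    # everything after it.
    return tokens[1:]
```

At test time, supplying the marker token fixes the target length, while sampling it instead recovers ordinary, unconstrained generation.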
We use COCO to evaluate our models, following the standard Karpathy train/val/test split. The base captioning model is Att2in, and the image features are bottom-up features. For evaluation, we use BLEU, METEOR, ROUGE, CIDEr, SPICE, and bad ending rate [9, 1]. We train the models with cross-entropy loss. Unless specified otherwise, decoding is beam search with beam size 5, and evaluation is on the Karpathy test set.
3.1 Generation with predicted lengths
For a fair comparison on the general image captioning task, the length models predict the length and then generate the caption conditioned on that prediction. Results in Table 1 show that the length models are comparable to the base model.
Length distribution (Fig. 1) While the scores are close, the length distributions differ considerably. Length models tend to generate longer captions than normal autoregressive models; however, neither is close to the real caption length distribution ("test" in the figure).
3.2 Generation with controlled lengths
As a baseline, we use the fixLen method of Kikuchi et al., in which the output probabilities are manipulated to suppress the eos token until the desired length is reached.
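This probability manipulation can be sketched as a mask over the per-step logits; the exact form in the original work may differ, but the idea is the following.

```python
# fixLen sketch: before the desired length is reached, the eos logit is set
# to -inf; at the desired length, every logit except eos is set to -inf,
# forcing termination at exactly the requested length.

NEG_INF = float("-inf")

def fixlen_mask(logits, eos_id, step, desired_len):
    # logits: per-token scores at the current step (step = words generated).
    masked = list(logits)
    if step < desired_len:
        masked[eos_id] = NEG_INF          # forbid eos too early
    else:
        masked = [NEG_INF] * len(masked)  # allow only eos at the target length
        masked[eos_id] = logits[eos_id]
    return masked
```

Note that this controls length purely at decoding time; the model itself is never told the target length, which is why it tends to degrade for lengths far from the training distribution.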
The original CIDEr-D favors short, generic captions because it averages the similarity between the generated caption and each individual reference. We report a modified CIDEr (mCIDEr): 1) removing the length-penalty term of CIDEr-D; 2) combining the n-gram counts from all the reference captions to compute similarity.
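The second modification can be sketched as follows. This toy version skips CIDEr's TF-IDF weighting and uses unigrams only, so it is an illustration of the pooling idea, not the actual metric.

```python
# mCIDEr-style pooling sketch: combine n-gram counts across all references
# and score a single cosine similarity against the candidate, with no
# length-penalty term. TF-IDF weighting is omitted for brevity.
from collections import Counter
import math

def ngram_counts(words, n=1):
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pooled_similarity(candidate, references):
    # Pool counts from all references into one vector and compare once,
    # instead of averaging per-reference similarities.
    pooled = Counter()
    for ref in references:
        pooled += ngram_counts(ref)
    return cosine(ngram_counts(candidate), pooled)
```

Pooling rewards a long caption that covers details spread across different references, whereas per-reference averaging rewards matching each (typically short) reference individually.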
Fluency The high bad ending rate of Att2in indicates that it cannot generate fluent sentences under this decoding scheme; increasing the beam size lowers the bad ending rate. Among the length models, Marker performs well for lengths under 20 but collapses beyond that, while LenEmb performs consistently well.
Accuracy The length models outperform the base model at longer lengths. The base model performs better between 10 and 16 words, the most common lengths in the dataset. For larger lengths, LenEmb performs best on both mCIDEr and SPICE, indicating that it covers more of the information in the reference captions.
Controllability We use the mean squared error between the desired length and the actual length (LenMSE) to evaluate controllability. When using predicted lengths, the length models hit the predicted length exactly (Table 1). When a desired length is fed in, Fig. 2 shows that LenEmb obeys the length precisely, while Marker fails for long captions, probably due to poor long-term dependency.
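The LenMSE metric itself is straightforward; a minimal implementation:

```python
# LenMSE sketch: mean squared error between desired and realized caption
# lengths; 0 means the model always hits the requested length exactly.

def len_mse(desired_lengths, captions):
    errs = [(d - len(c)) ** 2 for d, c in zip(desired_lengths, captions)]
    return sum(errs) / len(errs)
```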
Qualitative results (Fig. 3) show that the LenEmb model, when generating longer captions, changes the caption structure and covers more detail, while the base model tends to reuse the same prefix across lengths and to repeat itself. More results can be browsed online.
Motorcycle image:

| Length | Caption |
| 7 | a motorcycle parked on a dirt road |
| 10 | a motorcycle is parked on the side of a road |
| 16 | a motorcycle parked on the side of a dirt road with a fence in the background |
| 22 | a motorcycle parked on the side of a dirt road in front of a fence with a group of sheep behind it |
| 28 | a motorcycle is parked in a dirt field with a lot of sheep on the side of the road in front of a fence on a sunny day |

| Length | Caption |
| 7 | a motorcycle parked on a dirt road |
| 10 | a motorcycle parked on a dirt road near a fence |
| 16 | a motorcycle parked on a dirt road in front of a group of people behind it |
| 22 | a motorcycle parked on a dirt road in front of a group of people on a dirt road next to a fence |
| 28 | a motorcycle parked on a dirt road in front of a group of people on a dirt road in front of a group of people in the background |

Airplane image:

| Length | Caption |
| 7 | an airplane is parked at an airport |
| 10 | an airplane is parked on the tarmac at an airport |
| 16 | an airplane is parked on a runway with a man standing on the side of it |
| 22 | an airplane is parked on a runway with a man standing on the side of it and a person in the background |
| 28 | an airplane is parked on the tarmac at an airport with a man standing on the side of the stairs and a man standing next to the plane |

| Length | Caption |
| 7 | a plane is sitting on the tarmac |
| 10 | a plane is sitting on the tarmac at an airport |
| 16 | a plane that is sitting on the tarmac at an airport with people in the background |
| 22 | a plane is sitting on the tarmac at an airport with people in the background and a man standing in the background |
| 28 | a plane is sitting on the tarmac at an airport with people in the background and a man standing on the side of the road in the background |
3.3 Failure on CIDEr optimization
We apply SCST training to the length models; however, it does not work well. While the CIDEr scores improve, the generated captions tend to be less fluent, with bad endings (e.g. ending in "with a") or repeated words (e.g. "a a").
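For reference, the core of SCST is a self-critical baseline: the training signal for a sampled caption is its reward minus the reward of the greedily decoded caption. A minimal sketch, with the reward function left as a stand-in for CIDEr:

```python
# SCST sketch: the policy-gradient "advantage" is the reward of a sampled
# caption minus the reward of the greedy caption (the self-critical
# baseline). The reward function here is a stand-in for CIDEr.

def self_critical_advantage(sampled, greedy, reward):
    # Positive advantage reinforces the sampled caption; negative suppresses it.
    return reward(sampled) - reward(greedy)
```

Since the advantage chases the reward alone, any metric that under-penalizes disfluency (such as CIDEr without a fluency term) can push the model toward the degenerate outputs described above.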
We presented two captioning models that can control the length and showed their effectiveness at generating good captions of different lengths. The code will be released at https://github.com/ruotianluo/self-critical.pytorch/tree/length_goal.
-  Bad ending rate evaluation. https://github.com/ruotianluo/self-critical.pytorch/blob/master/eval_utils.py#L31.
-  Modified cider. https://github.com/ruotianluo/coco-caption/commit/f415e0.
-  Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In European Conference on Computer Vision, pages 382–398. Springer, 2016.
-  Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086, 2018.
-  Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. Show, control and tell: a framework for generating controllable and grounded captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8307–8316, 2019.
-  Michael Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376–380, 2014.
-  Aditya Deshpande, Jyoti Aneja, Liwei Wang, Alexander G Schwing, and David Forsyth. Fast, diverse and accurate image captioning guided by part-of-speech. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10695–10704, 2019.
-  Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217, 2017.
-  Tszhang Guo, Shiyu Chang, Mo Yu, and Kun Bai. Improving reinforcement learning based image captioning with natural language prior. arXiv preprint arXiv:1809.06227, 2018.
-  Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128–3137, 2015.
-  Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. Controlling output length in neural encoder-decoders. arXiv preprint arXiv:1609.09552, 2016.
-  Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop, volume 8. Barcelona, Spain, 2004.
-  Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
-  Ruotian Luo and Greg Shakhnarovich. Colab notebook: Controlling length in image captioning. https://colab.research.google.com/drive/1TM_KZBixY-L47gHRfXiavgU_T-WfzI3I?usp=sharing.
-  Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002.
-  Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024, 2017.
-  Sho Takase and Naoaki Okazaki. Positional encoding to control output sequence length. arXiv preprint arXiv:1904.07418, 2019.
-  Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575, 2015.