Exploiting Cross-Modal Prediction and Relation Consistency for Semi-Supervised Image Captioning

10/22/2021 ∙ by Yang Yang, et al.

The task of image captioning aims to generate captions directly from images via an automatically learned cross-modal generator. Building a well-performing generator usually requires a large number of described images, which entails a huge amount of manual labeling effort. In real-world applications, however, a more common scenario is that only a limited number of described images are available together with a large number of undescribed images. A resulting challenge is therefore how to effectively incorporate the undescribed images into the learning of the cross-modal generator. To solve this problem, we propose a novel image captioning method that exploits Cross-modal Prediction and Relation Consistency (CPRC), which uses the raw image input to constrain the generated sentence in a shared semantic space. In detail, considering that the heterogeneous gap between modalities makes it difficult to supervise with the global embedding directly, CPRC transforms both the raw image and the corresponding generated sentence into the shared semantic space and measures the generated sentence from two aspects: 1) Prediction consistency. CPRC uses the prediction of the raw image as a soft label to distill useful supervision for the generated sentence, rather than employing traditional pseudo labeling; 2) Relation consistency. CPRC develops a novel relation consistency between augmented images and the corresponding generated sentences to retain important relational knowledge. As a result, CPRC supervises the generated sentence from both the informativeness and representativeness perspectives, and can reasonably use undescribed images to learn a more effective generator in the semi-supervised scenario.


I Introduction

In real-world applications, an object can often be represented by information from multiple sources, i.e., multiple modalities [5, 11]. For example, a news article usually contains both image and text information, and a video can be divided into image, audio and text streams. Along this line, the study of cross-modal learning has emerged to bridge the connections among different modalities and better support downstream tasks, among which image captioning is one of the important research directions. Specifically, image captioning aims to automatically generate natural language descriptions for images, and has become a prominent research problem in both academia and industry [24, 43, 37, 8]. For example, learning from visual images allows road conditions to be broadcast automatically to assist driving, and can also help visually impaired users to read more conveniently. In fact, the challenge of image captioning is to learn a generator between two heterogeneous modalities (i.e., the image and text modalities), which needs to recognize salient objects in an image using computer vision techniques and generate coherent descriptions using natural language processing.

To solve this problem, researchers first explored neural encoder-decoder models [24, 44], which are composed of a CNN encoder and an LSTM (or Transformer) decoder. In detail, these methods first encode the image into a set of feature vectors using a CNN-based model, where each vector captures semantic information about an image region, and then decode these feature vectors into words sequentially via an LSTM-based or Transformer-based network. Furthermore, [43, 29, 20] adopted single or hierarchical attention mechanisms that enable the model to focus on particular image regions during the decoding process. To mitigate incorrect or repetitive content, several studies consider editing inputs independently from generating them [17, 37]. However, note that all these methods require full image-sentence pairs in advance, i.e., all the images need to be described manually, which is hard to accomplish in real-world applications. A more general scenario is shown in Figure 1: we have limited described images with corresponding label ground-truths, and a large number of undescribed images. The resulting challenge is "Semi-Supervised Image Captioning", which aims to conduct the captioning task by reasonably using the huge number of undescribed images together with the limited supervised data.

The key difficulty of semi-supervised image captioning is to design pseudo supervision for the generated sentences. There have been some preliminary attempts recently. For example, [12, 16] proposed unsupervised captioning methods, which combine adversarial learning [14] with traditional encoder-decoder models to evaluate the quality of generated sentences. In detail, on top of the traditional encoder-decoder models, these approaches employ adversarial training to generate sentences that are indistinguishable from sentences in an auxiliary corpus. To ensure that the generated captions contain the visual concepts, they additionally distill the knowledge provided by a visual concept detector into the image captioning model. However, the domain discriminator and visual concept distiller do not fundamentally evaluate the matching degree and structural rationality of the generated sentence, so the captioning performance is poor. As for semi-supervised image captioning, a straightforward way is to directly use the undescribed images together with their machine-generated sentences [31, 22] as pseudo image-sentence pairs to fine-tune the model. However, a limited amount of parallel data can hardly establish a proper initial generator that produces precise pseudo descriptions, which may negatively affect the fine-tuning of the visual-semantic mapping function.

To circumvent this issue, we attempt to use the raw image itself as pseudo supervision. However, the heterogeneous gap between modalities makes supervision difficult if we directly constrain the consistency between the global embeddings of image and sentence. We therefore switch to the broader and more effective semantic prediction information, rather than directly using the embedding, and introduce a novel approach, dubbed semi-supervised image captioning by exploiting the Cross-modal Prediction and Relation Consistency (CPRC). In detail, there are two common approaches in traditional semi-supervised learning: 1) Pseudo labeling, which minimizes the entropy of unlabeled data using the model's predictions; 2) Consistency regularization, which transforms unlabeled raw images with data augmentation techniques and then constrains the consistency of the transformed instances' outputs. Different from these two techniques, we design cross-modal prediction and relation consistency by comprehensively considering informativeness and representativeness: 1) Prediction consistency: we use the soft label of the image to distill effective supervision for the generated sentence; 2) Relation consistency: we encourage the generated sentences to have a relational distribution similar to that of the augmented image inputs. The central tenet is that the relations among learned representations can better present consistency than individual data instances [33]. Consequently, CPRC can effectively qualify the generated sentences from both the prediction confidence and the distribution alignment perspectives, and thereby learn a more robust mapping function. Note that CPRC can be implemented with any current captioning model, and we adopt several typical approaches for verification [36, 46]. Source code is available at https://github.com/njustkmg/CPRC.

The contributions of this paper can be summarized as follows:

  • We propose a novel semi-supervised image captioning framework for processing undescribed images, which is applicable to any captioning model;

  • We design cross-modal prediction and relation consistency to supervise the undescribed images: the raw image and the corresponding generated sentence are mapped into a shared semantic space, and the generated sentence is supervised by distilling the soft label from the image prediction and by constraining the cross-modal relational consistency;

  • In experiments, our approach improves the performance under the semi-supervised scenario, which validates that the knowledge hidden in the content and relations is effective for enhancing the generator.

II Related Work

II-A Image Captioning

Image captioning approaches can be roughly divided into three categories: 1) Template-based methods, which manually design slotted captioning templates and then fill the templates with detected keywords [45]; their expressive power is limited by the need to design templates manually. 2) Encoder-decoder based methods, which are inspired by neural machine translation [9]. For example, [41] proposed an end-to-end framework with a CNN encoding the image into a feature vector and an LSTM decoding it into a caption; [20] added an attention-on-attention module after both the LSTM and the attention mechanism, which measures the relevance between the attention result and the query. 3) Editing-based methods, which consider editing inputs independently from generating them. For example, [17] learned a retrieval model that embeds the input in a task-dependent way for code generation; [37] introduced a framework that learns to modify existing captions from a given framework by modeling the residual information. However, all these methods need a huge amount of supervised image-sentence pairs for training, whereas the scenario with a large number of undescribed images is more common in real applications. To handle undescribed images, several attempts propose unsupervised image captioning approaches. [12] distilled the knowledge of a visual concept detector into the captioning model to recognize visual concepts, and adopted a sentence corpus to teach the captioning model; [16] developed an unsupervised feature alignment method with adversarial learning that maps scene graph features from the image modality to the sentence modality. Nevertheless, these methods mainly rely on a domain discriminator to learn plausible sentences, which makes it difficult to generate sentences matched to the image. On the other hand, for semi-supervised image captioning, [31, 22] proposed to extract regional semantics from un-annotated images as additional weak supervision to learn visual-semantic embeddings. However, the generated pseudo sentences are often of too low quality to fine-tune the generator in practice.


Fig. 2: Diagram of the proposed training losses. For example, three weakly-augmented images and the raw image are fed into the encoder to obtain image region embeddings, and four corresponding sentences are then generated by the decoder. The embeddings of the image inputs and the generated sentences are fed into the shared classifier to obtain predictions. The model is trained with two objectives: 1) the supervised loss, which includes the generation cross-entropy and the prediction cross-entropy for described images; in detail, the generation cross-entropy measures the quality of the generated sentence sequence, and the prediction cross-entropy is the multi-label prediction loss of the generated sentence. 2) the unsupervised loss, which includes the prediction consistency and the relation consistency for undescribed images; in detail, the prediction consistency uses the image's prediction as a pseudo label for the corresponding generated sentence, and the relation consistency aligns the generated sentences' relational distribution with that of the image inputs.

II-B Semi-Supervised Learning

Recently, deep networks have achieved strong performance with supervised learning, which requires a large amount of labeled data. However, labeling by human labor, especially by domain experts, comes at a significant cost. To this end, semi-supervised learning, which combines supervised and unsupervised learning techniques and harnesses large amounts of unlabeled data together with typically smaller sets of labeled data, has attracted more and more attention. Existing semi-supervised learning mainly considers two aspects: 1) Self-training [15]. The general idea of self-training is to use a model's predictions to obtain artificial labels for unlabeled data. A specific variant is pseudo-labeling, which converts the model predictions on unlabeled data into hard labels for the cross-entropy loss. Pseudo-labeling is often used together with confidence thresholding, which retains only sufficiently confident unlabeled instances. As a result, pseudo-labeling performs entropy minimization, which has been used as a component of many semi-supervised algorithms and has been validated to produce better results [2]. 2) Consistency regularization [3]. Early extensions include exponential moving averages of model parameters [39] or using previous model checkpoints [26]. Recently, data augmentation, which integrates these techniques into the self-training framework, has shown better results [42, 7]. A mainstream technique is to produce random perturbations with data augmentation [13] and then enforce consistency between the augmentations. For example, [42] proposed unsupervised data augmentation with distribution alignment and augmentation anchoring, which encourages each output to be close to that of the weakly-augmented version of the same input; [7] used a weakly-augmented example to generate an artificial label and enforced consistency against a strongly-augmented example. Furthermore, [38] combined pseudo labeling and consistency regularization into a unified framework, which generates pseudo labels using the model's predictions on weakly-augmented unlabeled images and constrains the prediction consistency between the weakly-augmented and strongly-augmented versions. Note that the targets in previous semi-supervised methods are uniform and simple, i.e., the label ground-truths. Cross-modal semi-supervised learning is more complicated, e.g., each image has a corresponding sentence as well as a label ground-truth. Building a cross-modal generator with limited supervised data is harder than building a single-modal classifier, so directly applying traditional semi-supervised techniques to the generated sentences may cause noise accumulation.
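
To make these two ingredients concrete, the following is a minimal, illustrative PyTorch-style sketch of a FixMatch-like unsupervised loss for a single-label classifier; the classifier, augmentations and threshold value are placeholders for exposition, not part of CPRC itself:

```python
import torch
import torch.nn.functional as F

def fixmatch_style_loss(classifier, weak_images, strong_images, threshold=0.95):
    """Illustrative unsupervised loss combining pseudo-labeling and
    consistency regularization for a single-label classifier."""
    with torch.no_grad():
        # Pseudo-labels come from predictions on weakly-augmented inputs.
        weak_probs = F.softmax(classifier(weak_images), dim=-1)
        confidence, pseudo_labels = weak_probs.max(dim=-1)
        mask = (confidence >= threshold).float()  # keep only confident instances

    # Consistency: the strongly-augmented view must match the pseudo-label.
    strong_logits = classifier(strong_images)
    per_sample = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()

# Toy usage with a linear classifier on random "images".
classifier = torch.nn.Linear(32, 10)
weak = torch.randn(8, 32)
strong = weak + 0.3 * torch.randn(8, 32)  # stand-in for strong augmentation
print(fixmatch_style_loss(classifier, weak, strong).item())
```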

The remainder of this paper is organized as follows. Section III presents the proposed method, including the model, solution, and extension. Section IV reports the experimental results on the COCO dataset under different semi-supervised settings. Section VI concludes the paper.

III Proposed Method

III-A Notations

Without any loss of generality, we define the semi-supervised image-sentence set as $\mathcal{D} = \mathcal{D}_s \cup \mathcal{D}_u$, where the described set $\mathcal{D}_s = \{(I_i, S_i, \mathbf{y}_i)\}_{i=1}^{N_s}$ contains the $i$-th image instance $I_i$, the aligned sentence instance $S_i$, and the instance label $\mathbf{y}_i \in \{0,1\}^C$, with $y_i^c = 1$ if the $i$-th instance belongs to the $c$-th label and $y_i^c = 0$ otherwise. The undescribed set $\mathcal{D}_u = \{I_j^u\}_{j=1}^{N_u}$ collects the undescribed images, where $I_j^u$ is the $j$-th undescribed image. $N_s$ and $N_u$ ($N_s \ll N_u$) are the numbers of described and undescribed instances, respectively.

Definition 1

Semi-Supervised Image Captioning. Given limited parallel image-sentence pairs $\mathcal{D}_s$ and a huge number of undescribed images $\mathcal{D}_u$, we aim to construct a generator $f_G$ for image captioning by reliably utilizing the undescribed images.

III-B The Framework

It is notable that CPRC focuses on employing the undescribed images and is a general semi-supervised framework. Thereby the image-sentence generator $f_G$ can be instantiated with any state-of-the-art captioning model. In this paper, considering effectiveness and reproducibility, we adopt the attention model AoANet [20] as the base model for $f_G$. In detail, $f_G$ is an encoder-decoder based captioning model, which typically includes an image encoder $f_E$ and a text decoder $f_D$. Given an image $I$, the target of $f_G$ is to generate a natural language sentence $S$ describing the image, formulated as $S = f_D(f_E(I))$. The encoder $f_E$ is usually a convolutional neural network [18, 35] that extracts the embedding of the raw image input; note that $f_E$ usually includes a refining module such as an attention mechanism [4], which dynamically refines the visual embedding to suit language generation. The decoder $f_D$ is a widely used RNN-based model for the sequence prediction.

The learning process of CPRC is shown in Figure 2. Specifically, CPRC first samples a mini-batch of images from the dataset (including described and undescribed images), and applies data augmentation to each undescribed image so that each image has $K$ augmented variants. We then obtain the generated sentences for both the augmented images and the raw image using $f_G$, and compute the predictions for the image inputs and the generated sentences using a shared prediction classifier $f_C$. The model is trained with two main objectives: 1) the supervised loss, which is designed for described images, i.e., supervised image-sentence pairs. In detail, the supervised loss considers both the label and sentence predictions, including: a) the generation cross-entropy, which employs the cross-entropy loss or a reinforcement learning based reward [36] between the generated sentence sequence and the ground-truth sentence; and b) the prediction cross-entropy, which calculates the multi-label loss between the image/sentence prediction and the label ground-truth. 2) the unsupervised loss, which is designed for undescribed images. In detail, the unsupervised loss considers both informativeness and representativeness: a) the prediction consistency, which uses the image's prediction as a pseudo label to distill effective information for the generated sentence, so as to measure the instance's informativeness; and b) the relation consistency, which adopts the relational structure of the augmented images as the supervision distribution for the generated sentences, so as to measure the instance's representativeness. Therefore, in addition to the traditional loss for described images, we constrain the sentences generated from undescribed images by comprehensively using the raw image inputs as pseudo supervision. The details are described as follows.
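
Before detailing each loss, the following minimal sketch illustrates this forward pass in PyTorch-style code; the shared classifier follows the three-layer fully connected design described later, while the stand-in generator, the pooled image embeddings and all dimensions are simplifying assumptions rather than the released implementation:

```python
import torch
import torch.nn as nn

class SharedClassifier(nn.Module):
    """Shared multi-label classifier f_C: three fully connected layers with
    sigmoid outputs, applied to both image and sentence embeddings."""
    def __init__(self, dim=1024, num_labels=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, num_labels),
        )

    def forward(self, embedding):
        return torch.sigmoid(self.net(embedding))  # normalized multi-label predictions

def cprc_forward(generator, classifier, image_embeddings):
    """image_embeddings: (K+1, dim) pooled embeddings of the raw image and its
    K weak augmentations. The generator stands in for the full encode-decode
    step and is assumed to return one sentence embedding per image."""
    sentence_embeddings = generator(image_embeddings)
    p_image = classifier(image_embeddings)        # predictions of the image inputs
    p_sentence = classifier(sentence_embeddings)  # predictions of the generated sentences
    return p_image, p_sentence

# Toy usage with a linear "generator" on random embeddings (K = 3).
generator = nn.Linear(1024, 1024)
classifier = SharedClassifier()
p_img, p_sent = cprc_forward(generator, classifier, torch.randn(4, 1024))
print(p_img.shape, p_sent.shape)  # torch.Size([4, 80]) twice
```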

III-C Supervised Loss

III-C1 Generation Loss

Given an image $I$, the decoder (Figure 2) generates a sentence $S = \{w_1, \ldots, w_T\}$ describing the image, where $T$ is the length of the sentence. We can then minimize the cross-entropy loss $\mathcal{L}_{XE}$ or maximize a reinforcement learning based reward [36] via $\mathcal{L}_{RL}$, according to the ground-truth caption $S^* = \{w_1^*, \ldots, w_T^*\}$:

$\mathcal{L}_{XE}(\theta) = -\sum_{t=1}^{T} \log p_\theta(w_t^* \mid w_{1:t-1}^*), \qquad \mathcal{L}_{RL}(\theta) = -\mathbb{E}_{w_{1:T} \sim p_\theta}\big[r(w_{1:T})\big], \quad (1)$

where $w_{1:T}^*$ denotes the target ground-truth sequence and $p_\theta$ is the prediction probability of the decoder with parameters $\theta$. The reward $r(\cdot)$ is a sentence-level metric comparing the sampled sentence with the ground-truth, typically the score of some evaluation metric (e.g., CIDEr-D [40]). In detail, as introduced in [36], captioning approaches traditionally train the models with the cross-entropy loss. To directly optimize NLP metrics and address the exposure bias issue, [36] casts the generative model in the reinforcement learning terminology of [34]: the traditional decoder (i.e., the LSTM) can be viewed as an "agent" that interacts with the "environment" (i.e., words and image features). The parameters of the network define a policy that results in an "action" (i.e., the prediction of the next word). After each action, the agent updates its internal "state" (i.e., the parameters of the LSTM, attention weights, etc.). Upon generating the end-of-sequence (EOS) token, the agent observes a "reward", e.g., the CIDEr score of the generated sentence.
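
As an illustration, here is a minimal PyTorch-style sketch of these two training objectives (word-level cross-entropy and a self-critical reward, following the general recipe of [36]); tensor shapes and names are assumptions:

```python
import torch
import torch.nn.functional as F

def xe_loss(logits, targets, pad_idx=0):
    """Word-level cross-entropy between decoder logits (B, T, V) and the
    ground-truth caption tokens (B, T); padding positions are ignored."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1), ignore_index=pad_idx)

def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """Self-critical policy gradient: the reward of the greedy decode serves
    as the baseline, so sampled sentences scoring above it are reinforced.
    sample_logprobs: (B,) summed log-probabilities of the sampled sentences;
    sample_reward / greedy_reward: (B,) sentence-level metric scores (e.g. CIDEr-D)."""
    advantage = (sample_reward - greedy_reward).detach()
    return -(advantage * sample_logprobs).mean()

# Toy usage with random values.
print(xe_loss(torch.randn(2, 5, 100), torch.randint(1, 100, (2, 5))).item())
print(scst_loss(torch.randn(2), torch.rand(2), torch.rand(2)).item())
```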

III-C2 Prediction Loss

On the other hand, we can measure the generation with a classification task using the label ground-truth $\mathbf{y}$. We extract the embeddings of the image input and the generated sentence from the representation output layer. Considering that the image and the corresponding sentence share the same semantics, both embeddings can be fed into a shared classifier for prediction. The forward prediction process can be represented as

$p^{I} = f_C(e^{I}), \qquad p^{S} = f_C(e^{S}),$

where $p^{I}$ and $p^{S}$ are the normalized prediction distributions of the image input and the generated sentence, $f_C$ denotes the shared classification model for the text and image modalities (without loss of generality, a three-layer fully connected network), and $e^{I}$ and $e^{S}$ are the final embeddings obtained by pooling the image/text region embeddings. The commonly used image captioning dataset (i.e., the COCO dataset) is a multi-label dataset: different from a multi-class dataset where each instance has only one ground-truth class, each instance here has multiple labels. Therefore, we use the binary cross-entropy loss (BCELoss):

$\mathcal{L}_{pre} = \ell_{bce}(p^{I}, \mathbf{y}) + \ell_{bce}(p^{S}, \mathbf{y}), \quad (2)$

where $\ell_{bce}$ denotes the BCELoss for multi-label prediction; the model's predictions are thereby encouraged to be low-entropy (i.e., high-confidence) on supervised data.
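
A minimal sketch of this supervised prediction loss, reusing a shared classifier with sigmoid outputs and a 0/1 label matrix (both assumptions consistent with the multi-label BCE described here), could be:

```python
import torch
import torch.nn.functional as F

def supervised_prediction_loss(classifier, image_emb, sentence_emb, labels):
    """Multi-label BCE of both the image input and the generated sentence
    against the label ground-truth; labels is a float 0/1 matrix of shape (B, C)
    and the classifier is assumed to return sigmoid-normalized predictions."""
    p_image = classifier(image_emb)
    p_sentence = classifier(sentence_emb)
    return (F.binary_cross_entropy(p_image, labels)
            + F.binary_cross_entropy(p_sentence, labels))

# Toy usage with a fixed sigmoid-wrapped linear classifier.
W = torch.randn(80, 1024)
classifier = lambda x: torch.sigmoid(x @ W.t())
labels = torch.randint(0, 2, (4, 80)).float()
print(supervised_prediction_loss(classifier, torch.randn(4, 1024),
                                 torch.randn(4, 1024), labels).item())
```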

III-D Unsupervised Loss

III-D1 Prediction Consistency

First, we introduce the augmentation technique for transforming the images. Existing methods usually leverage two kinds of augmentation: a) weak augmentation, a standard flip-and-shift strategy that does not significantly change the content of the input; and b) strong augmentation, which usually refers to AutoAugment [10] and its variants and uses reinforcement learning to find an augmentation strategy comprising transformations from the Python Imaging Library (https://www.pythonware.com/products/pil/). Considering that strongly (i.e., heavily) augmented instances are almost certainly outside the data distribution, which lowers the quality of the generated sentences, we leverage weak augmentation instead. As a result, each image $I$ is expanded to $K+1$ variants $\{I^{(0)}, I^{(1)}, \ldots, I^{(K)}\}$, where $I^{(0)}$ denotes the raw input.
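
For illustration, a weak flip-and-shift augmentation of this kind could be implemented with torchvision as follows; the exact transform set and magnitudes are assumptions, and the released code may differ:

```python
import torchvision.transforms as T

# Standard flip-and-shift weak augmentation: a random horizontal flip and a
# small random translation that keep the image content essentially unchanged.
weak_augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),
])

def expand_image(image, k=3):
    """Return the raw input plus K weakly-augmented variants (K+1 images in total)."""
    return [image] + [weak_augment(image) for _ in range(k)]
```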

Then, we input the augmented image set to the image-sentence generator $f_G$ and extract the embeddings of the generated sentences from the representation output layer. The embeddings are further fed into the shared classifier for prediction:

$p^{S^{(k)}} = f_C\big(e^{S^{(k)}}\big), \quad k = 0, 1, \ldots, K, \quad (3)$

where $f_C$ denotes the shared classification model for the text and image modalities and $e^{S^{(k)}}$ represents the embedding of the $k$-th generated sentence. Similarly, we can acquire the predictions of the image inputs, $p^{I^{(k)}} = f_C(e^{I^{(k)}})$, where $e^{I^{(k)}}$ represents the embedding of the $k$-th image variant. The commonly used image captioning dataset (i.e., the COCO dataset) is a multi-label dataset, so each instance has multiple labels rather than a single ground-truth class. Therefore, traditional pseudo-labeling that leverages "hard" labels (i.e., the argmax of the model's output) is inappropriate, because it is difficult to determine the number of "hard" labels for each instance. Consequently, we directly use the prediction of the image as a soft label for knowledge distillation [28] through the multi-label BCELoss:

$\mathcal{L}_{pc} = \sum_{k=0}^{K} \ell_{bce}\big(p^{S^{(k)}}, p^{I^{(0)}}\big), \quad (4)$

where $\ell_{bce}$ denotes the binary cross-entropy loss (BCELoss); the model's predictions are thereby encouraged to be low-entropy (i.e., high-confidence) on unsupervised data.
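
A minimal sketch of this soft-label distillation, reading Eq. 4 as using the raw image's prediction as the common soft target for every generated sentence (shapes and names are assumptions), is:

```python
import torch
import torch.nn.functional as F

def prediction_consistency(p_sentences, p_raw_image):
    """Distill the raw image's multi-label prediction into the predictions of
    the K+1 generated sentences via BCE with soft targets.
    p_sentences: (K+1, C) sigmoid outputs; p_raw_image: (C,) sigmoid output."""
    soft_label = p_raw_image.detach().unsqueeze(0).expand_as(p_sentences)
    return F.binary_cross_entropy(p_sentences, soft_label)

# Toy usage on random sigmoid predictions (K = 3, C = 80).
p_sent = torch.sigmoid(torch.randn(4, 80))
p_img = torch.sigmoid(torch.randn(80))
print(prediction_consistency(p_sent, p_img).item())
```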


Fig. 3: The relation consistency. The blue and orange rectangles represent the image domain and the text domain, respectively; any point inside a rectangle represents a specific instance of that domain. Given a tuple of image instances, the relation consistency loss requires that the corresponding generated sentences share a similar relational structure with the raw inputs.

III-D2 Relation Consistency

Inspired by linguistic structuralism [30], which holds that relations present knowledge better than individual examples, the primary information actually lies in the structure of the data space. Therefore, we define a new relation consistency loss $\mathcal{L}_{rc}$ using a metric learning-based constraint, which computes the KL divergence between the similarity distributions of the image inputs and of the generated sentences. The relation consistency aims to preserve the structural knowledge given by the mutual relations of data examples in the raw inputs. Specifically, each image input can be viewed as a bag of instances $\{I^{(0)}, \ldots, I^{(K)}\}$, and the corresponding generated sentences as a bag of instances $\{S^{(0)}, \ldots, S^{(K)}\}$; with the shared classifier, their predictions are $\{p^{I^{(k)}}\}_{k=0}^{K}$ and $\{p^{S^{(k)}}\}_{k=0}^{K}$ as defined above. With these predictions, the objective of relation consistency can be formulated as:

$\mathcal{L}_{rc} = \mathrm{KL}\Big(\psi\big(p^{I^{(0)}}, \ldots, p^{I^{(K)}}\big) \,\big\|\, \psi\big(p^{S^{(0)}}, \ldots, p^{S^{(K)}}\big)\Big), \quad (5)$

where $\mathrm{KL}(\cdot\|\cdot)$ is the KL divergence that penalizes differences between the similarity distribution of the image inputs and the similarity distribution of the generated sentences, and $\psi$ is a relation prediction function that measures the relation energy of a given tuple. In detail, $\psi$ measures the similarities formed by the examples in the semantic prediction space:

$\psi\big(p^{(0)}, \ldots, p^{(K)}\big)_{kl} = \frac{\exp\big(-d(p^{(k)}, p^{(l)})\big)}{\sum_{l'} \exp\big(-d(p^{(k)}, p^{(l')})\big)}, \quad (6)$

where $d(\cdot, \cdot)$ measures the distance between two predictions, so that the entries $\psi_{kl}$ denote the relative instance-wise similarities; finally, we pull the resulting similarity matrices into vector form. As a result, the relation consistency loss delivers the relationships among examples by penalizing structural differences. Since the structure has higher-order properties than a single output, it can transfer knowledge more effectively and is more suitable as a consistency measure.
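
The following is a small illustrative sketch of this relation consistency, under the assumption that the relational distribution is obtained by softmax-normalizing negative pairwise distances in the prediction space and flattening the result (one plausible reading of Eq. 5 and Eq. 6):

```python
import torch
import torch.nn.functional as F

def relation_distribution(preds, log=False):
    """Flattened softmax over negative pairwise distances of a prediction
    tuple (K+1, C): more similar pairs receive more probability mass."""
    sim = -torch.cdist(preds, preds, p=2).flatten()
    return F.log_softmax(sim, dim=0) if log else F.softmax(sim, dim=0)

def relation_consistency(p_images, p_sentences):
    """KL divergence that penalizes generated-sentence relations drifting away
    from the relational structure of the augmented image inputs."""
    target = relation_distribution(p_images).detach()        # image-side relations
    log_pred = relation_distribution(p_sentences, log=True)  # sentence-side relations
    return F.kl_div(log_pred, target, reduction="sum")

# Toy usage on random sigmoid predictions (K = 3, C = 80).
p_img = torch.sigmoid(torch.randn(4, 80))
p_sent = torch.sigmoid(torch.randn(4, 80))
print(relation_consistency(p_img, p_sent).item())
```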

III-E Overall Function

In summary, with the limited amount of parallel image-sentence pairs and the large amount of undescribed images, we define the total loss by combining Eq. 1, Eq. 2, Eq. 4 and Eq. 5:

$\mathcal{L} = \mathcal{L}_{cap} + \mathcal{L}_{pre} + \alpha \mathcal{L}_{pc} + \beta \mathcal{L}_{rc}, \quad (7)$

where $\mathcal{L}_{cap}$ denotes the captioning loss, which can be either $\mathcal{L}_{XE}$ or $\mathcal{L}_{RL}$ in Eq. 1. Note that $\mathcal{L}_{cap}$ and $\mathcal{L}_{pre}$ are of the same order of magnitude, so we do not add a hyper-parameter between them; $\alpha$ and $\beta$ are scalar values that control the weights of the unsupervised losses. In $\mathcal{L}_{pre}$, we use the labeled images and sentences to jointly train the shared classifier $f_C$, which increases the amount of training data and adjusts the classifier to better suit the subsequent prediction of augmented images and generated sentences. Furthermore, considering that the pseudo labels may be noisy, we can also adopt a confidence threshold that retains only confident generated sentences. Eq. 7 can then be reformulated as:

$\mathcal{L} = \mathcal{L}_{cap} + \mathcal{L}_{pre} + \sum_{j=1}^{N_u} \mathbb{1}\big(\max(p^{I_j^u}) \geq \tau\big)\big(\alpha \mathcal{L}_{pc}^{(j)} + \beta \mathcal{L}_{rc}^{(j)}\big), \quad (8)$

where $p^{I_j^u}$ denotes the prediction probability of the $j$-th raw image input and $\tau$ is a scalar hyper-parameter denoting the threshold above which we retain the generated sentences. The details are shown in Algorithm 1.

Input:
Data: described set $\mathcal{D}_s$, undescribed set $\mathcal{D}_u$
Parameters: $\alpha$, $\beta$ (and threshold $\tau$)
Output:
Image captioning mapping function $f_G$

1:  Initialize $f_G$ and $f_C$ randomly;
2:  while stop condition is not triggered do
3:     for each mini-batch sampled from $\mathcal{D}_s \cup \mathcal{D}_u$ do
4:        Calculate $\mathcal{L}_{cap}$ and $\mathcal{L}_{pre}$ according to Eq. 1 and Eq. 2;
5:        Calculate $\mathcal{L}_{pc}$ according to Eq. 4;
6:        Calculate $\mathcal{L}_{rc}$ according to Eq. 5;
7:        Calculate the total loss according to Eq. 7 or Eq. 8;
8:        Update the model parameters of $f_G$ and $f_C$ using SGD;
9:     end for
10:  end while
Algorithm 1 The Code of CPRC
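
As a companion to Algorithm 1, the sketch below shows how the per-batch loss terms could be combined as in Eq. 7 and Eq. 8; the per-image loss vectors, the default weights and the threshold value are placeholders rather than the released configuration:

```python
import torch

def cprc_total_loss(l_cap, l_pre, l_pc, l_rc, raw_image_probs,
                    alpha=0.1, beta=1.0, tau=None):
    """Combine the captioning, prediction, prediction-consistency and
    relation-consistency losses (Eq. 7). If a confidence threshold tau is
    given, the unsupervised terms of an undescribed image are kept only when
    the maximum class probability of its raw input exceeds tau (Eq. 8).
    l_pc, l_rc: (M,) per-undescribed-image losses; raw_image_probs: (M, C)."""
    supervised = l_cap + l_pre
    if tau is None:
        return supervised + alpha * l_pc.sum() + beta * l_rc.sum()
    mask = (raw_image_probs.max(dim=-1).values >= tau).float()
    return supervised + (mask * (alpha * l_pc + beta * l_rc)).sum()

# Toy usage with random per-image loss terms for M = 4 undescribed images.
l_pc, l_rc = torch.rand(4), torch.rand(4)
probs = torch.sigmoid(torch.randn(4, 80))
print(cprc_total_loss(torch.tensor(2.3), torch.tensor(0.7),
                      l_pc, l_rc, probs, tau=0.5).item())
```
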
Methods Cross-Entropy Loss CIDEr-D Score Optimization
B1 B2 B3 B4 M R C S B1 B2 B3 B4 M R C S
SCST 56.8 38.6 25.4 16.3 16.0 42.4 38.9 9.3 59.4 39.5 25.3 16.3 17.0 42.9 43.7 9.9
AoANet 67.9 49.8 34.7 23.2 20.9 49.2 69.2 14.3 66.8 48.6 34.1 23.6 21.8 48.7 70.4 15.2
AAT 63.2 45.8 31.7 21.3 19.0 47.6 58.0 12.4 66.7 48.1 33.3 22.7 20.4 47.8 63.5 13.2
ORT 63.6 45.8 31.7 21.4 19.4 46.9 61.1 12.6 65.3 46.5 31.9 21.3 20.3 47.2 62.0 13.3
GIC 63.0 46.8 33.2 20.0 19.2 50.3 50.5 12.3 64.7 46.9 32.0 20.7 19.0 47.8 55.7 12.5
Graph-align - - - - - - - - 67.1 47.8 32.3 21.5 20.9 47.2 69.5 15.0
UIC - - - - - - - - 41.0 22.5 11.2 5.6 12.4 28.7 28.6 8.1
A3VSE 68.0 50.0 34.9 23.3 20.8 49.3 69.6 14.5 67.6 49.6 35.2 24.5 22.1 49.3 72.4 15.3
AoANet+P 67.4 49.7 35.2 24.3 22.3 49.1 71.7 14.9 67.2 49.5 35.9 24.4 21.6 50.1 74.2 15.7
AoANet+C 67.1 49.4 35.2 24.5 22.7 49.5 71.5 14.9 67.8 49.4 35.5 24.7 22.0 50.0 73.9 15.6
PL 67.8 49.6 35.2 24.2 22.0 50.4 74.7 15.6 67.9 50.0 35.6 24.3 22.2 49.7 76.6 16.1
AC 67.8 48.8 34.6 23.7 21.9 49.1 69.7 14.5 67.9 50.0 25.3 24.1 22.1 49.7 73.0 15.5
Embedding+ 65.1 46.4 31.9 21.5 20.7 47.6 65.1 14.1 65.6 47.1 32.3 22.6 20.8 47.8 69.1 14.5
Semantic+ 68.3 49.9 34.9 23.8 21.5 49.9 70.3 14.7 69.3 50.8 35.5 24.1 21.6 50.0 72.7 14.9
Strong+ 68.4 50.8 35.4 24.8 22.5 50.6 77.8 16.2 69.5 51.5 36.7 25.5 23.3 50.6 78.6 16.7
w/o Prediction 68.3 49.6 35.3 24.4 22.2 49.6 70.5 15.0 68.2 50.4 35.8 24.8 22.5 50.1 73.6 15.6
w/o Relation 68.1 50.0 35.5 24.8 22.4 50.5 75.2 15.8 68.3 50.5 35.8 24.9 22.7 50.4 76.9 16.3
w/o 66.9 49.8 34.5 24.2 21.5 49.5 76.2 15.4 68.5 50.8 36.2 25.0 22.5 49.8 77.5 16.2
CPRC 68.8 51.1 35.5 24.9 22.8 50.4 77.9 16.2 69.9 51.8 36.7 25.5 23.4 50.7 78.8 16.8
TABLE I: Performance of comparison methods on MS-COCO “Karpathy” test split, where BN, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores.
Fig. 4: Relationship between captioning performance and different ratios of supervised data; panels (a)-(h) report BLEU1-BLEU4, METEOR, ROUGE-L, CIDEr-D and SPICE.
Fig. 5: Relationship between captioning performance and different ratios of supervised data (Cross-Entropy Loss); panels (a)-(h) report the same metrics.

Fig. 6: Examples of captions generated by CPRC and baseline models as well as the corresponding ground truths.

Fig. 7: (Best viewed in color) Examples of captions generated from augmented images.
Fig. 8: Relationship between captioning performance and different ratios of unsupervised data (CIDEr-D Score Optimization).
Fig. 9: Examples of captions generated by CPRC and baseline models as well as the corresponding ground truths (GT1-GT5 are the 5 given ground-truth sentences).

IV Experiments

IV-A Datasets

We adopt the popular MS COCO dataset [27] for evaluation, as most related methods are evaluated exclusively on this dataset [20, 21, 19, 46, 36]. The MS COCO dataset contains 123,287 images (82,783 training images and 40,504 validation images), each labeled with 5 captions. The popular test settings fall into two categories: online evaluation and offline evaluation. Since all methods are evaluated under the semi-supervised scenario, online evaluation cannot be used, so we only use offline evaluation. The offline "Karpathy" data split [23] contains 5,000 images for validation, 5,000 images for testing, and the rest for training. To construct the semi-supervised scenario, we randomly select a preset proportion of the training set as supervised data, and the rest is treated as unsupervised data.
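
For concreteness, the semi-supervised split can be constructed roughly as follows; the function name, seed and the 1% default used in the main experiments are illustrative:

```python
import random

def split_karpathy_train(train_image_ids, supervised_ratio=0.01, seed=0):
    """Randomly keep the captions of a fixed proportion of the Karpathy
    training images (described set) and treat the rest as undescribed."""
    ids = list(train_image_ids)
    random.Random(seed).shuffle(ids)
    n_sup = int(len(ids) * supervised_ratio)
    return ids[:n_sup], ids[n_sup:]   # described ids, undescribed ids

# The Karpathy split leaves 113,287 training images; 1% of them keep captions.
described, undescribed = split_karpathy_train(range(113287), supervised_ratio=0.01)
print(len(described), len(undescribed))  # 1132 112155
```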

IV-B Implementation Details

The target of CPRC is to train the generator $f_G$. In detail, we employ the AoANet [20] structure as the base model for $f_G$. Meanwhile, we adopt a fully connected network with three layers for $f_C$ (with 1024 dimensions for the hidden layers). The dimension of the original image vectors is 2048, and we project them into a new space with a dimension of 1024 following [20]. We set $K = 3$, i.e., each image has three augmentations obtained with a random occlusion technique. As for the training process, we train AoANet for a fixed number of epochs with a mini-batch size of 16, using the ADAM [25] optimizer with a learning rate that is annealed periodically during training. The parameters $\alpha$ and $\beta$ are tuned over a range of candidate values, as is the threshold $\tau$. The entire network is trained on an Nvidia TITAN X GPU.

IV-C Baselines and Evaluation Protocol

The comparison models fall into three categories: 1) state-of-the-art supervised captioning methods: SCST [36], AoANet [20], AAT [21], ORT [19] and GIC [46]; note that these methods can only utilize the supervised image-sentence pairs. 2) state-of-the-art unsupervised captioning methods: Graph-align [16] and UIC [12], which utilize independent image and corpus sets for training. 3) a state-of-the-art semi-supervised method: A3VSE [22].

Moreover, we conduct extra ablation studies to evaluate each term of the proposed CPRC: 1) AoANet+P, which combines the label prediction loss with the original AoANet generation loss as a multi-task loss (only using the supervised data); 2) AoANet+C, which combines the relation consistency loss with the original AoANet generation loss as a multi-task loss (only using the supervised data); 3) PL, which replaces the prediction consistency with pseudo labeling as in traditional semi-supervised methods; 4) AC, which replaces the relation consistency with augmentation consistency as in traditional semi-supervised methods; 5) Embedding+, which replaces the relational consistency loss with an embedding consistency loss that minimizes the difference between the embeddings of image inputs and generated sentences; 6) Semantic+, which replaces the relational consistency loss with a prediction consistency loss that minimizes the difference between the predictions of image inputs and generated sentences; 7) Strong+, which replaces the weak augmentation with strong augmentation in CPRC; 8) w/o Prediction, where CPRC only retains the relation consistency loss in Eq. 8; 9) w/o Relation, where CPRC only retains the prediction consistency in Eq. 8; and 10) w/o $\tau$, where CPRC removes the confidence threshold as in Eq. 7. For evaluation, we use different metrics, including BLEU [32], METEOR [6], ROUGE-L [20], CIDEr-D [40] and SPICE [1], to evaluate the proposed method and the comparison methods. All the metrics are computed with the publicly released code (https://github.com/tylin/coco-caption).
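
For reference, a minimal usage sketch of that released evaluation code is shown below, assuming the pycocoevalcap package from the linked repository is installed; METEOR and SPICE additionally require a Java runtime and are omitted here, and the toy captions are illustrative:

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider

# Both dicts map an image id to a list of captions: several references per
# image versus a single generated candidate.
gts = {"0": ["a man is riding a bicycle down a dirt road",
             "a person rides a bike on a path"]}
res = {"0": ["a man rides a bicycle on a dirt path"]}

bleu_scores, _ = Bleu(4).compute_score(gts, res)   # [BLEU-1, BLEU-2, BLEU-3, BLEU-4]
cider_score, _ = Cider().compute_score(gts, res)
print(bleu_scores, cider_score)
```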

In fact, the CIDEr-D and SPICE metrics are more suitable for the image captioning task [1, 40]. One of the problems with metrics such as BLEU, ROUGE, CIDEr and METEOR is that they are primarily sensitive to n-gram overlap, yet n-gram overlap is neither necessary nor sufficient for two sentences to convey the same meaning [GimenezM07a]. As shown in the example provided by [1], consider the following two captions (a, b) from the MS COCO dataset:

  • A young girl standing on top of a tennis court.

  • A giraffe standing on top of a green field.

The captions describe two different images. However, the mentioned n-gram metrics produce a high similarity score due to the presence of the long 5-gram phrase "standing on top of a" in both captions. Meanwhile, consider the following captions (c, d), obtained from the same image:

  • A shiny metal pot filled with some diced veggies.

  • The pan on the stove has chopped vegetables in it.

These captions convey almost the same meaning, yet exhibit low n-gram similarity as they have no words in common.

To address this problem, SPICE [1] estimates caption quality by transforming both candidate and reference captions into a graph-based semantic representation (i.e., a scene graph). The scene graph explicitly encodes the objects, attributes and relationships found in image captions, abstracting away most of the lexical and syntactic idiosyncrasies of natural language in the process. CIDEr-D [40] measures the similarity of a candidate sentence to the consensus of how most people describe the image (i.e., the reference sentences).

IV-D Quantitative Analysis

Table I presents the quantitative comparison with state-of-the-art methods (using 1% supervised data and 99% unsupervised data in the training set). Note that the supervised captioning methods can only build their mapping functions from the supervised data and leave out the unsupervised data. For fairness, all models are first trained under the cross-entropy loss and then optimized for the CIDEr-D score as in [20]; "-" indicates results not reported in the original paper. The results reveal that: 1) AoANet achieves the best scores on most metrics among the existing supervised methods; therefore CPRC adopts AoANet as the base image-sentence mapping function. 2) The unsupervised approach, UIC, achieves the worst performance on all metrics under both losses. This verifies that a generated sentence is likely to mismatch the image when only a domain discriminator is considered. Graph-align performs better than the supervised approaches but worse than A3VSE on most metrics, because it does not measure instance-specific matching. 3) The semi-supervised method, A3VSE, has little effect on the captioning performance, e.g., it only improves CIDEr-D by 0.4/2.0 and SPICE by 0.2/0.1 over AoANet under the cross-entropy loss/CIDEr-D score optimization, because it is more difficult to ensure the quality of the generated sentences. 4) CPRC achieves the highest scores among all compared methods on all metrics, in both the cross-entropy and CIDEr-D score optimization stages, except ROUGE-L under the cross-entropy loss. For example, CPRC achieves a state-of-the-art performance of 77.9/78.8 (CIDEr-D) and 16.2/16.8 (SPICE) under the two training objectives, i.e., improvements of 8.7/8.4 and 1.9/1.6 over AoANet. These phenomena indicate that, with a limited amount of supervised data, existing methods cannot construct a good mapping function, whereas CPRC can reliably utilize the undescribed images to enhance the model. 5) CPRC outperforms w/o $\tau$ on all metrics, which indicates the effectiveness of the confidence threshold.

IV-E Ablation Study

To quantify the impact of the proposed CPRC modules, we compare CPRC against the ablated models with various settings. The bottom half of Table I presents the results: 1) AoANet+P and AoANet+C achieve better performance than AoANet, which indicates that the prediction loss and the relation consistency loss can improve generator learning, since the labels provide extra semantic information; meanwhile, AoANet+P performs better than AoANet+C on most metrics, which indicates that the prediction loss contributes more than the relation consistency; 2) PL and AC perform worse than w/o Prediction and w/o Relation, which verifies that traditional semi-supervised techniques based on pseudo labeling are not as good as cross-modal semi-supervised techniques that use the raw image as pseudo supervision; 3) Embedding+ performs worse than Semantic+, which reveals that embeddings are harder to compare than predictions since image and text have heterogeneous representations; 4) Strong+ performs worse than CPRC, which validates that strong augmentation may distort the generated sentence and further affect the prediction, causing noise accumulation; 5) both w/o Prediction and w/o Relation improve the captioning performance on most criteria, especially the important ones, i.e., CIDEr-D and SPICE; this indicates that both the prediction and relation consistencies provide effective supervision to ensure the quality of the generated sentences; 6) the effect of w/o Relation is more obvious, which shows that the prediction loss can further improve the scores by comprehensively considering the semantic information; and 7) CPRC achieves the best scores on most metrics, which indicates that it is better to combine the content and relation information.

IV-F CPRC with Different Captioning Models

Methods Cross-Entropy Loss
B1 B2 B3 B4 M R C S
SCST 56.8 38.6 25.4 16.3 16.0 42.4 38.9 9.3
GIC 63.0 46.8 33.2 20.0 19.2 50.3 50.5 12.3
SCST+CPRC 63.5 45.9 31.7 21.6 19.4 45.8 48.1 10.2
GIC+CPRC 66.8 47.5 34.5 21.4 19.2 50.8 57.7 13.4
Methods CIDEr-D Score Optimization
B1 B2 B3 B4 M R C S
SCST 59.4 39.5 25.3 16.3 17.0 42.9 43.7 9.9
GIC 64.7 46.9 32.0 20.7 19.0 47.8 55.7 12.5
SCST+CPRC 66.5 48.0 33.7 22.7 20.4 47.9 48.7 10.7
GIC+CPRC 66.9 47.9 34.8 21.8 19.8 48.2 58.9 13.6
TABLE II: Performance of CPRC with different captioning models on MS-COCO "Karpathy" test split, where BN, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores.

To explore the generality of CPRC, we conduct more experiments by incorporating CPRC into different supervised captioning approaches, i.e., SCST (an encoder-decoder based model) and GIC (an attention based model). Note that we do not adopt an editing-based method, considering reproducibility. The results are recorded in Table II. We find that all the methods, i.e., SCST, GIC and AoANet (see Table I), improve their performance after being combined with the CPRC framework. This phenomenon validates that CPRC can effectively exploit the undescribed images for existing supervised captioning models.

IV-G Influence of the Supervised and Unsupervised Images

To explore the influence of supervised data, we tune the ratio of supervised data; the results for different metrics are recorded in Figure 4 and Figure 5. We find that as the percentage of supervised data increases, the performance of CPRC improves faster than that of the other state-of-the-art methods. This indicates that CPRC can reasonably utilize the undescribed images to improve the learning of the generator. Furthermore, we validate the influence of unsupervised data, i.e., we fix the supervised ratio to 1% and tune the ratio of unsupervised data; the results are recorded in Figure 8. Note that one of the problems with metrics such as BLEU, ROUGE, CIDEr-D and METEOR is that they are primarily sensitive to n-gram overlap [20, 1]; therefore, we only report the CIDEr-D and SPICE results here (refer to the supplementary material for more details). We find that as the percentage of unsupervised data increases, the performance of CPRC also improves. This indicates that CPRC can make full use of undescribed images for positive training.

Methods Cross-Entropy Loss
B1 B2 B3 B4 M R C S
K=1 67.5 48.9 34.6 22.5 21.1 48.4 74.7 15.5
K=2 67.8 49.5 34.9 23.4 21.7 49.5 75.9 15.8
K=3 68.8 51.1 35.5 24.9 22.8 50.4 77.9 16.2
K=4 67.9 49.8 34.8 24.2 22.2 50.1 76.8 16.0
K=5 67.6 49.7 34.5 23.8 22.0 49.8 76.2 16.0
Methods CIDEr-D Score Optimization
B1 B2 B3 B4 M R C S
K=1 68.0 50.1 35.7 24.8 22.0 49.5 77.1 16.1
K=2 68.3 50.5 35.9 25.3 22.1 49.7 77.7 16.5
K=3 69.9 51.8 36.7 25.5 23.4 50.7 78.8 16.8
K=4 68.7 51.4 36.5 25.2 22.8 49.7 77.4 16.3
K=5 68.3 50.8 35.9 25.1 22.7 49.4 77.3 16.2
TABLE III: Performance of CPRC with different augmentation numbers $K$ on MS-COCO "Karpathy" test split, where BN, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores.

IV-H Influence of the Augmentation Number

To explore the influence of the augmentation number $K$, we conduct more experiments. In detail, we tune $K$ in {1, 2, 3, 4, 5} and record the results in Table III. The results reveal that CPRC achieves the best performance with $K = 3$; with more augmentations, additional inconsistent noise between image and sentence may be introduced.

Methods Cross-Entropy Loss
B1 B2 B3 B4 M R C S
66.9 49.8 34.5 24.2 21.5 49.5 76.2 15.4
68.8 51.1 35.5 24.9 22.8 50.4 77.9 16.2
66.4 49.5 34.3 24.0 21.1 48.8 75.8 15.2
64.2 48.1 33.4 22.9 20.4 46.5 73.3 15.0
Methods CIDEr-D Score Optimization
B1 B2 B3 B4 M R C S
68.5 50.8 36.2 25.0 22.5 49.8 77.5 16.2
69.9 51.8 36.7 25.5 23.4 50.7 78.8 16.8
68.4 50.2 36.1 24.8 22.1 49.5 77.1 16.1
64.8 48.6 34.2 23.5 20.8 47.3 73.7 15.1
TABLE IV: Performance of CPRC with different $\tau$ on MS-COCO "Karpathy" test split, where BN, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores.

IV-I Influence of the Confidence Threshold

To explore the influence of the confidence threshold $\tau$, we conduct more experiments. In detail, we tune $\tau$ over a range of values and record the results in Table IV. The results reveal that the performance of CPRC first increases and then decreases as $\tau$ grows. The reason is that fewer undescribed images are used as $\tau$ increases, so the generator training cannot fully exploit the unsupervised data.

IV-J Visualization and Analysis

Figure 6 shows a few examples with captions generated by CPRC and two baselines, A3VSE and AoANet, as well as the human-annotated ground truths. From these examples, we find that the captions generated by the baseline models lack linguistic coherence and are inaccurate with respect to the image content, while CPRC can generate accurate, high-quality captions.

Figure 7 shows an example of augmented images and the corresponding generated captions. From these examples, we find that the generated captions carry largely similar semantic information, which supports the prediction and relation consistencies for the undescribed images.

V Influence of Label Prediction

To explore the effect of the prediction loss, we conduct more experiments and exhibit several cases. Figure 9 shows a few examples with captions generated by CPRC and two baselines, A3VSE and AoANet, as well as the human-annotated ground truths. From these examples, we find that the captions generated by the baseline models lack linguistic coherence and are inaccurate with respect to the image content, while CPRC can generate accurate, high-quality captions. Meanwhile, the red parts of the sentences generated by CPRC clearly show that label prediction helps the generator to understand the image. For example, in Figure 9(a), the content of the image is complicated and the bird is not salient, which causes the sentences generated by AoANet and A3VSE to be inconsistent with the ground-truths; CPRC, however, generates a good description of "bird" and "umbrella" by incorporating the label prediction information.

Fig. 10: Parameter sensitivity of $\alpha$ and $\beta$ with Cross-Entropy Loss; panels (a)-(h) report BLEU1-BLEU4, METEOR, ROUGE-L, CIDEr-D and SPICE.
Fig. 11: Parameter sensitivity of $\alpha$ and $\beta$ with CIDEr-D Score Optimization; panels (a)-(h) report the same metrics.

V-A Sensitivity to Parameters

The main parameters are the loss weights $\alpha$ and $\beta$ (Eq. 7). We vary these parameters over a range of values to study the sensitivity of the performance, and record the results in Figure 10 and Figure 11. We find that CPRC always achieves the best performance with a small $\alpha$ and a large $\beta$ in terms of all metrics, under both cross-entropy and CIDEr-D score optimization. This phenomenon also validates that the relation consistency loss plays an important role in enhancing the generator.

Methods Cross-Entropy Loss
B1 B2 B3 B4 M R C S
10 68.3 49.5 34.9 23.3 21.4 49.6 71.7 14.6
40 66.9 48.7 34.2 23.4 22.9 49.6 72.9 15.6
70 68.4 50.6 35.6 24.4 22.9 50.5 74.4 15.9
100 68.8 51.1 35.7 24.9 22.9 50.4 77.9 16.2
Methods CIDEr-D Score Optimization
B1 B2 B3 B4 M R C S
10 68.7 51.0 25.6 23.9 22.4 50.6 74.1 14.9
40 69.2 50.2 35.6 24.1 22.9 50.8 75.7 15.9
70 69.4 51.3 36.5 24.8 22.8 50.7 76.5 16.2
100 69.9 51.8 36.7 25.5 23.4 50.7 78.8 16.8
TABLE V: Performance with different ratios of unsupervised data (with the supervised ratio fixed at 1%) on MS-COCO "Karpathy" test split, where BN, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores.

VI Conclusion

Since traditional image captioning methods usually work on fully supervised multi-modal data, in this paper we investigated how to use undescribed images for semi-supervised image captioning. Specifically, our method takes Cross-modal Prediction and Relation Consistency (CPRC) into consideration: CPRC employs prediction distillation for the predictions of sentences generated from undescribed images, and develops a novel relation consistency between augmented images and generated sentences to retain important relational knowledge. As demonstrated by the experiments on the MS-COCO dataset, CPRC outperforms state-of-the-art methods in various semi-supervised scenarios.

Appendix A Influence of Unsupervised Data

Furthermore, we explore the influence of unsupervised data: we fix the supervised ratio to 1% and tune the ratio of unsupervised data in {10%, 40%, 70%, 100%}; the results are recorded in Table V. We find that as the percentage of unsupervised data increases, the performance of CPRC also improves in terms of all metrics. This indicates that CPRC can make full use of undescribed images for positive training. However, the growth rate slows down in the later stages, probably owing to the interference of pseudo-label noise.

Acknowledgment

This research was supported by NSFC (62006118), the Natural Science Foundation of Jiangsu Province of China (BK20200460), the NSFC-NRF Joint Research Project (61861146001), the CCF-Baidu Open Fund (CCF-BAIDU OF2020011), and the Baidu TIC Open Fund.

References

  • [1] P. Anderson, B. Fernando, M. Johnson, and S. Gould (2016) SPICE: semantic propositional image caption evaluation. In ECCV, pp. 382–398. Cited by: §IV-C, §IV-C, §IV-C, §IV-G.
  • [2] E. Arazo, D. Ortego, P. Albert, N. E. O’Connor, and K. McGuinness (2020) Pseudo-labeling and confirmation bias in deep semi-supervised learning. In IJCNN, pp. 1–8. Cited by: §II-B.
  • [3] P. Bachman, O. Alsharif, and D. Precup (2014) Learning with pseudo-ensembles. In NeurIPS, pp. 3365–3373. Cited by: §II-B.
  • [4] D. Bahdanau, K. Cho, and Y. Bengio (2015) Neural machine translation by jointly learning to align and translate. In ICLR, San Diego, CA. Cited by: §III-B.
  • [5] T. Baltrusaitis, C. Ahuja, and L. Morency (2019) Multimodal machine learning: a survey and taxonomy. IEEE TPAMI 41 (2), pp. 423–443. Cited by: §I.
  • [6] S. Banerjee and A. Lavie (2005) METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In IEEMMT, pp. 65–72. Cited by: §IV-C.
  • [7] D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, and C. Raffel (2020) ReMixMatch: semi-supervised learning with distribution matching and augmentation anchoring. In ICLR, Cited by: §II-B.
  • [8] Y. Bin, Y. Yang, F. Shen, N. Xie, H. T. Shen, and X. Li (2019) Describing video with attention-based bidirectional LSTM. IEEE Trans. Cybern. 49 (7), pp. 2631–2641. Cited by: §I.
  • [9] K. Cho, B. van Merrienboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pp. 1724–1734. Cited by: §II-A.
  • [10] E. D. Cubuk, B. Zoph, D. Mané, V. Vasudevan, and Q. V. Le (2018) AutoAugment: learning augmentation policies from data. CoRR abs/1805.09501. Cited by: §III-D1.
  • [11] E. S. Debie, R. F. Rojas, J. Fidock, M. Barlow, K. Kasmarik, S. G. Anavatti, M. Garratt, and H. A. Abbass (2021) Multimodal fusion for objective assessment of cognitive workload: A review. IEEE Trans. Cybern. 51 (3), pp. 1542–1555. Cited by: §I.
  • [12] Y. Feng, L. Ma, W. Liu, and J. Luo (2019) Unsupervised image captioning. In CVPR, Long Beach, CA, pp. 4125–4134. Cited by: §I, §II-A, §IV-C.
  • [13] G. French, M. Mackiewicz, and M. H. Fisher (2018) Self-ensembling for visual domain adaptation. In ICLR, Cited by: §II-B.
  • [14] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio (2014) Generative adversarial nets. In NeurIPS, Montreal, Canada, pp. 2672–2680. Cited by: §I.
  • [15] Y. Grandvalet and Y. Bengio (2004) Semi-supervised learning by entropy minimization. In NeurIPS, pp. 529–536. Cited by: §II-B.
  • [16] J. Gu, S. R. Joty, J. Cai, H. Zhao, X. Yang, and G. Wang (2019) Unpaired image captioning via scene graph alignments. In ICCV, Seoul, Korea, pp. 10322–10331. Cited by: §I, §II-A, §IV-C.
  • [17] T. B. Hashimoto, K. Guu, Y. Oren, and P. Liang (2018) A retrieve-and-edit framework for predicting structured outputs. In NeurIPS, pp. 10073–10083. Cited by: §I, §II-A.
  • [18] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Las Vegas, NV, pp. 770–778. Cited by: §III-B.
  • [19] S. Herdade, A. Kappeler, K. Boakye, and J. Soares (2019) Image captioning: transforming objects into words. In NeurIPS, pp. 11135–11145. Cited by: §IV-A, §IV-C.
  • [20] L. Huang, W. Wang, J. Chen, and X. Wei (2019) Attention on attention for image captioning. In ICCV, pp. 4633–4642. Cited by: §I, §II-A, §III-B, §IV-A, §IV-B, §IV-C, §IV-C, §IV-D, §IV-G.
  • [21] L. Huang, W. Wang, Y. Xia, and J. Chen (2019) Adaptively aligned image captioning via adaptive attention time. In NeurIPS, pp. 8940–8949. Cited by: §IV-A, §IV-C.
  • [22] P. Huang, G. Kang, W. Liu, X. Chang, and A. G. Hauptmann (2019) Annotation efficient cross-modal retrieval with adversarial attentive alignment. In ACMMM, pp. 1758–1767. Cited by: §I, §II-A, §IV-C.
  • [23] A. Karpathy and L. Fei-Fei (2017) Deep visual-semantic alignments for generating image descriptions. TPAMI 39 (4), pp. 664–676. Cited by: §IV-A.
  • [24] A. Karpathy and F. Li (2015) Deep visual-semantic alignments for generating image descriptions. In CVPR, pp. 3128–3137. Cited by: §I, §I.
  • [25] D. P. Kingma and J. Ba (2015) Adam: A method for stochastic optimization. In ICLR, Cited by: §IV-B.
  • [26] S. Laine and T. Aila (2017) Temporal ensembling for semi-supervised learning. In ICLR, Toulon, France. Cited by: §II-B.
  • [27] T. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In ECCV, pp. 740–755. Cited by: §IV-A.
  • [28] Y. Lin, C. Wang, C. Chang, and H. Sun (2021) An efficient framework for counting pedestrians crossing a line using low-cost devices: the benefits of distilling the knowledge in a neural network. Multim. Tools Appl. 80 (3), pp. 4037–4051. Cited by: §III-D1.
  • [29] J. Lu, C. Xiong, D. Parikh, and R. Socher (2017) Knowing when to look: adaptive attention via a visual sentinel for image captioning. In CVPR, pp. 3242–3250. Cited by: §I.
  • [30] P. Matthews (2001) A short history of structural linguistics. Cited by: §III-D2.
  • [31] N. C. Mithun, R. Panda, E. E. Papalexakis, and A. K. Roy-Chowdhury (2018) Webly supervised joint embedding for cross-modal image-text retrieval. In ACMMM, pp. 1856–1864. Cited by: §I, §II-A.
  • [32] K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) Bleu: a method for automatic evaluation of machine translation. In ACL, pp. 311–318. Cited by: §IV-C.
  • [33] W. Park, D. Kim, Y. Lu, and M. Cho (2019) Relational knowledge distillation. In CVPR, pp. 3967–3976. Cited by: §I.
  • [34] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba (2016) Sequence level training with recurrent neural networks. In ICLR, San Juan, Puerto Rico. Cited by: §III-C1.
  • [35] S. Ren, K. He, R. B. Girshick, and J. Sun (2017) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39 (6), pp. 1137–1149. Cited by: §III-B.
  • [36] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel (2017) Self-critical sequence training for image captioning. In CVPR, pp. 1179–1195. Cited by: §I, §III-B, §III-C1, §IV-A, §IV-C.
  • [37] F. Sammani and M. Elsayed (2019) Look and modify: modification networks for image captioning. In BMVC, pp. 75. Cited by: §I, §I, §II-A.
  • [38] K. Sohn, D. Berthelot, C. Li, Z. Zhang, N. Carlini, E. D. Cubuk, A. Kurakin, H. Zhang, and C. Raffel (2020) FixMatch: simplifying semi-supervised learning with consistency and confidence. CoRR abs/2001.07685. Cited by: §II-B.
  • [39] A. Tarvainen and H. Valpola (2017) Mean teachers are better role models: weight-averaged consistency targets improve semi-supervised deep learning results. In NeurIPS, Long Beach, CA, pp. 1195–1204. Cited by: §II-B.
  • [40] R. Vedantam, C. L. Zitnick, and D. Parikh (2015) CIDEr: consensus-based image description evaluation. In CVPR, pp. 4566–4575. Cited by: §III-C1, §IV-C, §IV-C, §IV-C.
  • [41] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan (2015) Show and tell: A neural image caption generator. In CVPR, pp. 3156–3164. Cited by: §II-A.
  • [42] Q. Xie, Z. Dai, E. H. Hovy, T. Luong, and Q. Le (2020) Unsupervised data augmentation for consistency training. In NeurIPS, Cited by: §II-B.
  • [43] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio (2015) Show, attend and tell: neural image caption generation with visual attention. In ICML, pp. 2048–2057. Cited by: §I, §I.
  • [44] Z. Yang, Y. Yuan, Y. Wu, W. W. Cohen, and R. Salakhutdinov (2016) Review networks for caption generation. In NeurIPS, pp. 2361–2369. Cited by: §I.
  • [45] B. Z. Yao, X. Yang, L. Lin, M. W. Lee, and S. C. Zhu (2010) I2T: image parsing to text description. Proceedings of the IEEE 98 (8), pp. 1485–1508. Cited by: §II-A.
  • [46] Y. Zhou, M. Wang, D. Liu, Z. Hu, and H. Zhang (2020) More grounded image captioning by distilling image-text matching model. In CVPR, pp. 4776–4785. Cited by: §I, §IV-A, §IV-C.