
Order-Free RNN with Visual Attention for Multi-Label Classification

In this paper, we propose joint learning of attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on the use of either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that prediction errors would not propagate and thus affect the performance. Our proposed model uniquely integrates attention and Long Short Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.



We propose a recurrent neural network (RNN) based model for image multi-label classification. Our model uniquely integrates the learning of visual attention and Long Short Term Memory (LSTM) layers, which jointly learn the labels of interest and their co-occurrences, while the associated image regions are visually attended. Different from existing approaches that utilize either model in their network architectures, training of our model does not require pre-defined label orders. Moreover, a robust inference process is introduced so that prediction errors would not propagate and thus affect the performance. Our experiments on the NUS-WIDE and MS-COCO datasets confirm the design of our network and its effectiveness in solving multi-label classification problems.


Introduction

Multi-label classification has been an important and practical research topic, since it requires assigning more than one label to each observed instance. A variety of applications in machine learning, data mining, and computer vision benefit from the development and success of multi-label classification algorithms

[Zhang and Zhou2014, Boutell et al.2004, Schapire and Singer2000, Godbole and Sarawagi2004, Lin et al.2014, Kang et al.2016b, Kang et al.2016a, Shao et al.2016]. A fundamental and challenging issue for multi-label classification is to identify and recover the co-occurrence of multiple labels, so that satisfactory prediction accuracy can be expected.

Recently, the development of deep convolutional neural networks (CNNs) [Krizhevsky, Sutskever, and Hinton2012, Szegedy et al.2015, Simonyan and Zisserman2014, He et al.2016] has brought remarkable progress to several research fields. Due to their ability to learn representations with strong predictive power, CNNs have contributed to the recent success in image classification tasks and beyond [Deng et al.2009, Fei-Fei, Fergus, and Perona2007, Griffin, Holub, and Perona2007]. Despite their effectiveness, how to extend CNNs to multi-label classification problems remains a research direction to explore.

Among a number of research works [Zhang and Zhou2006, Nam et al.2014, Gong et al.2013, Wei et al.2014, Wang et al.2016] that advance CNN architectures for multi-label classification, CNN-RNN [Wang et al.2016] embeds image and semantic structures by projecting both features into a joint embedding space. By further utilizing the Long Short Term Memory (LSTM) component [Hochreiter and Schmidhuber1997], a recurrent neural network (RNN) structure is introduced to memorize long-term label dependency. As a result, CNN-RNN exhibits promising multi-label classification performance with cross-label correlation implicitly preserved.

Figure 1: Illustration of our proposed model for multi-label classification. Note that the joint learning of attention and LSTM layers allows us to identify the label dependency without using any predetermined label order, while the corresponding image regions of interest can be attended to accordingly.

Unfortunately, the above frameworks suffer from the following three different problems. First, due to the use of LSTM, a pre-defined label order is required during training. Take [Wang et al.2016] for example, its label order is determined by the frequencies of labels observed from the training data. In practice, such pre-defined orders of label prediction might not reflect proper label dependency. For example, based on the number of label occurrences, one might obtain the label sequence as {sea, sun, fish}. However, it is obvious that fish is less semantically relevant to sun than sea. For better learning and prediction of such labels, the order of {sea, fish, sun} should be considered. On the other hand, [Jin and Nakayama2016] consider four experimental settings with different label orders: alphabetical order, random order, frequency-first order and rare-first order (note that rare-first is exactly the reverse of frequency-first). It is concluded in [Jin and Nakayama2016] that the rare-first order results in the best performance. Later we will conduct thorough experiments for verification, and show that orders automatically learned by our model would be desirable.

The second concern with the above methods is that labels of objects at smaller scales/sizes in images are often more difficult to recover. As a possible solution, attention maps [Xu et al.2015] have been widely considered in image captioning [Xu et al.2015], image question answering [Yang et al.2016b], and segmentation [Hong et al.2016]. Extracted by different kernels from a certain convolutional layer in a CNN, the corresponding feature maps contain rich information about different patterns in the input image. By further attending to such feature maps, the resulting attention map is able to identify important components or objects in an image. By exploiting the label co-occurrence between the associated objects at different scales or patterns, the above problem can be properly alleviated. However, this technique cannot be easily applied to RNN-based methods for multi-label problems. As noted above, such methods determine the label order based on occurrence frequency. For example, the class person may appear more often than horse in an image collection, and thus the label sequence would be derived as {person, horse}. Even though the image region of horse is typically larger than that of person, predicting person first would not assist in identifying the rider on its back (i.e., this would require the prediction order {horse, person}).

Thirdly, inconsistency between training and testing procedures is often undesirable when solving multi-label classification tasks. To be more precise, during the training phase, the label to be produced at each recurrent step is selected from the ground truth list; however, the labels to be predicted during testing are selected from the entire label set. In other words, if a label is incorrectly predicted at a time step during prediction, such an error would propagate through the recurrent process and thus affect the results.

To resolve the above problems, we present a novel deep learning framework of a visually attended RNN, which consists of visual attention and LSTM models as shown in Fig. 1. In particular, we propose a confidence-ranked LSTM which reflects the label dependency with the introduced visual attention model. Our joint learning framework with the introduced attention model allows us to identify the regions of interest associated with each label. As a result, the order of labels can be automatically learned without any prior knowledge or assumption. As verified later in the experiments, even if objects appear at small scales in the input image, the corresponding image regions would still be visually attended. More importantly, our network architecture applies to both training and testing, and thus the aforementioned inconsistency issue is addressed.

The main contributions of this paper are listed below:

  • Without pre-determining the label order for prediction, our method is able to sequentially learn the label dependency using the introduced LSTM model.

  • The introduced attention model in our architecture allows us to focus on image regions of interest associated with each label, so that improved prediction can be expected even if the objects are of smaller sizes.

  • By jointly learning attention and LSTM models in a unified network architecture, our model performs favorably against state-of-the-art deep learning approaches on multi-label classification, even if the ground truth labels might not be correctly presented during training.

Related Work

We first review the development of multi-label classification approaches. Intuitively, the simplest way to deal with multi-label classification problems is to decompose them into multiple binary classification tasks [Tsoumakas and Katakis2006]. Despite their simplicity, such techniques cannot identify the relationships between labels.

To learn the interdependency between labels for multi-label classification, approaches based on classifier chains [Read et al.2011] were proposed, which capture label dependency via a product of conditional probabilities. However, in addition to the high computation cost when dealing with a larger number of labels, classifier chains have limited ability to capture high-order correlations between labels. On the other hand, probabilistic graphical model based methods

[Li, Zhao, and Guo2014, Li et al.2016] learn label dependencies with graphical structure, and latent space methods [Yeh et al.2017, Bhatia et al.2015] choose to project features and labels into a common latent space. Approaches like [Yang et al.2016a] further utilize additional information like bounding box annotations for learning their models.

Figure 2: Architecture of our proposed network for multi-label classification. The three components indicate the layers for feature mapping, attention, and label prediction, respectively. A set of feature maps is extracted from the feature mapping layer, whose vector output represents the preliminary label prediction used to initiate the LSTM prediction. The attention context vector and the LSTM hidden state are exchanged between the attention and LSTM layers. Finally, the vector output indicating the label probabilities is updated at every time step.

With the recent progress of neural networks and deep learning, BP-MLL [Zhang and Zhou2006] was among the first to utilize neural network architectures to solve multi-label classification. It views each output node as a binary classification task, and relies on the architecture and loss function to exploit the dependency across labels. It was later extended by [Nam et al.2014] with state-of-the-art learning techniques such as dropout.

Furthermore, state-of-the-art DNN-based multi-label algorithms have proposed different loss functions or architectures [Gong et al.2013, Wei et al.2014, Hu et al.2016]. For example, Gong et al. [Gong et al.2013] design a rank-based loss that compensates for the lowest-ranked labels, Wei et al. [Wei et al.2014] generate multi-label candidates on several grids and combine the results with max-pooling, and Hu et al. [Hu et al.2016] propose a structured inference neural network, which uses concept layers modeled with label graphs.

Recurrent neural networks (RNNs) are a type of neural network able to learn sequential connections and internal states. While RNNs have been successfully applied to sequentially learn and predict multiple labels of the data, they typically require a large number of parameters to observe the above association. Nevertheless, an RNN with LSTM [Hochreiter and Schmidhuber1997] is an effective method to exploit label correlation. Research in different fields also applies RNNs to sequential prediction tasks which utilize the long-term dependency in a sequence, such as image captioning [Mao et al.2014], speech recognition [Graves, Mohamed, and Hinton2013], language modeling [Sundermeyer, Schlüter, and Ney2012], and word embedding learning [Le and Zuidema2015]. Among multi-label classification methods, CNN-RNN [Wang et al.2016] is a representative work with promising performance. However, CNN-RNN requires a pre-defined label order for learning, and its limited ability to recognize labels corresponding to objects of smaller sizes is a major concern.

Our Proposed Method

We first define the goal of the task in this paper. The training data consist of a set of instances in a d-dimensional feature space, together with an associated multi-label matrix whose columns correspond to the labels of interest. Each entry of this matrix is a binary value indicating whether an instance belongs to the corresponding label. For multi-label classification, the goal is to predict the multi-label vector for a test input.
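In generic notation (the symbols below are illustrative, since the original ones were not preserved), the setting reads:

```latex
% Training set: N instances with d-dimensional features and m candidate labels
\mathcal{D} = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{N}, \qquad
\mathbf{x}_i \in \mathbb{R}^{d}, \qquad
\mathbf{y}_i \in \{0,1\}^{m}.
% Goal: for a test input \mathbf{x}, predict \hat{\mathbf{y}} \in \{0,1\}^{m}.
```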

A Brief Review of CNN-RNN

CNN-RNN [Wang et al.2016] is a recent deep learning based model for multi-label classification. Since our method can be viewed as an extension, it is necessary to briefly review this model and explain the potential limitations.

As noted earlier, exploiting label dependency would be the key to multi-label classification. Among the first CNN works for tackling this issue, CNN-RNN is composed of a CNN feature mapping layer and a Long Short-Term Memory (LSTM) inference layer. While such an architecture jointly projects the input image and its label vector into a common latent space, the LSTM particularly recovers the correlation between labels. As a result, outputs of multiple labels can be produced at the prediction layer via nearest neighbor search.

Despite its promising performance, CNN-RNN requires a predefined label order for training its model. In addition to the lack of robustness in learning optimal label orders, as confirmed in [Wang et al.2016], labels of objects of smaller sizes would be difficult to predict if their visual attention information is not properly utilized. Therefore, how to introduce the flexibility of learning optimal label orders while jointly exploiting the associated visual information is the focus of our proposed work.

Order-Free RNN with Visual Attention

As illustrated in Fig. 2, our proposed model for multi-label classification has three major components: a feature mapping layer, an attention layer, and an LSTM inference layer. The feature mapping layer extracts visual features from the input image using a pre-trained CNN model, producing a set of feature maps in which each map is learned to describe a corresponding layer of image semantic information. The output of the attention layer then goes through the LSTM inference process, followed by a final prediction layer for producing the label outputs.

During the LSTM inference process, the hidden state vector would update the attention layer with the label inference from the previous time step, guiding the network to visually attend to the next region of interest in the input image. Thus, such a network design allows one to exploit label correlation using the associated visual information. As a result, the optimal order of label sequences can be automatically observed. In the following subsections, we will detail each component of our proposed model.
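As a rough illustration of this inference loop, the following framework-free Python sketch mimics the attend/update/predict cycle with toy stand-ins for the learned layers. The softmax scoring, the simple averaging hidden-state update, and the assumption that the number of steps equals the number of labels are all illustrative choices, not the paper's actual implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(feature_maps, hidden):
    """Score each feature map against the hidden state, then form the
    attention-weighted context vector (soft attention)."""
    scores = [sum(f * h for f, h in zip(fm, hidden)) for fm in feature_maps]
    alphas = softmax(scores)
    dim = len(feature_maps[0])
    return [sum(a * fm[j] for a, fm in zip(alphas, feature_maps))
            for j in range(dim)]

def infer_labels(feature_maps, hidden, score_fn, num_steps):
    """Greedy order-free inference: at each step, attend, update the
    (toy) hidden state, and pick the most confident remaining label."""
    candidates = set(range(num_steps))   # candidate label pool
    predicted = []
    for _ in range(num_steps):
        context = attend(feature_maps, hidden)
        # stand-in for the real LSTM update of the hidden state
        hidden = [0.5 * h + 0.5 * c for h, c in zip(hidden, context)]
        confidences = score_fn(hidden)
        best = max(candidates, key=lambda lab: confidences[lab])
        predicted.append(best)
        candidates.discard(best)         # no duplicate predictions
    return predicted
```

The key structural point this sketch preserves is that the attended context feeds the state update, which in turn determines the next most confident label, so no label order is fixed in advance.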

Feature Mapping Layer

The feature mapping layer first extracts visual features using pre-trained CNN models. Following the design in [Liu et al.2016], we add a fully-connected layer after the convolutional layers, with output dimension equal to the number of labels, which produces the predicted probability for each label as an additional feature vector. Therefore, the CNN probability outputs can be viewed as a preliminary label prediction.

With the ground truth labels given during training (positive labels denoted as 1 and negative ones as 0), learning updates the parameters of this fully-connected layer via the log-likelihood cross-entropy loss, while the parameters of the pre-trained CNN remain fixed. By concatenating the extracted feature maps, we convert them into a single input vector for learning visual attention, from which the output probability vector is produced.
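A preliminary-prediction equation consistent with the description above (a sigmoid over a fully-connected layer applied to the concatenated CNN features; the symbols here are our own) would be:

```latex
% \mathbf{v}: concatenated CNN feature vector; m: number of labels
\mathbf{p} = \sigma\!\left(\mathbf{W}_{f}\,\mathbf{v} + \mathbf{b}_{f}\right) \in [0,1]^{m}
```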


Attention Layer

When predicting multiple labels from an input image, one might suffer from the fact that labels of objects in smaller sizes are not properly identified. For example, person typically occupies a significant portion of an input image, while birds might appear in smaller sizes and around the corners.

In order to alleviate this problem, we introduce an attention layer to our proposed architecture, with the goal of focusing on proper image regions when predicting the associated labels. Inspired by Xu et al. [Xu et al.2015], who advocated a soft attention-based image caption generator, we adopt the same network component in our framework. For multi-label classification, this allows us to focus on and describe the image regions of interest during prediction, while implicitly exploiting the label co-occurrence information. In our proposed framework, this attention layer generates a context vector consisting of weights for each feature map, so that the attended image region can be obtained during each iteration. Later we will also explain that, with such a network design, we can observe the optimal label order when learning our RNN-based multi-label classification model.

Following the multi-layer perceptron structure of [Xu et al.2015], our attention layer is conditioned on the previous hidden state. For each feature map, the attention layer generates a weight representing the importance of that feature in the input image for predicting the label at this time step. The attention function is the same as the model in [Xu et al.2015], and the hidden state will be detailed in the next section.

With these weights, we derive the context vector via the soft attention mechanism, i.e., as the attention-weighted combination of the feature maps.
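Following the cited soft-attention formulation of Xu et al. [Xu et al.2015], this step plausibly takes the form (the symbols are our own):

```latex
% v_i: i-th feature map; h_{t-1}: previous LSTM hidden state
e_{t,i} = f_{\mathrm{att}}\!\left(\mathbf{v}_i, \mathbf{h}_{t-1}\right), \qquad
\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{k}\exp(e_{t,k})}, \qquad
\mathbf{z}_t = \sum_{i} \alpha_{t,i}\,\mathbf{v}_i
```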


Later, our experiments will visualize and evaluate the contribution of our attention model for multi-label classification.

Input: Feature maps and label vector y of an image
Parameters: ResNet fully-connected layer, attention layer, LSTM layer, prediction layer, and iteration number
Output: Soft confidence vector
Randomly initialize the parameters
Train the fully-connected layer by the log-likelihood cross-entropy loss and obtain the preliminary prediction
       for each iteration do
             Obtain the context vector by (4) and (5)
             Obtain the hidden state by (6)
             Obtain the soft confidence vector by (7)
             Obtain the hard predicted label vector by (9)
             Update the candidate pool by (10)
             Compute the log-likelihood cross-entropy between the soft confidence vector and y
             Perform gradient descent on the attention, LSTM, and prediction layers
until the attention, LSTM, and prediction layers converge
Algorithm 1 Training of Our Proposed Model

Confidence-Ranked LSTM

As an extension of the recurrent neural network (RNN), the LSTM additionally consists of three gate neurons: forget, input, and output gates. The forget gate learns proper weights for erasing the memory cell, the input gate is learned to describe the input data, and the output gate controls how the memory should be output.

In order to exploit and capture the dependency between labels, the LSTM model in our network architecture needs to identify which label exhibits a high confidence at each time step. Thus, we concatenate the soft confidence vector from the previous time step (initialized with the preliminary CNN prediction when t = 1), the context vector, and the previously predicted hard label vector for deriving the current hidden state vector. This state vector is controlled by the aforementioned three gate components. By observing the long-term dependency between labels via the above structure, we can exploit the resulting label correlation for improved multi-label classification.

We note that, to predict multi-label outputs using the LSTM, we pass the hidden state through an additional prediction layer consisting of two fully-connected layers, resulting in a soft confidence vector at time t. The hard predicted label indicates the most confident class at time step t, and is then appended to the hard predicted label vector. In the testing phase, by collecting these labels until the stopping condition described in the next section is met, the final predicted multi-label vector can be obtained.

More specifically, the current hidden state is computed by the LSTM model from the concatenated inputs above, and the soft confidence vector is predicted from this hidden state through the prediction layer. The cross-entropy loss to minimize at the output layer at time t is then computed between the sigmoid of the prediction and the ground truth label vector.
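A version of these equations consistent with the surrounding description (our own symbols: $\mathbf{s}_t$ the soft confidence vector, $\mathbf{z}_t$ the context vector, $\hat{\mathbf{y}}_{t-1}$ the hard label vector, $\phi$ the nonlinearity of the first fully-connected layer, and $\sigma$ the sigmoid) would read:

```latex
\mathbf{h}_t = \mathrm{LSTM}\!\left(\left[\mathbf{s}_{t-1};\, \mathbf{z}_t;\, \hat{\mathbf{y}}_{t-1}\right],\, \mathbf{h}_{t-1}\right), \qquad
\mathbf{s}_t = \sigma\!\left(\mathbf{W}_2\,\phi\!\left(\mathbf{W}_1 \mathbf{h}_t\right)\right),
% Cross-entropy loss against the ground truth vector y at time t:
\ell_t = -\sum_{j=1}^{m} \Big[ y_j \log s_{t,j} + (1 - y_j)\log\!\left(1 - s_{t,j}\right) \Big].
```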

It is worth noting that the main difficulty of applying LSTM to multi-label classification is its requirement of a ground truth label order during the training process. By simply calculating the cross-entropy between the confidence vector and the ground truth multi-label vector y, one would not be able to define the order of label prediction for learning the LSTM. Moreover, it would be desirable for the label order to reflect the semantic dependency between labels present in the training image data.

With the above observation, we view the LSTM in the proposed network architecture as a confidence-ranked LSTM. Once the previous soft confidence vector and hard predicted label vector are produced, our model updates the LSTM and the attention layer, and we are able to produce the next prediction accordingly. In other words, our model achieves visual attention on objects of semantic interest in the input image, without requiring one to pre-define any specific label order. Therefore, unlike previous works such as [Wang et al.2016], the training of our model does not require the selection of ground truth labels in a predetermined order. Instead, we calculate the loss by comparing the soft confidence vector with the ground truth label vector directly. With our visual attention plus LSTM components, the training process is consistent with the testing stage. Since the above process relies on visual semantic information for multi-label prediction, one of the major advantages of our model is that possible error propagation problems of RNN-based approaches can be alleviated.

Order-Free Training and Testing


We now explain how our model achieves order-free learning and prediction. As shown in Fig. 2, our network produces a label output at each time step, where each output denotes the label with the highest confidence at that time step. To avoid duplicate label outputs at different time steps, we apply the concept of a candidate label pool as follows.

To initialize the inference process for multi-label learning, the candidate label pool simply contains all labels. At each time step, the most confident label is selected from the candidate pool, and the pool is then updated by removing that label. That is, the set of candidate labels to be predicted at the next time step is the current set minus the label just predicted, so the cardinality of the candidate label set decreases by one at each time step.
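The candidate-pool update can be sketched in a few lines. In this toy version (label names and confidence scores are illustrative), the argmax at each step is restricted to labels still in the pool, so no label is predicted twice:

```python
def predict_sequence(confidence_per_step, labels):
    """Greedily pick one label per step from a shrinking candidate pool.

    confidence_per_step: list of {label: confidence} dicts, one per time step.
    labels: the full label set used to initialize the pool.
    """
    candidates = set(labels)          # pool starts as the full label set
    sequence = []
    for confidences in confidence_per_step:
        # restrict the argmax to labels still in the pool
        best = max(candidates, key=lambda lab: confidences[lab])
        sequence.append(best)
        candidates.remove(best)       # pool shrinks by one each step
    return sequence
```

For instance, with per-step confidences favoring sea first and then fish, the pool guarantees sea is not emitted again at the second step.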


We note that the labels to be predicted during the testing stage can be obtained sequentially using the learned model. However, even with the introduction of the attention layers, a prediction error at one time step would be propagated and significantly degrade the prediction performance.

Inspired by [Wang et al.2016], we apply the technique of beam search to alleviate the above problem, so that the prediction process is more robust to intermediate prediction errors. More precisely, beam search keeps the best few prediction paths at each time step. At the next time step, it searches over all successor paths generated from these paths, updates the path probability for each successor, and maintains the best candidates for the following time steps.

In our work, a prediction path represents a sequence of predicted labels with a corresponding path probability, which is calculated by multiplying the probabilities of all the nodes along the prediction path, given the input image.


Finally, the prediction process via beam search would terminate under the following two conditions:
1. The probability output of a particular prediction path is below a threshold (which is determined by cross-validation).
2. The length of the prediction path reaches a pre-defined maximum length (which is the largest number of labels in the training set).
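The beam-search procedure with these two stopping conditions can be sketched as follows. The scoring interface, beam width, and threshold here are illustrative assumptions; the real model would supply per-step label probabilities from the LSTM:

```python
def beam_search(step_probs, beam_width, threshold, max_len):
    """step_probs(prefix) -> {label: probability} for the next step,
    over labels not already in prefix (the candidate pool)."""
    beams = [((), 1.0)]                      # (label sequence, path probability)
    finished = []
    for _ in range(max_len):                 # condition 2: maximum path length
        expanded = []
        for prefix, prob in beams:
            for label, p in step_probs(prefix).items():
                new_prob = prob * p          # multiply node probabilities
                if new_prob < threshold:
                    # condition 1: path probability fell below the threshold,
                    # so terminate this path as-is
                    finished.append((prefix, prob))
                else:
                    expanded.append((prefix + (label,), new_prob))
        if not expanded:
            break
        expanded.sort(key=lambda b: b[1], reverse=True)
        beams = expanded[:beam_width]        # keep only the best paths
    finished.extend(beams)
    return max(finished, key=lambda b: b[1])
```

With two labels whose per-step probabilities are 0.9 and 0.6, the best surviving path multiplies to 0.54 before the length limit stops the search.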



Experiments

Method C-P C-R C-F1 O-P O-R O-F1
Frequency-first (w/ atten)
Rare-first (w/ atten)
Ours (w/o atten)
Table 1: Evaluation on NUS-WIDE. Note that Macro/Micro P/R/F1 scores are abbreviated as C-/O-P/R/F1, respectively. Ours (w/o atten) and Frequency/Rare-first (w/ atten) denote our method with the attention layer removed and with the associated pre-defined label orders, respectively.

To implement our proposed architecture, we apply a ResNet-152 [He et al.2016] network pre-trained on ImageNet without fine-tuning, and use the fourth convolution layer from the bottom for visual feature extraction. We also add a fully-connected layer, with output dimension equal to the number of labels, after the convolutional layers. We employ the Adam optimizer with the learning rate at 0.0003 and the dropout rate at 0.8. We perform validation on the stopping threshold for beam search. As for the parameters of the attention and LSTM models, we follow the settings of [Xu et al.2015] in our implementation.

To evaluate the performance of our method and to perform comparisons with state-of-the-art methods, we report results on the benchmark datasets of NUS-WIDE and MS-COCO as discussed in the following subsections.


NUS-WIDE is a web image dataset which includes 269,648 images with a total of 5,018 tags collected from Flickr. The collected images are further manually labeled into 81 concepts, including objects and scenes. We follow the setting of WARP [Gong et al.2013] for experiments by removing images without any label, i.e., 150,000 images are considered for training, and the rest for testing.

We compare our results with state-of-the-art NN-based models: WARP [Gong et al.2013] and CNN-RNN [Wang et al.2016]. We also perform several controlled experiments: (1) removing the attention layer, and (2) fixing orders by different methods as suggested by [Jin and Nakayama2016] during training. Frequency-first indicates the labels are sorted by frequency, from high to low, and rare-first is exactly the reverse of frequency-first. The results are listed in Table 1. From this table, we see that our model performs favorably against baseline and state-of-the-art multi-label classification algorithms. This demonstrates the effectiveness of our method in learning proper label orderings for sequential label prediction. Finally, our full model achieves the best performance, which further supports the exploitation of visually attended regions for improved multi-label classification.

Figure 3: Example images with correct label prediction on NUS-WIDE (a) and MS-COCO (c); those with incorrect prediction are shown in (b) and (d), respectively. For each image (with ground truth labels noted below), the associated attention maps are presented at the right-hand side, showing the regions of interest visually attended to. Note that some incorrectly predicted labels (in red) were expected and reasonable due to noisy ground truth labels, while the resulting visual attention maps successfully highlight the attended regions.

In Fig. 3(a), we present example images with correct label prediction. We see that our model was able to predict labels depending on what it actually attended to. For example, since ‘person’ is a frequent label in the dataset, the CNN-RNN framework tended to predict it first, because its label order was defined by the label occurrence frequency observed during the training stage. In contrast, our model was able to predict animal and horses first, which were actually easier to predict based on their visual appearance in the input image. On the other hand, examples of incorrect predictions are shown in Fig. 3(b). It is worth pointing out that, as can be seen from these results, the predictions were actually intuitive and reasonable, and the incorrect predictions were due to noisy ground truth labels. From the above observations, we verify that our method is able to identify semantic ordering and visually adapt to objects of different sizes, even given noisy or incorrect label data during the training stage.


Method C-P C-R C-F1 O-P O-R O-F1
Frequency-first (w/ atten)
Rare-first (w/ atten)
Ours (w/o atten)
Table 2: Performance comparisons on MS-COCO. Ours (w/o atten) and Frequency/Rare-first (w/ atten) denote our method with the attention layer removed and with the associated pre-defined label orders, respectively.

MS-COCO is a dataset typically considered for image recognition, segmentation, and captioning. The training set consists of 82,783 images with up to 80 annotated object labels. The test set of this experiment utilizes the validation set of MS-COCO (40,504 images), since the ground truth labels of the original test set of MS-COCO are not provided. In the experiments, we compare our model with the WARP [Gong et al.2013] and CNN-RNN [Wang et al.2016] models in Table 2. It can be seen that the full version of our model achieves performance improvements over the ResNet-based baseline by 4.1% in C-F1 and by 5.6% in O-F1.

In Figures 3(c) and 3(d), we also present example images with correct and incorrect prediction. It is worth noting that, in the upper left example in Fig. 3(c), although the third attention map corresponded to the label prediction of surfboard, it did not properly focus on the object itself. Instead, it took the surrounding image regions into consideration. Combining the information provided by the hidden state, it still successfully predicted the correct label. This illustrates the ability of our model to utilize both local and global information in an image during multi-label prediction.


Conclusion

We proposed a deep learning model for multi-label classification, which consists of a visual attention model and a confidence-ranked LSTM. Unlike existing RNN-based methods requiring predetermined label orders for training, the joint learning of the above components in our proposed architecture allows us to observe proper label sequences with visually attended regions for performance guarantees. In our experiments, we provided quantitative results to support the effectiveness of our method. In addition, we also verified its robustness in label prediction, even when the training data are noisy and incorrectly annotated.


  • [Bhatia et al.2015] Bhatia, K.; Jain, H.; Kar, P.; Varma, M.; and Jain, P. 2015. Sparse local embeddings for extreme multi-label classification. In Advances in Neural Information Processing Systems, 730–738.
  • [Boutell et al.2004] Boutell, M. R.; Luo, J.; Shen, X.; and Brown, C. M. 2004. Learning multi-label scene classification. Pattern Recognition 37(9):1757–1771.
  • [Deng et al.2009] Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, 248–255. IEEE.
  • [Fei-Fei, Fergus, and Perona2007] Fei-Fei, L.; Fergus, R.; and Perona, P. 2007. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer vision and Image understanding 106(1):59–70.
  • [Godbole and Sarawagi2004] Godbole, S., and Sarawagi, S. 2004. Discriminative methods for multi-labeled classification. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 22–30. Springer.
  • [Gong et al.2013] Gong, Y.; Jia, Y.; Leung, T.; Toshev, A.; and Ioffe, S. 2013. Deep convolutional ranking for multilabel image annotation. arXiv preprint arXiv:1312.4894.
  • [Graves, Mohamed, and Hinton2013] Graves, A.; Mohamed, A.-r.; and Hinton, G. 2013. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, 6645–6649. IEEE.
  • [Griffin, Holub, and Perona2007] Griffin, G.; Holub, A.; and Perona, P. 2007. Caltech-256 object category dataset.
  • [He et al.2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
  • [Hochreiter and Schmidhuber1997] Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
  • [Hong et al.2016] Hong, S.; Oh, J.; Lee, H.; and Han, B. 2016. Learning transferrable knowledge for semantic segmentation with deep convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3204–3212.
  • [Hu et al.2016] Hu, H.; Zhou, G.-T.; Deng, Z.; Liao, Z.; and Mori, G. 2016. Learning structured inference neural networks with label relations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2960–2968.
  • [Jin and Nakayama2016] Jin, J., and Nakayama, H. 2016. Annotation order matters: Recurrent image annotator for arbitrary length image tagging. In Pattern Recognition (ICPR), 2016 23rd International Conference on, 2452–2457. IEEE.
  • [Kang et al.2016a] Kang, K.; Li, H.; Yan, J.; Zeng, X.; Yang, B.; Xiao, T.; Zhang, C.; Wang, Z.; Wang, R.; Wang, X.; et al. 2016a. T-cnn: Tubelets with convolutional neural networks for object detection from videos. arXiv preprint arXiv:1604.02532.
  • [Kang et al.2016b] Kang, K.; Ouyang, W.; Li, H.; and Wang, X. 2016b. Object detection from video tubelets with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 817–825.
  • [Krizhevsky, Sutskever, and Hinton2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097–1105.
  • [Le and Zuidema2015] Le, P., and Zuidema, W. 2015. Compositional distributional semantics with long short term memory. arXiv preprint arXiv:1503.02510.
  • [Li et al.2016] Li, Q.; Qiao, M.; Bian, W.; and Tao, D. 2016. Conditional graphical lasso for multi-label image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2977–2986.
  • [Li, Zhao, and Guo2014] Li, X.; Zhao, F.; and Guo, Y. 2014. Multi-label image classification with a probabilistic label enhancement model. In UAI, volume 1,  3.
  • [Lin et al.2014] Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft coco: Common objects in context. In European Conference on Computer Vision, 740–755. Springer.
  • [Liu et al.2016] Liu, F.; Xiang, T.; Hospedales, T. M.; Yang, W.; and Sun, C. 2016. Semantic regularisation for recurrent image annotation. arXiv preprint arXiv:1611.05490.
  • [Mao et al.2014] Mao, J.; Xu, W.; Yang, Y.; Wang, J.; Huang, Z.; and Yuille, A. 2014. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632.
  • [Nam et al.2014] Nam, J.; Kim, J.; Mencía, E. L.; Gurevych, I.; and Fürnkranz, J. 2014. Large-scale multi-label text classification—revisiting neural networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 437–452. Springer.
  • [Read et al.2011] Read, J.; Pfahringer, B.; Holmes, G.; and Frank, E. 2011. Classifier chains for multi-label classification. Machine learning 85(3):333.
  • [Schapire and Singer2000] Schapire, R. E., and Singer, Y. 2000. Boostexter: A boosting-based system for text categorization. Machine learning 39(2-3):135–168.
  • [Shao et al.2016] Shao, J.; Loy, C.-C.; Kang, K.; and Wang, X. 2016. Slicing convolutional neural network for crowd video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5620–5628.
  • [Simonyan and Zisserman2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  • [Sundermeyer, Schlüter, and Ney2012] Sundermeyer, M.; Schlüter, R.; and Ney, H. 2012. Lstm neural networks for language modeling. In Interspeech, 194–197.
  • [Szegedy et al.2015] Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2015. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–9.
  • [Tsoumakas and Katakis2006] Tsoumakas, G., and Katakis, I. 2006. Multi-label classification: An overview. International Journal of Data Warehousing and Mining 3(3):1–13.
  • [Wang et al.2016] Wang, J.; Yang, Y.; Mao, J.; Huang, Z.; Huang, C.; and Xu, W. 2016. Cnn-rnn: A unified framework for multi-label image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2285–2294.
  • [Wei et al.2014] Wei, Y.; Xia, W.; Huang, J.; Ni, B.; Dong, J.; Zhao, Y.; and Yan, S. 2014. Cnn: Single-label to multi-label. arXiv preprint arXiv:1406.5726.
  • [Xu et al.2015] Xu, K.; Ba, J.; Kiros, R.; Cho, K.; Courville, A. C.; Salakhutdinov, R.; Zemel, R. S.; and Bengio, Y. 2015. Show, attend and tell: Neural image caption generation with visual attention. In ICML, volume 14, 77–81.
  • [Yang et al.2016a] Yang, H.; Tianyi Zhou, J.; Zhang, Y.; Gao, B.-B.; Wu, J.; and Cai, J. 2016a. Exploit bounding box annotations for multi-label object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 280–288.
  • [Yang et al.2016b] Yang, Z.; He, X.; Gao, J.; Deng, L.; and Smola, A. 2016b. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 21–29.
  • [Yeh et al.2017] Yeh, C.-K.; Wu, W.-C.; Ko, W.-J.; and Wang, Y.-C. F. 2017. Learning deep latent space for multi-label classification. In AAAI, 2838–2844.
  • [Zhang and Zhou2006] Zhang, M.-L., and Zhou, Z.-H. 2006. Multilabel neural networks with applications to functional genomics and text categorization. IEEE transactions on Knowledge and Data Engineering 18(10):1338–1351.
  • [Zhang and Zhou2014] Zhang, M.-L., and Zhou, Z.-H. 2014. A review on multi-label learning algorithms. IEEE transactions on knowledge and data engineering 26(8):1819–1837.