Picture What You Read

by Ignazio Gallo, et al.

Visualization refers to our ability to create an image in our head based on the text we read or the words we hear. It is one of the many skills that make reading comprehension possible. Convolutional Neural Networks (CNNs) are an excellent tool for recognizing and classifying text documents; in addition, they can generate images conditioned on natural language. In this work, we exploit the capabilities of CNNs to generate realistic images that are representative of the text and illustrate its semantic concept. We conducted various experiments to highlight the capacity of the proposed model to generate images representative of the text descriptions given as input.








I Introduction

Recent years have seen a surge in multimodal data containing various media types. Typically, users combine text, image, audio or video to sell a product on an e-commerce platform or to express views on social media. The combination of these media types has been extensively studied to solve various tasks, including classification [2, 13, 9], cross-modal retrieval [25], semantic relatedness [12, 16], image captioning [11, 26], multimodal named entity recognition [29, 3] and Visual Question Answering [7, 1]. In addition, multimodal data fueled an increased interest in generating images conditioned on natural language [23, 4]. In recent years, generative models based on conditional Generative Adversarial Networks (GANs) have remarkably improved the text-to-image generation task [10, 30]. Furthermore, generative models based on Variational Autoencoders have been employed to generate images conditioned on natural language [18, 17]. Generally, image generation from natural language is divided into two phases: the first phase learns the distribution from which the images are to be generated, while the second phase learns a generator, which in turn produces the image conditioned on a vector from this distribution.

In this work, we are interested in transforming natural language, in the form of technical e-commerce product specifications, directly into image pixels. For example, image pixels may be generated from a text description such as ‘‘Heavy Duty All Purpose Hammer - Forged Carbon Steel Head’’, as shown in Fig. 1. We assume we are given the technical specifications of a set of images available on e-commerce platforms, and train the generator block inside our model on the pixel distribution. We propose to use an ‘up-convolutional’ generative block for this task and show that it is capable of generating realistic e-commerce images. Fig. 3 shows some generated images conditioned on technical product specifications along with the original images.

Fig. 1: A neural model reading natural language (‘‘Heavy Duty All Purpose Hammer - Forged Carbon Steel Head’’) can generate a representative image (‘‘Hammer’’).

Following are the main contributions of our work:

  • We propose a new loss function to transform a text description into a representative image;

  • The proposed model generates images conditioned on technical e-commerce specifications. Moreover, it generates images never seen before;

  • We propose an end-to-end convolutional model capable of classifying the text and, at the same time, generating a representative image of the text. The generated image can be used as a text encoding or as a realistic image representing the object described in the input document;

  • We propose a model that can also be used to transform a multimodal dataset into a single dataset of images.

Fig. 2: A schematic representation of the proposed model. The model extracts features from the text document using an embedding layer and three different convolutional filters. Through different deconvolutional layers, the textual features are transformed into an image representative of the text. Finally, through convolutional layers, the image and the encoded text are classified.

II Related Work

Recently, image generation conditioned on natural language has drawn a lot of attention from the research community. Various approaches have been proposed based on Variational Autoencoders [18, 17], auto-regressive models [24] and optimization techniques [20]. Similarly, GAN-based approaches have noticeably improved image synthesis conditioned on natural language. These approaches consist of a generator and a discriminator that compete in a two-player minimax game: the discriminator tries to distinguish real data from generated images, and the generator tries to trick the discriminator. In the proposed model, the generator and discriminator are part of the same model and are linked by a single loss function, in order to generate images within the classification process. In the following paragraph, we review a couple of the groundbreaking approaches to image generation conditioned on natural language.

Reed et al. [23] proposed to learn both generator and discriminator conditioned on captions. Zhu et al. [30] proposed a generative method to synthesize visual features using noisy text descriptions about an unseen class. Xu et al. [27] proposed an attentional generative network to synthesize fine-grained details at different subregions of the image by paying attention to the relevant words in the natural language description. Zhang et al. [28] decomposed text-to-image generation into two stages: the first-stage GAN sketches the basic shape and color of the object conditioned on the natural language, resulting in a low-resolution image, while the second-stage GAN takes the first-stage result and the natural language to generate a high-resolution image with photo-realistic details.

Furthermore, various approaches have exploited the capability of ‘up-convolutional’ networks to generate realistic images. Dosovitskiy et al. [6] trained a deconvolutional network with several layers of convolution and upsampling to generate 3D chair renderings given object style, viewpoint and color. In this work, we are interested in generating new images through up-sampling, but we limit this generative process to a medium resolution.

III Model Description

Our goal is to train a neural network to generate accurate e-commerce images from low-level and noisy text descriptions. We develop an effective loss function that transforms a text document into a representative image and, at the same time, exploits the information content of the image and the text to solve a classification problem.

Formally, we assume that we are given a dataset of examples {(x_i, y_i)}, with inputs x_i and targets y_i. The inputs x_i are text descriptions describing the objects shown in the images. The targets y_i are tuples consisting of two elements: the class label l_i in one-hot encoding and an RGB image v_i.

Fig. 3: Some examples of generated images from the test dataset, associated with a correct classification. In some cases, the generated images are slightly different from the respective expected image, but the object shown is very similar. Each example pairs a generated image and the original image with its input text, for example:

  • draper expert knipex 27723 side trimmer for electronics for cutting head satin without facet 115 mm
  • spax universal screw half round head t star plus 4 cut partial shiny thread galvanized with a2j galvanization
  • screws mustad panel vitals bronzed 3x20 mm. conf. 500 pcs universal countersunk flat head screw suitable for screwing ...
  • sicutool padlocks 2090p 50 width body 50 mm
  • sicutool copper hammers 2718 800 total weight 800 copper hammers
  • sicutool nippers for electronics and fine mechanics 557gf length total mm 125 wire cutters for electronics and fine mechanics
  • sicutool circular saws with teeth shown in hartmetall 4840g 150b type null 150b ø mm 150 thickness mm 2 8 hole mm 16 teeth nr 20
  • valex wrench 20 x 22mm inclined forks of 15 ° body in chrome vanadium steel with polished finish. size 20x22 mm length 235 mm
  • syrom adhesive tape in textile tes special 38 mm x 2 7 m black tightly woven plasticized tape to repair binding edges etc.
  • oem 20 pcs sticker number 6 mm 50 black pvc sticker number 6

Neural networks typically produce class probabilities by using a ‘‘softmax’’ output layer that converts the logit z_i, computed for each class i, into a probability p_i by comparing z_i with the other logits. The softmax function takes a K-dimensional vector of real numbers and transforms it into a vector of real numbers in the range (0, 1) which add up to 1:

  p_i = exp(z_i) / Σ_j exp(z_j)    (1)
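The softmax computation described above can be sketched in a few lines of NumPy; this is an illustrative implementation, not the authors' code.

```python
import numpy as np

def softmax(z):
    # shift by the max logit for numerical stability; the result is unchanged
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
```

Larger logits map to larger probabilities, and the outputs always sum to 1.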


‘‘Cross entropy’’ indicates the distance between what the model believes the output distribution should be (the softmax probabilities p) and what the original distribution really is (the one-hot label l). It is defined as

  L_CE = − Σ_i l_i log(p_i)    (2)


Cross entropy is a widely used alternative to the squared error. If we want to minimize the pixel-by-pixel distance between the input image v, associated with the text document x, and a CNN feature layer G that has the same dimensions as the image, then we can apply the following formula

  L_SE = Σ_j (v_j − G_j)^2    (3)

or the following mean version over the M pixels

  L_MSE = (1/M) Σ_j (v_j − G_j)^2    (4)

Here G is the output of the last transposed convolutions -- also called fractionally strided convolutions -- used to upsample the text features to attain a feature layer having the same size as the image v.


The final loss function we used in this work is the following

  L = L_CE + λ · L_SE    (5)

but we also performed experiments replacing L_SE with L_MSE:

  L = L_CE + λ · L_MSE    (6)

In this last case the contribution of the λ parameter has a different effect for the same values, because the intervals of variability of L_SE and L_MSE are very different. For example, using Eq. 6 as the loss function, we need much larger λ values to take advantage of the contribution of L_MSE and obtain a realistic image, representative of the object described in the text x.

The λ parameter is important to balance the contribution of L_SE against L_CE. Setting λ = 0 we minimize only L_CE, and therefore the feature layer representing the input text will be very different from the image v. Fig. 5 gives a graphical representation of the images learned by minimizing Eq. 5 with various λ values; it is important to note that when λ = 0 the model does not generate realistic images.
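As a sketch, the combined objective (classification cross entropy plus a λ-weighted pixel distance) can be written in NumPy as follows; the function and argument names are illustrative, not taken from the paper's code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def combined_loss(logits, label_onehot, generated, target, lam=1.0, mean=False):
    # classification term: cross entropy between predicted probabilities and the one-hot label
    p = softmax(logits)
    ce = -np.sum(label_onehot * np.log(p + 1e-12))
    # generative term: pixel-by-pixel squared distance, summed (Eq. 3) or averaged (Eq. 4)
    diff = (target - generated) ** 2
    pix = diff.mean() if mean else diff.sum()
    # Eq. 5 (sum version) or Eq. 6 (mean version)
    return ce + lam * pix
```

With lam=0 only the classification term survives, matching the observation that the generated feature layer then stops resembling the target image.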

Learning proceeds by minimizing the loss function via the Adam optimizer [15]. Since we use a combination of two loss functions to modify the weights of the entire neural network, the image we generate also contains text representations. In fact, in many cases the image cannot be interpreted as one of the objects belonging to a particular class, and yet the classification is correct.

We experimented with a network for generating medium-resolution RGB images. The structure of the generative network is shown in Fig. 2. Conceptually, the network we propose can be seen as a convolutional classification model for text documents that incorporates a generative network. The starting point of the proposed model is the sentence classification model proposed by Kim [14], used to transform a text document into a feature vector. This is followed by a set of 4 deconvolutional layers that transform the textual features into an RGB image. Finally, we used a sequence of 4 convolutional layers that transform the generated image into features for the classification problem.

III-A Text Features

The input to our model is a sequence of words from each input document x, where each word is drawn from a vocabulary V. Words are represented by distributional vectors looked up in a word-embedding matrix W. This matrix is formed by simply concatenating the embeddings of all words in V.

For each input text x, we build a matrix S, where each row represents a word embedding at the corresponding position in the document x. To capture and compose features of individual words in a given text, from low-level word embeddings into higher-level semantic concepts, the neural network applies a series of transformations to the input matrix S using convolution, non-linearity and pooling operations. The convolution operation between S and a filter F of height h results in a vector c. In our model we used three groups of kernels with different heights in parallel, obtaining three feature maps of different lengths. Note that each convolution filter has the same width as the input sentence matrix, so this is like a 1-D convolution operation. To allow the network to learn an appropriate threshold, we also added a bias vector b for each feature map c.

Each convolutional layer is followed by the Rectified Linear Unit (ReLU) non-linear activation function, applied element-wise. ReLU, defined as ReLU(x) = max(0, x), speeds up the training process [19] and ensures that the feature maps are always positive. To capture the most important feature -- the one with the highest value -- for each feature map, we apply a max-over-time pooling operation [5] over each feature map. In this way, for each particular filter we take the maximum value as the feature corresponding to that filter. The convolutional layer with its activation function and the pooling layer act as a non-linear feature extractor.

Up to this point we have described the process by which a single feature is extracted from a single filter. These individual features are concatenated into a single layer and then connected to a subsequent fully-connected layer, whose purpose is to connect the textual features with the next block of deconvolution layers used to transform them into an image.
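The extraction of one feature per filter, as described above, can be sketched as follows. The filter heights (3, 4, 5) and the embedding size are placeholders, since the paper's exact kernel sizes are not recoverable from this copy of the text.

```python
import numpy as np

def conv_relu(S, F, b=0.0):
    # S: (n, d) sentence matrix; F: (h, d) filter as wide as the embeddings (1-D convolution)
    n, h = S.shape[0], F.shape[0]
    c = np.array([np.sum(S[i:i + h] * F) + b for i in range(n - h + 1)])
    return np.maximum(c, 0.0)  # element-wise ReLU keeps the feature map positive

def max_over_time(c):
    # keep only the strongest response of the feature map
    return c.max()

rng = np.random.default_rng(1)
S = rng.standard_normal((10, 8))                       # 10 words, 8-dim embeddings
filters = [rng.standard_normal((h, 8)) for h in (3, 4, 5)]
features = [max_over_time(conv_relu(S, F)) for F in filters]
```

The three pooled values would then be concatenated and passed to the fully-connected layer.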

III-B Up-sampling

The purpose of the up-sampling block is to transform the features extracted from the text into the image format that best represents the description contained in the text.

We use 4 deconvolution layers, each of which doubles the spatial size of the input features. In practice, we start with 512 feature maps, then move to a second layer with 256 feature maps at twice the resolution, followed by a layer with 128 feature maps, another layer with 64 feature maps and, finally, a last layer with 3 feature maps. Each up-sampling block consists of nearest-neighbor up-sampling followed by a stride-1 convolution. Batch normalization and ReLU activation are applied after every convolution except the last one, where we used a sigmoid to guarantee that the values of the feature maps are all in the range [0, 1]. This last layer can be interpreted directly as an RGB image.
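Nearest-neighbor up-sampling and the final sigmoid can be sketched in NumPy as below. The 8×8 starting resolution is a hypothetical value, since the exact feature-map sizes did not survive in this copy of the text.

```python
import numpy as np

def nn_upsample(x, factor=2):
    # nearest-neighbor up-sampling: repeat each pixel along both spatial axes
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

fmap = np.random.default_rng(2).standard_normal((512, 8, 8))  # hypothetical starting size
up = nn_upsample(fmap)  # doubles the spatial size; a learned stride-1 convolution would follow
rgb = sigmoid(np.random.default_rng(3).standard_normal((3, 16, 16)))  # final maps squashed into [0, 1]
```

After four such doublings and a convolution down to 3 channels, the sigmoid output can be read directly as an image.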

III-C Classification

The last block is similar to a convolutional neural network that feeds the generated image into 4 successive convolutional layers of decreasing spatial size. Starting from the largest layer, we used strided convolutions with 64, 32, 16 and 8 filters respectively.

The convolutional layers are followed by two fully connected layers having 1024 and 512 neurons respectively.

For regularization we employ dropout before the output layer, with a constraint on the l2-norms of the weight vectors. Dropout prevents co-adaptation of hidden units by randomly dropping out a proportion p of the hidden units during the forward pass at training time.

The last layer is the output layer that has a number of neurons equal to the number of classes of the problem that we want to learn.
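Dropout as described (dropping a proportion p of hidden units) is commonly implemented in 'inverted' form, rescaling the surviving units so that no change is needed at test time; a minimal sketch, not the authors' code:

```python
import numpy as np

def dropout(x, p, rng):
    # zero a proportion p of units; rescale the rest so the expected activation is unchanged
    if p == 0.0:
        return x
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

rng = np.random.default_rng(3)
h = np.ones(1000)
d = dropout(h, 0.5, rng)  # roughly half the units are zeroed, the rest become 2.0
```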

Fig. 4: 11 different executions made on the validation set of the Ferramenta dataset, to estimate the best value for the λ parameter of the proposed loss function in Eq. 5. The plot highlights the best λ value obtained.

IV Datasets

In a multimodal dataset, modalities are obtained from multiple input sources. The dataset used in this work consists of images and accompanying text descriptions. We selected the Ferramenta multimodal dataset [9], which was created from an e-commerce website; Table I shows information on this dataset. The Ferramenta dataset is made up of 88,010 adverts, split into 66,141 adverts for the train set and 21,869 adverts for the test set, belonging to 52 classes. It provides a text and a representative image for each commercial advertisement. It is interesting to note that the text descriptions in this dataset are in Italian.

Dataset #Cls Train Test Lang.
Ferramenta 52 66,141 21,869 IT
TABLE I: Information on multi-modal datasets used in this work. A multi-modal dataset consists of an image and accompanying text description. The last column indicates the text description language.

Another dataset used in our work is the Oxford-102 Flowers dataset [21], containing 8,189 flower images in 102 categories. Each image in this dataset is annotated with 10 descriptions provided by [22]. Because this dataset has class-disjoint training and test sets, with 82 train+val classes and 20 test classes, we randomly shuffled all the classes and split them back into training and test sets. In this way, all the classes available in the training set are also present in the test set.
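One way to realize such a reshuffled split, so that every class appears in both the training and test sets, is to split each class's examples separately; this helper is illustrative, not the authors' procedure.

```python
import random

def reshuffle_split(labels, test_frac=0.2, seed=0):
    # group example indices by class, then send a share of every class to the test set
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        k = max(1, int(len(idxs) * test_frac))
        test.extend(idxs[:k])
        train.extend(idxs[k:])
    return train, test

labels = ['rose'] * 10 + ['tulip'] * 10
train, test = reshuffle_split(labels)
```

By construction, the set of classes seen at training time equals the set of classes seen at test time.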

Fig. 5: Some examples of generated images from the Ferramenta test dataset. The models, trained for each λ value by minimizing Eq. 5, were trained for 20 epochs only, to speed up the experiment.

V Experiments

The proposed approach transforms text descriptions into a representative image. We use standard CNN hyperparameters, with Adam as the optimizer and a fixed initial learning rate. In our experiments, accuracy is used to measure classification performance.

The purpose of the first experimental phase is to analyze the generative capacity of our model. We conducted the following experiments with this aim in mind: (1) estimation of the best λ parameter for Eq. 5, (2) qualitative analysis of the generated images, (3) assessment of the ability of the model to generate new images according to the description given as input.

As a second group of experiments, we analyzed the capacity of the proposed model to generate an embedding of the input text in image format. We conducted the following experiments with this second aim in mind: (1) estimation of the best λ parameter to obtain the most significant encoding, (2) extraction of a new dataset of text encoded in image format, in order to compute the classification accuracy using a well-known CNN.

The first experiment concerned the estimation of the best λ value to be used in the proposed loss function described in Eq. 5. To achieve this, we first extracted a validation set from our training set and on it calculated the classification accuracy for each candidate λ value. Fig. 4 shows the results of all the experiments conducted on the validation set. The accuracy results reported in this figure were obtained by averaging the accuracy values of 5 runs; the best λ value can be read from the figure. To visually analyze the effect of the λ parameter, we also performed a quick test by training 11 different models for 20 epochs, one for each λ value, and compared some of the resulting images, as shown in Fig. 5. The most clearly defined images are obtained for the best λ value, while for λ = 0 we have an abstract visual representation, since the second term of the loss, described in Eq. 3, has been removed.

As a second experiment, we first trained a model using the entire training set and then visually analyzed the images generated by our model on the test set. Fig. 3 shows some examples of generated images beside the images we expected and the text passed as input. As can be seen from the figure, many of the generated images are identical to those we expected to find, while some of them represent the same object arranged differently (see for example the screw and the pliers with the red handle). We observed that some images have no visual meaning and do not represent any of the objects in the training set, even though in many of these cases the classification is correct. This means that the information extracted from the text is still present in the image, which is then used by the last block for classification.

In terms of generalization, the proposed model has the ability to generate images that it has never seen in training, and this ability is directly correlated with the words fed as input. In this experiment, we mixed the tokens of two descriptions belonging to different categories to highlight this capacity. As can be seen from Fig. 8, taking some tokens from two different descriptions and feeding them to the neural model in some cases produces images that are a combination of the objects representing the two descriptions. For example, in the same Fig. 8 you can see an image of an object that is the composition of a screw and a clamp. This happens because the most important tokens of both objects are present in the description given as input to the model.
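The token-mixing probe can be sketched as follows: sample a fraction of tokens from each of two descriptions and concatenate them into a new input. The sampling strategy here is an illustrative guess; the paper does not specify how tokens were selected.

```python
import random

def mix_tokens(desc_a, desc_b, frac_b, seed=0):
    # keep a (1 - frac_b) share of A's tokens and a frac_b share of B's tokens,
    # preserving each token's original order
    rng = random.Random(seed)
    keep_a = sorted(rng.sample(range(len(desc_a)), round(len(desc_a) * (1 - frac_b))))
    keep_b = sorted(rng.sample(range(len(desc_b)), round(len(desc_b) * frac_b)))
    return [desc_a[i] for i in keep_a] + [desc_b[i] for i in keep_b]

a = "sicutool copper hammers 2718 total weight 800".split()
b = "spax universal screw half round head t star plus".split()
mixed = mix_tokens(a, b, 0.5)  # half of the tokens come from each description
```

Sweeping frac_b from 0 to 1 reproduces the gradual transition between the two objects shown in Fig. 8.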

Fig. 6: Columns (a) and (b) show some examples of images extracted from the Flowers and Ferramenta datasets, respectively. The columns to the right of (a) and (b) show the text encodings extracted as the features layer using Eq. 6.
Fig. 7: The top row contains encodings of the same image belonging to the Ferramenta dataset when the model was trained using Eq. 6 with the λ parameters shown on top. The images on the bottom row are created using an image of the Flowers dataset. Below each image is the test accuracy obtained with the corresponding λ parameter.

Using the loss function of Eq. 6, which has a much more restricted range of variability, it is possible to give more emphasis to the encoding of the input text in image format. To find the value that produces the best encoding, we varied the λ parameter and, for each of its values, computed the classification accuracy on the test sets of the Ferramenta and Flowers datasets. Fig. 7 shows the encodings and the accuracy obtained as the λ parameter changes. Based on these results, we performed the last experiment using the best λ parameter.

Fig. 8: A set of images generated by the proposed model. Two reference images were generated starting from two text documents belonging to different categories; all other images were generated by combining different percentages of tokens extracted simultaneously from the two documents. The two colored rectangles below each image indicate the percentages of tokens from the two documents used to generate it.

In this last experiment we extracted two new image datasets using two different models trained on the two datasets. In Fig. 6 you can see some examples of generated images alongside the original images of the datasets. It can be seen that different source images correspond to different text encodings. To analyze the information content of these two new datasets, we trained two AlexNet CNNs to calculate their classification accuracy. The result obtained on the Ferramenta dataset is slightly higher than the one published in [8], while the accuracy obtained on the Flowers dataset is incredibly high. The reason we got such high accuracy is that our dataset contains 10 different text descriptions associated with the same image: having divided the training set randomly into training and test, the same images can be found both in the training set and in the test set. Ultimately, this means that the new datasets created do not only encode information extracted from the text but also from the images.

VI Conclusion

In this work we have proposed a new approach to generate an image that is representative of a noisy text description in natural language. The proposed approach uses a new loss function to simultaneously minimize the classification error and the distance between the desired image and a feature map of the same model. The qualitative results are very interesting but, for the moment, we have not analyzed classification performance in depth, as this was not the focus of the present work. In the future we want to exploit the same idea to try to improve the classification accuracy that can be obtained with a single convolutional neural model.

Another interesting aspect that emerged from this work is that the same approach can be used to encode in image format both the information contained in the input text and the information extracted from the image associated with the text. This feature is very interesting because it makes it possible to incorporate multimodal information into a single image dataset. In this way, multimodal information can be processed directly by a single CNN normally used to process only images.


  • [1] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang (2018) Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, Vol. 3, pp. 6. Cited by: §I.
  • [2] J. Arevalo, T. Solorio, M. Montes-y-Gómez, and F. A. González (2017) Gated multimodal units for information fusion. arXiv preprint arXiv:1702.01992. Cited by: §I.
  • [3] O. Arshad, I. Gallo, S. Nawaz, and A. Calefati (2019) Aiding intra-text representations with visual context for multimodal named entity recognition. arXiv preprint arXiv:1904.01356. Cited by: §I.
  • [4] M. Cha, Y. Gwon, and H. Kung (2017) Adversarial nets with perceptual losses for text-to-image synthesis. In 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–6. Cited by: §I.
  • [5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa (2011) Natural language processing (almost) from scratch. Journal of machine learning research 12 (Aug), pp. 2493–2537. Cited by: §III-A.
  • [6] A. Dosovitskiy, J. T. Springenberg, M. Tatarchenko, and T. Brox (2016) Learning to generate chairs, tables and cars with convolutional networks. IEEE transactions on pattern analysis and machine intelligence 39 (4), pp. 692–705. Cited by: §II.
  • [7] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and M. Rohrbach (2016) Multimodal compact bilinear pooling for visual question answering and visual grounding. Proceedings of Empirical Methods in Natural Language Processing, EMNLP 2016, pp. 457–468. Cited by: §I.
  • [8] I. Gallo, A. Calefati, S. Nawaz, and M. K. Janjua (2018-12) Image and encoded text fusion for multi-modal classification. In 2018 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pp. 1–7. Cited by: §V.
  • [9] I. Gallo, A. Calefati, and S. Nawaz (2017) Multimodal classification fusion in real-world scenarios. In Document Analysis and Recognition (ICDAR), pp. 36–41. Cited by: §I, §IV.
  • [10] S. Hong, D. Yang, J. Choi, and H. Lee (2018) Inferring semantic layout for hierarchical text-to-image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7986–7994. Cited by: §I.
  • [11] A. Karpathy and L. Fei-Fei (2015) Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3128–3137. Cited by: §I.
  • [12] D. Kiela and L. Bottou (2014) Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 36–45. Cited by: §I.
  • [13] D. Kiela, E. Grave, A. Joulin, and T. Mikolov (2018) Efficient large-scale multi-modal classification. Proceedings of AAAI 2018. Cited by: §I.
  • [14] Y. Kim (2014) Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP, pp. 1746–1751. Cited by: §III.
  • [15] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §III.
  • [16] C. W. Leong and R. Mihalcea (2011) Going beyond text: a hybrid image-text approach for measuring word relatedness.. In IJCNLP, pp. 1403–1407. Cited by: §I.
  • [17] E. Mansimov, E. Parisotto, J. L. Ba, and R. Salakhutdinov (2015) Generating images from captions with attention. arXiv preprint arXiv:1511.02793. Cited by: §I, §II.
  • [18] E. Mansimov, E. Parisotto, J. Ba, and R. Salakhutdinov (2016) Generating images from captions with attention. In ICLR, Cited by: §I, §II.
  • [19] V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Cited by: §III-A.
  • [20] A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski (2017) Plug & play generative networks: conditional iterative generation of images in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4467–4477. Cited by: §II.
  • [21] M. Nilsback and A. Zisserman (2008) Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722–729. Cited by: §IV.
  • [22] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee (2016) Generative adversarial text to image synthesis. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pp. 1060–1069. Cited by: §IV.
  • [23] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee (2016) Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396. Cited by: §I, §II.
  • [24] S. Reed, A. van den Oord, N. Kalchbrenner, S. G. Colmenarejo, Z. Wang, Y. Chen, D. Belov, and N. de Freitas (2017) Parallel multiscale autoregressive density estimation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2912–2921. Cited by: §II.
  • [25] L. Wang, Y. Li, and S. Lazebnik (2016) Learning deep structure-preserving image-text embeddings. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5005–5013. Cited by: §I.
  • [26] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio (2015) Show, attend and tell: neural image caption generation with visual attention. In International conference on machine learning, pp. 2048–2057. Cited by: §I.
  • [27] T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang, and X. He (2018) Attngan: fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1316–1324. Cited by: §II.
  • [28] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas (2017) Stackgan: text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5907–5915. Cited by: §II.
  • [29] Q. Zhang, J. Fu, X. Liu, and X. Huang (2018) Adaptive co-attention network for named entity recognition in tweets. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §I.
  • [30] Y. Zhu, M. Elhoseiny, B. Liu, X. Peng, and A. Elgammal (2018) A generative adversarial approach for zero-shot learning from noisy texts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1004–1013. Cited by: §I, §II.