MUTATT: Visual-Textual Mutual Guidance for Referring Expression Comprehension

03/18/2020 ∙ by Shuai Wang, et al. ∙ Tianjin University ∙ University of South Carolina

Referring expression comprehension (REC) aims to localize the region of an image described by a referring expression in natural language. Existing methods focus on building convincing visual and language representations independently, which may significantly isolate visual and language information. In this paper, we argue that for REC the referring expression and the target region are semantically correlated, and that subject, location and relationship consistency exist between vision and language. On top of this, we propose a novel approach called MutAtt that constructs mutual guidance between vision and language, which treats vision and language equally and thus yields more compact information matching. Specifically, for each of the subject, location and relationship modules, MutAtt builds two kinds of attention-based mutual guidance strategies. One strategy generates vision-guided language embedding to match the relevant visual features. The other reversely generates language-guided visual features to match the relevant language embedding. This mutual guidance effectively guarantees vision-language consistency in all three modules. Experiments on three popular REC datasets demonstrate that the proposed approach outperforms current state-of-the-art methods.




1 Introduction

Referring expression comprehension (REC), also known as visual grounding, aims at finding the text-related object in a given image according to the description of referring expressions. As a vision-language problem, REC has widespread applications in real-world scenarios, e.g., in an autopilot system, we need to localize the exact location in images or videos from text expressions like “park the car on the right side”. Although much progress has been made in REC, grounding referring expressions remains challenging because it requires a comprehensive understanding of complex language semantics and various types of visual information simultaneously.

Figure 1: Schematic of the proposed MutAtt. We assume there exist three kinds of consistency between the referring expression and the target region proposal. MutAtt builds a mutual attention-based guidance strategy between visual and language information, which consists of visual-guided language embedding and language-guided visual embedding.

Research on REC can be categorized into generative methods and discriminative methods. Generative methods, which originated from image captioning [1, 2], generate a description for each candidate region and select the region with the maximum posterior probability [3, 4, 5]. However, generative methods over-rely on the local region captioning model, which cannot describe relative locations and relationships with other objects. Discriminative methods learn a joint vision-language matching score and select the object by ranking all scores [6, 7, 8, 9], which has become the most common approach in REC.

Existing discriminative methods usually focus on how to extract more powerful visual and language features. Generally, these methods use convolutional neural networks to encode the visual features of each candidate region, and recurrent neural networks to encode the referring expression [6, 10]. Compositional modular networks [7, 8] decompose the referring expression into three parts, i.e., subject, location and relationship, and design three visual feature representations to achieve fine-grained matching. Variational context [9] exploits the reciprocal relation between the referent and context to solve the problem of complex context modeling in referring expression comprehension. Nevertheless, these discriminative methods build visual and language representations independently, treating the referring expression merely as a one-way query. This may significantly isolate visual and language information and thus hinder effective matching between vision and language, especially when the scene or expression is complex.

In our view, REC rests on the hypothesis that the referring expression and the target region represent the same semantics, which entails subject consistency, location consistency and relationship consistency. By enforcing these three kinds of consistency, a REC model can achieve a more compact vision-language combination and more accurate predictions. Based on this hypothesis, we design MutAtt, an innovative mutual attention-based guidance method that approaches REC from the perspective of vision-language matching. Specifically, to ensure effective cross-modal consistency, we first treat REC as a vision-language matching problem so that visual and language information are placed on an equal footing. MutAtt provides two strategies to realize this hypothesis, as shown in Fig. 1. One strategy uses visual features to guide the language embedding and then matches the guided language embedding with the visual features; while this improves cross-modal consistency, on its own it biases the model toward vision over language. The other strategy uses the language embedding to guide the visual features and then matches the generated visual features with the language embedding. Together, the two strategies balance the status of vision and language while further improving cross-modal consistency. We apply this approach to the subject, location and relationship modules, which guarantees the three kinds of consistency while maintaining vision-language equality. We conduct experiments on three popular REC datasets, and the results show the superiority of the proposed MutAtt.

2 Related Work

Referring expression comprehension. Existing REC methods generally fall into two categories: generative models and discriminative models. Generative models [3, 4, 5] use an encoder-decoder structure to localize the region that can generate the sentence with the maximum posterior probability. Discriminative models [7, 8, 9] tend to use various feature vectors to represent the expression and the image region, and then measure their similarity to select the region with the highest score. Early work [6] separately encodes the entire referring expression and the entire image feature, which ignores the complex structures in both the language and the image. The work in [7, 8] overcomes this limitation by decomposing the expression into sub-components and computing a vision-language matching score for each module. The method in [9] reduces joint grounding and reasoning to a holistic association score between sentence and region features. In addition, recent work [8, 11] uses attention mechanisms to make the model focus on more critical information, with significant effectiveness. However, these discriminative methods build visual and language representations independently and never consider the information consistency between vision and language; they regard the referring expression merely as a complementary query and overemphasize visual information. In contrast, we propose to enhance vision-language consistency through cross-modal attention-based mutually guided matching.

Vision-language matching. Vision-language matching has been studied for years; its key challenge is measuring the similarity between visual and language embeddings. The most popular vision-language matching methods [12, 13, 14] follow a similar procedure: extract discriminative visual and language features and measure the distance between the two representations as accurately as possible. The work in [15, 16] adopts a CNN together with Skip-Gram or an LSTM to extract cross-modal feature representations, and then applies a ranking loss to pull matched vision-language pairs closer and push unmatched pairs apart. [17] further improves the learning of cross-view feature embeddings by incorporating generative objectives. Through region relationship reasoning and global semantic reasoning, [18] enhances the image representation to better align with the corresponding text caption. In this paper, we treat the fusion of visual and language features as a vision-language matching problem, enhancing vision-language consistency so that vision and language play equally important roles. In this way, the proposed MutAtt can discover a more discriminative joint visual-textual representation.

3 Method

Figure 2: Illustration of MutAtt. {v_j^m} denotes the visual features of a region proposal, {e_t} the word embeddings of the sentence, and q^m the phrase embedding of the sentence. The left part shows visual-guided language embedding, where we compute word attention to guide the generation of the language embedding and match it with the visual feature by cosine similarity. The right part shows language-guided visual embedding, where we compute attention on the visual features guided by the language embedding and match them with MLPs. Finally, we combine the matching results of the two parts into the overall score.

3.1 Problem formulation and background

Given an image I with a set of regions of interest R = {r_i}_{i=1}^N tagged by annotators or a detection algorithm, and a referring expression q = {w_t}_{t=1}^T, where w_t denotes the t-th word in the sentence, the purpose of REC is to find the target region r* that best matches q. An effective solution is to match the visual features of each candidate region with the language embedding of the expression, and select the region with the highest score. We follow the modular design of MAttNet [8] as our backbone for its capability to handle subject, location and relationship information in referring expressions. MAttNet decomposes the expression embedding into three modular components q^m, m ∈ {subj, loc, rel}, via a language attention network, and designs three visual modules to encode the corresponding visual features v^m. In this paper, we introduce a mutual attention-based guidance approach called MutAtt to improve vision-language consistency, comprising vision-guided language embedding and language-guided visual features, as shown in Fig. 2. Since we treat REC as a matching problem, we consider an arbitrary region r_i ∈ R as the visual input during training, while at inference the region with the largest matching score is selected.

3.2 Mutual attention-based guidance

3.2.1 Visual-guided language embedding

We first use the visual features to guide the formation of the language embedding by matching vision and language from the word level to the sentence level for each module m ∈ {subj, loc, rel}. To be specific, we compute the cosine similarity between the word embedding e_t and the visual feature of region proposal r_i:

β_t^m = cos(e_t, \bar{v}^m),   (1)

where \bar{v}^m is the average-pooled visual feature of r_i, obtained by

\bar{v}^m = (1 / K_m) Σ_{j=1}^{K_m} v_j^m,   (2)

where K_m denotes the number of visual elements in module m for the candidate region. In Eq. (1), β_t^m represents the attention from the visual feature of module m to the t-th word embedding. Through this word-level similarity we obtain a fine-grained match between every visual-language element pair, from which the visually-guided language embedding can be composed. Thus, we use the similarity to weight each word embedding and generate the visual-guided sentence-level embedding as follows:

\tilde{q}^m = Σ_{t=1}^T softmax_t(β_t^m a_t) e_t,   (3)

where a_t is the word-level language attention obtained from the language attention network in MAttNet [8], which helps form the language embedding corresponding to the different visual modules. Under the guidance of the word-level vision-language similarities, the sentence-level embedding is thus enhanced by the visual feature.

After that, we calculate the score between the visual feature and the visual-guided language embedding by cosine similarity, matching vision and language at the sentence level:

S_{VL}^m(r_i, q) = cos(\bar{v}^m, \tilde{q}^m).   (4)

Note that we match vision and language information from the word level to the sentence level, which guarantees multi-scale vision-language matching. If the region and the referring expression do not match, the score will be small under this two-level matching, which helps reject incorrect predictions. To ensure that vision and language carry equal importance in the matching process and to further improve vision-language consistency, we also construct a language-guided visual embedding.
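To make the word-to-sentence matching concrete, here is a minimal numerical sketch of the visual-guided branch. The function names and the softmax normalization of the fused attention are our own illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def visual_guided_language_score(E, V, a):
    """E: (T, d) word embeddings; V: (K_m, d) visual elements of one module;
    a: (T,) word-level language attention from the language attention network.
    Returns the sentence-level matching score S_VL^m."""
    v_bar = V.mean(axis=0)                           # average-pooled visual feature
    beta = np.array([cos_sim(e, v_bar) for e in E])  # word-level visual attention
    weights = softmax(beta * a)                      # fuse visual and language attention
    q_tilde = weights @ E                            # visual-guided sentence embedding
    return cos_sim(v_bar, q_tilde)                   # sentence-level cosine score
```

A region whose pooled feature aligns with the expression's word embeddings yields a score near 1, while mismatched pairs score lower.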

3.2.2 Language-guided visual embedding

In our framework, we assume that language and vision play equal roles. Thus, after using visual information to guide the language embedding, we also build the reverse guidance, i.e., the language-guided visual embedding. Given the visual features {v_j^m}_{j=1}^{K_m} of region proposal r_i and the corresponding language embedding q^m of the referring expression, we first compute language-guided visual attention for the subject, location and relationship modules:

γ_j^m = softmax_j( W_2^m ReLU( W_1^m [v_j^m ; q^m] ) ),   (5)

where [· ; ·] is the concatenation operation, W_1^m and W_2^m are model parameters, and γ_j^m represents the attention from the language embedding to the j-th visual element of region proposal r_i. After that, we generate a more discriminative language-guided visual feature by

\tilde{v}^m = Σ_{j=1}^{K_m} γ_j^m v_j^m.   (6)

Finally, we use MLPs to calculate the score S_{LV}^m(r_i, q) between the language-guided visual feature \tilde{v}^m and the language embedding q^m. Each MLP is composed of two fully connected layers with ReLU activation, which transform the cross-modal information into a common embedding space. With language-guided visual embedding, we guarantee the consistency of visual and language information and prevent the model from attending too much to either modality. Note that language-guided visual embedding alone resembles the common attention used in other REC methods, which only consider simple visual-language fusion; the drawback there is that language is treated as a complementary query, ignoring that the two modalities can guide each other.
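The language-guided branch can be sketched in the same style. The two-layer scoring network for the attention is reconstructed here with hypothetical parameter shapes (W1: h×2d, w2: h); the MLP scoring head is omitted for brevity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def language_guided_visual_feature(V, q, W1, w2):
    """V: (K_m, d) visual elements; q: (d,) module language embedding;
    W1: (h, 2d) and w2: (h,) are model parameters (shapes assumed).
    Returns the attention over visual elements and the attended feature."""
    # score each visual element against the language embedding
    scores = np.array([w2 @ np.maximum(W1 @ np.concatenate([v, q]), 0.0) for v in V])
    gamma = softmax(scores)        # attention over the K_m visual elements
    v_tilde = gamma @ V            # language-guided visual feature
    return gamma, v_tilde
```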

3.3 Matching result and loss function

We apply the proposed MutAtt to the subject, location and relationship modules. The overall matching score for region proposal r_i and expression q is:

S(r_i, q) = Σ_{m ∈ {subj, loc, rel}} ω_m [ S_{VL}^m(r_i, q) + S_{LV}^m(r_i, q) ],   (7)

where ω_subj, ω_loc and ω_rel represent the weights of the subject, location and relationship modules obtained from the language attention network in MAttNet.

For a positive candidate object-query pair (r_i, q_i) and negative pairs (r_i, q_j) and (r_k, q_i), the following ranking loss is minimized during training:

L = Σ_i [ λ_1 max(0, Δ + S(r_i, q_j) − S(r_i, q_i)) + λ_2 max(0, Δ + S(r_k, q_i) − S(r_i, q_i)) ],   (8)

where λ_1 and λ_2 are balancing weights and Δ is the margin for the loss.
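The per-sample loss is a pair of hinge terms over the two kinds of negatives; a minimal sketch, with λ1, λ2 and the margin Δ as hyperparameters (default values here are illustrative, not the paper's).

```python
def ranking_loss(s_pos, s_neg_expr, s_neg_region, margin=0.1, lam1=1.0, lam2=1.0):
    """One-sample hinge ranking loss: s_pos is the score of the matched pair
    (r_i, q_i), s_neg_expr the score with a mismatched expression (r_i, q_j),
    and s_neg_region the score with a mismatched region (r_k, q_i)."""
    return (lam1 * max(0.0, margin + s_neg_expr - s_pos)
            + lam2 * max(0.0, margin + s_neg_region - s_pos))
```

The loss vanishes once the positive pair outscores both negatives by at least the margin, so only violating pairs contribute gradients.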

4 Experiments

4.1 Dataset and implementation details

Dataset. We use three popular datasets for evaluation, i.e., RefCOCO, RefCOCO+ and RefCOCOg [5, 6]. RefCOCO has 50,000 target objects collected from 19,994 images. RefCOCO+ has 49,856 target objects collected from 19,992 images. These two datasets are split into four parts: “train”, “val”, “testA” and “testB”. RefCOCOg includes 49,822 target objects from 25,799 images, split into three parts: “train”, “val” and “test”.

Visual feature representation. We use Faster R-CNN with a ResNet-101 backbone to extract subject, location and relationship features for each region proposal, and follow [8] to construct the modular visual network. For the subject network, we feed the whole image into Faster R-CNN and take the feature maps from the last convolutional outputs of the 3rd and 4th stages as subject features. For the location network, we represent the location feature of a candidate object by encoding its position and relative area as a 5-d vector [x_tl/W, y_tl/H, x_br/W, y_br/H, w·h/(W·H)], and encoding the relative location offsets and relative areas of up-to-five surrounding same-category objects. For the relationship network, we first find up-to-five surrounding objects, then extract their average-pooled visual features and encode their relative position offsets and relative areas to represent the relationship features of context objects. For the visual features described in Sec. 3.2, the number of visual elements K_m depends on the module.
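The location feature above encodes a box's normalized position and relative area; a minimal sketch, assuming the common 5-d form [x_tl/W, y_tl/H, x_br/W, y_br/H, w·h/(W·H)] used by MAttNet.

```python
def location_feature(box, W, H):
    """box = (x_tl, y_tl, x_br, y_br) in pixels; W, H = image width and height.
    Returns the 5-d normalized position / relative-area vector."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return [x1 / W, y1 / H, x2 / W, y2 / H, (w * h) / (W * H)]
```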

Training setting.

The training batch size is 15, i.e., in each training iteration we feed 15 images and their associated referring expressions to the network. Adam is used as the optimizer with an initial learning rate of 0.0004, which decays by a factor of 10 every 8,000 iterations. We implement MutAtt in PyTorch.

Evaluation setting. Following previous work [19, 20], we take region proposals either from human annotations (gt) or from detection methods (det). For gt, a prediction is correct if the region with the highest matching score is the same as the ground-truth region. For det, a prediction is correct if the intersection-over-union (IoU) between the region with the highest matching score and the ground-truth region is greater than 0.5.
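The det criterion reduces to a standard IoU check between the predicted and ground-truth boxes, which can be sketched as:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def det_correct(pred, gt, thresh=0.5):
    """det protocol: the top-scoring region counts as correct if IoU > 0.5."""
    return iou(pred, gt) > thresh
```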

Method Box RefCOCO RefCOCO+ RefCOCOg
val testA testB val testA testB val* val test
visdif+MMI [6] gt - 73.98 76.59 - 59.17 55.62 64.02 - -
Speaker/visdif [6] gt 76.18 74.39 77.30 58.64 61.29 56.24 59.40 - -
S-L-R [3] gt 79.56 78.65 80.22 62.26 64.60 59.62 72.63 71.65 71.92
VC [9] gt - 78.98 82.39 - 62.56 62.90 73.98 - -
Attr [21] gt - 78.05 78.07 - 61.47 57.22 69.83 - -
Accu-Att [22] gt 81.27 81.17 80.01 65.56 68.76 60.63 73.18 - -
PLAN [23] gt 81.67 80.81 81.32 64.18 66.31 61.46 69.47 - -
Multi-hop Film [24] gt 84.9 87.4 83.1 73.8 78.7 65.8 71.5 - -
MAttNet [8] gt 85.65 85.26 84.57 71.01 75.13 66.17 - 78.10 78.12
NMTree [25] gt 85.65 85.63 85.08 72.84 75.74 67.62 78.03 78.57 78.21
LGRANS [19] gt 82.0 81.2 84.0 66.6 67.6 65.5 - 75.4 74.7
DGA [20] gt 86.34 86.64 84.79 73.56 78.31 68.15 - 80.21 80.26
MutAtt gt 86.58 87.20 85.38 73.69 76.30 67.74 - 80.37 79.24
S-L-R [3] det 69.48 73.71 64.96 55.71 60.74 48.80 - 60.21 59.63
PLAN [23] det - 75.31 65.52 - 61.34 50.86 58.03 - -
MAttNet [8] det 76.40 80.43 69.28 64.93 70.26 56.00 - 66.67 67.01
LGRANS [19] det - 76.6 66.4 - 64.0 53.4 62.5 - -
DGA [20] det - 78.42 65.53 - 69.07 51.99 - - 63.28
MutAtt det 78.35 82.52 71.50 67.90 72.60 58.60 - 68.67 69.03
Table 1: Comparison with state-of-the-art REC approaches on ground-truth regions and automatically detected regions. Our method improves significantly over prior methods and surpasses the state of the art on most metrics.

4.2 Results

Comparisons with the state of the art. We compare our method with other state-of-the-art methods in Table 1, covering both settings on all three datasets. As can be seen, MutAtt shows the advantage of the proposed approach. Under the ground-truth setting, MutAtt is significantly better than previous methods on the RefCOCO dataset, and performs comparably on RefCOCO+ and RefCOCOg. Under the more important detection setting, we use res101-frcn features and compare with other methods; MutAtt outperforms the state of the art on all splits of the three datasets. This demonstrates that MutAtt ensures the equality of vision and language during matching and improves vision-language consistency in the subject, location and relationship modules.

val test
1 MutAtt:subj+loc+rel 77.96 77.14
2 MutAtt:subj(VL)+loc+rel 79.33 78.53
3 MutAtt:subj(VL+LV)+loc+rel 80.00 79.34
4 MutAtt:subj(VL)+loc(VL)+rel 80.35 79.03
5 MutAtt:subj(VL)+loc(VL)+rel(VL) 80.37 79.24
Table 2: Ablation studies on RefCOCOg dataset.

Ablation study. We perform an ablation study to verify the contribution of the visual-language mutual guidance on each module, reporting ground-truth-setting results on the RefCOCOg dataset in Table 2. Line 1 shows the result without mutual guidance. Lines 2-3 show the results of adding visual guidance (VL) and then language guidance (LV) to the subject module; both improve the model's comprehension and confirm the effectiveness of our method. Lines 4-5 show the results of applying the same strategy to the location and relationship modules, where the gains gradually decrease. The reason is that, among the three module weights generated by the language attention network, the subject module receives the highest weight while the relationship module receives the lowest, less than 0.1 in most cases.

Figure 3: Visualization comparisons between MutAtt and MAttNet of visual attention and language attention of each word on three modules. Green rectangle is the prediction result of MutAtt and red rectangle is the prediction result of MAttNet. We can see that MutAtt can adaptively capture the weight of each word, and accurately focus on the objects described in the language.

4.3 Visualization

We visualize the image attention and the word weights of expressions in Fig. 3. The first column shows the comprehension result of our approach and the third column shows that of MAttNet. From the first set of examples, it is clear that our method is superior to MAttNet in terms of visual attention, language embedding and overall comprehension. With the guidance of “a brown bowl on the ground”, the focus of the model moves from the edge of the “bowl” to its main body. Correspondingly, with the guidance of the visual features, the model improves its understanding of the relationship between “bowl” and “ground” in “a brown bowl on the ground”, and encodes the “ground” as a related object rather than the target object.

5 Conclusion

In this paper, we proposed a mutual attention-based guidance method (MutAtt) for the task of REC. MutAtt contains two key components for vision-language matching: visual-guided language embedding and language-guided visual embedding. By combining the two matching processes, we maintain the equality of vision and language, so MutAtt can learn more discriminative visual features and language embeddings while guaranteeing vision-language consistency during matching in the three sub-components, which benefits cross-modal matching. Experiments on three REC datasets under two settings show that MutAtt outperforms other methods on most evaluation metrics, demonstrating its effectiveness.


  • [1] Jiuxiang Gu, Jianfei Cai, Gang Wang, and Tsuhan Chen, “Stack-captioning: Coarse-to-fine learning for image captioning,” in AAAI, 2018.
  • [2] Xu Yang, Hanwang Zhang, and Jianfei Cai, “Learning to collocate neural modules for image captioning,” arXiv preprint arXiv:1904.08608, 2019.
  • [3] Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg, “A joint speaker-listener-reinforcer model for referring expressions,” in CVPR, 2017.
  • [4] Ruotian Luo and Gregory Shakhnarovich, “Comprehension-guided referring expressions,” in CVPR, 2017.
  • [5] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy, “Generation and comprehension of unambiguous object descriptions,” in CVPR, 2016.
  • [6] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg, “Modeling context in referring expressions,” in ECCV, 2016.
  • [7] Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko, “Modeling relationships in referential expressions with compositional modular networks,” in CVPR, 2017.
  • [8] Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg, “Mattnet: Modular attention network for referring expression comprehension,” in CVPR, 2018.
  • [9] Hanwang Zhang, Yulei Niu, and Shih-Fu Chang, “Grounding referring expressions in images by variational context,” in CVPR, 2018.
  • [10] Zhou Yu, Jun Yu, Chenchao Xiang, Zhou Zhao, Qi Tian, and Dacheng Tao, “Rethinking diversified and discriminative proposal generation for visual grounding,” arXiv preprint arXiv:1805.03508, 2018.
  • [11] Fan Lyu, Qi Wu, Fuyuan Hu, Qingyao Wu, and Mingkui Tan, “Attend and imagine: Multi-label image classification with visual attention and recurrent neural networks,” TMM, 2019.
  • [12] Ya Jing, Chenyang Si, Junbo Wang, Wei Wang, Liang Wang, and Tieniu Tan, “Cascade attention network for person search: Both image and text-image similarity selection,” arXiv preprint arXiv:1809.08440, 2018.
  • [13] Shuang Li, Tong Xiao, Hongsheng Li, Wei Yang, and Xiaogang Wang, “Identity-aware textual-visual matching with latent co-attention,” in ICCV, 2017.
  • [14] Shuang Li, Tong Xiao, Hongsheng Li, Bolei Zhou, Dayu Yue, and Xiaogang Wang, “Person search with natural language description,” in CVPR, 2017.
  • [15] Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc’Aurelio Ranzato, and Tomas Mikolov, “Devise: A deep visual-semantic embedding model,” in NeurIPS, 2013.
  • [16] Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel, “Unifying visual-semantic embeddings with multimodal neural language models,” arXiv preprint arXiv:1411.2539, 2014.
  • [17] Jiuxiang Gu, Jianfei Cai, Shafiq R Joty, Li Niu, and Gang Wang, “Look, imagine and match: Improving textual-visual cross-modal retrieval with generative models,” in CVPR, 2018.
  • [18] Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, and Yun Fu, “Visual semantic reasoning for image-text matching,” in ICCV, 2019.
  • [19] Peng Wang, Qi Wu, Jiewei Cao, Chunhua Shen, Lianli Gao, and Anton van den Hengel, “Neighbourhood watch: Referring expression comprehension via language-guided graph attention networks,” in CVPR, 2019.
  • [20] Sibei Yang, Guanbin Li, and Yizhou Yu, “Dynamic graph attention for referring expression comprehension,” in ICCV, 2019.
  • [21] Jingyu Liu, Liang Wang, and Ming-Hsuan Yang, “Referring expression generation and comprehension via attributes,” in ICCV, 2017.
  • [22] Chaorui Deng, Qi Wu, Qingyao Wu, Fuyuan Hu, Fan Lyu, and Mingkui Tan, “Visual grounding via accumulated attention,” in CVPR, 2018.
  • [23] Bohan Zhuang, Qi Wu, Chunhua Shen, Ian Reid, and Anton van den Hengel, “Parallel attention: A unified framework for visual object discovery through dialogs and queries,” in CVPR, 2018.
  • [24] Florian Strub, Mathieu Seurin, Ethan Perez, Harm De Vries, Jérémie Mary, Philippe Preux, Aaron Courville, and Olivier Pietquin, “Visual reasoning with multi-hop feature modulation,” in ECCV, 2018.
  • [25] Daqing Liu, Hanwang Zhang, Feng Wu, and Zheng-Jun Zha, “Learning to assemble neural module tree networks for visual grounding,” in ICCV, 2019, pp. 4673–4682.