Visually Grounded Neural Syntax Acquisition

We present the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for learning syntactic representations and structures without any explicit supervision. The model learns by looking at natural images and reading paired captions. VG-NSL generates constituency parse trees of texts, recursively composes representations for constituents, and matches them with images. We define the concreteness of constituents by their matching scores with images, and use it to guide the parsing of text. Experiments on the MSCOCO data set show that VG-NSL outperforms various unsupervised parsing approaches that do not use visual grounding, in terms of F1 scores against gold parse trees. We find that VG-NSL is much more stable with respect to the choice of random initialization and the amount of training data. We also find that the concreteness acquired by VG-NSL correlates well with a similar measure defined by linguists. Finally, we apply VG-NSL to multiple languages in the Multi30K data set, showing that our model consistently outperforms prior unsupervised approaches.


1 Introduction

We study the problem of visually grounded syntax acquisition. Consider the images in Figure 1, paired with their descriptive texts (captions) in English. Given no prior knowledge of English and a sufficient number of such pairs, one can infer the correspondence between certain words and visual attributes (e.g., recognizing that "a cat" refers to the objects in the blue boxes). One can further extract constituents by assuming that concrete spans of words should be processed as a whole, and thus form constituents. Similarly, the same process can be applied to verb and prepositional phrases.

This intuition motivates the use of image-text pairs to facilitate automated language learning, including both syntax and semantics. In this paper we focus on learning syntactic structures, and propose the Visually Grounded Neural Syntax Learner (VG-NSL, shown in Figure 2). VG-NSL acquires syntax, in the form of constituency parsing, by looking at images and reading captions.

At a high level, VG-NSL builds latent constituency trees of word sequences and recursively composes representations for constituents. Next, it matches the visual and textual representations. The training procedure is built on the hypothesis that a better syntactic structure contributes to a better representation of constituents, which then leads to better alignment between vision and language. We use no human-labeled constituency trees or other syntactic labeling (such as part-of-speech tags). Instead, we define a concreteness score of constituents based on their matching with images, and use it to guide the parsing of sentences. At test time, no images paired with the text are needed.

We compare VG-NSL with prior approaches to unsupervised language learning, most of which do not use visual grounding. Our first finding is that VG-NSL improves over the best previous approaches to unsupervised constituency parsing in terms of F1 scores against gold parse trees. We also find that many existing approaches are quite unstable with respect to the choice of random initialization, whereas VG-NSL exhibits consistent parsing results across multiple training runs. Third, we analyze the performance of different models on different types of constituents, and find that our model shows substantial improvement on noun phrases and prepositional phrases, which are common in captions. Fourth, VG-NSL is much more data-efficient than prior work based purely on text, achieving performance comparable to other approaches using only 20% of the training captions. In addition, the concreteness score, which emerges during the matching between constituents and images, correlates well with a similar measure defined by linguists. Finally, VG-NSL can be easily extended to multiple languages, which we evaluate on the Multi30K data set (Elliott et al., 2016, 2017), consisting of German and French image captions.

Figure 2: VG-NSL consists of two modules: a textual module for inferring structures and representations for captions, and a visual-semantic module for matching constituents with images. VG-NSL induces constituency parse trees of captions by looking at images and reading paired captions.

2 Related Work

Linguistic structure induction from text.

Recent work has proposed several approaches for inducing latent syntactic structures, including constituency trees (Choi et al., 2018; Yogatama et al., 2017; Maillard and Clark, 2018; Havrylov et al., 2019; Kim et al., 2019; Drozdov et al., 2019) and dependency trees (Shi et al., 2019), from the distant supervision of downstream tasks. However, most of these methods are not able to produce linguistically sound structures, or even consistent ones with fixed data and hyperparameters but different random initializations (Williams et al., 2018).

A related line of research is to induce latent syntactic structure via language modeling. This approach has achieved remarkable performance on unsupervised constituency parsing (Shen et al., 2018a, 2019), especially in identifying the boundaries of higher-level (i.e., larger) constituents. To our knowledge, the Parsing-Reading-Predict Network (PRPN; Shen et al., 2018a) and the Ordered Neuron LSTM (ON-LSTM; Shen et al., 2019) currently produce the best fully unsupervised constituency parsing results. One issue with PRPN, however, is that it tends to produce meaningless parses for lower-level (smaller) constituents (Phu Mon Htut et al., 2018).

Over the last two decades, there has been extensive study targeting unsupervised constituency parsing (Klein and Manning, 2002, 2004, 2005; Bod, 2006a, b; Ponvert et al., 2011) and dependency parsing (Klein and Manning, 2004; Smith and Eisner, 2006; Spitkovsky et al., 2010; Han et al., 2017). However, all of these approaches are based on linguistic annotations. Specifically, they operate on the part-of-speech tags of words instead of word tokens. One exception is Spitkovsky et al. (2011), which produces dependency parse trees based on automatically induced pseudo tags.

In contrast to these existing approaches, we focus on inducing constituency parse trees with visual grounding. We use parallel data from another modality (i.e., paired images and captions), instead of linguistic annotations such as POS tags. We include a detailed comparison between some related works in the supplementary material.

There has been some prior work on improving unsupervised parsing by leveraging extra signals, such as parallel text (Snyder et al., 2009), annotated data in another language with parallel text (Ganchev et al., 2009), annotated data in other languages without parallel text (Cohen et al., 2011), or non-parallel text from multiple languages (Cohen and Smith, 2009). We leave the integration of other grounding signals as future work.

Grounded language acquisition.

Grounded language acquisition has been studied for image-caption data (Christie et al., 2016a), video-caption data (Siddharth et al., 2014; Yu et al., 2015), and visual reasoning (Mao et al., 2019). However, existing approaches rely on human labels or rules for classifying visual attributes or actions. In contrast, our model induces syntactic structures with no human-defined labels or rules.

Meanwhile, learning visual-semantic representations in a joint embedding space (Ngiam et al., 2011) is a widely studied approach, and has achieved remarkable results on image-caption retrieval (Kiros et al., 2014; Faghri et al., 2018; Shi et al., 2018a), image caption generation (Kiros et al., 2014; Karpathy and Fei-Fei, 2015; Ma et al., 2015), and visual question answering (Malinowski et al., 2015). In this work, we borrow this idea to match visual and textual representations.

Concreteness estimation.

Turney et al. (2011) define concrete words as those referring to things, events, and properties that we can perceive directly with our senses. Subsequent work has studied word-level concreteness estimation based on text (Turney et al., 2011; Hill et al., 2013), human judgments (Silberer and Lapata, 2012; Hill and Korhonen, 2014a; Brysbaert et al., 2014), and multi-modal data (Hill and Korhonen, 2014b; Hill et al., 2014; Kiela et al., 2014; Young et al., 2014; Hessel et al., 2018; Silberer et al., 2017; Bhaskar et al., 2017). As with Hessel et al. (2018) and Kiela et al. (2014), our model uses multi-modal data to estimate concreteness. Compared with them, we define concreteness for spans instead of words, and use it to induce linguistic structures.

3 Visually Grounded Neural Syntax Learner

Given a set of paired images and captions, our goal is to learn representations and structures for words and constituents. Toward this goal, we propose the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for the grounded acquisition of the syntax of natural language. VG-NSL is inspired by the idea of semantic bootstrapping (Pinker, 1984), which suggests that children acquire syntax by first understanding the meaning of words and phrases, and then linking this knowledge with the syntax of the language.

At a high level (Figure 2), VG-NSL consists of two modules. First, given an input caption (i.e., a sentence or a smaller constituent) as a sequence of tokens, VG-NSL builds a latent constituency parse tree and recursively composes representations for every constituent. Next, it matches these textual representations with the visual input, i.e., the paired image. Both modules are jointly optimized from natural supervision: the model acquires constituency structures, composes textual representations, and links them with visual scenes, by looking at images and reading paired captions.

3.1 Textual Representations and Structures

VG-NSL starts by composing a binary constituency structure of the text, using an easy-first bottom-up parser (Goldberg and Elhadad, 2010). The composition of the tree from a caption of length $n$ consists of $n-1$ steps. Let $\mathbf{X}^{(t)} = \big(\mathbf{x}^{(t)}_1, \mathbf{x}^{(t)}_2, \ldots\big)$ denote the textual representations of the sequence of constituents after step $t$, where each $\mathbf{x}^{(t)}_i \in \mathbb{R}^d$. For simplicity, we use $\mathbf{X}^{(0)}$ to denote the word embeddings of all tokens (the initial representations).

At step $t$, a score function $\mathrm{score}(\cdot, \cdot; \Theta)$, parameterized by $\Theta$, is evaluated on all pairs of consecutive constituents, resulting in a vector $\mathbf{s}^{(t)}$ with one entry per adjacent pair:

$$s^{(t)}_i = \mathrm{score}\big(\mathbf{x}^{(t-1)}_i, \mathbf{x}^{(t-1)}_{i+1}; \Theta\big).$$

We implement $\mathrm{score}(\cdot, \cdot; \Theta)$ as a two-layer feed-forward network.

A pair of consecutive constituents $\big(\mathbf{x}^{(t-1)}_j, \mathbf{x}^{(t-1)}_{j+1}\big)$ is then sampled from all pairs of consecutive constituents, with respect to the distribution produced by a softmax over the scores:

$$p(j) = \frac{\exp\big(s^{(t)}_j\big)}{\sum_k \exp\big(s^{(t)}_k\big)}.$$

(At test time, we take the argmax instead of sampling.)

The selected pair is combined to form a single new constituent; thus, after step $t$, the number of constituents is decreased by one. The textual representation of the new constituent is defined as the L2-normalized sum of the representations of its two children:

$$\mathbf{x}_{\text{new}} = \frac{\mathbf{x}^{(t-1)}_j + \mathbf{x}^{(t-1)}_{j+1}}{\big\lVert \mathbf{x}^{(t-1)}_j + \mathbf{x}^{(t-1)}_{j+1} \big\rVert_2}.$$

We find that using a more complex encoder for constituents, such as a GRU, causes the representations to be highly biased towards a few salient words in the sentence (e.g., the encoder encodes only the word "cat" while ignoring the rest of the caption; Shi et al., 2018a; Wu et al., 2019). This significantly degrades the performance of linguistic structure induction.

We repeat this score-sample-combine process for $n-1$ steps, until all words in the input text have been combined into a single constituent (Figure 3). This completes the inference of the constituency parse tree. Since at each step we combine two consecutive constituents, the derived tree is binary and contains $2n-1$ constituents in total, including the $n$ single-word constituents.

Figure 3: An illustration of how VG-NSL composes a constituency parse tree. At each step, the score function is evaluated on all pairs of consecutive constituents (dashed lines). Next, a pair of constituents is sampled from all pairs w.r.t. a distribution computed by the softmax of all predicted scores. The selected pair of constituents is combined into a larger one, while the other constituents remain unchanged (solid lines).
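To make the procedure concrete, the following is a minimal PyTorch sketch of one score-sample-combine parsing episode. It is our own illustration rather than the released implementation: the class name, the span bookkeeping, and the tensor layout are assumptions, while the two-layer scorer, sampling-versus-argmax behavior, L2-normalized combination, and dimensions (512/128) follow the descriptions in this section and Section 4.3.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EasyFirstParser(nn.Module):
    """Sketch of VG-NSL's score-sample-combine parsing episode (illustrative only)."""

    def __init__(self, dim=512, hidden=128):
        super().__init__()
        # Two-layer feed-forward scorer over a pair of adjacent constituents.
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    @staticmethod
    def combine(left, right):
        # New constituent = L2-normalized sum of its two children.
        return F.normalize(left + right, p=2, dim=-1)

    def forward(self, word_embeddings, sample=True):
        """word_embeddings: (n, dim) tensor; returns all constituents, their token
        spans, and the log-probability of each combination step (for REINFORCE)."""
        constituents = [e for e in word_embeddings]          # initial constituents = words
        spans = [(i, i) for i in range(len(constituents))]   # inclusive token spans
        all_constituents, all_spans, log_probs = list(constituents), list(spans), []
        while len(constituents) > 1:
            pairs = torch.stack([torch.cat([constituents[i], constituents[i + 1]])
                                 for i in range(len(constituents) - 1)])
            scores = self.scorer(pairs).squeeze(-1)          # one score per adjacent pair
            dist = torch.distributions.Categorical(logits=scores)
            j = dist.sample() if sample else scores.argmax() # argmax at test time
            log_probs.append(dist.log_prob(j))
            j = int(j)
            new = self.combine(constituents[j], constituents[j + 1])
            merged_span = (spans[j][0], spans[j + 1][1])
            spans[j:j + 2] = [merged_span]
            constituents[j:j + 2] = [new]
            all_constituents.append(new)
            all_spans.append(merged_span)
        return all_constituents, all_spans, log_probs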

3.2 Visual-Semantic Embeddings

We follow an approach similar to that of Kiros et al. (2014) to define the visual-semantic embedding (VSE) space for paired images and text constituents. Let $\mathbf{v}^{(i)}$ denote the vector representation of an image $i$, and $\mathbf{c}^{(i)}_k$ denote the vector representation of the $k$-th constituent of its corresponding caption. During the matching with images, we ignore the tree structure and treat the constituents as a flat list. A function $m(\cdot, \cdot)$ is defined as the matching score between images and constituents:

$$m\big(\mathbf{v}^{(i)}, \mathbf{c}^{(i)}_k\big) = \cos\big(\Phi\, \mathbf{v}^{(i)}, \mathbf{c}^{(i)}_k\big),$$

where the parameters $\Phi$ align the visual and textual representations into a joint space.
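As a small illustration (not the authors' code), the matching function can be realized as a learned linear projection of the image feature followed by cosine similarity; the module name and the bias-free projection are our own assumptions, while the 2048-to-512 dimensions follow Section 4.3.

import torch.nn as nn
import torch.nn.functional as F

class MatchingScore(nn.Module):
    """m(v, c): cosine similarity after projecting the image into the joint space (sketch)."""

    def __init__(self, image_dim=2048, joint_dim=512):
        super().__init__()
        self.proj = nn.Linear(image_dim, joint_dim, bias=False)  # alignment parameters Phi

    def forward(self, image_feat, constituent):
        # image_feat: (..., image_dim); constituent: (..., joint_dim); broadcasts over leading dims.
        return F.cosine_similarity(self.proj(image_feat), constituent, dim=-1)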

3.3 Training

We optimize the visual-semantic representations (parameterized by $\Phi$) and the constituency structures (parameterized by $\Theta$) in an alternating manner. At each iteration, given the constituency parsing results of the captions, $\Phi$ is optimized to match the visual and textual representations. Next, given the visual grounding of the constituents, $\Theta$ is optimized to produce constituents that can be better matched with images. Specifically, we optimize the textual representations and the visual-semantic embedding space using a hinge-based triplet ranking loss:

$$\mathcal{L}(\Phi; \mathcal{V}, \mathcal{C}) = \sum_{i, k} \sum_{j \neq i} \Big[\delta - m\big(\mathbf{v}^{(i)}, \mathbf{c}^{(i)}_k\big) + m\big(\mathbf{v}^{(j)}, \mathbf{c}^{(i)}_k\big)\Big]_+ \;+\; \sum_{i, k} \sum_{j \neq i} \sum_{l} \Big[\delta - m\big(\mathbf{v}^{(i)}, \mathbf{c}^{(i)}_k\big) + m\big(\mathbf{v}^{(i)}, \mathbf{c}^{(j)}_l\big)\Big]_+,$$

where $i$ and $j$ index over all image-caption pairs in the data set, while $k$ and $l$ enumerate all constituents of a specific caption (caption $i$ and caption $j$, respectively), $\mathcal{V}$ is the set of image representations, $\mathcal{C}$ is the set of textual representations of all constituents, $\delta$ is a constant margin, and $[\cdot]_+$ denotes $\max(0, \cdot)$. This loss extends the image-caption retrieval loss of Kiros et al. (2014) by introducing alignments between images and sub-sentence constituents.
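The sketch below spells out this loss. It is a rough implementation under our own assumptions: negatives are drawn from within a mini-batch (the equation above sums over the whole data set), the argument names and default margin value are illustrative, and `match` can be the MatchingScore module sketched in Section 3.2.

import torch

def constituent_triplet_loss(match, images, constituents, owner, delta=0.2):
    """Hinge-based triplet ranking loss over image-constituent pairs (sketch).

    match:        callable m(image, constituent) -> score, broadcasting over leading dims
    images:       (B, D_img) image embeddings in the batch
    constituents: (M, D_txt) embeddings of all constituents of all captions in the batch
    owner:        (M,) index of the image/caption each constituent belongs to
    delta:        constant margin (illustrative value)
    """
    B, M = images.size(0), constituents.size(0)
    # Score every constituent against every image: scores[k, i] = m(v_i, c_k).
    scores = match(images.unsqueeze(0), constituents.unsqueeze(1))               # (M, B)
    pos = scores[torch.arange(M, device=owner.device), owner]                    # matched pairs
    same_image = owner.unsqueeze(1).eq(torch.arange(B, device=owner.device))     # (M, B)
    same_caption = owner.unsqueeze(1).eq(owner.unsqueeze(0))                     # (M, M)

    # For each constituent, every other image in the batch acts as a negative.
    image_neg = (delta - pos.unsqueeze(1) + scores).clamp(min=0).masked_fill(same_image, 0.0)
    # For each matched pair, constituents of the other captions act as negatives.
    text_neg = (delta - pos.unsqueeze(1) + scores[:, owner].t()).clamp(min=0).masked_fill(same_caption, 0.0)
    return image_neg.sum() + text_neg.sum()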

We optimize the textual structures via distant supervision: they are optimized for a better alignment between the derived constituents and the images. Intuitively, this objective encourages adjectives to be associated (combined) with the corresponding nouns, and verbs/prepositions to be associated (combined) with the corresponding subjects and objects. Specifically, we use REINFORCE (Williams, 1992) as the gradient estimator for $\Theta$. Consider the parsing process of a specific caption $c^{(i)}$, and denote the corresponding image embedding by $\mathbf{v}^{(i)}$. For a constituent $z$ of $c^{(i)}$, we define its (visual) concreteness as

$$\mathrm{concrete}\big(z, \mathbf{v}^{(i)}\big) = \sum_{j \neq i} \Big[m\big(\mathbf{v}^{(i)}, z\big) - m\big(\mathbf{v}^{(j)}, z\big) - \delta\Big]_+ + \sum_{j \neq i} \sum_{l} \Big[m\big(\mathbf{v}^{(i)}, z\big) - m\big(\mathbf{v}^{(i)}, \mathbf{c}^{(j)}_l\big) - \delta\Big]_+, \quad (1)$$

where $\delta$ is a fixed margin. At step $t$, we define the reward function for a combination of a pair of consecutive constituents $(a, b)$ as

$$r(a, b) = \mathrm{concrete}\big([a, b], \mathbf{v}^{(i)}\big), \quad (2)$$

where $[a, b]$ denotes the new constituent composed from $a$ and $b$. In plain words, at each step, we encourage the model to compose a constituent that maximizes the alignment between the new constituent and the corresponding image. During training, we sample constituency parse trees of captions and reinforce each composition step using Equation 2. At test time, no paired images are needed.
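A compact sketch of Equations 1 and 2 and the REINFORCE surrogate used to update $\Theta$ follows. The function and argument names are our own, and the contrastive images and constituents are assumed to come from the other captions in the batch; the per-step log-probabilities are those returned by the parser sketch in Section 3.1.

import torch

def concreteness(z, v_pos, neg_images, neg_constituents, match, delta=0.2):
    """concrete(z, v) as in Equation 1 (sketch): how much better constituent z matches its
    own image than contrastive images and contrastive constituents, hinged at margin delta."""
    pos = match(v_pos, z)
    image_term = (pos - match(neg_images, z) - delta).clamp(min=0).sum()
    text_term = (pos - match(v_pos, neg_constituents) - delta).clamp(min=0).sum()
    return image_term + text_term

def reinforce_loss(log_probs, rewards):
    """REINFORCE surrogate for the parser: each combination step is weighted by the
    concreteness of the constituent it produced (Equation 2). Returns a loss to minimize."""
    rewards = torch.stack(rewards).detach()  # rewards are constants; gradients flow through log-probs
    return -(rewards * torch.stack(log_probs)).sum()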

3.4 The Head-Initial Inductive Bias

English and many other Indo-European languages are usually head-initial (Baker, 2001). For example, in verb phrases and prepositional phrases, the verb (or the preposition) precedes its complements (e.g., the object of the verb). Consider the simple caption "a white cat on the lawn". While the association of the adjective (white) could be induced from the visual grounding of phrases, whether the preposition (on) should be associated with "a white cat" or with "the lawn" is more challenging to induce. Thus, we impose an inductive bias to guide the learner to correctly associate prepositions with their complements, determiners with the corresponding noun phrases, and complementizers with the corresponding relative clauses. Specifically, we discourage abstract constituents (i.e., constituents that cannot be grounded in the image) from being combined with a preceding constituent, by modifying the original reward definition (Equation 2) as:

$$r'(a, b) = \frac{r(a, b)}{1 + \lambda \cdot \mathrm{abstract}\big(b, \mathbf{v}^{(i)}\big)}, \quad (3)$$

where $\lambda$ is a scalar hyperparameter, $\mathbf{v}^{(i)}$ is the image embedding corresponding to the caption being parsed, and $\mathrm{abstract}(b, \mathbf{v}^{(i)})$ denotes the abstractness of the span $b$, defined analogously to concreteness (Equation 1):

$$\mathrm{abstract}\big(z, \mathbf{v}^{(i)}\big) = \sum_{j \neq i} \Big[m\big(\mathbf{v}^{(j)}, z\big) - m\big(\mathbf{v}^{(i)}, z\big) + \delta\Big]_+ + \sum_{j \neq i} \sum_{l} \Big[m\big(\mathbf{v}^{(i)}, \mathbf{c}^{(j)}_l\big) - m\big(\mathbf{v}^{(i)}, z\big) + \delta\Big]_+.$$

The intuition here is that the initial heads for prepositional phrases (e.g., on) and relative clauses (e.g., which, where) are usually abstract words. During training, we encourage the model to associate these abstract words with the succeeding constituents instead of the preceding ones. It is worth noting that such an inductive bias is language-specific, and cannot be applied to head-final languages such as Japanese Baker (2001). We leave the design of head-directionality inductive biases for other languages as future work.

4 Experiments

We evaluate VG-NSL for unsupervised parsing in several ways: F1 score against gold trees, self-consistency across different choices of random initialization, performance on different types of constituents, and data efficiency. In addition, we find that the concreteness score acquired by VG-NSL is consistent with a similar measure defined by linguists. We focus on English for the main experiments, but also extend to German and French.

4.1 Data Sets and Metrics

We use the standard split of the MSCOCO data set (Lin et al., 2014), following Karpathy and Fei-Fei (2015). It contains 82,783 images for training, 1,000 for development, and another 1,000 for testing. Each image is associated with five captions.

For the evaluation of constituency parsing, the Penn Treebank (PTB; Marcus et al., 1993) is a widely used, manually annotated data set. However, PTB consists of sentences from abstract domains, e.g., the Wall Street Journal (WSJ), which are not visually grounded and whose linguistic structures can hardly be induced by VG-NSL. Here we evaluate models on the MSCOCO test set, which is well matched to the training domain, and leave the extension of our work to more abstract domains to future work. We apply Benepar (Kitaev and Klein, 2018; https://pypi.org/project/benepar), an off-the-shelf constituency parser with state-of-the-art performance (95.52 F1) on the WSJ test set, to parse the captions in the MSCOCO test set as gold constituency parse trees. (We also manually label the constituency parse trees for 50 captions randomly sampled from the MSCOCO test split; Benepar achieves an F1 score of 95.65 against these manual labels. Details can be found in the supplementary material.) We evaluate all of the investigated models using the F1 score against these gold parse trees. Following convention (Sekine and Collins, 1997), we report the F1 score across all constituents in the data set, instead of the average of sentence-level F1 scores.
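For reference, the corpus-level (micro-averaged) bracket F1 used here can be computed as in this short sketch, assuming each tree is given as a collection of (start, end) spans; the function name is our own.

def corpus_f1(predicted_trees, gold_trees):
    """Corpus-level bracket F1: constituent counts are pooled over the whole data set rather
    than averaging per-sentence F1. Each tree is a collection of (start, end) spans."""
    matched = predicted_total = gold_total = 0
    for predicted, gold in zip(predicted_trees, gold_trees):
        matched += len(set(predicted) & set(gold))
        predicted_total += len(set(predicted))
        gold_total += len(set(gold))
    precision, recall = matched / predicted_total, matched / gold_total
    return 2 * precision * recall / (precision + recall)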

4.2 Baselines

Model NP VP PP ADJP Avg. F1 Self-F1
Random 47.3 10.5 17.3 33.5 27.1 32.4
Left 51.4 1.8 0.2 16.0 N/A
Right 32.2 23.4 18.7 14.4 N/A
PMI 54.2 16.0 14.3 39.2 N/A
PRPN (Shen et al., 2018a)    72.8    33.0    61.6    35.4    52.5 60.3
ON-LSTM (Shen et al., 2019) 74.4 11.8 41.3 44.0 45.5 69.3
Gumbel (Choi et al., 2018) 50.4 8.7 15.5 34.8 27.9 40.1
VG-NSL (ours) 79.6 26.2 42.0 22.0 50.4 87.1
VG-NSL+HI (ours) 74.6 32.5 66.5 21.7 53.3 90.2
VG-NSL+HI+FastText (ours)* 78.8 24.4 65.6 22.0 54.4 89.8
Concreteness estimation–based models
Turney et al. (2011)* 65.5 30.8 35.3 30.4 42.5 N/A
Turney et al. (2011)+HI* 74.5 26.2 47.6 25.6 48.9 N/A
Brysbaert et al. (2014)* 54.1 27.8 27.0 33.1 34.1 N/A
Brysbaert et al. (2014)+HI* 73.4 23.9 50.0 26.1 47.9 N/A
Hessel et al. (2018) 50.9 21.7 32.8 27.5 33.2 N/A
Hessel et al. (2018)+HI 72.5 34.4 65.8 26.2 52.9 N/A
Table 1: Recall of specific phrase types and overall F1 score, evaluated on the MSCOCO test split, averaged over 5 runs with different random initializations; ± denotes the standard deviation. We also include the self-agreement F1 score (Williams et al., 2018) across the 5 runs. * denotes models requiring extra labels and/or an extra corpus, and † denotes models requiring a pre-trained visual feature extractor. We highlight the best number in each column among all models that do not require extra data other than paired image-caption data, as well as the overall best number. The Left, Right, PMI, and concreteness estimation-based models are deterministic given the training and/or testing data, so they have no standard deviation or self-F1 (shown as N/A).

We compare VG-NSL with various baselines for unsupervised tree structure modeling of texts. We can categorize the baselines by their training objective or supervision.

Trivial tree structures.

Similarly to recent work on latent tree structures Williams et al. (2018); Phu Mon Htut et al. (2018); Shi et al. (2018b), we include three types of trivial baselines without linguistic information: random binary trees, left-branching binary trees, and right-branching binary trees.

Syntax acquisition by language modeling and statistics.

Shen et al. (2018a) propose the Parsing-Reading-Predict Network (PRPN), which predicts syntactic distances (Shen et al., 2018b) between adjacent words and composes a binary tree based on these distances to improve language modeling. The learned distances can be mapped into a binary constituency parse tree by recursively splitting the sentence between the two consecutive words with the largest syntactic distance.

Ordered neurons (ON-LSTM; Shen et al., 2019) is a recurrent unit based on the LSTM cell (Hochreiter and Schmidhuber, 1997) that explicitly regularizes different neurons in a cell to represent short-term or long-term information. After being trained on the language modeling task, Shen et al. (2019) suggest that the gate values in ON-LSTM cells can be viewed as syntactic distances Shen et al. (2018b) between adjacent words to induce latent tree structures. ON-LSTM has the state-of-the-art unsupervised constituency parsing performance on the WSJ test set. We train both PRPN and ON-LSTM on all captions in the MSCOCO training set and use the models as baselines.

Inspired by the syntactic distance–based approaches (Shen et al., 2018a, 2019), we also introduce another baseline, PMI, which uses negative pointwise mutual information Church and Hanks (1990) between adjacent words as the syntactic distance. We compose constituency parse trees based on the distances in the same way as PRPN and ON-LSTM.
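As an illustration of this baseline, the following sketch estimates PMI from unigram and bigram counts over a tokenized corpus and turns negative PMI into per-caption distances; the function names and the smoothing-free counting are our own simplifications.

import math
from collections import Counter

def pmi_distance_fn(corpus):
    """Builds a function mapping a tokenized caption to its list of syntactic distances,
    where the distance between adjacent words is their negative PMI (sketch)."""
    unigrams, bigrams, n_tokens = Counter(), Counter(), 0
    for tokens in corpus:
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
        n_tokens += len(tokens)
    n_bigrams = sum(bigrams.values())

    def pmi(x, y):
        if bigrams[(x, y)] == 0:
            return float("-inf")  # unseen pair: weakest possible association
        p_xy = bigrams[(x, y)] / n_bigrams
        return math.log(p_xy / ((unigrams[x] / n_tokens) * (unigrams[y] / n_tokens)))

    def distances(tokens):
        # Larger distance = weaker association; the tree is split at large distances first.
        return [-pmi(x, y) for x, y in zip(tokens, tokens[1:])]

    return distances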

Syntax acquisition from downstream tasks.

Choi et al. (2018) propose to compose binary constituency parse trees directly from downstream tasks using the Gumbel softmax trick (Jang et al., 2017). We integrate a Gumbel tree-based caption encoder into the visual-semantic embedding approach (Kiros et al., 2014). The model is trained on the downstream task of image-caption retrieval.

Syntax acquisition from concreteness estimation.

Since we apply concreteness information to train VG-NSL, it is worth comparing against unsupervised constituency parsing based on previous approaches for predicting word concreteness. This set of baselines includes semi-supervised estimation Turney et al. (2011), crowdsourced labeling Brysbaert et al. (2014), and multimodal estimation Hessel et al. (2018). Note that none of these approaches has been applied to unsupervised constituency parsing. Implementation details can be found in the supplementary material.

Based on the concreteness score of words, we introduce another baseline similar to VG-NSL. Specifically, we recursively combine two consecutive constituents with the largest average concreteness, and use the average concreteness as the score for the composed constituent. The algorithm generates binary constituency parse trees of captions. For a fair comparison, we implement a variant of this algorithm that also uses a head-initial inductive bias and include the details in the appendix.

4.3 Implementation Details

Across all experiments and all models (including baselines such as PRPN, ON-LSTM, and Gumbel), the embedding dimension for words and constituents is 512. For VG-NSL, we use a pre-trained ResNet-101 (He et al., 2016), trained on ImageNet (Russakovsky et al., 2015), to extract vector embeddings for images. Thus, $\Phi$ is a mapping from a 2048-D image embedding space to a 512-D visual-semantic embedding space. As for the score function in constituency parsing, we use a hidden dimension of 128 and a ReLU activation. All VG-NSL models are trained for 30 epochs. We use an Adam optimizer (Kingma and Ba, 2015) to train VG-NSL, and the learning rate is re-initialized after 15 epochs. We tune the other hyperparameters of VG-NSL on the development set using the self-agreement F1 score (Williams et al., 2018) over 5 runs with different choices of random initialization.

4.4 Results: Unsupervised Constituency Parsing

We evaluate the induced constituency parse trees via the overall F1 score, as well as the recall of four types of constituents: noun phrases (NP), verb phrases (VP), prepositional phrases (PP), and adjective phrases (ADJP) (Table 1). We also evaluate the robustness of models trained with fixed data and hyperparameters but different random initializations, in two ways: via the standard deviation of performance across multiple runs, and via the self-agreement F1 score (Williams et al., 2018), which is the average F1 taken over pairs of different runs.

Among all of the models that do not require extra labels, VG-NSL with the head-initial inductive bias (VG-NSL+HI) achieves the best F1 score. PRPN (Shen et al., 2018a) and a concreteness estimation-based baseline (Hessel et al., 2018) both produce competitive results. It is worth noting that the PRPN baseline reaches this performance without any information from images. However, the performance of PRPN is less stable than that of VG-NSL across random initializations. In contrast to its state-of-the-art performance on the full WSJ data set (Shen et al., 2019), we observe that ON-LSTM does not perform well on the MSCOCO caption data set. However, it remains the best model for adjective phrases, which is consistent with the results reported by Shen et al. (2019).

In addition to the best overall F1 scores, VG-NSL+HI achieves competitive F1 scores across most phrase types (NP, VP, and PP). Our models (VG-NSL and VG-NSL+HI) perform the best on NP and PP, which are the most common visually grounded phrases in the MSCOCO data set. In addition, our models produce much higher self-F1 than the baselines (Shen et al., 2018a, 2019; Choi et al., 2018), showing that they reliably produce reasonable constituency parse trees across different initializations.

We also test the effectiveness of using pre-trained word embeddings. Specifically, for VG-NSL+HI+FastText, we use a pre-trained FastText embedding (300-D; Joulin et al., 2016), concatenated with a 212-D trainable embedding, as the word embedding. Using pre-trained word embeddings further improves performance to an average F1 of 54.4 while keeping a comparable self-F1.

4.5 Results: Data Efficiency

Figure 4: (a) F1 score and (b) self-F1 score with respect to the amount of training data. All numbers are averaged over 5 runs with different random initializations.

We compare the data efficiency of PRPN (the strongest baseline method), ON-LSTM, VG-NSL, and VG-NSL+HI. We train the models using 1%, 2%, 5%, 10%, 20%, 50%, and 100% of the MSCOCO training set, and report the overall F1 and self-F1 scores on the test set (Figure 4).

Compared to PRPN trained on the full training set, VG-NSL and VG-NSL+HI reach comparable performance using only 20% of the data (i.e., 8K images with 40K captions). VG-NSL also tends to become stable (in terms of the self-F1 score) more quickly as the amount of data increases, while PRPN and ON-LSTM remain less stable.

4.6 Analysis: Consistency with Linguistic Concreteness

Model/method VG-NSL VG-NSL+HI
Turney et al. (2011) 0.74 0.72
Brysbaert et al. (2014) 0.71 0.71
Hessel et al. (2018) 0.84 0.85
Table 2: Agreement between our concreteness estimates and existing models or labels, evaluated via the Pearson correlation coefficient computed over the 100 most frequent words in the MSCOCO test set, averaged over 5 runs with different random initializations.

During training, VG-NSL acquires concreteness estimates for constituents via Equation 1. Here, we evaluate the consistency between the word-level concreteness estimates induced by VG-NSL and those produced by other methods (Turney et al., 2011; Brysbaert et al., 2014; Hessel et al., 2018). Specifically, we measure the correlation between the concreteness estimated by VG-NSL on the MSCOCO test set and existing linguistic concreteness definitions (Table 2). For a word $w$ with representation $\mathbf{x}_w$, we estimate its concreteness by averaging $\mathrm{concrete}(\mathbf{x}_w, \mathbf{v})$ over all images $\mathbf{v}$ associated with captions containing $w$. The high correlation between VG-NSL and the concreteness scores produced by Turney et al. (2011) and Brysbaert et al. (2014) supports the argument that the linguistic concept of concreteness can be acquired in an unsupervised way. Our model also achieves a high correlation with Hessel et al. (2018), which also estimates word concreteness from visual-domain information.

4.7 Analysis: Self-Agreement Score as the Criterion for Model Selection

Model Criterion Avg. F1 Self-F1
VG-NSL Self 50.4 87.1
VG-NSL R@1 47.7 83.4
VG-NSL+HI Self 53.3 90.2
VG-NSL+HI R@1 53.1 88.7
Table 3: Average F1 and self-F1 scores of VG-NSL and VG-NSL+HI with different model selection criteria. R@1 denotes using recall at 1 (Kiros et al., 2014) as the model selection criterion. All hyperparameters are tuned with respect to the self-agreement F1 score. The numbers are comparable to those in Table 1.

We introduce a novel hyperparameter tuning and model selection method based on the self-agreement score.

Let $\mathcal{M}^{(j)}_i$ denote the $j$-th checkpoint of the $i$-th model trained with hyperparameters $\mathcal{H}$, where models with different $i$ differ only in their random initialization. The hyperparameters $\mathcal{H}$ are tuned to maximize the average F1 agreement between the trees produced by pairs of checkpoints $\mathcal{M}^{(j)}_i$ and $\mathcal{M}^{(j')}_{i'}$ from different runs ($i \neq i'$), where only nearby checkpoints ($|j - j'| \le \kappa$) are compared. Here, $F_1(\cdot, \cdot)$ denotes the F1 score between the trees generated by two models, $N$ the number of different runs, and $\kappa$ the margin that ensures only nearby checkpoints are compared; $\kappa$ is fixed across all of our experiments.

After finding the best hyperparameters $\mathcal{H}^*$, we train the model another $N$ times with different random initializations, and select the checkpoint that maximizes the average F1 agreement with the checkpoints of the other runs.
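A rough sketch of this tuning and selection procedure follows. It reuses the corpus_f1 helper sketched in Section 4.1, and the data layout (a list of parsed development-set trees per checkpoint per run) and the final-checkpoint comparison rule are our own assumptions.

from itertools import combinations

def self_f1(trees_by_run, kappa=1):
    """Average F1 agreement over pairs of runs, comparing only checkpoints whose indices
    differ by at most kappa; trees_by_run[i][j] holds the parsed trees of checkpoint j of run i."""
    scores = []
    for run_a, run_b in combinations(trees_by_run, 2):
        for j, trees_a in enumerate(run_a):
            for k, trees_b in enumerate(run_b):
                if abs(j - k) <= kappa:
                    scores.append(corpus_f1(trees_a, trees_b))
    return sum(scores) / len(scores)

def select_run(final_trees_by_run):
    """Pick the run whose final trees agree best, on average, with those of the other runs."""
    agreement = [sum(corpus_f1(a, b) for b in final_trees_by_run if b is not a)
                 for a in final_trees_by_run]
    return max(range(len(final_trees_by_run)), key=agreement.__getitem__)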

We compare the performance of VG-NSL selected by the self-F1 score with that selected by recall at 1 in image-to-text retrieval (R@1 in Table 3; Kiros et al., 2014). As a model selection criterion, self-F1 consistently outperforms R@1 (avg. F1: 50.4 vs. 47.7 and 53.3 vs. 53.1 for VG-NSL and VG-NSL+HI, respectively). Meanwhile, it is worth noting that even when VG-NSL is selected by R@1, it shows better stability than PRPN and ON-LSTM (Table 1), in terms of both the F1 variance across different random initializations and the self-F1. Specifically, the variance of avg. F1 is always less than 0.6, while the self-F1 is greater than 80.

Note that the PRPN and ON-LSTM models are not tuned using self-F1, since these models are usually trained for hundreds or thousands of epochs, which makes it computationally expensive to evaluate self-F1. We leave the efficient tuning of these baselines with self-F1 as future work.

4.8 Extension to Multiple Languages

Model EN DE FR
PRPN   30.8   31.5   27.5
ON-LSTM   38.7   34.9   27.7
VG-NSL 33.5 36.3 34.3
VG-NSL+HI 38.7 38.3 38.1
Table 4: F1 scores on the Multi30K test split (Young et al., 2014; Elliott et al., 2016, 2017), averaged over 5 runs with different random initializations; ± denotes the standard deviation.

We extend our experiments to the Multi30K data set, which is built on the Flickr30K data set Young et al. (2014) and consists of English, German Elliott et al. (2016), and French Elliott et al. (2017) captions. For Multi30K, there are 29,000 images in the training set, 1,014 in the development set and 1,000 in the test set. Each image is associated with one caption in each language.

We compare our models to PRPN and ON-LSTM in terms of overall F1 score (Table 4). VG-NSL with the head-initial inductive bias consistently performs the best across the three languages, all of which are highly head-initial (Baker, 2001). Note that the F1 scores here are not comparable to those in Table 1, since Multi30K (English) has 13x fewer captions than MSCOCO.

5 Discussion

We have proposed a simple but effective model, the Visually Grounded Neural Syntax Learner, for visually grounded language structure acquisition. VG-NSL jointly learns parse trees and visually grounded textual representations. In our experiments, we find that this approach to grounded language learning produces parsing models that are both accurate and stable, and that the learning is much more data-efficient than a state-of-the-art text-only approach. Along the way, the model acquires estimates of word concreteness.

The results suggest multiple future research directions. First, VG-NSL matches text embeddings directly with embeddings of entire images. Its performance may be boosted by considering structured representations of both images (e.g., Lu et al., 2016; Wu et al., 2019) and texts (Steedman, 2000). Second, thus far we have used a shared representation for both syntax and semantics, but it may be useful to disentangle their representations (Steedman, 2000). Third, our best model is based on the head-initial inductive bias. Automatically acquiring such inductive biases from data remains challenging (Kemp et al., 2006; Gauthier et al., 2018). Finally, it may be possible to extend our approach to other linguistic tasks such as dependency parsing (Christie et al., 2016b), coreference resolution (Kottur et al., 2018), and learning pragmatics beyond semantics (Andreas and Klein, 2016).

There are also limitations to the idea of grounded language acquisition. In particular, the current approach has thus far been applied to understanding grounded texts in a single domain (static visual scenes for VG-NSL). Its applicability could be extended by learning shared representations across multiple modalities (Castrejon et al., 2016) or integrating with pure text-domain models (such as PRPN, Shen et al., 2018a).

Acknowledgement

We thank Allyson Ettinger for helpful suggestions on this work, and the anonymous reviewers for their valuable feedback.

References

Appendix A Overview of Models for Constituency Parsing without Explicit Syntactic Supervision

Model Objective Extra Label Multimodal Stochastic Extra Corpus
CCM Klein and Manning (2002)* MAP POS
DMV-CCM Klein and Manning (2005)* MAP POS
U-DOP Bod (2006b)* Probability Estimation POS
UML-DOP Bod (2006a)* MAP POS
PMI N/A
Random N/A
Left N/A
Right N/A
PRPN Shen et al. (2018a) LM
ON-LSTM Shen et al. (2019) LM
Gumbel softmaxChoi et al. (2018) Cross-modal Retrieval
VG-NSL (ours) Cross-modal Retrieval
VG-NSL+HI (ours) Cross-modal Retrieval
Concreteness estimation–based models
Turney et al. (2011)* N/A Concreteness (partial)
Turney et al. (2011)+HI* N/A Concreteness (partial)
Brysbaert et al. (2014)* N/A Concreteness (full)
Brysbaert et al. (2014)+HI* N/A Concreteness (full)
Hessel et al. (2018) N/A
Hessel et al. (2018)+HI N/A
Table 5: Comparison of models for constituency parsing without explicit syntactic supervision. * denotes models requiring extra labels, such as POS tags or manually labeled concreteness scores. All multimodal methods listed in the table require a pretrained visual feature extractor (i.e., ResNet-101; He et al., 2016). A model is labeled as stochastic if for fixed training data and hyperparameters the model may produce different results (e.g., due to different choices of random initialization). To the best of our knowledge, results on concreteness estimation Turney et al. (2011); Brysbaert et al. (2014); Hessel et al. (2018) have not been applied to unsupervised parsing so far.

( .. (.. (.. (.. (.. ‘A’ ‘cat’ ) ‘is’ ) ‘on’ ) ‘the’ ) ‘ground’ )

(a) Left-branching tree.

(.. ‘A’ (.. ‘cat’(.. ‘is’ (.. ‘on’ ( .. ‘the’ ‘ground’ ) ) ) ) )

(b) Right-branching tree.
Figure 5: Examples of some trivial tree structures.

As shown in Table 5, we compare existing models for constituency parsing without explicit syntactic supervision, with respect to their learning objective, their dependence on extra labels or corpora, and other features. The table also covers previous work that parses sentences based on gold part-of-speech tags.

Appendix B Implementation Details for VG-NSL

We adopt the code released by Faghri et al. (2018) (https://github.com/fartashf/vsepp) as the visual-semantic embedding module for VG-NSL. Following them, we fix the margin $\delta$ to the value used in their implementation. We also use the vocabulary provided by Faghri et al. (2018) (http://www.cs.toronto.edu/~faghri/vsepp/vocab.tar), which contains 10,000 frequent words in the MSCOCO data set. Out-of-vocabulary words are treated as unseen words. For both VG-NSL and the baselines, we use the same vocabulary where applicable.

Hyperparameter tuning.

As stated in the main text, we use the self-agreement F1 score (Williams et al., 2018) as an unsupervised signal for tuning all hyperparameters. Besides the learning rate and other conventional hyperparameters, we also tune $\lambda$, the hyperparameter of the head-initial bias model; $\lambda$ controls the weight of the penalty on abstract right constituents. We choose $\lambda$ from a small set of candidate values and keep the one that gives the best self-agreement F1 score.

Appendix C Implementation Details for Baselines

Trivial tree structures.

We show examples for left-branching binary trees and right-branching binary trees in Figure 5. As for binary random trees, we iteratively combine two randomly selected adjacent constituents. This procedure is similar to that shown in Algorithm 2.

Parsing-Reading-Predict Network (PRPN).

We use the code released by Shen et al. (2018a) (https://github.com/yikangshen/PRPN) to train PRPN. We tune the hyperparameters with respect to language modeling perplexity (Jelinek et al., 1977). For a fair comparison, we fix the hidden dimension of all hidden layers of PRPN to 512. We use an Adam optimizer (Kingma and Ba, 2015) to optimize the parameters. The tuned hyperparameters are the number of layers (1, 2, or 3) and the learning rate. The models are trained for 100 epochs on the MSCOCO data set and 1,000 epochs on the Multi30K data set, with early stopping based on language model perplexity.

Ordered Neurons (ON-LSTM).

We use the code released by Shen et al. (2019) (https://github.com/yikangshen/Ordered-Neurons) to train ON-LSTM. We tune the hyperparameters with respect to language modeling perplexity (Jelinek et al., 1977), and use perplexity as the early stopping criterion. For a fair comparison, the hidden dimension of all hidden layers is set to 512, and the chunk size is changed to 16 to fit the hidden layer size. Following the original paper (Shen et al., 2019), we set the number of layers to 3, and derive the constituency parse tree from the gate values output by the second layer of ON-LSTM. In order to obtain a better perplexity, we explore both Adam (Kingma and Ba, 2015) and SGD as the optimizer, and tune the learning rate for each. The models are trained for 100 epochs on the MSCOCO data set and 1,000 epochs on the Multi30K data set, with early stopping based on language model perplexity.

PMI based constituency parsing.

We estimate the pointwise mutual information (PMI; Church and Hanks, 1990) between two adjacent words using all captions in the MSCOCO training set, and use negative PMI as the syntactic distance (Shen et al., 2018b) to generate a binary constituency parse tree recursively. The method for constituency parsing from a given list of syntactic distances is shown in Algorithm 1.

Input: text length n, list of syntactic distances d (one distance between each pair of adjacent words)
Output: boundaries of constituents B, computed as B = parse(d, 0, n)

def parse(d, left, right):
    # Recursively split the span [left, right) of word positions at the largest syntactic distance.
    if right - left <= 1:
        return set()
    # Split between words `split` and `split + 1`, where the distance is largest within the span.
    split = max(range(left, right - 1), key=lambda i: d[i])
    return {(left, right)} | parse(d, left, split + 1) | parse(d, split + 1, right)

Algorithm 1 Constituency parsing based on a given list of syntactic distances.

Gumbel-softmax based latent tree.

We integrate the Gumbel-softmax latent-tree text encoder of Choi et al. (2018) (https://github.com/jihunchoi/unsupervised-treelstm) into the visual-semantic embedding framework (Faghri et al., 2018), and use the tree structures it produces as a baseline.

Turney et al. (2011) Brysbaert et al. (2014) Hessel et al. (2018) VG-NSL+HI
Turney et al. (2011) 1.00 0.84 0.58 0.72
Brysbaert et al. (2014) 0.84 1.00 0.55 0.71
Hessel et al. (2018) 0.58 0.55 1.00 0.85
VG-NSL+HI 0.72 0.71 0.85 1.00
Table 6: Pearson correlation coefficients between different concreteness estimation methods, including the baselines and VG-NSL+HI. For a fair comparison, the correlation coefficients are evaluated on the 100 most frequent words in the MSCOCO test set.
Figure 6: Normalized concreteness scores of example words.

Concreteness estimation.

For the semi-supervised concreteness estimation, we reproduce the experiments of Turney et al. (2011), using the manually labeled concreteness scores for 4,295 words from the MRC Psycholinguistic Database Machine Usable Dictionary (Coltheart, 1981; http://ota.oucs.ox.ac.uk/headers/1054.xml) as supervision, and English Wikipedia pages (https://dumps.wikimedia.org/other/static_html_dumps/April_2007/en/) to estimate the PMI between words. The PMI is then used to compute the similarity between seen and unseen words, which in turn serves as weights to estimate the concreteness of unseen words. For the concreteness scores from crowdsourcing, we use the released data set of Brysbaert et al. (2014) (http://crr.ugent.be/archives/1330). Similarly to VG-NSL, the multimodal concreteness scores (Hessel et al., 2018) are also estimated on the MSCOCO training set, using an open-source implementation (https://github.com/victorssilva/concreteness).

Constituency parsing with concreteness scores.

Let $\hat{c}_w$ denote the concreteness score estimated by a given model for the word $w$. Given the sequence of concreteness scores of the caption tokens, we aim to produce a binary constituency parse tree. We first normalize the concreteness scores so that all models produce scores in a common range. (For the concreteness scores estimated by Hessel et al. (2018), we first transform the original scores, which lie in a different range, before normalizing.)

We treat unseen (i.e., out-of-vocabulary) words in the same way as in VG-NSL, assigning them the lowest concreteness score, under the assumption that unseen words are the most abstract ones.

We compose constituency parse trees using the normalized concreteness scores by iteratively combining consecutive constituents. At each step, we select two adjacent constituents (initially, words) with the highest average concreteness score and combine them into a larger constituent, of which the concreteness is the average of its children. We repeat the above procedure until there is only one constituent left.

As for the head-initial inductive bias, we weight the concreteness of the right constituent by a hyperparameter when ranking all pairs of consecutive constituents during selection. Meanwhile, the concreteness of the composed constituent remains the unweighted average of the two component constituents. In order to keep consistent with VG-NSL, we use the same setting of this weight in all of our experiments.

The procedure is summarized in Algorithm 2.

Input: list of normalized concreteness scores (one per caption token), head-initial weighting hyperparameter
Output: boundaries of constituents B

def parse_by_concreteness(scores, weight=1.0):
    # Each token starts as its own constituent; track half-open (start, end) spans and concreteness.
    spans = [(i, i + 1) for i in range(len(scores))]
    concreteness = list(scores)
    boundaries = set(spans)
    while len(spans) > 1:
        # Pick the adjacent pair with the highest average concreteness, with the right
        # constituent's concreteness weighted by the head-initial hyperparameter.
        j = max(range(len(spans) - 1),
                key=lambda i: (concreteness[i] + weight * concreteness[i + 1]) / 2)
        merged = (spans[j][0], spans[j + 1][1])
        boundaries.add(merged)
        spans[j:j + 2] = [merged]
        # The merged constituent's concreteness is the unweighted average of its children.
        concreteness[j:j + 2] = [(concreteness[j] + concreteness[j + 1]) / 2]
    return boundaries

Algorithm 2 Constituency parsing based on concreteness estimation.

(.. (.. ‘Three’ ‘white’ ‘sinks’ ) (.. ‘in’ (.. (.. ‘a’ ‘bathroom’ ) (.. ‘under ’ ‘mirrors’ ) ) ) )

(a) Constituency parse tree labeled by Benepar Kitaev and Klein (2018).

(.. (.. ‘Three’ ‘white’ ‘sinks’ ) (.. ‘in’ (.. ‘a’ ‘bathroom’ ) ) (.. ‘under’ ‘mirrors’ ) )

(b) Manually labeled constituency parse tree.
Figure 7: A failure example by Benepar, where it fails to parse the noun phrase “three white sinks in a bathroom under mirrors” – according to human commonsense, it is much more common for sinks, rather than a bathroom, to be under mirrors. However, most of the constituents (e.g., “three white sinks” and “under mirrors”) are still successfully extracted by Benepar.

Appendix D Details of Manual Ground Truth Evaluation

It is important to confirm that the constituency parse trees of the MSCOCO captions produced by Benepar (Kitaev and Klein, 2018) are of sufficiently high quality to serve as reliable ground truth for evaluating other models. To verify this, we randomly sample 50 captions from the MSCOCO test split and manually label their constituency parse trees, without reference to either Benepar or the paired images, following the principles of Bies et al. (1995) as closely as possible. (The manually labeled constituency parse trees are publicly available at https://ttic.uchicago.edu/~freda/vgnsl/manually_labeled_trees.txt.) Note that we only label the tree structures, without constituent labels (e.g., NP and PP). Most failure cases of Benepar are related to human commonsense in resolving parsing ambiguities, e.g., prepositional phrase attachments (Figure 7).

We compare the manually labeled trees with those produced by Benepar (Kitaev and Klein, 2018), and find that the F1 score between them is 95.65.

Appendix E Concreteness by Different Models

E.1 Correlation between Different Concreteness Estimations

We report the correlation between different methods for concreteness estimation in Table 6. The concreteness scores given by Turney et al. (2011) and Brysbaert et al. (2014) correlate highly with each other. The concreteness scores estimated on multi-modal data (Hessel et al., 2018) also correlate moderately with these two methods (Turney et al., 2011; Brysbaert et al., 2014). Compared to the concreteness estimated by Hessel et al. (2018), the estimates produced by our model correlate more strongly with the scores derived from linguistic data (Turney et al., 2011; Brysbaert et al., 2014).

E.2 Concreteness Scores of Sample Words by Different Methods

We present the concreteness scores estimated or labeled by different methods in Figure 6, which qualitatively shows that the different methods correlate well with one another.

Appendix F Sample Trees Generated by VG-NSL

Figure 6 shows the sample trees generated by VG-NSL with the head-initial inductive bias (VG-NSL+HI). All captions are chosen from the MSCOCO test set.

( .. ( .. ‘a’ ‘kitchen’ ) ( .. ‘with’ ( .. ( .. ‘two’ ‘windows’ ) ( .. ‘and’ ( .. ‘two’ ( .. ‘metal’ ‘sinks’ ) ) ) ) ) )

(a) a kitchen with two windows and two metal sinks

( .. ( .. ‘a’ ( .. ‘blue’ ( .. ‘small’ ‘plane’ ) ) ) ( .. ‘standing’ ( .. ‘at’ ( .. ‘the’ ‘airstrip’ ) ) ) )

(b) a blue small plane standing at the airstrip

( .. ( .. ( .. ‘young’ ‘boy’ ) ‘sitting’ ) ( .. ‘on’ ( .. ‘top’ ( .. ‘of’ ( .. ‘a’ ‘briefcase’ ) ) ) ) )

(c) young boy sitting on top of a briefcase

( .. ( .. ( .. ‘a’ ( .. ‘small’ ‘dog’ ) ) ‘eating’ ) ( .. ( .. ‘a’ ‘plate’ ) ( .. ‘of’ ‘broccoli’ ) ) )

(d) a small dog eating a plate of broccoli

( .. ( .. ( .. ( .. ‘a’ ‘building’ ) ( .. ‘with’ ( .. ‘a’ ( .. ‘bunch’ ( .. ‘of’ ‘people’ ) ) ) ) ) ( .. ‘standing’ ‘around’ ) ) ‘it’ )

(i) a building with a bunch of people standing around it

( .. ( .. ( .. ‘a’ ‘horse’ ) ‘walking’ ) ( .. ‘by’ ( .. ( .. ‘a’ ‘tree’ ) ( .. ‘in’ ( .. ‘the’ ‘woods’ ) ) ) ) )

(j) a horse walking by a tree in the woods

( .. ( .. ( .. ‘the’ ( .. ‘golden’ ‘waffle’ ) ) ( .. ‘has’ ( .. ‘a’ ‘banana’ ) ) ) ( .. ‘in’ ‘it’ ) )

(k) the golden waffle has a banana in it .

( .. ( .. ( .. ‘a’ ‘bowl’ ) ( .. ‘full’ ( .. ‘of’ ‘oranges’ ) ) ) ( .. ‘that’ ( .. ‘still’ ( .. ‘have’ ‘stems’ ) ) ) )

(l) a bowl full of oranges that still have stems

( .. ( .. ‘there’ ( .. ‘is’ ( .. ‘a’ ‘person’ ) ) ) ( .. ‘that’ ( .. ‘is’ ( .. ‘sitting’ ( .. ( .. ‘in’ ( .. ‘the’ ‘boat’ ) ) ( .. ‘on’ ( .. ‘the’ ‘water’ ) ) ) ) ) ) )

(u) there is a person that is sitting in the boat on the water

( .. ( .. ( .. ‘a’ ‘sandwich’ ) ( .. ‘and’ ‘soup’ ) ) ( .. ‘sit’ ( .. ‘on’ ( .. ‘a’ ‘table’ ) ) ) )

(v) a sandwich and soup sit on a table

( .. ( .. ‘a’ ( .. ‘big’ ‘umbrella’ ) ) ( .. ‘sitting’ ( .. ‘on’ ( .. ‘the’ ‘beach’ ) ) ) )

(w) a big umbrella sitting on the beach
Figure 6: Examples of parsing trees generated by VG-NSL.