Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction

02/15/2018 · Roei Herzig et al. · Tel Aviv University, Bar-Ilan University

Structured prediction is concerned with predicting multiple inter-dependent labels simultaneously. Classical methods like CRF achieve this by maximizing a score function over the set of possible label assignments. Recent extensions use neural networks either to implement the score function or to perform the maximization. The current paper takes an alternative approach, using a neural network to generate the structured output directly, without going through a score function. We take an axiomatic perspective to derive the desired properties and invariances of such a network with respect to certain input permutations, presenting a structural characterization that is provably both necessary and sufficient. We then discuss graph-permutation invariant (GPI) architectures that satisfy this characterization and explain how they can be used for deep structured prediction. We evaluate our approach on the challenging problem of inferring a scene graph from an image, namely, predicting entities and their relations in the image. We obtain state-of-the-art results on the challenging Visual Genome benchmark, outperforming all recent approaches.


1 Introduction

Understanding the semantics of a complex visual scene is a fundamental problem in machine perception. It often requires recognizing multiple objects in a scene, together with their spatial and functional relations. The set of objects and relations is sometimes represented as a graph, connecting objects (nodes) with their relations (edges) and is known as a scene graph (Figure 1). Scene graphs provide a compact representation of the semantics of an image, and can be useful for semantic-level interpretation and reasoning about a visual scene Johnson et al. (2018). Scene-graph prediction is the problem of inferring the joint set of objects and their relations in a visual scene.

Since objects and relations are inter-dependent (e.g., a person and chair are more likely to be in relation “sitting on” than “eating”), a scene graph predictor should capture this dependence in order to improve prediction accuracy. This goal is a special case of a more general problem, namely, inferring multiple inter-dependent labels, which is the research focus of the field of structured prediction. Structured prediction has attracted considerable attention because it applies to many learning problems and poses unique theoretical and algorithmic challenges (e.g., see Belanger et al., 2017; Chen et al., 2015; Taskar et al., 2004). It is therefore a natural approach for predicting scene graphs from images.

Figure 1: An image and its scene graph from the Visual Genome dataset (Krishna et al., 2017). The scene graph captures the entities in the image (nodes, blue circles), like dog, and their relations (edges, red circles), like ⟨hat, on, dog⟩.

Structured prediction models typically define a score function $s(x, y)$ that quantifies how well a label assignment $y$ is compatible with an input $x$. In the case of understanding complex visual scenes, $x$ is an image, and $y$ is a complex label containing the labels of objects detected in the image and the labels of their relations. In this setup, the inference task amounts to finding the label that maximizes the compatibility score, $y^* = \arg\max_y s(x, y)$. This score-based approach separates a scoring component, implemented by a parametric model, from an optimization component, aimed at finding a label that maximizes that score. Unfortunately, for a general scoring function $s$, the space of possible label assignments grows exponentially with input size. For instance, for scene graphs the set of possible object label assignments is too large even for relatively simple images, since the vocabulary of candidate objects may contain thousands of objects. As a result, inferring the label assignment that maximizes a scoring function is computationally hard in the general case.

An alternative approach to score-based methods is to map an input $x$ to a structured output $y$ with a “black-box” neural network, without explicitly defining a score function. This raises a natural question: what is the right architecture for such a network? Here we take an axiomatic approach and argue that one important property such networks should satisfy is invariance to a particular type of input permutation. We then prove that this invariance is equivalent to imposing certain structural constraints on the architecture of the network, and describe architectures that satisfy these constraints.

To evaluate our approach, we first demonstrate on a synthetic dataset that respecting permutation invariance is important, because models that violate this invariance need more training data, despite having a comparable model size. Then, we tackle the problem of scene graph generation. We describe a model that satisfies the permutation invariance property, and show that it achieves state-of-the-art results on the competitive Visual Genome benchmark (Krishna et al., 2017), demonstrating the power of our new design principle.

In summary, the novel contributions of this paper are: a) Deriving sufficient and necessary conditions for graph-permutation invariance in deep structured prediction architectures. b) Empirically demonstrating the benefit of graph-permutation invariance. c) Developing a state-of-the-art model for scene graph prediction on a large dataset of complex visual scenes.

2 Structured Prediction

Score-based methods in structured prediction define a function $s(x, y)$ that quantifies the degree to which $y$ is compatible with $x$, and infer a label by maximizing $s(x, y)$ (e.g., see Belanger et al., 2017; Chen et al., 2015; Lafferty et al., 2001; Meshi et al., 2010; Taskar et al., 2004). Most score functions used previously decompose as a sum over simpler functions, $s(x, y) = \sum_i f_i(x, y)$, making it possible to maximize each $f_i$ efficiently. This local maximization forms the basic building block of algorithms for approximately maximizing $s(x, y)$. One way to decompose the score function is to restrict each $f_i$ to depend only on a small subset of the $y$ variables.

The renewed interest in deep learning led to efforts to integrate deep networks with structured prediction, including modeling the functions $f_i$ as deep networks. In this context, the most widely used score functions are singleton $f_i(y_i, x)$ and pairwise $f_{ij}(y_i, y_j, x)$. The early work taking this approach used a two-stage architecture, learning the local scores independently of the structured prediction goal (Chen et al., 2014; Farabet et al., 2013). Later studies considered end-to-end architectures where the inference algorithm is part of the computation graph (Chen et al., 2015; Pei et al., 2015; Schwing & Urtasun, 2015; Zheng et al., 2015). Recent studies go beyond pairwise scores, also modeling global factors (Belanger et al., 2017; Gygli et al., 2017).

Score-based methods provide several advantages. First, they allow intuitive specification of local dependencies between labels and of how these translate into global dependencies. Second, for linear score functions, the learning problem has natural convex surrogates (Lafferty et al., 2001; Taskar et al., 2004). Third, inference in large label spaces is sometimes possible via exact algorithms or empirically accurate approximations. However, with the advent of deep scoring functions $s(x, y)$, learning is no longer convex. It is therefore worthwhile to rethink the architecture of structured prediction models and to consider models that map inputs to outputs directly, without explicitly maximizing a score function. We would like these models to enjoy the expressivity and predictive power of neural networks, while maintaining the ability to specify local dependencies between labels in a flexible manner. In the next section we present such an approach and consider a natural question: what should the properties of a deep neural network used for structured prediction be?

3 Permutation-Invariant Structured Prediction

In what follows we define the permutation-invariance property for structured prediction models, and argue that permutation invariance is a natural principle for designing their architecture.

We first introduce our notation. We focus on structures with pairwise interactions, because they are simpler in terms of notation and are sufficient for describing the structure in many problems. We denote a structured label by $y = [y_1, \ldots, y_n]$. In a score-based approach, the score is defined via a set of singleton scores $f_i(y_i)$ and pairwise scores $f_{ij}(y_i, y_j)$, where the overall score $s(y)$ is the sum of these scores. For brevity, we denote $z_i = f_i$ and $z_{i,j} = f_{ij}$. An inference algorithm takes as input the local scores and outputs an assignment that maximizes $s(y)$. We can thus view inference as a black-box that takes node-dependent and edge-dependent inputs (i.e., the scores $z_i$, $z_{i,j}$) and returns a label $y$, even without an explicit score function $s(y)$. While numerous inference algorithms exist for this setup, including belief propagation (BP) and mean field, here we develop a framework for a deep labeling algorithm (we avoid the term “inference” since the algorithm does not explicitly maximize a score function). Such an algorithm will be a black-box, taking the local score functions as input and producing the labels $y_1, \ldots, y_n$ as output. We next ask what architecture such an algorithm should have.

We follow with several definitions. A graph labeling function $\mathcal{F}$ is a function whose input is an ordered set of node features $[z_1, \ldots, z_n]$ and an ordered set of edge features $[z_{1,2}, \ldots, z_{i,j}, \ldots]$. For example, $z_i$ can be the array of values $f_i(y_i)$, and $z_{i,j}$ can be the table of values $f_{ij}(y_i, y_j)$. Assume $z_i \in \mathbb{R}^d$ and $z_{i,j} \in \mathbb{R}^e$. The output of $\mathcal{F}$ is a set of node labels $y_1, \ldots, y_n$. Thus, algorithms such as BP are graph labeling functions. However, graph labeling functions do not necessarily maximize a score function. We denote the joint set of node features and edge features by $\mathbf{z}$ (i.e., a single set containing all node and edge feature vectors). In Section 3.1 we discuss extensions to the case where only a subset of the edges is available.

Figure 2: Left: Graph permutation invariance. A graph labeling function is graph-permutation invariant (GPI) if permuting the node and edge features maintains the output (up to the same permutation). Right: a schematic representation of the GPI architecture in Theorem 1. Singleton features $z_i$ are omitted for simplicity. (a) First, the features $z_i, z_{i,j}, z_j$ are processed element-wise by $\phi$. (b) Features are summed to create a vector $s_i$, which is concatenated with $z_i$. (c) A representation of the entire graph is created by applying $\alpha$ $n$ times and summing the resulting vectors. (d) The graph representation is then finally processed by $\rho$ together with $z_k$.

A natural requirement is that the function $\mathcal{F}$ produces the same result when given the same features, up to a permutation of the input. For example, consider a label space with three variables $y_1, y_2, y_3$, and assume that $\mathcal{F}$ takes as input $(z_1, z_2, z_3)$ together with the corresponding edge features, and outputs a label $(y_1^*, y_2^*, y_3^*)$. When $\mathcal{F}$ is given an input that is permuted in a consistent way, say, $(z_2, z_1, z_3)$ with edge features permuted accordingly, this defines exactly the same input. Hence, the output should still be the same labels, permuted accordingly: $(y_2^*, y_1^*, y_3^*)$. Most inference algorithms, including BP and mean field, satisfy this symmetry requirement by design, but this property is not guaranteed in general in a deep model. Here, our goal is to design a deep learning black-box, and hence we wish to guarantee invariance to input permutations. A black-box that violates this invariance “wastes” capacity on learning it at training time, which increases sample complexity, as shown in Sec. 5.1. We proceed to formally define the permutation invariance property.

Definition 1.

Let $\mathbf{z}$ be a set of node features and edge features, and let $\sigma$ be a permutation of $\{1, \ldots, n\}$. We define $\sigma(\mathbf{z})$ to be a new set of node and edge features given by $[\sigma(\mathbf{z})]_i = z_{\sigma(i)}$ and $[\sigma(\mathbf{z})]_{i,j} = z_{\sigma(i), \sigma(j)}$.

We also use the notation $\sigma(y)$ for permuting the labels. Namely, $\sigma$ applied to a set of labels yields the same labels, only permuted by $\sigma$. Be aware that applying $\sigma$ to the input features is different from permuting labels, because edge input features must be permuted in a way that is consistent with permuting node input features. We now provide our key definition of a function whose output is invariant to permutations of the input. See Figure 2 (left).

Definition 2.

A graph labeling function $\mathcal{F}$ is said to be graph-permutation invariant (GPI) if for all permutations $\sigma$ of $\{1, \ldots, n\}$ and for all $\mathbf{z}$ it satisfies $\mathcal{F}(\sigma(\mathbf{z})) = \sigma(\mathcal{F}(\mathbf{z}))$.
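To make Definitions 1 and 2 concrete, the following minimal sketch applies a permutation to node and edge features and numerically checks the GPI property; `toy_gpi` is a hypothetical stand-in for $\mathcal{F}$, not a model from this paper:

```python
import numpy as np

def permute_features(z_nodes, z_edges, sigma):
    """Definition 1: [sigma(z)]_i = z_{sigma(i)}, [sigma(z)]_{ij} = z_{sigma(i),sigma(j)}."""
    return z_nodes[sigma], z_edges[np.ix_(sigma, sigma)]

def toy_gpi(z_nodes, z_edges):
    """A trivially GPI labeling function: each node's feature plus the sum
    of its incident edge features (excluding the diagonal)."""
    return z_nodes + z_edges.sum(axis=1) - np.diag(z_edges)

rng = np.random.default_rng(0)
n = 5
z_nodes, z_edges = rng.normal(size=n), rng.normal(size=(n, n))
sigma = rng.permutation(n)

out = toy_gpi(z_nodes, z_edges)
out_perm = toy_gpi(*permute_features(z_nodes, z_edges, sigma))
assert np.allclose(out_perm, out[sigma])  # Definition 2: F(sigma(z)) == sigma(F(z))
```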

3.1 Characterizing Permutation Invariance

Motivated by the above discussion, we ask: what structure is necessary and sufficient to guarantee that $\mathcal{F}$ is GPI? Note that a function $\mathcal{F}$ takes as input an ordered set $\mathbf{z}$, so its output on $\mathbf{z}$ could certainly differ from its output on $\sigma(\mathbf{z})$. To achieve permutation invariance, $\mathcal{F}$ should contain certain symmetries. For instance, one permutation-invariant architecture would be to define $y_i = g(z_i)$ for some function $g$, but this architecture is too restrictive and does not cover all permutation-invariant functions. Theorem 1 below provides a complete characterization (see Figure 2 for the corresponding architecture). Intuitively, the architecture in Theorem 1 can aggregate information from the entire graph, and do so in a permutation-invariant manner.

Theorem 1.

Let $\mathcal{F}$ be a graph labeling function. Then $\mathcal{F}$ is graph-permutation invariant if and only if there exist functions $\alpha$, $\rho$, and $\phi$ such that for all $k = 1, \ldots, n$:

$$[\mathcal{F}(\mathbf{z})]_k = \rho\Big(z_k,\; \sum_{i=1}^{n} \alpha\big(z_i,\; \sum_{j \ne i} \phi(z_i, z_{i,j}, z_j)\big)\Big) \qquad (1)$$

where $\phi : \mathbb{R}^{2d+e} \to \mathbb{R}^{L}$, $\alpha : \mathbb{R}^{d+L} \to \mathbb{R}^{W}$, and $\rho : \mathbb{R}^{d+W} \to \mathcal{Y}$.

Proof.

First, we show that any $\mathcal{F}$ satisfying the conditions of Theorem 1 is GPI. Namely, for any permutation $\sigma$, $[\mathcal{F}(\sigma(\mathbf{z}))]_k = [\mathcal{F}(\mathbf{z})]_{\sigma(k)}$. To see this, write $[\mathcal{F}(\sigma(\mathbf{z}))]_k$ using Eq. 1 and Definition 1:

$$[\mathcal{F}(\sigma(\mathbf{z}))]_k = \rho\Big(z_{\sigma(k)},\; \sum_{i} \alpha\big(z_{\sigma(i)},\; \sum_{j \ne i} \phi(z_{\sigma(i)}, z_{\sigma(i),\sigma(j)}, z_{\sigma(j)})\big)\Big) \qquad (2)$$

The second argument of $\rho$ above is invariant under $\sigma$, because it is a sum over all nodes and their neighbors, which is invariant under permutation. Thus Eq. 2 is equal to:

$$\rho\Big(z_{\sigma(k)},\; \sum_{i} \alpha\big(z_i,\; \sum_{j \ne i} \phi(z_i, z_{i,j}, z_j)\big)\Big) = [\mathcal{F}(\mathbf{z})]_{\sigma(k)},$$

where the last equality follows from Eq. 1. We thus proved that Eq. 1 implies graph-permutation invariance.

Next, we prove that any given GPI function $\mathcal{F}$ can be expressed as a function of the form in Eq. 1. Namely, we show how to define $\alpha$, $\phi$, and $\rho$ that can implement $\mathcal{F}$. Note that in this direction of the proof the function $\mathcal{F}$ is a black-box. Namely, we only know that it is GPI, but do not assume anything else about its implementation.

The key idea is to construct $\phi$ and $\alpha$ such that the second argument of $\rho$ in Eq. 1 contains the information about all the graph features $\mathbf{z}$. Then, the function $\rho$ corresponds to an application of $\mathcal{F}$ to this representation, followed by extracting the label $y_k$. To simplify notation, assume edge features are scalar ($e = 1$). The extension to vectors is simple, but involves more indexing.

We assume WLOG that the black-box function $\mathcal{F}$ is a function only of the pairwise features (otherwise, we can always augment the pairwise features with the singleton features). Since $e = 1$, we use a matrix $Z \in \mathbb{R}^{n \times n}$ to denote all the pairwise features.

Finally, we assume that our implementation of $\mathcal{F}$ will take additional node features $z_i$ such that no two nodes have the same feature (i.e., the features identify the node).

Our goal is thus to show that there exist functions $\alpha$, $\phi$, and $\rho$ such that the function in Eq. 1, applied to these features, yields the same labels as $\mathcal{F}(Z)$.

Let $H$ be a hash function with $M$ buckets mapping node features $z_i$ to an index (bucket). Assume that $H$ is perfect (this can be achieved for a large enough $M$). Define $\phi$ to map the pairwise features to a vector of size $M$. Let $\mathbf{e}_{H(z_j)}$ be a one-hot vector of dimension $M$, with one in the $H(z_j)$ coordinate. Recall that we consider scalar $z_{i,j}$, so that $\phi$ is indeed in $\mathbb{R}^M$, and define $\phi$ as $\phi(z_i, z_{i,j}, z_j) = \mathbf{e}_{H(z_j)} \cdot z_{i,j}$, i.e., $\phi$ “stores” $z_{i,j}$ in the unique bucket for node $j$.

Let $s_i = \sum_{j \ne i} \phi(z_i, z_{i,j}, z_j)$ be the second argument of $\alpha$ in Eq. 1 ($s_i \in \mathbb{R}^M$). Then, since all the $z_j$ are distinct, $s_i$ stores all the pairwise features for neighbors of $i$ in unique positions within its $M$ coordinates. Since $s_i$ contains the feature $z_{i,j}$ whereas $s_k$ contains the feature $z_{k,j}$, we cannot simply sum the $s_i$, since we would lose the information of which edges the features originated from. Instead, we define $\alpha$ to map $s_i$ to $\mathbb{R}^{M \times M}$ such that each feature is mapped to a distinct location. Formally:

$$\alpha(z_i, s_i) = \mathbf{e}_{H(z_i)}\, s_i^{T} \qquad (3)$$

$\alpha$ outputs an $M \times M$ matrix that is all zeros except for the features corresponding to node $i$, which are stored in row $H(z_i)$. The matrix $\sum_i \alpha(z_i, s_i)$ (namely, the second argument of $\rho$ in Eq. 1) is an $M \times M$ matrix containing all the edge features in the graph, including the graph structure.

To complete the construction, we set $\rho$ to have the same outcome as $\mathcal{F}$. We first discard the rows and columns in $\sum_i \alpha(z_i, s_i)$ that do not correspond to original nodes (reducing the matrix to dimension $n \times n$). Then, we use the reduced matrix as the input $Z$ to the black-box $\mathcal{F}$.

Assume for simplicity that the matrix does not need to be contracted (this merely introduces another indexing step). Then $\sum_i \alpha(z_i, s_i)$ corresponds to the original matrix $Z$ of pairwise features, with both rows and columns permuted according to $H$. We will thus use this permuted matrix as input to the function $\mathcal{F}$. Since $\mathcal{F}$ is GPI, the label for node $k$ will be given by the output of $\mathcal{F}$ in position $H(z_k)$. Thus we set $\rho(z_k, \cdot)$ to return that coordinate of the output; by the argument above this equals $[\mathcal{F}(Z)]_k$, implying that the above $\alpha$, $\phi$, and $\rho$ indeed implement $\mathcal{F}$. ∎
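For concreteness, the following is a minimal PyTorch sketch of the architecture in Eq. 1, assuming dense node features $z_i \in \mathbb{R}^d$ and edge features $z_{i,j} \in \mathbb{R}^e$. The layer sizes and single-layer MLPs are illustrative choices, not the configuration used in the paper's experiments:

```python
import torch
import torch.nn as nn

class GPILayer(nn.Module):
    """Eq. 1: rho(z_k, sum_i alpha(z_i, sum_{j != i} phi(z_i, z_ij, z_j)))."""
    def __init__(self, d, e, L, W, n_labels):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2 * d + e, L), nn.ReLU())
        self.alpha = nn.Sequential(nn.Linear(d + L, W), nn.ReLU())
        self.rho = nn.Linear(d + W, n_labels)

    def forward(self, z_nodes, z_edges):
        # z_nodes: (n, d) node features; z_edges: (n, n, e) edge features.
        n, d = z_nodes.shape
        zi = z_nodes.unsqueeze(1).expand(n, n, d)               # z_i, broadcast over j
        zj = z_nodes.unsqueeze(0).expand(n, n, d)               # z_j, broadcast over i
        pair = self.phi(torch.cat([zi, z_edges, zj], dim=-1))   # phi(z_i, z_ij, z_j): (n, n, L)
        mask = 1.0 - torch.eye(n, device=pair.device).unsqueeze(-1)
        s = (pair * mask).sum(dim=1)                            # s_i = sum_{j != i} phi(...)
        graph = self.alpha(torch.cat([z_nodes, s], dim=-1)).sum(dim=0)  # graph-level vector
        graph = graph.unsqueeze(0).expand(n, -1)
        return self.rho(torch.cat([z_nodes, graph], dim=-1))    # per-node label logits
```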

Extension to general graphs

So far, we discussed complete graphs, where all edges correspond to valid feature pairs. However, many graphs of interest are incomplete. For example, an $n$-variable chain graph in sequence labeling has only $n-1$ edges. For such graphs, the input to $\mathcal{F}$ would not contain all pairs, but only the features corresponding to valid edges of the graph, and we are only interested in invariances that preserve the graph structure, namely, the automorphisms of the graph. Thus, the desired invariance is that $\mathcal{F}(\sigma(\mathbf{z})) = \sigma(\mathcal{F}(\mathbf{z}))$, where $\sigma$ is not an arbitrary permutation but an automorphism. It is easy to see that a simple variant of Theorem 1 holds in this case: all we need to do is replace the inner sum $\sum_{j \ne i}$ in Eq. 1 with $\sum_{j \in N(i)}$, where $N(i)$ are the neighbors of node $i$ in the graph (as sketched below). The arguments are then similar to the proof above.
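In code, this variant changes only the inner aggregation of the Eq. 1 sketch above, e.g. via a binary adjacency mask (a hypothetical helper, assuming the $\phi$ outputs are precomputed):

```python
import torch

def neighbor_sum(pair_feats, adj):
    """Inner sum of Eq. 1 restricted to graph neighbors.
    pair_feats: (n, n, L) tensor holding phi(z_i, z_ij, z_j);
    adj: (n, n) 0/1 adjacency matrix with a zero diagonal."""
    return (pair_feats * adj.unsqueeze(-1)).sum(dim=1)  # s_i = sum over j in N(i)
```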

Implications of Theorem 1

Our result has interesting implications for deep structured prediction. First, it highlights the fact that the architecture “collects” information from all the different edges of the graph, in an invariant fashion, via the $\phi$ functions. Specifically, the $\phi$ outputs (after summation) aggregate all the features around a given node, which the $\alpha$ functions (after summation) can then collect into a global representation. Thus, these functions can provide a summary of the entire graph that is sufficient for downstream algorithms. This is different from a single round of message passing, which would not be sufficient for collecting global graph information. Note that the intermediate dimensions may need to be large to aggregate all graph information (e.g., by hashing all the features as in the proof of Theorem 1), but the architecture itself can be shallow.

Second, the architecture is parallelizable, as all $\phi$ functions can be applied simultaneously. This is in contrast to recurrent models (Zellers et al., 2017), which are harder to parallelize and are thus slower in practice.

Finally, the theorem suggests several common architectural structures that can be used within GPI. We briefly mention two of these. 1) Attention: Attention is a powerful component in deep learning architectures (Bahdanau et al., 2015), but most inference algorithms do not use attention. Intuitively, in attention each node aggregates features of neighbors through a weighted sum, where the weight is a function of the neighbor's relevance. For example, the label of an entity in an image may depend more strongly on entities that are spatially closer. Attention can be naturally implemented in our GPI characterization, and we provide a full derivation for this implementation in the appendix. It plays a key role in our scene graph model described below. 2) RNNs: Because GPI functions are closed under composition, for any GPI function $\mathcal{F}$ we can run $\mathcal{F}$ iteratively, providing the output of one step of $\mathcal{F}$ as part of the input to the next step, and maintain GPI. This results in a recurrent architecture (see the sketch below), which we use in our scene graph model.
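For instance, a recurrent GPI model can be sketched as unrolling a single layer, feeding its output back as node features; here `layer` is assumed to map node features to vectors of the same dimension:

```python
def iterate_gpi(layer, z_nodes, z_edges, steps=3):
    """Unroll a GPI layer recurrently. Since compositions of GPI functions
    are GPI, the unrolled model remains GPI."""
    for _ in range(steps):
        z_nodes = layer(z_nodes, z_edges)
    return z_nodes
```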

4 Related Work

The concept of architectural invariance was recently proposed in DeepSets (Zaheer et al., 2017). The invariance we consider is much less restrictive: the architecture does not need to be invariant to all permutations of singleton and pairwise features, just those consistent with a graph re-labeling. This characterization results in a substantially different set of possible architectures.

Deep structured prediction. There has been significant recent interest in extending deep learning to structured prediction tasks. Much of this work has been on semantic segmentation, where convolutional networks (Shelhamer et al., 2017) became a standard approach for obtaining “singleton scores” and various approaches were proposed for adding structure on top. Most of these approaches used variants of message-passing algorithms, unrolled into a computation graph (Xu et al., 2017). Some studies parameterized parts of the message-passing algorithm and learned its parameters (Lin et al., 2015). Recently, gradient descent has also been used for maximizing score functions (Belanger et al., 2017; Gygli et al., 2017). An alternative to deep structured prediction is greedy decoding, inferring one label at a time conditioned on previous labels. This approach has been popular in sequence-based applications (e.g., parsing (Chen & Manning, 2014)), relying on the sequential structure of the input, where BiLSTMs are effectively applied. Another related line of work applies deep learning to graph-based problems, such as the TSP (Bello et al., 2016; Gilmer et al., 2017; Khalil et al., 2017). The notion of graph invariance is clearly important in these works, as highlighted by Gilmer et al. (2017). However, they do not specify a general architecture that satisfies invariance, as we do here, and in fact focus on message-passing architectures, which we strictly generalize. Furthermore, our focus is on the more general problem of structured prediction, rather than on specific graph-based optimization problems.

Scene graph prediction.

Extracting scene graphs from images provides a semantic representation that can later be used for reasoning, question answering, and image retrieval (Johnson et al., 2015; Lu et al., 2016; Raposo et al., 2017). It is at the forefront of machine-vision research, integrating challenges like object detection, action recognition, and detection of human-object interactions (Liao et al., 2016; Plummer et al., 2017). Prior work on scene-graph prediction used neural message-passing algorithms (Xu et al., 2017) as well as prior knowledge in the form of word embeddings (Lu et al., 2016). Other work suggested predicting graphs directly from pixels in an end-to-end manner (Newell & Deng, 2017). NeuralMotif (Zellers et al., 2017), currently the state-of-the-art model for scene-graph prediction on Visual Genome, employs an RNN that provides global context by sequentially reading the independent predictions for each entity and relation and then refining those predictions. The NeuralMotif model maintains invariance by fixing the order in which the RNN reads its inputs, so that only a single input order is allowed. However, this fixed order is not guaranteed to be optimal.

5 Experimental Evaluation

We empirically evaluate the benefit of GPI architectures, first on a synthetic graph-labeling task and then on the problem of mapping images to scene graphs.

5.1 Synthetic Graph Labeling

We start by studying GPI on a synthetic problem, defined as follows. An input graph $G = (V, E)$ is given, where each node $i$ is assigned to one of $K$ sets; the set of node $i$ is denoted by $\Gamma(i)$. The goal is to compute, for each node, the number of neighbors that belong to the same set. Namely, the label of node $i$ is $y_i = |\{j \in N(i) : \Gamma(j) = \Gamma(i)\}|$. We generated random graphs with 10 nodes (larger graphs produced similar results) by sampling each edge independently and uniformly, and sampling $\Gamma(i)$ for every node uniformly over the $K$ sets. The node features are one-hot vectors of $\Gamma(i)$, and the edge features indicate whether $\Gamma(i) = \Gamma(j)$; a sketch of the data-generation process follows below. We compare one GPI architecture and two standard non-GPI architectures: (a) GPI: the architecture for graph prediction described in detail in Section 5.2; we used the basic version, without attention and RNN. (b) LSTM: we replace the two aggregating sums of Theorem 1 with two LSTMs with a state size of 200 that read their inputs in random order. (c) FC: a fully-connected feed-forward network with two hidden layers of 1,000 nodes each; its input is a concatenation of all node and pairwise features, and its output is all node predictions. The focus of the experiment is sample complexity; therefore, for a fair comparison, we use the same number of parameters for all models.
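A minimal sketch of this synthetic data generation; the number of sets `K` and the edge probability `p` are illustrative values, since the text leaves them unspecified:

```python
import numpy as np

def sample_graph(n=10, K=4, p=0.5, seed=None):
    """Sample one instance: a random undirected graph, a set assignment
    Gamma, and per-node labels counting same-set neighbors."""
    rng = np.random.default_rng(seed)
    adj = np.triu(rng.random((n, n)) < p, 1)
    adj = adj | adj.T                                  # undirected, no self-loops
    gamma = rng.integers(K, size=n)                    # Gamma(i), the set of node i
    node_feats = np.eye(K)[gamma]                      # one-hot of Gamma(i)
    same = gamma[:, None] == gamma[None, :]            # indicator Gamma(i) == Gamma(j)
    edge_feats = np.stack([adj, adj & same], -1).astype(float)  # edge present, same-set
    labels = (adj & same).sum(axis=1)                  # y_i = |{j in N(i): Gamma(j) = Gamma(i)}|
    return node_feats, edge_feats, labels
```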

Figure 3: Accuracy as a function of sample size for graph labeling. The right panel is a zoomed-in version of the left.

Figure 3 shows the results, demonstrating that GPI requires far fewer samples to converge to the correct solution. This illustrates the advantage of an architecture with the correct inductive bias for the problem.

5.2 Scene-Graph Classification

We evaluate the GPI approach on the motivating task of this paper: inferring scene graphs from images (Figure 1). In this problem, the input is an image annotated with a set of bounding boxes for the entities in the image (for simplicity, we focus on the task where boxes are given). The goal is to label each bounding box with the correct entity category and every pair of entities with their relation, such that they form a coherent scene graph.

We begin by describing our Scene Graph Predictor (SGP) model. We aim to predict two types of variables. The first is entity variables $y_i$ for all bounding boxes $i = 1, \ldots, n$; each $y_i$ can take one of $L$ values (e.g., “dog”, “man”). The second is relation variables $y_{i,j}$ for every pair of bounding boxes; each such $y_{i,j}$ can take one of $R$ values (e.g., “on”, “near”). Our graph connects variables that are expected to be inter-related. It contains two types of edges: 1) entity-entity edges, connecting every two entity variables $y_i$ and $y_j$ for $i \ne j$; 2) entity-relation edges, connecting every relation variable $y_{i,j}$ (where $i \ne j$) to its two entity variables $y_i$ and $y_j$. Thus, our graph is not a complete graph, and our goal is to design an architecture that is invariant to any automorphism of the graph, such as permutations of the entity variables.

For the input features $\mathbf{z}$, we used the features learned by the baseline model from Zellers et al. (2017). (The baseline does not use any LSTM or context, and is thus unrelated to the main contribution of Zellers et al. (2017).) Specifically, the entity features $z_i$ included: (1) the confidence probabilities of all entities for box $i$, as learned by the baseline model; (2) bounding-box information, given as (left, bottom, width, height); (3) the number of smaller entities (and likewise bigger ones); (4) the number of entities to the left, right, above, and below; (5) the number of entities with higher and with lower confidence; (6) for the Linguistic model only: a word embedding of the most probable class. Word vectors were learned with GloVe from the ground-truth captions of Visual Genome.

Similarly, the relation features $z_{i,j}$ contained the predicted probabilities over relation classes for the relation between boxes $i$ and $j$. For the Linguistic model, these features were extended to include a word embedding of the most probable class. For entity-entity pairwise features, we use the relation probability for each pair. Because the outputs of SGP are probability distributions over entities and relations, we can feed them back as inputs to SGP, applying it in a recurrent manner while maintaining GPI.

We next describe the main components of the GPI architecture. First, we focus on the parts that output the entity labels. $\phi$ is the network that integrates the features for two entity variables $i$ and $j$: it takes $z_i$, $z_{i,j}$, and $z_j$ as input and outputs a vector. Next, the network $\alpha$ takes as input the outputs of $\phi$ for all neighbors of an entity and uses the attention mechanism described above to output a summary vector. Finally, the network $\rho_{\text{entity}}$ takes these summary vectors and outputs logits predicting the entity value. The network $\rho_{\text{relation}}$ takes as input the representations of the two entities, as well as $z_{i,j}$, and transforms the output into relation logits. See the appendix for specific network architectures.

5.2.1 Experimental Setup and Results

Dataset.

We evaluated our approach on Visual Genome (VG) (Krishna et al., 2017), a dataset of 108,077 images annotated with bounding boxes, entities, and relations. On average, an image has 12 entities and 7 relations. For a proper comparison with previous results (Newell & Deng, 2017; Xu et al., 2017; Zellers et al., 2017), we used the data from (Xu et al., 2017), including the train and test splits. For evaluation, we used the same 150 entities and 50 relations as in (Newell & Deng, 2017; Xu et al., 2017; Zellers et al., 2017). To tune hyper-parameters, we further split the training data by randomly selecting 5K examples for validation, resulting in a final 70K/5K/32K train/validation/test split.

Constrained Evaluation Unconstrained Evaluation
SGCls PredCls SGCls PredCls
R@50 R@100 R@50 R@100 R@50 R@100 R@50 R@100
Lu et al., 2016 (Lu et al., 2016) 11.8 14.1 27.9 35.0 - - - -
Xu et al., 2017 (Xu et al., 2017) 21.7 24.4 44.8 53.0 - - - -
Pixel2Graph (Newell & Deng, 2017) - - - - 26.5 30.0 68.0 75.2
Graph R-CNN (Yang et al., 2018) 29.6 31.6 54.2 59.1 - - - -
Neural Motifs (Zellers et al., 2017) 35.8 36.5 65.2 67.1 44.5 47.7 81.1 88.3
Baseline (Zellers et al., 2017) 34.6 35.3 63.7 65.6 43.4 46.6 78.8 85.9
No Attention 35.3 37.2 64.5 66.3 44.1 48.5 79.7 86.7
Neighbor Attention 35.7 38.5 64.6 66.6 44.7 49.9 80.0 87.1
Linguistic 36.5 38.8 65.1 66.9 45.5 50.8 80.8 88.2
Table 1: Test set results for graph-constrained evaluation (i.e., the returned triplets must be consistent with a scene graph) and for unconstrained evaluation (triplets need not be consistent with a scene graph).
Training.

All networks were trained using Adam (Kingma & Ba, 2014) with a fixed batch size. Hyperparameter values were chosen based on the validation set. The SGP loss function was the sum of cross-entropy losses over all entities and relations in the image. In the loss, we penalized entities more strongly than relations by a constant factor, and penalized negative relations more weakly than positive relations; both weighting factors were tuned on the validation set.
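A sketch of this weighted loss, assuming entity and relation logits are stacked into tensors; the weights `w_ent` and `w_neg` and the index of the negative (“no relation”) class are hypothetical placeholders for the tuned values:

```python
import torch
import torch.nn.functional as F

def sgp_loss(ent_logits, ent_labels, rel_logits, rel_labels,
             w_ent=4.0, w_neg=0.1, neg_class=0):
    """Sum of cross-entropies over entities and relations, with entity terms
    up-weighted by w_ent and negative relations down-weighted by w_neg."""
    ent_loss = F.cross_entropy(ent_logits, ent_labels, reduction="sum")
    rel_ce = F.cross_entropy(rel_logits, rel_labels, reduction="none")
    rel_w = torch.ones_like(rel_ce)
    rel_w[rel_labels == neg_class] = w_neg
    return w_ent * ent_loss + (rel_w * rel_ce).sum()
```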

Evaluation.

In (Xu et al., 2017), three evaluation settings were considered. Here we focus on two of them: (1) SGCls: given ground-truth bounding boxes for entities, predict all entity categories and relation categories. (2) PredCls: given bounding boxes annotated with entity labels, predict all relations. Following (Lu et al., 2016), we used Recall@$K$ as the evaluation metric: it measures the fraction of correct ground-truth triplets that appear within the $K$ most confident triplets proposed by the model. Two evaluation protocols are used in the literature, differing in whether they enforce graph constraints on model predictions. The first, graph-constrained, protocol requires that the top-$K$ triplets assign one consistent class per entity and relation; the second, unconstrained, protocol does not enforce such constraints. We report results on both protocols, following (Zellers et al., 2017).
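For concreteness, a sketch of triplet Recall@$K$ under a simplified matching rule (exact identity of subject/object boxes; the benchmark uses IoU-based matching when boxes are predicted):

```python
def recall_at_k(pred_triplets, gt_triplets, k=50):
    """pred_triplets: list of (subj, pred, obj, score) tuples;
    gt_triplets: list of (subj, pred, obj) tuples. Returns the fraction of
    ground-truth triplets found among the k most confident predictions."""
    top_k = sorted(pred_triplets, key=lambda t: -t[3])[:k]
    hits = {t[:3] for t in top_k} & set(gt_triplets)
    return len(hits) / max(len(gt_triplets), 1)
```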

Models and baselines.

We compare three variants of our GPI approach with the reported results of five baselines that are currently the state of the art on various scene-graph prediction problems (all models use the same data split and pre-processing as (Xu et al., 2017)): 1) Lu et al., 2016 (Lu et al., 2016): this work leverages word embeddings to fine-tune the likelihood of predicted relations. 2) Xu et al., 2017 (Xu et al., 2017): this model passes messages between entities and relations, and iteratively refines the feature map used for prediction. 3) Newell & Deng, 2017 (Newell & Deng, 2017): the Pixel2Graph model uses associative embeddings (Newell et al., 2017) to produce a full graph from the image. 4) Yang et al., 2018 (Yang et al., 2018): the Graph R-CNN model uses object-relation regularities to sparsify and reason over scene graphs. 5) Zellers et al., 2017 (Zellers et al., 2017): the NeuralMotif method encodes global context for capturing high-order motifs in scene graphs, and its Baseline outputs the entity and relation distributions without using the global context. The following variants of GPI were compared: 1) GPI: No Attention: our GPI model, but with no attention mechanism; instead, following Theorem 1, we simply sum the features. 2) GPI: Neighbor Attention: our GPI model, with attention over neighbor features. 3) GPI: Linguistic: same as GPI: Neighbor Attention, but also concatenating the word-embedding vector, as described above.

Results.

Table 1 shows recall@50 and recall@100 for the three variants of our approach, compared with the five baselines. All GPI variants perform well, with Linguistic outperforming all baselines for SGCls and being comparable to the state-of-the-art model for PredCls. Note that PredCls is an easier task, which makes less use of the structure; hence it is not surprising that GPI achieves accuracy similar to Zellers et al. (2017). Figure 4 illustrates the model's behavior. Predicting isolated labels (4c) mislabels several entities, but these are corrected at the final output (4d). Figure 4e shows that the system learned to attend more to nearby entities (the window and building are closer to the tree), and 4f shows that stronger attention is learned for the class bird, presumably because it is usually more informative than common classes like tree.

Figure 4: (a) An input image with bounding boxes from VG. (b) The ground-truth scene graph. (c) The Baseline fails to recognize some entities (tail and tree) and relations (in front of instead of looking at). (d) GPI: Linguistic fixes most of the incorrect predictions. (e) Window is the most significant neighbor of Tree. (f) The entity bird receives substantial attention, while tree and building are less informative.
Implementation details.

The $\phi$ and $\alpha$ networks were each implemented as a single fully-connected (FC) layer with a 500-dimensional output. $\rho$ was implemented as an FC network with three 500-dimensional hidden layers and two outputs: a 150-dimensional output for the entity probabilities and a 51-dimensional output for the relation probabilities. The attention mechanism was implemented as a network analogous to $\phi$ and $\alpha$, receiving the same inputs but using its output scores as the attention weights $w_{i,j}$. The full code is available at https://github.com/shikorab/SceneGraph. A sketch of these components is given below.
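Putting these pieces together, the components can be sketched as below; the input feature sizes `d_ent` and `d_rel` are hypothetical placeholders (the text does not specify them), and $\rho$ is written as two separate heads for clarity:

```python
import torch.nn as nn

d_ent, d_rel = 256, 256   # hypothetical input feature sizes, not given in the text

phi = nn.Sequential(nn.Linear(2 * d_ent + d_rel, 500), nn.ReLU())   # pairwise integration
alpha = nn.Sequential(nn.Linear(d_ent + 500, 500), nn.ReLU())       # neighborhood aggregation
attn_score = nn.Linear(2 * d_ent + d_rel, 1)                        # scalar attention scores, phi-like inputs

def head(in_dim, out_dim):
    # three 500-dimensional hidden layers, as described above
    return nn.Sequential(
        nn.Linear(in_dim, 500), nn.ReLU(),
        nn.Linear(500, 500), nn.ReLU(),
        nn.Linear(500, 500), nn.ReLU(),
        nn.Linear(500, out_dim),
    )

rho_entity = head(d_ent + 500, 150)        # logits over 150 entity classes
rho_relation = head(2 * 500 + d_rel, 51)   # logits over 50 relations + "no relation"
```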

6 Conclusion

We presented a deep learning approach to structured prediction, which constrains the architecture to be invariant to structurally identical inputs. As in score-based methods, our approach relies on pairwise features, capable of describing inter-label correlations, and thus inheriting the intuitive aspect of score-based approaches. However, instead of maximizing a score function (which leads to computationally-hard inference), we directly produce an output that is invariant to equivalent representations of the pairwise terms.

This axiomatic approach to model architecture can be extended in many ways. For image labeling, geometric invariances (shift or rotation) may be desired. In other cases, invariance to feature permutations may be desirable. We leave the derivation of the corresponding architectures to future work. Finally, there may be cases where the invariant structure is unknown and should be discovered from data, which is related to work on lifting graphical models Bui et al. (2013). It would be interesting to explore algorithms that discover and use such symmetries for deep structured prediction.

Acknowledgements

This work was supported by the ISF Centers of Excellence grant, and by the Yandex Initiative in Machine Learning. Work by GC was performed while at Google Brain Research.

References

  • Bahdanau et al. (2015) Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2015.
  • Belanger et al. (2017) Belanger, David, Yang, Bishan, and McCallum, Andrew. End-to-end learning for structured prediction energy networks. In Precup, Doina and Teh, Yee Whye (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 429–439. PMLR, 2017.
  • Bello et al. (2016) Bello, Irwan, Pham, Hieu, Le, Quoc V, Norouzi, Mohammad, and Bengio, Samy. Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940, 2016.
  • Bui et al. (2013) Bui, Hung Hai, Huynh, Tuyen N., and Riedel, Sebastian. Automorphism groups of graphical models and lifted variational inference. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI’13, pp. 132–141, 2013.
  • Chen & Manning (2014) Chen, Danqi and Manning, Christopher. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 740–750, 2014.
  • Chen et al. (2014) Chen, Liang Chieh, Papandreou, George, Kokkinos, Iasonas, Murphy, Kevin, and Yuille, Alan L. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In Proceedings of the Second International Conference on Learning Representations, 2014.
  • Chen et al. (2015) Chen, Liang Chieh, Schwing, Alexander G, Yuille, Alan L, and Urtasun, Raquel. Learning deep structured models. In Proc. ICML, 2015.
  • Farabet et al. (2013) Farabet, Clement, Couprie, Camille, Najman, Laurent, and LeCun, Yann. Learning hierarchical features for scene labeling. IEEE transactions on pattern analysis and machine intelligence, 35(8):1915–1929, 2013.
  • Gilmer et al. (2017) Gilmer, Justin, Schoenholz, Samuel S, Riley, Patrick F, Vinyals, Oriol, and Dahl, George E. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
  • Gygli et al. (2017) Gygli, Michael, Norouzi, Mohammad, and Angelova, Anelia. Deep value networks learn to evaluate and iteratively refine structured outputs. In Precup, Doina and Teh, Yee Whye (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1341–1351, International Convention Centre, Sydney, Australia, 2017. PMLR.
  • Johnson et al. (2015) Johnson, Justin, Krishna, Ranjay, Stark, Michael, Li, Li-Jia, Shamma, David A., Bernstein, Michael S., and Li, Fei-Fei. Image retrieval using scene graphs. In Proc. Conf. Comput. Vision Pattern Recognition, pp. 3668–3678, 2015.
  • Johnson et al. (2018) Johnson, Justin, Gupta, Agrim, and Fei-Fei, Li. Image generation from scene graphs. arXiv preprint arXiv:1804.01622, 2018.
  • Khalil et al. (2017) Khalil, Elias, Dai, Hanjun, Zhang, Yuyu, Dilkina, Bistra, and Song, Le. Learning combinatorial optimization algorithms over graphs. In Advances in Neural Information Processing Systems, pp. 6351–6361, 2017.
  • Kingma & Ba (2014) Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Krishna et al. (2017) Krishna, Ranjay, Zhu, Yuke, Groth, Oliver, Johnson, Justin, Hata, Kenji, Kravitz, Joshua, Chen, Stephanie, Kalantidis, Yannis, Li, Li-Jia, Shamma, David A, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, 2017.
  • Lafferty et al. (2001) Lafferty, J., McCallum, A., and Pereira, F. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pp. 282–289, 2001.
  • Liao et al. (2016) Liao, Wentong, Yang, Michael Ying, Ackermann, Hanno, and Rosenhahn, Bodo. On support relations and semantic scene graphs. arXiv preprint arXiv:1609.05834, 2016.
  • Lin et al. (2015) Lin, Guosheng, Shen, Chunhua, Reid, Ian, and van den Hengel, Anton. Deeply learning the messages in message passing inference. In Advances in Neural Information Processing Systems, pp. 361–369, 2015.
  • Lu et al. (2016) Lu, Cewu, Krishna, Ranjay, Bernstein, Michael S., and Li, Fei-Fei. Visual relationship detection with language priors. In European Conf. Comput. Vision, pp. 852–869, 2016.
  • Meshi et al. (2010) Meshi, O., Sontag, D., Jaakkola, T., and Globerson, A. Learning efficiently with approximate inference via dual losses. In Proceedings of the 27th International Conference on Machine Learning, pp. 783–790, New York, NY, USA, 2010. ACM.
  • Newell & Deng (2017) Newell, Alejandro and Deng, Jia. Pixels to graphs by associative embedding. In Advances in Neural Information Processing Systems 30, pp. 1172–1180. Curran Associates, Inc., 2017.
  • Newell et al. (2017) Newell, Alejandro, Huang, Zhiao, and Deng, Jia. Associative embedding: End-to-end learning for joint detection and grouping. In Neural Inform. Process. Syst., pp. 2274–2284. Curran Associates, Inc., 2017.
  • Pei et al. (2015) Pei, Wenzhe, Ge, Tao, and Chang, Baobao. An effective neural network model for graph-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computationa Linguistics, pp. 313–322, 2015.
  • Plummer et al. (2017) Plummer, Bryan A., Mallya, Arun, Cervantes, Christopher M., Hockenmaier, Julia, and Lazebnik, Svetlana. Phrase localization and visual relationship detection with comprehensive image-language cues. In ICCV, pp. 1946–1955, 2017.
  • Raposo et al. (2017) Raposo, David, Santoro, Adam, Barrett, David, Pascanu, Razvan, Lillicrap, Timothy, and Battaglia, Peter. Discovering objects and their relations from entangled scene representations. arXiv preprint arXiv:1702.05068, 2017.
  • Schwing & Urtasun (2015) Schwing, Alexander G and Urtasun, Raquel. Fully connected deep structured networks. ArXiv e-prints, 2015.
  • Shelhamer et al. (2017) Shelhamer, Evan, Long, Jonathan, and Darrell, Trevor. Fully convolutional networks for semantic segmentation. Proc. Conf. Comput. Vision Pattern Recognition, 39(4):640–651, 2017.
  • Taskar et al. (2004) Taskar, B., Guestrin, C., and Koller, D. Max margin Markov networks. In Thrun, S., Saul, L., and Schölkopf, B. (eds.), Advances in Neural Information Processing Systems 16, pp. 25–32. MIT Press, Cambridge, MA, 2004.
  • Xu et al. (2017) Xu, Danfei, Zhu, Yuke, Choy, Christopher B., and Fei-Fei, Li. Scene Graph Generation by Iterative Message Passing. In Proc. Conf. Comput. Vision Pattern Recognition, pp. 3097–3106, 2017.
  • Yang et al. (2018) Yang, Jianwei, Lu, Jiasen, Lee, Stefan, Batra, Dhruv, and Parikh, Devi. Graph R-CNN for scene graph generation. In European Conf. Comput. Vision, pp. 690–706, 2018.
  • Zaheer et al. (2017) Zaheer, Manzil, Kottur, Satwik, Ravanbakhsh, Siamak, Poczos, Barnabas, Salakhutdinov, Ruslan R, and Smola, Alexander J. Deep sets. In Advances in Neural Information Processing Systems 30, pp. 3394–3404. Curran Associates, Inc., 2017.
  • Zellers et al. (2017) Zellers, Rowan, Yatskar, Mark, Thomson, Sam, and Choi, Yejin. Neural motifs: Scene graph parsing with global context. arXiv preprint arXiv:1711.06640, abs/1711.06640, 2017.
  • Zheng et al. (2015) Zheng, Shuai, Jayasumana, Sadeep, Romera-Paredes, Bernardino, Vineet, Vibhav, Su, Zhizhong, Du, Dalong, Huang, Chang, and Torr, Philip HS. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1529–1537, 2015.

7 Supplementary Material

This supplementary material includes: (1) a visual illustration of the proof of Theorem 1; (2) an explanation of how to integrate an attention mechanism into our GPI framework; (3) an additional evaluation that further analyzes our work and compares it with baselines.

7.1 Theorem 1: Illustration of Proof

Figure 5: Illustration of the proof of Theorem 1 using a specific construction example. Here $H$ is a hash function with $M = 5$ buckets, the input is a three-node graph, and the $z_{i,j}$ are its pairwise features (in purple). (a) $\phi$ is applied to each $(z_i, z_{i,j}, z_j)$; each application yields a vector in $\mathbb{R}^M$ (the three dark yellow columns correspond to the features of one node's neighbors). These vectors are then summed per node to obtain the three vectors $s_i$. (b) Each $\alpha(z_i, s_i)$ (blue matrix) is an outer product between $\mathbf{e}_{H(z_i)}$ and $s_i$, resulting in a matrix of zeros except for one row (the dark blue matrix corresponds to one node). (c) All $\alpha$ outputs are summed, yielding an $M \times M$ matrix isomorphic to the original pairwise feature matrix.

7.2 Characterizing Permutation Invariance: Attention

Attention is a powerful component that can be introduced naturally into our GPI model; we now show how. Formally, we learn attention weights for the neighbors $j$ of a node $i$, which scale the features of each neighbor. Different attention weights for individual features of each neighbor can be learned in a similar way.

Let $w_{i,j}$ be an attention mask specifying the weight that node $i$ gives to node $j$:

$$w_{i,j} = \frac{e^{\beta(z_i, z_{i,j}, z_j)}}{\sum_{k \ne i} e^{\beta(z_i, z_{i,k}, z_k)}} \qquad (4)$$

where $\beta$ can be any scalar-valued function of its arguments (e.g., a dot product of $z_i$ and $z_j$, as in standard attention models). To introduce attention, we wish $s_i$ (the second argument of $\alpha$ in Eq. 1) to have the form of a weighted sum over the neighboring feature vectors, namely, $s_i = \sum_{j \ne i} w_{i,j}\, \phi(z_i, z_{i,j}, z_j)$.

To achieve this form, we extend $\phi$ by a single entry, defining $\hat{\phi} \in \mathbb{R}^{L+1}$ by $\hat{\phi}_{1:L}(z_i, z_{i,j}, z_j) = \phi(z_i, z_{i,j}, z_j) \cdot e^{\beta(z_i, z_{i,j}, z_j)}$ (here $\hat{\phi}_{1:L}$ are the first $L$ elements of $\hat{\phi}$) and $\hat{\phi}_{L+1}(z_i, z_{i,j}, z_j) = e^{\beta(z_i, z_{i,j}, z_j)}$. We keep the definition of $s_i$ as the sum over neighbors. Next, we define $\alpha$ to divide the first $L$ entries of $s_i$ by its last entry, obtaining the desired form: a weighted sum $\sum_{j \ne i} w_{i,j}\, \phi(z_i, z_{i,j}, z_j)$ over the neighboring feature vectors.

A similar approach can be applied over $\alpha$ and $\rho$ to model attention over the outputs of $\alpha$ (i.e., over graph nodes) as well. A sketch of the resulting aggregation follows.
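A minimal sketch of the resulting aggregation, assuming the raw scores $\beta(z_i, z_{i,j}, z_j)$ and the $\phi$ outputs are precomputed as dense tensors (self-edges are not excluded here, for brevity):

```python
import torch

def attention_aggregate(scores, pair_feats):
    """scores: (n, n) raw beta(z_i, z_ij, z_j) values; pair_feats: (n, n, L)
    phi outputs. Returns s_i = sum_j w_ij * phi_ij, with w from Eq. 4."""
    w = torch.softmax(scores, dim=1)                  # w_ij, Eq. 4
    return (w.unsqueeze(-1) * pair_feats).sum(dim=1)  # weighted neighbor sum
```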

7.3 Scene Graph Results

In the main paper, we described the results for the two prediction tasks, SGCls and PredCls, as defined in Section 5.2.1. To further analyze our model, we compare the best variant, GPI: Linguistic, per relation against two baselines: (Lu et al., 2016) and (Xu et al., 2017). Table 2 reports the PredCls recall@5 for the 20 most frequent relation classes. The GPI model performs better on almost all relation classes.

Relation (Lu et al., 2016) (Xu et al., 2017) GPI: Linguistic
on 99.71 99.25 99.3
has 98.03 97.25 98.7
in 80.38 88.30 95.9
of 82.47 96.75 98.1
wearing 98.47 98.23 99.6
near 85.16 96.81 95.4
with 31.85 88.10 94.2
above 49.19 79.73 83.9
holding 61.50 80.67 95.5
behind 79.35 92.32 91.2
under 28.64 52.73 83.2
sitting on 31.74 50.17 90.4
in front of 26.09 59.63 74.9
attached to 8.45 29.58 77.4
at 54.08 70.41 80.9
hanging from 0.0 0.0 74.1
over 9.26 0.0 62.4
for 12.20 31.71 45.1
riding 72.43 89.72 96.1
Table 2: Recall@5 of PredCls for the 20 most frequent relations, ranked by frequency as in (Xu et al., 2017).