Action Genome: Actions as Composition of Spatio-temporal Scene Graphs

12/15/2019 ∙ by Jingwei Ji, et al. ∙ Stanford University

Action recognition has typically treated actions and activities as monolithic events that occur in videos. However, there is evidence from Cognitive Science and Neuroscience that people actively encode activities into consistent hierarchical part structures. Yet in Computer Vision, few explorations of representations encoding event partonomies have been made. Inspired by evidence that the prototypical unit of an event is an action-object interaction, we introduce Action Genome, a representation that decomposes actions into spatio-temporal scene graphs. Action Genome captures changes between objects and their pairwise relationships while an action occurs. It contains 10K videos with 0.4M objects and 1.7M visual relationships annotated. With Action Genome, we extend an existing action recognition model by incorporating scene graphs as spatio-temporal feature banks to achieve better performance on the Charades dataset. Next, by decomposing and learning the temporal changes in visual relationships that result in an action, we demonstrate the utility of a hierarchical event decomposition by enabling few-shot action recognition, achieving 42.7% mAP using as few as 10 training examples. Finally, we benchmark existing scene graph models on the new task of spatio-temporal scene graph prediction.


1 Introduction

Video understanding tasks, such as action recognition, have, for the most part, treated actions and activities as monolithic events [37, 8, 86, 65]. Most models proposed have resorted to end-to-end predictions that produce a single label for a long video sequence [68, 71, 10, 23, 31] and do not explicitly decompose events into a series of interactions between objects. On the other hand, image-based structured representations like scene graphs have cascaded improvements across multiple image tasks, including image captioning [2], image retrieval [35, 63], visual question answering [34], relationship modeling [40], and image generation [33]. The scene graph representation, introduced in Visual Genome [42], provides a scaffold that allows vision models to tackle complex inference tasks by breaking scenes into their corresponding objects and visual relationships. However, decompositions of temporal events have not been explored much [49], even though representing events with structured representations could lead to more accurate and grounded action understanding.

Figure 1: We present Action Genome: a representation that decomposes actions into spatio-temporal scene graphs. Inspired by the hierarchical bias hypothesis [83] and event segmentation theory [43], Action Genome provides the scaffold to study the dynamics of actions as relationships between people and objects change. It also allows us to improve action recognition, enable few-shot action detection, and introduce spatio-temporal scene graph prediction.
| Dataset | Video hours | # videos | # action categories | # object categories | # object instances | # relationship categories | # relationship instances |
|---|---|---|---|---|---|---|---|
| ActivityNet [8] | 648 | 28K | 200 | - | - | - | - |
| HACS Clips [86] | 833 | 0.4K | 200 | - | - | - | - |
| Kinetics-700 [9] | 1794 | 650K | 700 | - | - | - | - |
| AVA [26] | 108 | 504K | 80 | - | - | 49 | - |
| Charades [65] | 82 | 10K | 157 | 37 | - | - | - |
| EPIC-Kitchen [15] | 55 | - | 125 | 331 | - | - | - |
| DALY [74] | 31 | 8K | 10 | 41 | 3.6K | - | - |
| CAD120++ [90] | 0.57 | 0.5K | 10 | 13 | 64K | 6 | 32K |
| Action Genome | 82 | 10K | 157 | 35 | 0.4M | 25 | 1.7M |
Table 1: A comparison of Action Genome with existing video datasets. Built upon Charades [65], Action Genome is the first large-scale video database providing both action labels and spatio-temporal scene graph labels.

Meanwhile, in Cognitive Science and Neuroscience, it has been postulated that people segment events into consistent groups [54, 5, 6]. Furthermore, people actively encode those ongoing activities in a hierarchical part structure — a phenomenon referred to as hierarchical bias hypothesis [83] or event segmentation theory [43]. Let’s consider the action of “sitting on a sofa”. The person initially starts off next to the sofa, moves in front of it, and finally sits atop it. Such decompositions can enable machines to predict future and past scene graphs with objects and relationships as an action occurs: we can predict that the person is about to sit on the sofa when we see them move in front of it. Similarly, such decomposition can also enable machines to learn from few examples: we can recognize the same action when we see a different person move towards a different chair. While that was a relatively simple decomposition, other events like “playing football”, with its multiple rules and actors, can involve multifaceted decompositions. So while such decompositions can provide the scaffolds to improve vision models, how is it possible to correctly create representative hierarchies for a wide variety of complex actions?

In this paper, we introduce Action Genome, a representation that decomposes actions into spatio-temporal scene graphs. Object detection faced a similar challenge of large variation within any object category. So, just as progress in 2D perception was catalyzed by taxonomies [56], partonomies [55], and ontologies [78, 42], we aim to improve temporal understanding with Action Genome's partonomy. Going back to the example of "person sitting on a sofa", Action Genome breaks down such actions by annotating frames within the action with scene graphs. The graphs capture both the objects, person and sofa, and how their relationships evolve as the action progresses from person - next to - sofa to person - in front of - sofa to finally person - sitting on - sofa. Built upon Charades [65], Action Genome provides 0.4M object bounding boxes and 1.7M visual relationships annotated across the frames of 10K videos labeled with 157 action categories.

Most perspectives on action decomposition converge on the prototypical unit of action-object couplets [83, 62, 43, 49]. Action-object couplets refer to transitive actions performed on objects (e.g. "moving a chair" or "throwing a ball") and intransitive self-actions (e.g. "moving towards the sofa"). Action Genome's dynamic scene graph representation captures both types of events and, as such, represents the prototypical unit. With this representation, we enable the study of tasks such as spatio-temporal scene graph prediction, a task in which we estimate the decomposition of action dynamics given a video. We can even improve existing tasks like action recognition and few-shot action detection by jointly studying how those actions change the visual relationships between objects in scene graphs.

To demonstrate the utility of Action Genome's event decomposition, we introduce a method that extends a state-of-the-art action recognition model [75] by incorporating spatio-temporal scene graphs as feature banks that can be used to predict the action as well as the objects and relationships involved. First, we demonstrate that predicting scene graphs benefits the popular task of action recognition, improving the state of the art on the Charades dataset [65] from 42.5% to 44.3% mAP, and to 60.3% when using oracle scene graphs. Second, we show that a compositional understanding of actions induces better generalization through few-shot action recognition experiments, achieving 42.7% mAP using as few as 10 training examples per class. Third, we introduce the task of spatio-temporal scene graph prediction and benchmark existing scene graph models with new evaluation metrics designed specifically for videos. With a better understanding of the dynamics of human-object interactions via spatio-temporal scene graphs, we aim to inspire a new line of research in more decomposable and generalizable action understanding.

2 Related work

We derive inspiration from Cognitive Science, compare our representation with static scene graphs, and survey methods in action recognition and few-shot prediction.

Cognitive science. Early work in Cognitive Science provides evidence for the regularities with which people identify event boundaries [54, 5, 6]. Remarkably, people consistently, both within and between subjects, carve out video streams into events, actions, and activities [82, 11, 28]. Such findings hint that it is possible to predict when actions begin and end, and have inspired hundreds of Computer Vision datasets, models, and algorithms to study tasks like action recognition [70, 81, 19, 80, 79, 36]. Subsequent Cognitive and Neuroscience research, using the same paradigm, has also shown that event categories form partonomies [59, 82, 28]. However, Computer Vision has done little work in explicitly representing the hierarchical structures of actions [49], even though understanding event partonomies can improve tasks like action recognition.

Action recognition in videos. Many research projects have tackled the task of action recognition. A major line of work has focused on developing powerful backbone models to extract useful representations from videos [68, 71, 10, 23, 31]. Pre-trained on large-scale databases for action classification [9, 8], these backbone models serve as cornerstones for downstream video tasks and for action recognition on other datasets. To support more complex action understanding, another growing body of research explores structural information in videos, including temporal ordering [87, 50], object localization [73, 25, 75, 4, 52], and even implicit interactions between objects [52, 4]. Our work contrasts with these methods by explicitly using a structured decomposition of actions into objects and relationships.

Table 1 lists some of the most popular datasets used for action recognition. One major trend among video datasets is providing a considerably large number of video clips with single action labels [8, 86, 9]. Although these databases have driven progress in video feature representation for many downstream tasks, the provided annotations treat actions as monolithic events and do not capture how objects and their relationships change during actions. Meanwhile, other databases have provided more varied annotations: AVA [26] localizes the actors of actions, Charades [65] contains multiple actions happening at the same time, EPIC-Kitchen [15] localizes the interacted objects in ego-centric kitchen videos, and DALY [74] provides object bounding boxes and upper-body poses for daily activities. Still, the scene graph, a comprehensive structural abstraction of an image, has not yet been studied in any large-scale video database as a potential representation for action recognition. In this work, we present Action Genome, the first large-scale database to jointly boost research in scene graphs and action understanding. Compared to existing datasets, we provide orders of magnitude more object and relationship labels grounded in actions.

Scene graph prediction. Scene graphs are a formal representation of image information [35, 42] in the form of a graph, which is widely used in knowledge bases [27, 13, 88]. Each scene graph encodes objects as nodes connected by pairwise relationships as edges. Scene graphs have led to many state-of-the-art models in image captioning [2], image retrieval [35, 63], visual question answering [34], relationship modeling [40], and image generation [33]. Given their versatile utility, the task of scene graph prediction has resulted in a series of publications [42, 14, 48, 45, 47, 58, 76, 84, 77, 30] that have explored reinforcement learning [48], structured prediction [39, 16, 69], object attributes [20, 60], sequential prediction [58], few-shot prediction [12, 17], and graph-based approaches [76, 46, 77]. However, all of these approaches are restricted to static images and do not model visual concepts spatio-temporally.

Few-shot prediction. The few-shot literature is broadly divided into two main frameworks. The first strategy learns a classifier for a set of frequent categories and then uses it to learn the few-shot categories [21, 22, 57]. For example, ZSL uses attributes of actions to enable few-shot recognition [57]. The second strategy learns invariances or decompositions that enable few-shot classification [38, 89, 7, 18]. OSS and TARN propose similarity or distance measures between video pairs [38, 7], CMN uses a multi-saliency algorithm to encode videos [89], and ProtoGAN creates a prototype vector for each class [18]. Our framework resembles the first strategy because we use the object and visual relationship representations learned from the frequent actions to identify few-shot actions.

3 Action Genome

Figure 2: Action Genome's annotation pipeline: for every action, we uniformly sample frames across the action interval and annotate the person performing the action along with the objects they interact with. We also annotate the pairwise relationships between the person and those objects. Here, we show a video with multiple labelled actions, each contributing its own set of frames annotated with scene graphs. The objects are grounded back in the video as bounding boxes.
Figure 3: Distribution of (a) relationship and (b) object occurrences. The relationships are color-coded to represent attention, spatial, and contact relationships. Most relationship and object categories have at least thousands of instances each.
attention (3): looking at; not looking at; unsure
spatial (6): in front of; behind; on the side of; above; beneath; in
contact (16): carrying; covered by; drinking from; eating; have it on the back; holding; leaning on; lying on; not contacting; sitting on; standing on; touching; twisting; wearing; wiping; writing on
Table 2: There are three types of relationships in Action Genome: attention relationships report which objects people are looking at, spatial relationships indicate how objects are laid out spatially, and contact relationships are semantic relationships involving people manipulating objects.
Figure 4: A weighted bipartite mapping between objects and relationships shows that they are densely interconnected in Action Genome. The weights represent the percentage of occurrences in which a specific object occurs in a relationship. The three colors in the graph represent the three kinds of relationships: attention (orange), spatial (green), and contact (purple).

Inspired by Cognitive Science, we decompose events into prototypical action-object units [83, 62, 43]. Each action in Action Genome is represented as changes to objects and their pairwise interactions with the actor performing the action. We derive our representation as a temporally changing version of Visual Genome's scene graphs [42]. However, unlike Visual Genome, whose goal was to densely represent a scene with objects and visual relationships, Action Genome's goal is to decompose actions; as such, it focuses on annotating only those segments of the video where an action occurs and only those objects that are involved in the action.

Annotation framework. Action Genome is built upon the Charades dataset [65], which contains 157 action classes, most of which are human-object activities. In Charades, multiple actions may occur at the same time. We do not annotate every single frame in a video; doing so would be redundant, as changes between objects and relationships occur at longer time scales.

Figure 2 visualizes our annotation pipeline. We uniformly sample frames to annotate across the range of each action interval. With this action-oriented sampling strategy, we provide more labels where more actions occur. For instance, in the example, the actions "sitting on a chair" and "drinking from a cup" occur together and therefore result in more annotated frames, sampled from each action. When annotating each sampled frame, the annotators were prompted with the action labels and clips of the neighboring video frames for context. The annotators first draw bounding boxes around the objects involved in these actions, then choose the relationship labels from the label set. The clips are used to disambiguate which objects are actually involved in an action when multiple instances of a given category are present. For example, if multiple "cups" are present, the context disambiguates which "cup" to annotate for the action "drinking from a cup".
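As a concrete sketch, the action-oriented sampling described above could look like the following. The number of frames sampled per action (`frames_per_action`) and the interval format are our assumptions for illustration, not the paper's actual annotation tooling:

```python
def sample_frames(action_intervals, frames_per_action=5):
    """Uniformly sample frame indices across each action interval.

    action_intervals: list of (start_frame, end_frame, action_label).
    Overlapping actions each contribute their own samples, so densely
    annotated regions of the video receive more labels.
    """
    samples = []
    for start, end, label in action_intervals:
        span = max(end - start, 1)
        for i in range(frames_per_action):
            # evenly spaced positions across [start, end]
            t = start + round(i * span / max(frames_per_action - 1, 1))
            samples.append((min(t, end), label))
    return samples
```

Two simultaneous actions over the same interval would simply yield twice the sampled frames, matching the behavior described for "sitting on a chair" and "drinking from a cup".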

Action Genome contains three different kinds of human-object relationships: attention, spatial, and contact relationships (see Table 2). Attention relationships indicate whether a person is looking at an object, and serve as indicators of which object the person is or will be interacting with. Spatial relationships describe where objects are relative to one another. Contact relationships describe the different ways the person is contacting an object. A change in contact often indicates the occurrence of an action: for example, changing from person - not contacting - book to person - holding - book may signal the action of "picking up a book".
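To make the "change in contact signals an action" idea concrete, here is a toy sketch that scans a per-frame contact sequence for such transitions. The transition table is purely illustrative; the paper observes the pattern but does not define a rule set like this:

```python
# Hypothetical transition rules: the paper only notes that changes in
# contact relationships often indicate actions; this mapping is ours.
TRANSITIONS = {
    ("not contacting", "holding"): "picking up",
    ("holding", "not contacting"): "putting down",
}

def detect_actions(contact_sequence):
    """contact_sequence: per-frame contact labels for one person-object
    pair, e.g. ["not contacting", "not contacting", "holding"]."""
    events = []
    for prev, curr in zip(contact_sequence, contact_sequence[1:]):
        if (prev, curr) in TRANSITIONS:
            events.append(TRANSITIONS[(prev, curr)])
    return events
```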

It is worth noting that while Charades provides an injective mapping from each action to a verb, it is different from the relationship labels we provide. Charades’ verbs are clip-level labels, such as “awaken”, while we decompose them into frame-level human-object relationships, such as a sequence of person - lying on - bed, person - sitting on - bed and person - not contacting - bed.

Database statistics. Action Genome provides frame-level scene graph labels for the components of each action. Overall, we provide a total of 0.4M bounding boxes of 35 object classes (excluding "person") and 1.7M instances of 25 relationship classes. Figure 3 visualizes the log-distribution of object and relationship categories in the dataset. Like most concepts in vision, some objects (e.g. table and chair) and relationships (e.g. in front of and not looking at) occur frequently, while others (e.g. twisting and doorknob) occur only a handful of times. Even with such a distribution, however, almost every object and relationship class still has thousands of instances.

Additionally, Figure 4 visualizes how frequently objects occur in which relationships. We see that most objects are fairly evenly involved in all three types of relationships. Unlike Visual Genome, where dataset bias provides a strong baseline for predicting relationships given the object categories, Action Genome does not suffer the same bias.

4 Method

Figure 5: Overview of our proposed model, SGFB, for action recognition using spatio-temporal scene graphs. SGFB predicts scene graphs for every frame in a video. These scene graphs are converted into feature representations that are then combined using methods similar to long-term feature banks [75]. The final representation is merged with 3D CNN features and used to predict action labels.

We validate the utility of Action Genome's action decomposition by studying the effect of simultaneously learning spatio-temporal scene graphs while learning to recognize actions. We propose a method, named Scene Graph Feature Banks (SGFB), to incorporate spatio-temporal scene graphs into action recognition. Our method is inspired by recent work in computer vision that uses information "banks" [44, 1, 75]. Information banks are feature representations that have been used to represent, for example, the object categories that occur in a video [44], or even where the objects are [1]. Our model is most directly related to the recent long-term feature banks [75], which accumulate features of a long video into a fixed-size representation for action recognition.

Overall, our SGFB model contains two components: the first component generates spatio-temporal scene graphs, while the second component encodes the graphs to predict action labels. Given a video sequence V, the aim of traditional multi-class action recognition is to assign multiple action labels to the video. Here, V = {I_1, ..., I_T} represents the video sequence made up of image frames I_t. SGFB generates a spatio-temporal scene graph for every frame in the given video sequence. The scene graphs are encoded to formulate a spatio-temporal scene graph feature bank for the final task of action recognition. We describe the scene graph prediction and the scene graph feature bank components in more detail below. See Figure 5 for a high-level visualization of the model's forward pass.

4.1 Scene graph prediction

Previous research has proposed plenty of methods for predicting scene graphs on static images [84, 51, 77, 47, 76, 85]. We employ a state-of-the-art scene graph prediction method as the first step of our method. Given a video sequence V, the scene graph predictor f generates all the objects and connects each object to the actor through their relationships in each frame, i.e. G_t = f(I_t). On each frame, the scene graph G_t consists of a set of objects O_t that a person is interacting with and a set of relationships R_t, where r_ij denotes the j-th relationship between the person and the object o_i. Note that there can be multiple relationships between the person and each object, spanning attention, spatial, and contact relationships. Besides the graph labels, the scene graph predictor also outputs confidence scores for all predicted objects, s^o, and relationships, s^r. We experimented with various choices of f and benchmark their performance on Action Genome in Section 5.3.
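A minimal sketch of what the per-frame predictor output described above might look like as data, with illustrative field names (this layout is our assumption, not the paper's actual code):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SGObject:
    """One object node in a frame's scene graph."""
    category: str                           # object class, e.g. "cup"
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2)
    score: float                            # object confidence s^o
    # each edge is (predicate, confidence s^r) w.r.t. the person; an
    # object can carry attention, spatial, and contact edges at once
    relationships: List[Tuple[str, float]] = field(default_factory=list)

def triplets(frame_objects):
    """Read out person-object relationship triplets from one frame."""
    return [("person", pred, obj.category)
            for obj in frame_objects
            for pred, _ in obj.relationships]
```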

4.2 Scene graph feature banks

After obtaining the scene graph on each frame, we formulate a feature vector by aggregating the information across all the scene graphs into a feature bank. Assume there are C classes of objects and P classes of relationships; in Action Genome, C = 35 and P = 25. We first construct a confidence matrix A with dimension C × P, where each entry corresponds to an object-relationship category pair. We compute every entry of this matrix from the scores output by the scene graph predictor f: a_ij = s^o_i · s^r_ij. Intuitively, a_ij is high when f is confident that there is an object of category i in the current frame and that its relationship with the actor is of category j. We flatten the confidence matrix into the feature vector for each image.
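A minimal numpy sketch of this confidence-matrix construction, assuming a simple container for the predictor's outputs (the helper names and input format are ours for illustration):

```python
import numpy as np

def scene_graph_feature(objects, C, P, obj_to_idx, rel_to_idx):
    """Build the C x P confidence matrix A and flatten it into the
    per-frame feature. `objects` is a list of (category, s_o, rels),
    where rels maps predicate -> s_r."""
    A = np.zeros((C, P))
    for category, s_o, rels in objects:
        i = obj_to_idx[category]
        for predicate, s_r in rels.items():
            j = rel_to_idx[predicate]
            # a_ij is high only when the predictor is confident both in
            # the object and in its relationship with the actor
            A[i, j] = max(A[i, j], s_o * s_r)
    return A.flatten()  # dimension C * P, i.e. 35 * 25 = 875 here
```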

Formally, F = [f_1, ..., f_T] is the sequence of scene graph features extracted from frames I_1, ..., I_T. We aggregate the features across the frames using methods similar to long-term feature banks [75]: F is combined with 3D CNN features S extracted from a short-term clip using feature bank operators (FBO), which can be instantiated as mean/max pooling or non-local blocks [72]. The 3D CNN embeds short-term information into S, while F provides contextual information, critical in modeling the dynamics of complex actions with long time spans. The final aggregated feature is then used to predict action labels for the video.

5 Experiments

Action Genome’s representation enables us to study few-shot action recognition by decomposing actions into temporally changing visual relationships between objects. It also allows us to benchmark whether understanding the decomposition helps improve performance in action recognition or scene graph prediction individually. To study these benefits afforded by Action Genome, we design three experiments: action recognition, few-shot action recognition, and finally, spatio-temporal scene graph prediction.

5.1 Action recognition on Charades

Method Backbone Pre-train mAP
I3D + NL [10, 72] R101-I3D-NL Kinetics-400 37.5
STRG [73] R101-I3D-NL Kinetics-400 39.7
Timeception [31] R101 Kinetics-400 41.1
SlowFast [23] R101 Kinetics-400 42.1
SlowFast+NL [23, 72] R101-NL Kinetics-400 42.5
LFB [75] R101-I3D-NL Kinetics-400 42.5
SGFB (ours) R101-I3D-NL Kinetics-400 44.3
SGFB Oracle (ours) R101-I3D-NL Kinetics-400 60.3
Table 3: Action recognition on Charades validation set in mAP (%). We outperform all existing methods when we simultaneously predict scene graphs while performing action recognition. We also find that utilizing ground truth scene graphs can significantly boost performance.

We expect that grounding the components that compose an action — the objects and their relationships — will improve our ability to predict which actions are occurring in a video sequence. So, we evaluate the utility of Action Genome’s scene graphs on the task of action recognition.

Problem formulation. We specifically study multi-class action recognition on the Charades dataset [65], which contains crowdsourced videos with an average length of 30 seconds. At any frame, a person can perform multiple actions out of a nomenclature of 157 classes. The multi-classification task provides a video sequence as input and expects multiple action labels as output. We train our SGFB model to predict Charades action labels at test time and, during training, provide SGFB with spatio-temporal scene graphs as additional supervision.

Baselines. Previous work has proposed methods for multi-class action recognition benchmarked on Charades. Recent state-of-the-art methods include applying I3D [10] and non-local blocks [72] as video feature extractors (I3D+NL), spatio-temporal region graphs (STRG) [73], Timeception convolutional layers (Timeception) [31], SlowFast networks (SlowFast) [23], and long-term feature banks (LFB) [75]. All the baseline methods are pre-trained on Kinetics-400 [37] and take RGB as the input modality.

Implementation details. SGFB first predicts a scene graph on each frame, then constructs a spatio-temporal scene graph feature bank for action recognition. We use Faster R-CNN [61] with ResNet-101 [29] as the backbone for region proposals and object detection. We leverage RelDN [85] to predict the visual relationships. Scene graph prediction is trained on Action Genome, where we follow the same train/val video splits as the Charades dataset. Action recognition uses the same video feature extractor, hyper-parameters, and solver schedulers as long-term feature banks (LFB) [75] for a fair comparison.

Results. We report the performance of all models using mean average precision (mAP) on the Charades validation set in Table 3. By replacing the feature banks with spatio-temporal scene graph features, we outperform the state-of-the-art LFB by 1.8% mAP. Our features are smaller in size than LFB's, yet concisely capture more of the information needed for recognizing actions.

We also find that improving object detectors designed for videos can further improve action recognition results. To quantitatively demonstrate the potential of better scene graphs for action recognition, we designed an SGFB Oracle setup, which assumes that a perfect scene graph prediction method is available. The spatio-temporal scene graph feature bank therefore directly encodes a feature vector from the ground truth objects and visual relationships for the annotated frames. Feeding such feature banks into the SGFB model, we observe a significant improvement in action recognition: a 16.0% increase in mAP. Such a boost in performance shows the potential of Action Genome and compositional action understanding when video-based scene graph models are utilized to improve scene graph prediction. It is important to note that the SGFB Oracle's performance is not an upper bound, since we only utilize ground truth scene graphs for the few frames that have ground truth annotations.

5.2 Few-shot action recognition

Intuitively, predicting actions should be easier from a symbolic embedding of scene graphs than from pixels. When trained with very few examples, compositional action understanding with additional knowledge of scene graphs should outperform methods that treat actions as monolithic concepts. We showcase the capability and potential of spatio-temporal scene graphs to generalize to rare actions.

1-shot 5-shot 10-shot
LFB [75] 28.3 36.3 39.6
SGFB (ours) 28.8 37.9 42.7
SGFB oracle (ours) 30.4 40.2 50.5
Table 4: Few-shot experiments. With its capacity for compositional action understanding, our SGFB demonstrates better generalizability than LFB. The SGFB oracle shows how much the scene graph representation could further benefit action recognition.

Problem formulation. In our few-shot action recognition experiments on Charades, we split the action classes into a base set and a novel set. We first train a backbone feature extractor (R101-I3D-NL) on all video examples of the base classes; this backbone is shared by the baseline LFB, our SGFB, and the SGFB oracle. Next, we train each model with only k examples from each novel class, where k ∈ {1, 5, 10}, for 50 epochs. Finally, we evaluate the trained models on all examples of the novel classes in the Charades validation set.
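A small sketch of the k-shot subset selection used in this protocol; fixing a random seed for reproducibility is our assumption, as the paper does not describe how the k examples are chosen:

```python
import random

def few_shot_subset(examples_by_class, k, seed=0):
    """Pick k training videos per novel class, mirroring the k-shot
    protocol with k in {1, 5, 10}. `examples_by_class` maps a class
    name to its list of video ids."""
    rng = random.Random(seed)
    return {cls: rng.sample(vids, min(k, len(vids)))
            for cls, vids in examples_by_class.items()}
```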

Results. We report few-shot performance in Table 4. SGFB achieves better performance than LFB on all k-shot experiments. Furthermore, with ground truth scene graphs, the SGFB Oracle improves 10-shot mAP by a further 7.8%. We visualize the comparison between SGFB and LFB in Figure 6. With knowledge of spatio-temporal scene graphs, SGFB better captures action concepts involving the dynamics of objects and relationships.

Figure 6: Qualitative results of the few-shot experiments. We compare the predictions of our SGFB against LFB [75]. Since SGFB uses scene graph knowledge and explicitly captures the dynamics of human-object relationships, it learns the concept of "awakening in bed" even when trained with only a handful of examples of this label. Also, since SGFB is trained to detect and ground objects, it avoids misclassifying objects, such as television, which results in more robust action recognition.

5.3 Spatio-temporal scene graph prediction

Progress in image-based scene graph prediction has cascaded into improvements across multiple Computer Vision tasks, including image captioning [2], image retrieval [35, 63], visual question answering [34], relationship modeling [40], and image generation [33]. To promote similar progress in video-based tasks, we introduce the complementary task of spatio-temporal scene graph prediction. Unlike image-based scene graph prediction, which has only a single image as input, this task takes a video as input and can therefore utilize temporal information from neighboring frames to strengthen its predictions. In this section, we define the task and its evaluation metrics, and report benchmark results from numerous recently proposed image-based scene graph models applied to this new task.

| Method | PredCls img R@20 | PredCls img R@50 | PredCls vid R@20 | PredCls vid R@50 | SGCls img R@20 | SGCls img R@50 | SGCls vid R@20 | SGCls vid R@50 | SGGen img R@20 | SGGen img R@50 | SGGen vid R@20 | SGGen vid R@50 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VRD [51] | 24.92 | 25.20 | 24.63 | 24.87 | 22.59 | 24.62 | 22.25 | 24.30 | 16.40 | 17.27 | 16.01 | 16.87 |
| Freq Prior [84] | 45.50 | 45.67 | 44.91 | 45.05 | 43.89 | 45.60 | 43.30 | 44.99 | 33.37 | 34.54 | 32.65 | 33.79 |
| Graph R-CNN [77] | 23.71 | 23.91 | 23.42 | 23.60 | 21.76 | 23.50 | 21.41 | 23.18 | 15.99 | 16.81 | 15.59 | 16.42 |
| MSDN [47] | 48.05 | 48.32 | 47.43 | 47.67 | 44.69 | 47.85 | 43.99 | 47.17 | 34.18 | 36.09 | 33.45 | 35.28 |
| IMP [76] | 48.20 | 48.48 | 47.58 | 47.83 | 44.78 | 48.00 | 44.12 | 47.35 | 34.24 | 36.18 | 33.58 | 35.47 |
| RelDN [85] | 49.37 | 49.58 | 48.80 | 48.98 | 46.74 | 49.36 | 46.19 | 48.76 | 35.61 | 37.21 | 34.92 | 36.49 |
Table 5: We evaluate numerous recently proposed image-based scene graph prediction models and provide a benchmark for the new task of spatio-temporal scene graph prediction. We find that there is significant room for improvement, especially since these existing methods were designed to be conditioned on a single frame and do not consider the entire video sequence as a whole.

Problem formulation. The task expects as input a video sequence V = {I_1, ..., I_T}, where the I_t are image frames from the video. The task requires the model to generate a spatio-temporal scene graph G_t per frame. Each G_t is represented as a set of objects with category labels and bounding box locations, together with the relationships between those objects and the person.

Evaluation metrics. We borrow the three standard evaluation modes from image-based scene graph prediction [51]: (i) scene graph detection (SGDET, reported as SGGen in Table 5), which takes images as input and predicts bounding box locations, object categories, and predicate labels; (ii) scene graph classification (SGCLS), which takes ground truth boxes and predicts object categories and predicate labels; and (iii) predicate classification (PREDCLS), which takes ground truth bounding boxes and object categories and predicts predicate labels. We refer the reader to the paper that introduced these tasks for more details [51]. We adapt these metrics to video: the per-frame measurements are first averaged within each video to obtain that video's measurement, and the per-video results are then averaged as the final result for the test set.
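The video adaptation described above reduces to a two-level average; a minimal sketch (assuming per-frame recall values have already been computed by a standard Recall@K routine):

```python
def video_averaged_recall(per_frame_recalls):
    """Average per-frame Recall@K within each video, then average the
    per-video results. `per_frame_recalls` is a list of videos, each a
    list of per-frame recall values in [0, 1]."""
    per_video = [sum(frames) / len(frames) for frames in per_frame_recalls]
    return sum(per_video) / len(per_video)
```

Note that this weights every video equally, regardless of how many annotated frames it contains.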

Baselines. We benchmark the following recently proposed image-based scene graph models on the task of spatio-temporal scene graph prediction: VRD's visual module (VRD) [51], iterative message passing (IMP) [76], the multi-level scene description network (MSDN) [47], graph convolutional R-CNN (Graph R-CNN) [77], the frequency prior from neural motifs (Freq Prior) [84], and the relationship detection network (RelDN) [85].

Results. To our surprise, we find that IMP, one of the earliest scene graph prediction models, actually outperforms numerous more recently proposed methods. The most recently proposed scene graph model, RelDN, marginally outperforms IMP, suggesting that modeling similarities between object and relationship classes improves performance on our task as well. The small gap in performance between PredCls and SGCls suggests that these models suffer from not being able to accurately detect the objects in the video frames; improving object detectors designed specifically for videos could improve performance. The models were trained only on Action Genome's data and were not finetuned on Visual Genome [42], which contains image-based scene graphs, or on ActivityNet Captions [41], which contains dense captions of actions in videos as natural language paragraphs. We expect that finetuning models with such datasets would result in further improvements.

6 Future work

With its rich hierarchy of events, Action Genome not only enables research on spatio-temporal scene graph prediction and compositional action recognition, but also opens up several other research directions. We hope future work will develop methods for the following:

Spatio-temporal action localization. The majority of spatio-temporal action localization methods [24, 25, 32, 67] focus on localizing the person performing the action but ignore the objects that the person interacts with, even though those objects are also involved in the action. Action Genome can enable research on localizing both actors and objects, formulating a more comprehensive grounded action localization task. Other variants of this task can also be explored; for example, a weakly supervised localization task in which a model is trained with only action labels but is asked to localize the actors and objects.

Explainable action models. Explainable visual models are an emerging field of research. Amongst numerous techniques, saliency prediction has emerged as a key mechanism for interpreting machine learning models [66, 53, 64]. Action Genome provides frame-level attention labels in the form of the objects that the person performing the action is either looking at or interacting with. These labels can be used to further train explainable models.

Video generation from spatio-temporal scene graphs. Recent studies have explored image generation from scene graphs [33, 3]. Similarly, with a structured video representation, Action Genome enables research on video generation from spatio-temporal scene graphs.

7 Conclusion

We introduce Action Genome, a representation that decomposes actions into spatio-temporal scene graphs. Scene graphs explain how objects and their relationships change as an action occurs. We demonstrated the utility of Action Genome by collecting a large dataset of spatio-temporal scene graphs and using it to improve state-of-the-art results for action recognition as well as few-shot action recognition. Finally, we benchmarked results for the new task of spatio-temporal scene graph prediction. We hope that Action Genome will inspire a new line of research in more decomposable and generalizable video understanding.

Acknowledgement. This work has been supported by Panasonic. This article solely reflects the opinions and conclusions of its authors and not Panasonic or any entity associated with Panasonic.

References

  • [1] T. Althoff, H. O. Song, and T. Darrell (2012) Detection bank: an object detection based video representation for multimedia event recognition. In Proceedings of the 20th ACM international conference on Multimedia, pp. 1065–1068. Cited by: §4.
  • [2] P. Anderson, B. Fernando, M. Johnson, and S. Gould (2016) Spice: semantic propositional image caption evaluation. In European Conference on Computer Vision, pp. 382–398. Cited by: §1, §2, §5.3.
  • [3] O. Ashual and L. Wolf (2019) Specifying object attributes and relations in interactive scene generation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4561–4569. Cited by: §6.
  • [4] F. Baradel, N. Neverova, C. Wolf, J. Mille, and G. Mori (2018) Object level visual reasoning in videos. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 105–121. Cited by: §2.
  • [5] R. G. Barker and H. F. Wright (1951) One boy’s day; a specimen record of behavior.. Cited by: §1, §2.
  • [6] R. G. Barker and H. F. Wright (1955) Midwest and its children: the psychological ecology of an american town.. Cited by: §1, §2.
  • [7] M. Bishay, G. Zoumpourlis, and I. Patras (2019) TARN: temporal attentive relation network for few-shot and zero-shot action recognition. arXiv preprint arXiv:1907.09021. Cited by: §2.
  • [8] F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. Carlos Niebles (2015) Activitynet: a large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 961–970. Cited by: Table 1, §1, §2, §2.
  • [9] J. Carreira, E. Noland, C. Hillier, and A. Zisserman (2019) A short note on the kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987. Cited by: Table 1, §2, §2.
  • [10] J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299–6308. Cited by: §1, §2, §5.1, Table 3.
  • [11] R. Casati and A. Varzi (1996) Events, volume 15 of the international research library of philosophy. Dartmouth Publishing, Aldershot. Cited by: §2.
  • [12] V. S. Chen, P. Varma, R. Krishna, M. Bernstein, C. Re, and L. Fei-Fei (2019) Scene graph prediction with limited labels. arXiv preprint arXiv:1904.11622. Cited by: §2.
  • [13] A. Culotta and J. Sorensen (2004) Dependency tree kernels for relation extraction. In Proceedings of the 42nd annual meeting on association for computational linguistics, pp. 423. Cited by: §2.
  • [14] B. Dai, Y. Zhang, and D. Lin (2017) Detecting visual relationships with deep relational networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3298–3308. Cited by: §2.
  • [15] D. Damen, H. Doughty, G. Maria Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, et al. (2018) Scaling egocentric vision: the epic-kitchens dataset. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 720–736. Cited by: Table 1, §2.
  • [16] C. Desai, D. Ramanan, and C. C. Fowlkes (2011) Discriminative models for multi-class object layout. International journal of computer vision 95 (1), pp. 1–12. Cited by: §2.
  • [17] A. Dornadula, A. Narcomey, R. Krishna, M. Bernstein, and L. Fei-Fei (2019) Visual relationships as functions: enabling few-shot scene graph prediction. arXiv preprint arXiv:1906.04876. Cited by: §2.
  • [18] S. K. Dwivedi, V. Gupta, R. Mitra, S. Ahmed, and A. Jain (2019) ProtoGAN: towards few shot learning for action recognition. arXiv preprint arXiv:1909.07945. Cited by: §2.
  • [19] V. Escorcia, F. C. Heilbron, J. C. Niebles, and B. Ghanem (2016) Daps: deep action proposals for action understanding. In European Conference on Computer Vision, pp. 768–784. Cited by: §2.
  • [20] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth (2009) Describing objects by their attributes. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 1778–1785. Cited by: §2.
  • [21] L. Fe-Fei et al. (2003) A bayesian approach to unsupervised one-shot learning of object categories. In Proceedings Ninth IEEE International Conference on Computer Vision, pp. 1134–1141. Cited by: §2.
  • [22] L. Fei-Fei, R. Fergus, and P. Perona (2006) One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence 28 (4), pp. 594–611. Cited by: §2.
  • [23] C. Feichtenhofer, H. Fan, J. Malik, and K. He (2019) Slowfast networks for video recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6202–6211. Cited by: §1, §2, §5.1, Table 3.
  • [24] R. Girdhar, J. Carreira, C. Doersch, and A. Zisserman (2018) A better baseline for ava. arXiv preprint arXiv:1807.10066. Cited by: §6.
  • [25] R. Girdhar, J. Carreira, C. Doersch, and A. Zisserman (2019) Video action transformer network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 244–253. Cited by: §2, §6.
  • [26] C. Gu, C. Sun, D. A. Ross, C. Vondrick, C. Pantofaru, Y. Li, S. Vijayanarasimhan, G. Toderici, S. Ricco, R. Sukthankar, et al. (2018) AVA: a video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6047–6056. Cited by: Table 1, §2.
  • [27] Z. GuoDong, S. Jian, Z. Jie, and Z. Min (2005) Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pp. 427–434. Cited by: §2.
  • [28] B. M. Hard, B. Tversky, and D. S. Lang (2006) Making sense of abstract events: building event schemas. Memory & cognition 34 (6), pp. 1221–1235. Cited by: §2.
  • [29] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §5.1.
  • [30] R. Herzig, M. Raboh, G. Chechik, J. Berant, and A. Globerson (2018) Mapping images to scene graphs with permutation-invariant structured prediction. In Advances in Neural Information Processing Systems, pp. 7211–7221. Cited by: §2.
  • [31] N. Hussein, E. Gavves, and A. W. Smeulders (2019) Timeception for complex action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 254–263. Cited by: §1, §2, §5.1, Table 3.
  • [32] J. Jiang, Y. Cao, L. Song, S. Z. Y. Li, Z. Xu, Q. Wu, C. Gan, C. Zhang, and G. Yu (2018) Human centric spatio-temporal action localization. In ActivityNet Workshop on CVPR, Cited by: §6.
  • [33] J. Johnson, A. Gupta, and L. Fei-Fei (2018) Image generation from scene graphs. arXiv preprint arXiv:1804.01622. Cited by: §1, §2, §5.3, §6.
  • [34] J. Johnson, B. Hariharan, L. van der Maaten, J. Hoffman, L. Fei-Fei, C. L. Zitnick, and R. Girshick (2017) Inferring and executing programs for visual reasoning. arXiv preprint arXiv:1705.03633. Cited by: §1, §2, §5.3.
  • [35] J. Johnson, R. Krishna, M. Stark, L. Li, D. Shamma, M. Bernstein, and L. Fei-Fei (2015) Image retrieval using scene graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3668–3678. Cited by: §1, §2, §5.3.
  • [36] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei (2014) Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732. Cited by: §2.
  • [37] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. (2017) The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Cited by: §1, §5.1.
  • [38] O. Kliper-Gross, T. Hassner, and L. Wolf (2011) One shot similarity metric learning for action recognition. In International Workshop on Similarity-Based Pattern Recognition, pp. 31–45. Cited by: §2.
  • [39] P. Krähenbühl and V. Koltun (2011) Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems, pp. 109–117. Cited by: §2.
  • [40] R. Krishna, I. Chami, M. Bernstein, and L. Fei-Fei (2018) Referring relationships. In Computer Vision and Pattern Recognition, Cited by: §1, §2, §5.3.
  • [41] R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. Carlos Niebles (2017) Dense-captioning events in videos. In Proceedings of the IEEE international conference on computer vision, pp. 706–715. Cited by: §5.3.
  • [42] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L. Li, D. A. Shamma, et al. (2017) Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123 (1), pp. 32–73. Cited by: §1, §1, §2, §3, §5.3.
  • [43] C. A. Kurby and J. M. Zacks (2008) Segmentation in the perception and memory of events. Trends in cognitive sciences 12 (2), pp. 72–79. Cited by: Figure 1, §1, §1, §3.
  • [44] L. Li, H. Su, L. Fei-Fei, and E. P. Xing (2010) Object bank: a high-level image representation for scene classification & semantic feature sparsification. In Advances in Neural Information Processing Systems, pp. 1378–1386. Cited by: §4.
  • [45] Y. Li, W. Ouyang, X. Wang, and X. Tang (2017) Vip-cnn: visual phrase guided convolutional neural network. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 7244–7253. Cited by: §2.
  • [46] Y. Li, W. Ouyang, B. Zhou, J. Shi, C. Zhang, and X. Wang (2018) Factorizable net: an efficient subgraph-based framework for scene graph generation. In European Conference on Computer Vision, pp. 346–363. Cited by: §2.
  • [47] Y. Li, W. Ouyang, B. Zhou, K. Wang, and X. Wang (2017) Scene graph generation from objects, phrases and region captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1261–1270. Cited by: §2, §4.1, §5.3, Table 5.
  • [48] X. Liang, L. Lee, and E. P. Xing (2017) Deep variation-structured reinforcement learning for visual relationship and attribute detection. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 4408–4417. Cited by: §2.
  • [49] I. Lillo, A. Soto, and J. Carlos Niebles (2014) Discriminative hierarchical modeling of spatio-temporally composable human activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 812–819. Cited by: §1, §1, §2.
  • [50] J. Lin, C. Gan, and S. Han (2019) Tsm: temporal shift module for efficient video understanding. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7083–7093. Cited by: §2.
  • [51] C. Lu, R. Krishna, M. Bernstein, and L. Fei-Fei (2016) Visual relationship detection with language priors. In European Conference on Computer Vision, pp. 852–869. Cited by: §4.1, §5.3, §5.3, Table 5.
  • [52] C. Ma, A. Kadav, I. Melvin, Z. Kira, G. AlRegib, and H. Peter Graf (2018) Attend and interact: higher-order object interactions for video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6790–6800. Cited by: §2.
  • [53] A. Mahendran and A. Vedaldi (2016) Salient deconvolutional networks. In European Conference on Computer Vision, pp. 120–135. Cited by: §6.
  • [54] A. Michotte (1963) The perception of causality. Routledge. Cited by: §1, §2.
  • [55] G. A. Miller and P. N. Johnson-Laird (1976) Language and perception.. Belknap Press. Cited by: §1.
  • [56] G. A. Miller (1995) WordNet: a lexical database for english. Communications of the ACM 38 (11), pp. 39–41. Cited by: §1.
  • [57] A. Mishra, V. K. Verma, M. S. K. Reddy, S. Arulkumar, P. Rai, and A. Mittal (2018) A generative approach to zero-shot and few-shot action recognition. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 372–380. Cited by: §2.
  • [58] A. Newell and J. Deng (2017) Pixels to graphs by associative embedding. In Advances in Neural Information Processing Systems, pp. 2168–2177. Cited by: §2.
  • [59] D. Newtson (1973) Attribution and the unit of perception of ongoing behavior.. Journal of Personality and Social Psychology 28 (1), pp. 28. Cited by: §2.
  • [60] D. Parikh and K. Grauman (2011) Relative attributes. In Computer Vision (ICCV), 2011 IEEE International Conference on, pp. 503–510. Cited by: §2.
  • [61] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §5.1.
  • [62] J. R. Reynolds, J. M. Zacks, and T. S. Braver (2007) A computational model of event segmentation from perceptual prediction. Cognitive science 31 (4), pp. 613–643. Cited by: §1, §3.
  • [63] S. Schuster, R. Krishna, A. Chang, L. Fei-Fei, and C. D. Manning (2015) Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In Proceedings of the fourth workshop on vision and language, pp. 70–80. Cited by: §1, §2, §5.3.
  • [64] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626. Cited by: §6.
  • [65] G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and A. Gupta (2016) Hollywood in homes: crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pp. 510–526. Cited by: Table 1, §1, §1, §1, §2, §3, §5.1.
  • [66] K. Simonyan, A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Cited by: §6.
  • [67] C. Sun, A. Shrivastava, C. Vondrick, K. Murphy, R. Sukthankar, and C. Schmid (2018) Actor-centric relation network. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 318–334. Cited by: §6.
  • [68] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri (2015) Learning spatiotemporal features with 3d convolutional networks. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 4489–4497. Cited by: §1, §2.
  • [69] Z. Tu and X. Bai (2010) Auto-context and its application to high-level vision tasks and 3d brain image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (10), pp. 1744–1757. Cited by: §2.
  • [70] G. Varol, I. Laptev, and C. Schmid (2017) Long-term temporal convolutions for action recognition. IEEE transactions on pattern analysis and machine intelligence 40 (6), pp. 1510–1517. Cited by: §2.
  • [71] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool (2016) Temporal segment networks: towards good practices for deep action recognition. In European conference on computer vision, pp. 20–36. Cited by: §1, §2.
  • [72] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803. Cited by: §4.2, §5.1, Table 3.
  • [73] X. Wang and A. Gupta (2018) Videos as space-time region graphs. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 399–417. Cited by: §2, §5.1, Table 3.
  • [74] P. Weinzaepfel, X. Martin, and C. Schmid (2016) Human action localization with sparse spatial supervision. arXiv preprint arXiv:1605.05197. Cited by: Table 1, §2.
  • [75] C. Wu, C. Feichtenhofer, H. Fan, K. He, P. Krahenbuhl, and R. Girshick (2019) Long-term feature banks for detailed video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 284–293. Cited by: §1, §2, Figure 5, §4.2, §4, Figure 6, §5.1, §5.1, Table 3, Table 4.
  • [76] D. Xu, Y. Zhu, C. B. Choy, and L. Fei-Fei (2017) Scene graph generation by iterative message passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 2. Cited by: §2, §4.1, §5.3, Table 5.
  • [77] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh (2018) Graph r-cnn for scene graph generation. arXiv preprint arXiv:1808.00191. Cited by: §2, §4.1, §5.3, Table 5.
  • [78] B. Yao, X. Yang, and S. Zhu (2007) Introduction to a large-scale general purpose ground truth database: methodology, annotation tool and benchmarks. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 169–183. Cited by: §1.
  • [79] S. Yeung, O. Russakovsky, N. Jin, M. Andriluka, G. Mori, and L. Fei-Fei (2018) Every moment counts: dense detailed labeling of actions in complex videos. International Journal of Computer Vision 126 (2-4), pp. 375–389. Cited by: §2.
  • [80] S. Yeung, O. Russakovsky, G. Mori, and L. Fei-Fei (2016) End-to-end learning of action detection from frame glimpses in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2678–2687. Cited by: §2.
  • [81] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici (2015) Beyond short snippets: deep networks for video classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4694–4702. Cited by: §2.
  • [82] J. M. Zacks, T. S. Braver, M. A. Sheridan, D. I. Donaldson, A. Z. Snyder, J. M. Ollinger, R. L. Buckner, and M. E. Raichle (2001) Human brain activity time-locked to perceptual event boundaries. Nature neuroscience 4 (6), pp. 651. Cited by: §2.
  • [83] J. M. Zacks, B. Tversky, and G. Iyer (2001) Perceiving, remembering, and communicating structure in events.. Journal of experimental psychology: General 130 (1), pp. 29. Cited by: Figure 1, §1, §1, §3.
  • [84] R. Zellers, M. Yatskar, S. Thomson, and Y. Choi (2017) Neural motifs: scene graph parsing with global context. arXiv preprint arXiv:1711.06640. Cited by: §2, §4.1, §5.3, Table 5.
  • [85] J. Zhang, K. J. Shih, A. Elgammal, A. Tao, and B. Catanzaro (2019) Graphical contrastive losses for scene graph parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 11535–11543. Cited by: §4.1, §5.1, §5.3, Table 5.
  • [86] H. Zhao, A. Torralba, L. Torresani, and Z. Yan (2019) Hacs: human action clips and segments dataset for recognition and temporal localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 8668–8678. Cited by: Table 1, §1, §2.
  • [87] B. Zhou, A. Andonian, A. Oliva, and A. Torralba (2018) Temporal relational reasoning in videos. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 803–818. Cited by: §2.
  • [88] G. Zhou, M. Zhang, D. Ji, and Q. Zhu (2007) Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Cited by: §2.
  • [89] L. Zhu and Y. Yang (2018) Compound memory networks for few-shot video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 751–766. Cited by: §2.
  • [90] T. Zhuo, Z. Cheng, P. Zhang, Y. Wong, and M. Kankanhalli (2019) Explainable video action reasoning via prior knowledge and state transitions. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 521–529. Cited by: Table 1.