Driving is a complex task because it involves highly dynamic, complex environments in which many different autonomous agents (other drivers, pedestrians, etc.) are acting at the same time. Human drivers make real-time decisions by combining information from multiple sources, among which visual information often plays the most important part. Since humans have foveated vision systems that require controlling both head pose and eye gaze [34, 26, 5, 14, 42, 36], people must identify and attend to the most task-relevant objects in their visual field at any given time.
Learning to predict drivers’ attention has become a popular topic in recent years [35, 40, 41, 12, 44] due to potential applications in advanced driver assistance systems (ADAS) and autonomous driving. However, much of this work [35, 40, 41, 44] focuses on predicting pixel-level human eye gaze, which has two main drawbacks. First, drivers often look at objects irrelevant to the driving task, e.g., beautiful scenery. Second, gaze is limited to a single, small region at any moment in time, whereas a human may be actively attending to multiple objects (using short-term memory, peripheral vision, or multiple saccades). For example, if a group of people is crossing the road, a good driver (and hence an autonomous system) should pay attention to all of them instead of just a single person.
To overcome these problems, in this paper we investigate how to directly estimate each object’s importance to the ego-vehicle’s decision-making in on-road driving videos, without using eye gaze as an intermediate step, as shown in Fig. 1. We use the dataset collected by Gao et al., in which each sample clip was viewed by experienced drivers and each object in the last frame of the clip was labeled as either important or not. These on-road videos were recorded by forward-facing cameras mounted on cars. This perspective is somewhat different from the drivers’ actual field of view since the dashboard camera is fixed; it makes the video more stable than footage from a head-mounted camera, but also makes the problem more challenging since we cannot use cues about where the driver was looking (e.g., human drivers tend to adjust their head pose to center an attended region within their visual field).
We propose to leverage the frequent interactions among objects in the scene other than the ego-vehicle. Such interactions are often overlooked by other methods but are extremely helpful. For example, in Fig. 1, the front car will prevent the ego-vehicle from hitting the pedestrians and thus greatly reduces their importance: at that moment, the ego-vehicle’s driver only needs to avoid hitting the car in front. We model these interactions using a novel interaction graph, with features pooled from an I3D-based feature extractor as the nodes and interaction scores learned by the network itself as the edges. Through stacked graph convolutional layers, object nodes interact with each other, and each node’s features are updated from those of the nodes it closely interacts with. Our experiments show that the interaction graph greatly improves performance, and that our model outperforms state-of-the-art methods with less input information (RGB clips only) and less pre-processing (object detection on the target frame only).
II Related Work
II-A Driver Attention Prediction
As interest in (semi-)autonomous driving grows, researchers have paid increasing attention to the problem of predicting driver attention. This is typically posed as predicting pixel-level eye gaze in the form of likelihood maps [35, 40, 41, 44]. Fully convolutional networks, originally proposed for image segmentation, have been applied by Tawari et al. [40, 41] to similar dense spatial probability prediction tasks. Palazzi et al. combine features from multiple branches with RGB frames, optical flow, and semantic segmentation to create the final prediction. Xia et al. propose to handle critical situations by paying more attention to frames identified as crucial driving moments based on human drivers’ eye gaze movements.
However, using eye gaze prediction to estimate driver attention has limitations; for instance, a driver may be attending to multiple objects at once (through short-term memory, frequent saccades, or peripheral vision). To overcome these drawbacks, Gao et al. collected an on-road driving video dataset with objects labeled as important or not by experienced drivers. They use an LSTM that leverages goal information for object importance estimation. To achieve state-of-the-art performance, however, their technique requires multiple sources of input, including maneuver information about the planned path, RGB clips, optical flow, and location information, as well as complex pre-processing involving per-frame object detection and tracking.
II-B Object-level Attention
While many papers study saliency [18, 31, 48, 23] and eye gaze [19, 28, 49, 47, 20, 22, 4, 45] prediction, only a few focus on object-level attention [27, 37, 33, 2, 3, 50]. Lee et al. apply hand-designed features for detecting important objects and people. Pirsiavash and Ramanan and Ma et al. detect objects in hands as a proxy for attended objects. Bertasius et al. propose an unsupervised method for attended object prediction. The recent work of Zhang et al. jointly identifies and locates the attended object: inspired by the foveated nature of the human vision system, the class and location information are integrated through a self-validation module to further refine the prediction. However, these techniques are designed for egocentric videos, in which both head movement and the wearer’s hands are available for making inferences, while in our setting the camera is fixed and hand information is not available.
II-C Graph Convolutional Networks
Graph convolutional networks (GCNs) have become popular in computer vision [43, 7, 29, 46] for their ability to capture long-range relationships in non-Euclidean spaces. Compared with traditional convolutional networks, which must stack many layers to obtain a large receptive field, GCNs can efficiently model long-range relations with an adjacency matrix. Inspired by these papers, we apply interaction graphs to the problem of object importance estimation in on-road driving videos. To the best of our knowledge, ours is the first work to perform per-node predictions in computer vision problems with GCNs. Also, our model learns to predict the graph edges themselves based on the nodes’ interactions, while existing work typically forms the edges with hand-designed metrics such as spatial-temporal overlap.
III Our Model
Given videos taken by forward-facing (egocentric) vehicle cameras, our aim is to estimate the importance of each object in the scene to the driver. Following the same problem setting as Gao et al., we perform online estimation, predicting object importance scores in the last frame of each video clip. We propose a novel framework that leverages the interactions among objects using graph convolution. An overview of our model is shown in Fig. 2.
Object proposals. Since online prediction is time-sensitive, we would like it to require as little pre-processing as possible. In contrast to the state-of-the-art method, which requires object detection on every frame of the input video as well as object tracking, our model only needs to run object detection on the target frame (the last frame, in the online setting). We apply Mask R-CNN, using an off-the-shelf ResNeXt-101 32x8d Mask R-CNN model trained on MSCOCO without any further fine-tuning. Proposals of object classes other than person, bicycle, car, motorcycle, bus, train, truck, traffic light, and stop sign are removed since they are irrelevant to our task. For simplicity, dummy boxes with fixed coordinates (all box coordinates are rescaled with respect to the height and width of the input frame) are padded in until each target frame contains the same number of proposals, which we empirically set to 40. The dummy box covers the hood of the ego car, whose image content remains almost unchanged across samples. Object proposals were pre-generated to save time in our experiments, though during model design we tried to minimize pre-processing so that the method can meet the time requirements of online prediction.
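As a concrete sketch of this padding step (our own illustration, not the paper’s code; the dummy-box coordinates and the proposal count of 40 are taken from the description above, and the class filtering mirrors the listed MSCOCO classes):

```python
import numpy as np

NUM_PROPOSALS = 40  # fixed proposal count per target frame (from the paper)
KEPT_CLASSES = {"person", "bicycle", "car", "motorcycle", "bus",
                "train", "truck", "traffic light", "stop sign"}

def pad_proposals(boxes, classes, num_proposals=NUM_PROPOSALS):
    """Keep task-relevant detections and pad with dummy boxes.

    boxes: list of [x0, y0, x1, y1], normalized to [0, 1].
    The all-zero dummy coordinates are an assumption for illustration.
    """
    kept = [b for b, c in zip(boxes, classes) if c in KEPT_CLASSES]
    kept = kept[:num_proposals]                       # truncate if too many
    dummy = [[0.0, 0.0, 0.0, 0.0]] * (num_proposals - len(kept))
    return np.array(kept + dummy, dtype=np.float32)

# One car detection kept, one irrelevant class ("kite") filtered out.
props = pad_proposals([[0.1, 0.2, 0.3, 0.4], [0.5, 0.5, 0.9, 0.9]],
                      ["car", "kite"])
```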
Visual feature extractor. Since only RGB clips are used as input, making correct object importance predictions requires as strong a feature extractor as possible. We use Inception-V1 I3D because of its capacity for capturing both rich spatial and temporal features. Temporal motion information is important for reasoning about the intentions and future movements of both the ego-vehicle and other objects, while spatial appearance information helps determine the inherent characteristics of each object. Given a clip of contiguous RGB frames, we feed it through I3D and extract features from the last mixing layer, rgb_Mixed_5c; the feature dimensions follow the architecture of Inception-V1 I3D.
Feature vectors of graph nodes. The extracted features are further aggregated through a one-layer convolution along only the temporal dimension, with kernel size and stride chosen so that the temporal dimension is removed. From the resulting feature maps, the features for each object are pooled using ROI pooling [15, 38]. These pooled feature maps then go through a spatial max pooling layer, yielding one feature vector per object node.
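As an illustrative sketch of the per-node pooling (not the authors’ implementation), the feature map can be cropped to each normalized box and spatially max-pooled into a vector; real ROI pooling first bins the crop onto a fixed grid, but since a spatial max follows anyway, the two steps are collapsed here:

```python
import numpy as np

def node_feature(feature_map, box):
    """Pool a C-dim node feature for one box.

    feature_map: (H, W, C) array; box: (x0, y0, x1, y1) normalized to [0, 1].
    Crops the map to the box and takes the spatial max per channel.
    """
    H, W, _ = feature_map.shape
    # Convert normalized coords to index ranges, keeping at least one cell.
    r0, r1 = int(box[1] * H), max(int(np.ceil(box[3] * H)), int(box[1] * H) + 1)
    c0, c1 = int(box[0] * W), max(int(np.ceil(box[2] * W)), int(box[0] * W) + 1)
    return feature_map[r0:r1, c0:c1, :].max(axis=(0, 1))

# Toy 7x7 map with 2 channels; pool the top-left quadrant.
fmap = np.arange(7 * 7 * 2, dtype=float).reshape(7, 7, 2)
vec = node_feature(fmap, (0.0, 0.0, 0.5, 0.5))
```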
Graph edge formulation. The strength of an edge in our interaction graph should reflect how closely the two connected objects interact with each other. We let the network itself learn the edge strengths from the node features. Given node features $x_i$ and $x_j$, an interaction score is first computed as

$$s_{ij} = \omega\left(\left[\phi(x_i) \,\|\, \psi(x_j)\right]\right),$$

where $\phi$ and $\psi$ are linear transformations with different learnable parameters $W_\phi$ and $W_\psi$, $\omega$ is a linear transformation with learnable parameters $W_\omega$, and $[\cdot \,\|\, \cdot]$ denotes concatenation. With the interaction matrix $S$ obtained by computing interaction scores for each pair of nodes, we form the edge matrix $G$ by applying a row-wise softmax to $S$ and adding an identity matrix $I$ to force self attention:

$$G = \mathrm{softmax}(S) + I.$$
In this way, the model learns the interaction graph itself based on each node’s features. The learned edge strength indicates how much one node will affect the updating of another node’s features through graph convolution, and thus reflects how closely the two interact.
We note that the learned interaction graph is directional, as the two transformations applied to the connected nodes are different. This is reasonable, since how much one node affects another is not necessarily the same as the reverse. For example, in Fig. 1, while the front car greatly reduces the importance of the pedestrians, the pedestrians have almost no influence on how important the front car is to the ego-vehicle.
An alternative way of forming the graph is based on feature similarity between pairs of nodes, following prior work on video graphs. We ran experiments comparing our interaction graph with such a similarity graph and found that the model is not able to learn well with similarity graphs. Our hypothesis is that similarity graphs are not suitable for our problem: objects sharing similar appearance or motion do not necessarily closely interact, and vice versa. For example, in Fig. 1, the front car and the pedestrians are very different in terms of both appearance and motion, despite the close interaction between them. Since our proposed interaction graph yields better performance, we use it in all experiments reported here.
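The edge formulation above can be sketched in NumPy as follows; this is a minimal illustration (not the paper’s code), with toy weight shapes, in which two linear transforms score each ordered node pair, a row-wise softmax normalizes the scores, and the identity matrix enforces self attention:

```python
import numpy as np

def interaction_graph(X, W_phi, W_psi, w_omega):
    """Build the (N, N) edge matrix G = softmax(S) + I.

    X: (N, d) node features. W_phi/W_psi transform the "from"/"to" nodes
    separately (so the graph is directional); w_omega scores the
    concatenation of the two transformed features.
    """
    N = X.shape[0]
    A, B = X @ W_phi, X @ W_psi              # phi(x_i), psi(x_j)
    S = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            S[i, j] = np.concatenate([A[i], B[j]]) @ w_omega
    S = np.exp(S - S.max(axis=1, keepdims=True))   # stable row-wise softmax
    G = S / S.sum(axis=1, keepdims=True)
    return G + np.eye(N)                      # forced self attention

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 toy nodes, 8-dim features
G = interaction_graph(X, rng.normal(size=(8, 8)),
                      rng.normal(size=(8, 8)), rng.normal(size=16))
```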
Graph convolution. With the graph formulated, we perform graph convolution to allow nodes to interact with and update each other. One layer of graph convolution can be represented as

$$X^{(l+1)} = \sigma\left(G X^{(l)} W^{(l)}\right),$$

where $G$ is the edge matrix, $X^{(l)}$ is the input node feature matrix of layer $l$, and $W^{(l)}$ is the layer’s weight matrix. $\sigma$ is a non-linear function; we use ReLU in our experiments. We stack several graph convolutional layers and simply keep the feature dimension fixed across all of them. After the graph convolution, we obtain an updated node feature matrix $X'$.
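A minimal sketch of the stacked graph convolution (our own illustration, with toy sizes): each layer mixes node features according to the edge matrix, applies a learned linear transform, and passes the result through ReLU.

```python
import numpy as np

def gcn(G, X, weights):
    """Stacked graph convolution: X <- ReLU(G X W) per layer.

    G: (N, N) edge matrix; X: (N, d) node features; weights: list of
    (d, d) layer weight matrices (feature dimension kept fixed).
    """
    for W in weights:
        X = np.maximum(G @ X @ W, 0.0)   # ReLU non-linearity
    return X

rng = np.random.default_rng(1)
G = np.eye(4) + 0.1                      # toy edge matrix with self loops
X = rng.normal(size=(4, 8))
Ws = [rng.normal(size=(8, 8)) for _ in range(3)]   # 3 stacked layers
X_updated = gcn(G, X, Ws)
```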
Per-node object importance estimation. We now perform per-node object importance estimation. Although each node’s features are updated through the GCN to capture long-range relationships with other nodes, some global context may still be missing because the object proposals cannot cover the whole image. Also, the object detector is not perfect, and useful objects may be missed (e.g., small objects such as traffic lights). To circumvent this problem, we first apply global average pooling to the features extracted by I3D to obtain a global descriptor $g$, which is tiled once per node and concatenated with the updated node features $X'$. Each row of the resulting matrix is fed into a shared multilayer perceptron (MLP) for the final importance score estimation:

$$\hat{y}_i = \mathrm{MLP}\left(\left[x'_i \,\|\, g\right]\right).$$
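The scoring head can be sketched as follows (a hypothetical two-layer MLP with sigmoid output and illustrative layer sizes, not the paper’s exact head): the global descriptor is tiled across nodes, concatenated with each node’s updated features, and scored by shared weights.

```python
import numpy as np

def predict_scores(X_updated, g, W1, W2):
    """Per-node importance scores from updated node features + global context.

    X_updated: (N, d) GCN output; g: (dg,) global descriptor;
    W1: (d + dg, h) shared hidden layer; W2: (h,) output layer.
    """
    N = X_updated.shape[0]
    Z = np.concatenate([X_updated, np.tile(g, (N, 1))], axis=1)  # (N, d + dg)
    H = np.maximum(Z @ W1, 0.0)                  # shared ReLU layer
    return 1.0 / (1.0 + np.exp(-(H @ W2)))       # sigmoid scores in (0, 1)

rng = np.random.default_rng(2)
scores = predict_scores(rng.normal(size=(5, 8)), rng.normal(size=4),
                        rng.normal(size=(12, 6)), rng.normal(size=6))
```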
Other implementation and training details.
Missing detections. Since we apply multi-fold cross validation (as in Gao et al.) in our experiments, each sample serves as either a training sample or a testing sample. For samples whose ground truth objects are not detected by our object detector, we prepare two sets of proposals, one for training and one for testing. The testing set is exactly the padded output of Mask R-CNN, while the training set is slightly different: we replace dummy boxes with the undetected ground truth object boxes, to avoid misleading the network. Note that missed detections of ground truth objects still occur when these samples serve as testing data, and thus our model can never reach 100% AP.
Hard negative mining and loss functions. The dataset collected by Gao et al. suffers from significant class imbalance: across its 8,611 samples (short video clips), only 4,268 objects are labeled as important. With our setting of 40 box nodes per sample, the ratio of positive to negative boxes is almost 1:80. In contrast to Gao et al., who apply weighted cross-entropy based on the numbers of positive and negative boxes, we address the imbalance with hard negative mining. Within each batch, we first compute the binary cross-entropy loss for each node’s importance estimate,

$$\ell_i = -\left(y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i)\right),$$

where $y_i$ is the corresponding ground truth label and $\hat{y}_i$ is the predicted score.
The losses for the negative nodes are then sorted, and only the largest of them, along with the losses for all the positive nodes, are used to compute the total loss. Letting $N_{pos}$ denote the total number of positive nodes, we empirically found that selecting the hard negatives in proportion to $N_{pos}$ works well (after experiments with different ratios). With $\mathcal{P}$ and $\mathcal{N}$ denoting the sets of indices of the positive nodes and of the selected negative nodes whose losses are among the largest, the total loss is

$$L = \sum_{i \in \mathcal{P} \cup \mathcal{N}} \ell_i.$$
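The mining procedure can be sketched as follows; the kept-negative ratio here is a placeholder (the paper tunes its own multiple of the positive count), and everything else follows the description above:

```python
import numpy as np

def mined_loss(scores, labels, neg_ratio=3):
    """BCE loss with hard negative mining.

    Keeps all positive-node losses plus the largest negative-node losses
    (neg_ratio x the number of positives; the ratio is illustrative).
    """
    eps = 1e-7
    s = np.clip(scores, eps, 1 - eps)
    losses = -(labels * np.log(s) + (1 - labels) * np.log(1 - s))  # per-node BCE
    pos = losses[labels == 1]
    neg = np.sort(losses[labels == 0])[::-1]       # hardest negatives first
    k = min(len(neg), neg_ratio * max(len(pos), 1))
    return pos.sum() + neg[:k].sum()

labels = np.array([1, 0, 0, 0, 0, 0, 0, 0])
good = mined_loss(np.array([0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]), labels)
bad = mined_loss(np.array([0.1, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1]), labels)
```

Easy negatives (low predicted scores) fall outside the top-k and do not contribute, so the gradient focuses on the positives and the most confusing negatives.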
We implemented the model with Keras. A batch normalization layer was inserted after each layer of the feature extractor, with momentum 0.8. We used RGB clips of 16 frames as the only input. During training, the I3D feature extractor was initialized with weights pretrained on Kinetics, while the other parts were randomly initialized. We trained the model with stochastic gradient descent with initial learning rate 0.0003, momentum 0.9, decay 0.0001, and L2 regularization 0.0005, optimizing the total loss described above. At inference time, the model predicts an importance score between 0 and 1 for each of the 40 object proposals of a sample.
IV-A Experiment Settings
Dataset. We evaluate our model on the on-road driving video dataset used by Gao et al. This dataset consists of 8,611 annotated samples, each a 30-frame RGB clip (in our experiments we use only the last 16 frames) recording real-world driving in highly dynamic environments. The dataset focuses on real-time driving around road intersections and was annotated by experienced drivers with object importance labels. Please refer to Gao et al. for detailed statistics about the dataset. In our experiments, we follow their data split and also perform 3-fold cross validation.
Metrics. We compute 11-point average precision (AP) for each data split and then average across the 3 splits. A predicted box is considered correct if its IoU with one of the unmatched ground truth boxes is over 0.5; we do not allow duplicate matches to the same ground truth box. Also, note that due to false negatives of the object detector, the upper-bound performance of our model can never reach 100%, as was also the case in Gao et al.
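A sketch of the 11-point interpolated AP (PASCAL VOC 2007 style), assuming predictions have already been matched greedily against unmatched ground truth boxes at IoU > 0.5 (the `correct` flags) and that `n_gt` counts all ground truth boxes, including any the detector missed:

```python
import numpy as np

def eleven_point_ap(scores, correct, n_gt):
    """Mean of the maximum precision at recall levels 0.0, 0.1, ..., 1.0.

    scores: per-prediction confidences; correct: 1 if the prediction
    matched a previously unmatched ground truth box; n_gt: total number
    of ground truth boxes (missed detections lower achievable recall).
    """
    order = np.argsort(-np.asarray(scores, dtype=float))  # high scores first
    tp = np.asarray(correct, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1.0)
    recall = cum_tp / n_gt
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):          # the 11 recall thresholds
        p = precision[recall >= t]
        ap += (p.max() if p.size else 0.0) / 11.0
    return ap

# Two correct predictions covering both ground truth boxes -> perfect AP.
ap = eleven_point_ap([0.9, 0.8, 0.3], [1, 1, 0], n_gt=2)
```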
TABLE I: Inputs, pre-processing, and AP (%) on the 3 data splits and their average.

| Models | RGB clips | Input length | Optical flows | Goal | Location | Object detection | Tracking | Split 1 | Split 2 | Split 3 | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Our model | ✓ | 16 | ✗ | ✗ | ✗ | On the target frame | ✗ | 68.5 | 73.8 | 71.9 | 71.4 |
| Goal-Visual Model | ✓ | 30 | ✓ | ✓ | ✓ | On each frame | ✓ | 70.2 | 70.3 | 72.0 | 70.8 |
| Visual Model | ✓ | 30 | ✓ | ✗ | ✓ | On each frame | ✓ | 68.1 | 68.1 | 70.9 | 69.0 |
| Goal-Geometry Model | ✗ | 0 | ✓ | ✓ | ✓ | N/A | ✓ | 32.1 | 40.6 | 41.8 | 38.2 |
| Visual Model-Image | ✓ | 1 | ✗ | ✓ | ✓ | On the target frame | ✓ | 35.5 | 42.1 | 32.6 | 36.7 |
Baselines. We compare our model with the state-of-the-art goal-oriented model as well as other competitive baselines from Gao et al. These baselines have similar network structures based on LSTMs; the main difference between them is their input features. The strongest one, the Goal-Visual Model, takes as input an RGB clip of 30 frames along with optical flow, goal information, and location information. Tab. I compares our model with these baselines in terms of the inputs and pre-processing required.
IV-B Experiment Results
Qualitative results. We qualitatively compare our model with the Goal-Visual model in Fig. 3. While the Goal-Visual model usually fails in scenarios with multiple potentially-important objects (Fig. 3a, b, c, d, g, and h), our model suppresses false positive predictions significantly by leveraging object interactions through our proposed interaction graph. Interestingly, the visualizations use a score threshold of 0.3 for our model and 0.5 for the Goal-Visual model, yet our model still makes fewer false positive predictions. Our hypothesis is that when multiple objects could possibly be important, a suppression procedure is performed among the corresponding nodes inside the interaction graph: these nodes interact with each other and make inferences based on node features to suppress the false positives. With the interaction graph, our model becomes more cautious about predicting multiple objects as important, yet it does not lose the ability to predict multiple true positives. As shown in Fig. 3 (e) and (f), where pedestrians are crossing the road together, our model effectively captures their relation and assigns them similar importance scores.
Quantitative results. Our model and the baselines are quantitatively compared in Tab. I. Despite requiring the least input and the simplest pre-processing, our model still outperforms the state-of-the-art model in terms of average AP across the 3 splits. We also observed that our model significantly outperforms the Goal-Visual model on split 2, achieves comparable AP on split 3, but performs worse on split 1. The reason may be that the number of samples on which goal information significantly helps the final estimation varies across the 3 splits. As we will show in the next section, in some cases it is almost impossible for the network to make correct estimates without knowing the goal of the ego-vehicle. In the future, we plan to have human annotators further analyze the 3 splits to investigate this.
Failure cases. Sample failure cases are visualized in Fig. 4. The first row shows failures caused by missed detections, which are not the fault of our model since the proposals are generated by an off-the-shelf third-party object detector; this kind of failure may be resolved in the future by better object detection models. The failures in the second row reflect the difficulty of online prediction, since the future is unknown. In both Fig. 4 (c) and (d), the ego-vehicle has been going straight and stops right at the intersection in the last frames. Without goal information, the model cannot know that the driver plans to turn right, and thus fails to predict the pedestrians crossing the road on the right as important. Solving this problem requires further incorporating goal information into our model. The last row shows samples with confusing ground truth. Fig. 4 (e) shows a common case in which the annotator labeled cars parked along the road as important even though no one is starting or driving them, while Fig. 4 (f) contains incorrect ground truth that labels part of the road region as an important object.
IV-C Ablation Studies
We performed ablation studies to evaluate the effectiveness of the proposed interaction graph, as well as some of our model design choices.
TABLE II: Ablation results, AP (%) on the 3 data splits and their average.

| Model | Split 1 | Split 2 | Split 3 | Avg. |
|---|---|---|---|---|
| Our full model | 68.5 | 73.8 | 71.9 | 71.4 |
| Remove interaction graph | 65.0 | 70.6 | 69.3 | 68.3 |
| Remove global descriptor | 66.0 | 71.7 | 70.3 | 69.3 |
| Remove self attention | 4.2 | 5.3 | 5.8 | 5.1 |
Interaction graph. We remove the interaction graph along with the graph convolution layers and directly concatenate the pooled features of each object proposal with the global descriptor. The concatenated features are fed into the shared MLP for the final estimation. The results are in Tab. II. The AP on each split drops by 3.5, 3.2, and 2.6 points, respectively, and the average AP falls below the score of the Visual model of Gao et al., indicating that the interaction graph is important to the performance improvement.
Global descriptor. We remove the global descriptor and let the model make importance estimates based only on the node features updated through the GCN. Performance drops on all three splits, as shown in Tab. II, implying that the global descriptor provides useful global context. We also observed that AP drops less from removing the global descriptor than from removing the interaction graph, suggesting that the interaction graph contributes more to the performance improvement.
Self attention. We remove the identity matrix from the edge matrix and train the model with the new graph. The model is not able to learn without the forced self attention, as shown in Tab. II. Self attention is crucial because it ensures that each node retains its own characteristics while interacting with others during graph convolution.
We propose a novel framework for online object importance estimation in on-road driving videos with interaction graphs. The graph edges are learned by the network itself based on the nodes’ features and reflect how closely the connected nodes interact with each other. Through graph convolutional layers, object nodes are able to interact with each other in the graph and update each other’s node features. Experiments show that our model outperforms the state-of-the-art with much less input and much easier pre-processing, and ablation studies demonstrate the effectiveness of the interaction graph as well as our other model design choices.
Part of this work was done while Zehua Zhang was an intern at Honda Research Institute, USA. This work was also partially supported by the National Science Foundation (CAREER IIS-1253549) and by the Indiana University Office of the Vice Provost for Research, the College of Arts and Sciences, and the School of Informatics, Computing, and Engineering through the Emerging Areas of Research Project Learning: Brains, Machines, and Children.
-  (2016) TensorFlow: a system for large-scale machine learning. In USENIX Conference on Operating Systems Design and Implementation, pp. 265–283. Cited by: 3rd item.
-  (2016) First person action-object detection with EgoNet. arXiv preprint arXiv:1603.04908. Cited by: §II-B.
-  (2017) Unsupervised learning of important objects from first-person videos. In IEEE International Conference on Computer Vision (ICCV), pp. 1956–1964. Cited by: §II-B.
-  Probabilistic learning of task-specific visual attention. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II-B.
-  (2009) Eye–hand coordination in a sequential target contact task. Experimental brain research 195 (2), pp. 273–283. Cited by: §I.
-  (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6299–6308. Cited by: §I, §III.
-  (2019) Graph-based global reasoning networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 433–442. Cited by: §II-C.
-  (2017) R interface to keras. GitHub. Note: https://github.com/rstudio/keras Cited by: 3rd item.
-  (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3844–3852. Cited by: §II-C.
-  (2009) Imagenet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255. Cited by: 3rd item.
-  The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. Note: http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html Cited by: §IV-A.
-  (2019) Goal-oriented object importance estimation in on-road driving videos. arXiv preprint arXiv:1905.02848. Cited by: §I, §I, §II-A, 1st item, 2nd item, §III, §III, Fig. 3, §IV-A, §IV-A, §IV-A, §IV-C, TABLE I.
-  (2018) Detectron. Note: https://github.com/facebookresearch/detectron Cited by: §III.
-  (2005) Eye movements in natural behavior. Trends in cognitive sciences 9 (4), pp. 188–194. Cited by: §I.
-  (2017) Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV), pp. 2961–2969. Cited by: §III, §III.
-  (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §III.
-  (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §II-A, §IV-A.
-  (2017) Deeply supervised salient object detection with short connections. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3203–3212. Cited by: §II-B.
-  (2015) SALICON: reducing the semantic gap in saliency prediction by adapting deep neural networks. In IEEE International Conference on Computer Vision (ICCV), pp. 262–270. Cited by: §II-B.
-  (2018) Predicting gaze in egocentric video by learning task-dependent attention transition. In European Conference on Computer Vision (ECCV), pp. 754–769. Cited by: §II-B.
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), Cited by: 3rd item.
-  (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 20 (11), pp. 1254–1259 (English (US)). Cited by: §II-B.
-  (2009) Learning to predict where humans look. In IEEE International Conference on Computer Vision (ICCV), pp. 2106–2113. Cited by: §II-B.
-  (2017) The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Cited by: 3rd item.
-  (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §II-C.
-  (2009) Eye-hand coordination in rhythmical pointing. Journal of motor behavior 41 (4), pp. 294–304. Cited by: §I.
-  (2012) Discovering important people and objects for egocentric video summarization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1346–1353. Cited by: §II-B.
-  (2013) Learning to predict gaze in egocentric video. In IEEE International Conference on Computer Vision (ICCV), Cited by: §I, §II-B.
-  (2018) Beyond grids: learning graph representations for visual recognition. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9225–9235. Cited by: §II-C.
-  (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision (ECCV), pp. 740–755. Cited by: §III.
-  (2016) DHSNet: deep hierarchical saliency network for salient object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 678–686. Cited by: §II-B.
-  (2015) Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440. Cited by: §II-A.
-  (2016) Going deeper into first-person activity recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1894–1903. Cited by: §II-B.
-  (1998) Computational modeling of spatial attention. Attention 9, pp. 341–393. Cited by: §I.
-  (2018) Predicting the driver’s focus of attention: the DR(eye)VE project. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 41 (7), pp. 1720–1733. Cited by: §I, §II-A.
-  (2008) The relation between infants’ activity with objects and attention to object appearance.. Developmental psychology 44 (5), pp. 1242. Cited by: §I.
-  (2012) Detecting activities of daily living in first-person camera views. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2847–2854. Cited by: §II-B.
-  (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 91–99. Cited by: §III.
-  (2015) Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9. Cited by: §III.
-  (2017) A computational framework for driver’s visual attention using a fully convolutional architecture. In IEEE Intelligent Vehicles Symposium (IV), pp. 887–894. Cited by: §I, §II-A.
-  (2018) Learning to attend to salient targets in driving videos using fully convolutional rnn. In International Conference on Intelligent Transportation Systems (ITSC), pp. 3225–3232. Cited by: §I, §II-A.
-  (2009) Manual and oculomotor performance develop contemporaneously but independently during continuous tracking. Experimental brain research 195 (4), pp. 611–620. Cited by: §I.
-  (2018) Videos as space-time region graphs. In European Conference on Computer Vision (ECCV), pp. 399–417. Cited by: §II-C, §III.
-  (2018) Predicting driver attention in critical situations. In Asian Conference on Computer Vision, pp. 658–674. Cited by: §I, §II-A.
-  (2012) Attention prediction in egocentric video using motion and visual saliency. In Advances in Image and Video Technology, Y. Ho (Ed.), pp. 277–288. Cited by: §II-B.
-  (2018) Graph r-cnn for scene graph generation. In European Conference on Computer Vision (ECCV), pp. 670–685. Cited by: §II-C.
-  (2017) Deep future gaze: gaze anticipation on egocentric videos using adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II-B.
-  (2017) Amulet: aggregating multi-level convolutional features for salient object detection. In IEEE International Conference on Computer Vision (ICCV), pp. 202–211. Cited by: §II-B.
-  (2018) From coarse attention to fine-grained gaze: a two-stage 3d fully convolutional network for predicting eye gaze in first person video. In British Machine Vision Conference (BMVC), Cited by: §II-B.
-  (2019) A self validation network for object-level human attention estimation. In Advances in Neural Information Processing Systems, pp. 14702–14713. Cited by: §II-B.