
Interaction Graphs for Object Importance Estimation in On-road Driving Videos

A vehicle driving along the road is surrounded by many objects, but only a small subset of them influence the driver's decisions and actions. Learning to estimate the importance of each object on the driver's real-time decision-making may help better understand human driving behavior and lead to more reliable autonomous driving systems. Solving this problem requires models that understand the interactions between the ego-vehicle and the surrounding objects. However, interactions among other objects in the scene can potentially also be very helpful, e.g., a pedestrian beginning to cross the road between the ego-vehicle and the car in front will make the car in front less important. We propose a novel framework for object importance estimation using an interaction graph, in which the features of each object node are updated by interacting with others through graph convolution. Experiments show that our model outperforms state-of-the-art baselines with much less input and pre-processing.





I Introduction

Driving is a complex task because it involves highly dynamic, complex environments in which many different autonomous agents (other drivers, pedestrians, etc.) are acting at the same time. Human drivers make real-time decisions by combining information from multiple sources, among which visual information often plays the most important part. Since humans have foveated vision systems that require controlling both head pose and eye gaze [34, 26, 5, 14, 42, 36], people must identify and attend to the most task-relevant objects in their visual field at any given time.

Learning to predict drivers’ attention has become a popular topic in recent years [35, 40, 41, 12, 44] due to potential applications in advanced driver assistance systems (ADAS) and autonomous driving. However, much of this work [35, 40, 41, 44] focuses on predicting pixel-level human eye gaze, which has two main drawbacks. First, drivers will often look at objects irrelevant to the driving task, e.g., beautiful scenery. Second, gaze is limited to a single, small region at any moment in time, whereas the human may be actively attending to multiple objects (using short-term memory, peripheral vision, or multiple saccades) — for example, if a group of people is crossing the road, a good driver (and hence an autonomous system) should pay attention to all of them instead of just a single person.

To overcome the above problems, in this paper we investigate how to directly estimate each object’s importance to the ego-vehicle for making decisions in on-road driving videos without using eye gaze as an intermediate step, as shown in Fig. 1. We use the dataset collected by Gao et al. [12], in which each sample clip was viewed by experienced drivers and each object in the last frame of the clip was labeled as either important or not. These on-road videos were recorded by forward-facing cameras mounted on cars. This perspective is somewhat different from the drivers’ actual field of view since the dashboard camera is fixed; it makes the video more stable than if it were from a head-mounted camera, but also makes the problem more challenging since we cannot use cues about where the driver was looking (e.g., human drivers tend to adjust their head pose to center an attended region within their visual field [28]).

Fig. 1: Given a video clip, our goal is to estimate, in an online fashion, which objects in the last frame are important for the driver's real-time control decisions. We also visualize different candidate object boxes, where the red box is the ground truth. Without considering the interaction between the pedestrians and the front car, traditional methods make mistakes by predicting both the pedestrians and the front car as important. Our proposed method can effectively model the interactions among objects (visualized as dashed arrows) with a novel interaction graph and thus make the correct prediction. In this example, the front car prevents the ego vehicle from hitting the pedestrians and thus reduces their importance.

We propose to leverage the frequent interactions among objects in the scene other than the ego-vehicle. Such interactions are often overlooked by other methods but are extremely helpful. For example, in Fig. 1, the front car will prevent the ego-vehicle from hitting the pedestrians and thus greatly reduces their importance: the ego-vehicle’s driver only needs to avoid hitting the car in front at the moment. We model these interactions using a novel interaction graph, with features pooled from an I3D-based [6] feature extractor as the nodes, and interaction scores learned by the network itself as the edges. Through stacked graph convolutional layers, object nodes interact with each other and their features are updated from those of the nodes they closely interact with. Our experiments show that the interaction graph greatly improves performance and that our model outperforms state-of-the-art methods with less input information (RGB clips only) and pre-processing (object detection on the target frame only).

II Related Work

II-A Driver Attention Prediction

As interest in (semi-)autonomous driving grows, researchers have paid more attention to the problem of predicting driver attention. This is typically posed as predicting pixel-level eye gaze in terms of likelihood maps [35, 40, 41, 44]. Fully convolutional networks [32], which were originally proposed for image segmentation, have been applied by Tawari et al. [40, 41] to similar dense spatial probability prediction tasks. Palazzi et al. [35] combine features from multiple branches with RGB frames, optical flow, and semantic segmentation to create the final prediction. Xia et al. [44] propose to handle critical situations by paying more attention to frames identified as crucial driving moments based on human drivers’ eye gaze movements.

However, using eye gaze prediction to estimate driver attention has limitations; for example, a driver can attend to multiple objects at once (through short-term memory, frequent saccades, or peripheral vision). To overcome these drawbacks, Gao et al. [12] collected an on-road driving video dataset with objects labeled as important or not by experienced drivers. They use an LSTM [17] that leverages goal information for object importance estimation. To achieve state-of-the-art performance, however, their technique requires multiple sources of input, including maneuver information about the planned path, RGB clips, optical flow, and location information, as well as complex pre-processing involving per-frame object detection and tracking.

II-B Object-level Attention

While many papers study saliency [18, 31, 48, 23] and eye gaze [19, 28, 49, 47, 20, 22, 4, 45] prediction, only a few focus on object-level attention [27, 37, 33, 2, 3, 50]. Lee et al. [27] apply hand-designed features for important object and people detection. Pirsiavash and Ramanan [37] and Ma et al. [33] detect objects in hands as a proxy for attended objects. Bertasius et al. [3] propose an unsupervised method for attended object prediction. The recent work of Zhang et al. [50] jointly identifies and locates the attended object: inspired by the foveated nature of the human vision system, class and location information are integrated through a self-validation module to further refine the prediction. However, these techniques target egocentric videos in which both head movement and the wearer's hands are available for making inferences, while in our setting the camera is fixed and hand information is not applicable.

II-C Graph Convolutional Networks

Graph convolutional networks (GCNs) [25, 9] have become popular in computer vision problems [43, 7, 29, 46] for their ability to capture long-range relationships in non-Euclidean space. Compared with traditional convolutional networks, which need to stack many layers to obtain a large receptive field, GCNs can efficiently model long-range relations with an adjacency matrix. Inspired by these papers, we apply interaction graphs to solve the problem of object importance estimation in on-road driving videos. To the best of our knowledge, ours is the first work to perform per-node predictions in computer vision problems with GCNs. Also, our model learns to predict the graph edges themselves based on nodes’ interactions, while existing work typically formulates the edges with hand-designed metrics such as spatial-temporal overlap [43].

Fig. 2: The architecture of our proposed model.

Features from the I3D-based feature extractor are first aggregated temporally by convolution along only the temporal dimension. Then ROI pooling and spatial max pooling are applied to obtain each object node’s feature vector. These features are updated by interacting with other nodes through graph convolution based on the learned interaction graph edges. Finally, the updated features are concatenated with the global descriptor and fed into a shared Multilayer Perceptron (MLP) for object importance estimation. The step of object proposal generation is omitted for clarity.

III Our Model

Given videos taken by forward-facing (egocentric) vehicle cameras, our aim is to estimate the importance of each object in the scene to the driver. Following the same problem setting as [12], we do online estimation, predicting object importance scores in the last frame of each video clip. We propose a novel framework that leverages the interactions among objects using graph convolution. An overview of our model is shown in Fig. 2.

Object proposals. Since online prediction is time-sensitive, we would like it to require as little pre-processing as possible. In contrast to the state-of-the-art method [12], which requires object detection on each frame of the input video as well as object tracking, our model only needs to run object detection on the target frame (the last frame in the online detection setting). We apply Mask RCNN [15], using an off-the-shelf ResNeXt-101 [16] 32x8d Mask RCNN model [13] trained on MSCOCO [30] without any further fine-tuning. Proposals of object classes other than person, bicycle, car, motorcycle, bus, train, truck, traffic light, and stop sign are removed since they are irrelevant to our task. For simplicity, dummy boxes (all box coordinates are rescaled with respect to the height and width of the input frame) are padded until each target frame contains the same number of proposals; we empirically set the number of object proposals per target frame after padding to 40. The dummy box covers the hood of the ego car, for which the image content remains almost unchanged across all of the samples. Object proposals were pre-generated to save time in our experiments, though during model design we tried our best to reduce pre-processing steps so as to meet the time requirements of online prediction.
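The padding step above can be sketched as follows. This is a minimal illustration: the proposal count of 40 comes from the text, but the exact dummy-box coordinates (here a strip near the bottom of the frame, standing in for the ego-car hood) are an assumption.

```python
import numpy as np

def pad_proposals(boxes, num_proposals=40, dummy_box=(0.0, 0.9, 1.0, 1.0)):
    """Pad normalized [x1, y1, x2, y2] boxes with copies of a dummy box
    until exactly `num_proposals` boxes remain per target frame."""
    boxes = list(boxes)[:num_proposals]          # truncate if too many
    while len(boxes) < num_proposals:
        boxes.append(dummy_box)
    return np.asarray(boxes, dtype=np.float32)   # shape: (num_proposals, 4)

padded = pad_proposals([(0.1, 0.2, 0.3, 0.4)])   # 1 detection -> 39 dummies
```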

Visual feature extractor. Since only RGB clips are used as input, making correct object importance predictions requires as strong a feature extractor as possible. We use Inception-V1 [39] I3D [6] because of its capacity for capturing both rich spatial and temporal features. Temporal motion information is important for reasoning about both the ego-vehicle’s and other objects’ intentions and future movements, while spatial appearance information helps determine the inherent characteristics of each object. Given a clip of contiguous RGB frames, we feed it through I3D and extract features from the last mixing layer, rgb_Mixed_5c; the feature dimensions follow from the architecture setting of Inception-V1 I3D.

Feature vectors of graph nodes. The extracted features are further aggregated temporally through a one-layer convolution along only the temporal dimension, with kernel size and stride chosen so that the temporal dimension is collapsed. From the resulting feature maps, the features for each object are pooled using ROI Pooling [15, 38], following [15] (the temporal dimension is removed). These per-object feature maps then go through a spatial max pooling layer, yielding a feature vector for each object node.
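A minimal sketch of the spatial max pooling step that turns each object's ROI-pooled feature map into its node vector. The sizes (40 proposals, 1024 channels, 7x7 maps) are illustrative placeholders, not values taken from the paper:

```python
import numpy as np

def node_features(roi_maps):
    """Spatial max pooling: collapse each object's ROI-pooled feature map,
    shaped (N, C, H, W), into an (N, C) matrix of node feature vectors."""
    return roi_maps.max(axis=(2, 3))

rois = np.random.rand(40, 1024, 7, 7).astype(np.float32)  # toy ROI features
feats = node_features(rois)  # one 1024-d feature vector per object node
```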

Graph edge formulation. The strength of an edge in our interaction graph should reflect how closely the two connected objects interact with each other. We propose to let the network itself learn to model the edge strength. Given node features f_i and f_j, an interaction score e_ij is first computed,

e_ij = w([φ(f_i) ∥ ψ(f_j)]),     (1)

where φ(f_i) = W_φ f_i and ψ(f_j) = W_ψ f_j are linear transformations with different learnable parameters W_φ and W_ψ, w is a linear transformation with learnable parameters w_e mapping the concatenated vector to a scalar, and ∥ denotes concatenation. With an interaction matrix E obtained by computing interaction scores for each pair of nodes, we calculate the edge matrix G by applying a row-wise softmax on E and adding an identity matrix I to force self attention,

G = softmax(E) + I.     (2)

In this way, the model learns an interaction graph itself based on each node’s features. The learned edge strength G_ij indicates how much node j will affect the updating of node i’s features through graph convolution and thus reflects how closely the two nodes interact.
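The interaction-score and softmax-plus-identity edge construction described above can be sketched in NumPy as follows. The node count, feature size, and random parameter matrices are toy stand-ins; in the model, the transformations are learned end to end:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 8                        # toy sizes; the model uses many more nodes
F = rng.normal(size=(N, d))        # node feature matrix
W_phi = rng.normal(size=(d, d))    # phi: transform for the first node
W_psi = rng.normal(size=(d, d))    # psi: transform for the second node
w = rng.normal(size=(2 * d,))      # final linear map to a scalar score

def edge_matrix(F):
    """e_ij = w([phi(f_i) || psi(f_j)]); G = row-softmax(E) + I."""
    phi, psi = F @ W_phi, F @ W_psi
    E = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            E[i, j] = np.concatenate([phi[i], psi[j]]) @ w
    expE = np.exp(E - E.max(axis=1, keepdims=True))   # stable row softmax
    return expE / expE.sum(axis=1, keepdims=True) + np.eye(N)

G = edge_matrix(F)   # directional: G is generally not symmetric
```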

We note that the interaction graph learned here is directional, as φ and ψ are different transformations. This is reasonable since how much node i affects node j is not necessarily the same as how much j affects i. For example, in Fig. 1, while the front car greatly reduces the importance of the pedestrians, the pedestrians have almost no influence on how important the front car is to the ego vehicle.

An alternative way of forming the graph is based on feature similarity between pairs of nodes, following [43]. We ran experiments comparing our interaction graph with such a similarity graph and found that the model is not able to learn well with similarity graphs. Our hypothesis is that similarity graphs are not suitable for our problem: objects sharing similar appearance or motion do not necessarily closely interact, and vice versa. For example, in Fig. 1, the front car and the pedestrians are very different in both appearance and motion, despite the close interaction between them. Since our proposed interaction graph yields better performance, we use it in our experiments.

Graph convolution. With the graph formulated, we perform graph convolution to allow nodes to interact with and update each other. One layer of graph convolution can be represented as

Z^(l+1) = σ(G Z^(l) W^(l)),     (3)

where G is the edge matrix, Z^(l) is the input node feature matrix, and W^(l) is the weight matrix of the l-th layer. σ is a non-linear function; we use ReLU in our experiments. We stack several graph convolutional layers and simply use the same output dimension for all of them. After the graph convolution, we obtain an updated node feature matrix Z.
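One such layer, and the stacking of two of them, can be sketched as follows. The edge matrix, weights, and sizes are toy values for illustration; in the model, G is the learned interaction graph and the weights are trained:

```python
import numpy as np

def gcn_layer(G, Z, W):
    """One graph convolution layer: Z' = ReLU(G @ Z @ W), where G is the
    (N, N) edge matrix, Z the (N, d_in) node features, and W the
    (d_in, d_out) layer weights."""
    return np.maximum(G @ Z @ W, 0.0)

rng = np.random.default_rng(1)
G = np.eye(4) + 0.1                               # toy dense edge matrix
Z = rng.normal(size=(4, 6))                       # toy node features
W1, W2 = rng.normal(size=(6, 6)), rng.normal(size=(6, 6))
Z_out = gcn_layer(G, gcn_layer(G, Z, W1), W2)     # two stacked layers
```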

Per-node object importance estimation. We now perform per-node object importance estimation. Although each node’s features are updated through the GCN to capture long-range relationships with other nodes, some global context may still be missing because object proposals cannot cover the whole image. Also, the object detector is not perfect, and useful objects may be missed (e.g., small objects such as traffic lights). To circumvent this problem, we first apply global average pooling on the features extracted by I3D to obtain a global descriptor g, which is then tiled once per node and concatenated with the updated node features Z. Each row z_i of the resulting feature matrix is fed into a shared Multilayer Perceptron (MLP) for the final importance score estimation,

s_i = MLP([z_i ∥ g]).     (4)
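The tiling-and-scoring step can be sketched as follows. The single linear layer followed by a sigmoid is a stand-in for the shared MLP, and the feature sizes are illustrative:

```python
import numpy as np

def score_nodes(Z, g, mlp):
    """Tile the global descriptor g once per node, concatenate it with
    each updated node feature, and apply a shared scoring function."""
    N = Z.shape[0]
    X = np.concatenate([Z, np.tile(g, (N, 1))], axis=1)  # (N, d + d_g)
    return mlp(X)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
rng = np.random.default_rng(2)
W = rng.normal(size=(10, 1))                  # toy weights: d=6, d_g=4
mlp = lambda X: sigmoid(X @ W).ravel()        # one-layer stand-in for the MLP
scores = score_nodes(rng.normal(size=(4, 6)), rng.normal(size=(4,)), mlp)
```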
Other implementation and training details

  • Missing detections. Since we apply multi-fold cross validation (as in [12]) in our experiments, each sample serves either as a training sample or as a testing sample. For samples whose ground truth objects are not detected by our object detector, we prepare two sets of proposals, one for training and one for testing. The set for testing is exactly the set of proposals obtained by padding dummy boxes to the Mask RCNN results, while the set for training is slightly different: we replace dummy boxes with the ground truth object boxes that were not detected, to avoid misleading the network. Note that missed detections of ground truth objects still occur when these samples serve as testing data, and thus our model can never reach 100% average precision.

  • Hard negative mining and loss functions.

    The dataset collected by Gao et al. [12] suffers from significant class imbalance: across its 8,611 samples (short video clips), only 4,268 objects are labeled as important. Considering our setting of 40 box nodes per sample, the ratio of the total number of positive boxes to negative boxes is almost 1:80. In contrast to [12], which applies weighted cross-entropy based on the numbers of positive and negative boxes, we address the problem with hard negative mining. For each batch, we first compute the loss for each node's importance estimate with binary cross-entropy,

    l_i = -(y_i log s_i + (1 - y_i) log(1 - s_i)),     (5)

    where y_i is the corresponding ground truth label,

    y_i = 1 if node i corresponds to an important object, and y_i = 0 otherwise.     (6)

    The losses for the negative nodes are then sorted, and we keep only the k greatest of them, along with the losses of all the positive nodes, to compute the total loss. Letting N_pos denote the total number of positive nodes, we empirically found that setting k proportional to N_pos works well (after experiments with different ratios). Letting P and Q denote the sets of indices of all the positive nodes and of the selected negative nodes whose losses are among the top k, the total loss is

    L = (1 / (|P| + |Q|)) Σ_{i ∈ P ∪ Q} l_i.     (7)

  • Other details.

    We implemented the model with Keras [8] and TensorFlow [1]. A batch normalization layer [21] was inserted after each layer of the feature extractor, with momentum 0.8. We used RGB clips as the only input. During training, the I3D feature extractor was initialized with weights pretrained on Kinetics [24] and ImageNet [10], while the other parts were randomly initialized. We trained the model with stochastic gradient descent with initial learning rate 0.0003, momentum 0.9, decay 0.0001, and L2 regularization 0.0005. The loss function to be optimized was Eq. 7. At inference time, the model predicts an importance score in [0, 1] for each of the 40 object proposals, resulting in one prediction per sample.
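The hard negative mining scheme described above can be sketched as follows. The ratio of kept negatives to positives (`neg_ratio=3`) is an illustrative value, since the exact ratio is not specified here:

```python
import numpy as np

def hard_negative_loss(scores, labels, neg_ratio=3):
    """Per-node binary cross-entropy with hard negative mining: keep all
    positive-node losses plus only the neg_ratio * N_pos largest
    negative-node losses, then average the kept losses."""
    eps = 1e-7
    losses = -(labels * np.log(scores + eps)
               + (1 - labels) * np.log(1 - scores + eps))
    pos, neg = losses[labels == 1], losses[labels == 0]
    k = min(len(neg), neg_ratio * max(len(pos), 1))
    hard_neg = np.sort(neg)[::-1][:k]          # k greatest negative losses
    return np.concatenate([pos, hard_neg]).mean()

loss = hard_negative_loss(np.array([0.9, 0.1, 0.8, 0.2]),
                          np.array([1, 0, 0, 0]))
```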

IV Experiments

IV-A Experiment Settings

Dataset. We evaluate our model on the on-road driving video dataset used by Gao et al. [12]. This dataset consists of 8,611 annotated samples, each of which is a 30-frame RGB clip (in our experiments we only use the last 16 frames) recording real-world driving in highly dynamic environments. The dataset focuses on real-time driving around road intersections and was annotated by experienced drivers with object importance estimates. Please refer to [12] for detailed statistics about the dataset. In our experiments, we follow the same data split as in [12] and also perform 3-fold cross validation.

Metrics. We compute 11-point average precision (AP) [11] for each data split and then take the average across the 3 splits. A predicted box is considered correct if its IOU with one of the unmatched ground truth boxes is over 0.5. We do not allow duplicate matches to the same ground truth box. Also, note that due to false negatives of the object detector, the upper-bound performance of our model can never reach 100%, as in [12].
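The matching rule used in the AP computation can be sketched with a minimal greedy implementation, assuming [x1, y1, x2, y2] boxes:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def match_predictions(preds, gts, thresh=0.5):
    """Mark each predicted box correct if it overlaps a still-unmatched
    ground truth box with IOU > thresh; no duplicate matches allowed."""
    unmatched, correct = list(range(len(gts))), []
    for p in preds:
        hit = next((k for k in unmatched if iou(p, gts[k]) > thresh), None)
        correct.append(hit is not None)
        if hit is not None:
            unmatched.remove(hit)
    return correct
```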

Models | Input length (frames) | Object detection | AP (split 1) | AP (split 2) | AP (split 3) | Avg
Our model | 16 | On the target frame | 68.5 | 73.8 | 71.9 | 71.4
Goal-Visual Model [12] | 30 | On each frame | 70.2 | 70.3 | 72.0 | 70.8
Visual Model [12] | 30 | On each frame | 68.1 | 68.1 | 70.9 | 69.0
Goal-Geometry Model [12] | 0 | N/A | 32.1 | 40.6 | 41.8 | 38.2
Visual Model-Image [12] | 1 | On the target frame | 35.5 | 42.1 | 32.6 | 36.7
TABLE I: Comparison of our model with the baselines in terms of required input, required pre-processing, and average precision on the three splits. Our model outperforms the state-of-the-art with the least input and the easiest pre-processing.

Baselines. We compare our model with the state-of-the-art goal-oriented model as well as other competitive baselines in [12]. These baselines have similar network structures based on LSTMs [17], while the main difference between them is the input features. The strongest one, Goal-Visual Model, takes as input an RGB clip of 30 frames along with optical flow, goal information, and location information. Tab. I compares our model with these baselines from the perspective of the inputs and pre-processing required.

Fig. 3: Qualitative comparison of our model (the 1st and 3rd rows) with the state-of-the-art Goal-Visual Model [12] (the 2nd and 4th rows). Blue rectangles are the predicted important object boxes and red circles represent the ground truth. In the upper half (a, b, c, and d) we visualize samples with vehicles as the important objects, while samples with important pedestrians are shown in the bottom half. In (a), (b), (c), (d), (g), and (h), our model outperforms [12] by making fewer false positive predictions, while in (e) and (f) it yields more true positive predictions.

IV-B Experiment Results

Qualitative results. We qualitatively compare our model with the Goal-Visual model in Fig. 3. While the Goal-Visual model often fails in scenarios with multiple potentially-important objects (Fig. 3 a, b, c, d, g, and h), our model significantly suppresses false positive predictions by leveraging object interactions through the proposed interaction graph. Notably, when generating the visualizations, our model uses a threshold of 0.3 while the Goal-Visual model uses 0.5, yet our model still makes fewer false positive predictions. Our hypothesis is that when multiple objects could possibly be important, a suppression procedure takes place among the corresponding nodes inside the interaction graph: these nodes interact with each other and make inferences based on node features to suppress false positives. With the interaction graph, our model becomes more cautious about predicting multiple objects as important, yet it does not lose the ability to predict multiple true positives. As shown in Fig. 3 (e) and (f), where pedestrians are crossing the road together, our model effectively captures their relation and assigns them similar importance scores.

Quantitative results. Our model and the baselines are quantitatively compared in Tab. I. Despite requiring the least input and the easiest pre-processing, our model still outperforms the state-of-the-art model in terms of average AP across the 3 splits. We also observe that our model significantly outperforms the Goal-Visual model on split 2, achieves comparable AP on split 3, but performs worse on split 1. The reason may be that the number of samples for which goal information significantly helps the final estimation varies across the 3 splits. As we show in the next section, in some cases it is almost impossible for the network to make correct estimates without knowing the goal of the ego-vehicle. In the future, we plan to have human annotators further analyze the 3 splits to investigate this.

Failure cases. Sample failure cases are visualized in Fig. 4. The first row shows failures caused by missed detections, which are not the fault of our model since the proposals are generated by an off-the-shelf third-party object detector. We hope that in the future this kind of failure can be resolved by better object detection models. The failures in the second row reflect the difficulty of online prediction, since the future is unknown. In both Fig. 4 (c) and (d), the ego-vehicle has been going straight and stops right at the intersection in the given frames. Without goal information, the model cannot know that the driver plans to turn right, and thus fails to predict the pedestrians crossing the road on the right as important. Solving this problem requires further incorporating goal information into our model. The last row shows samples with confusing ground truth: Fig. 4 (e) shows a common case in which the annotator labeled cars parked along the road as important even though no one is entering or driving them, while Fig. 4 (f) contains incorrect ground truth marking part of the road region as an important object.

Fig. 4: Sample failure cases of our model. Blue rectangles are the predicted important object boxes and red circles represent the ground truth. (a) and (b) are caused by missing detections. (c) and (d) are due to lack of goal information. (e) and (f) are samples with confusing ground truth.

IV-C Ablation Studies

We performed ablation studies to evaluate the effectiveness of the proposed interaction graph, as well as some of our model design choices.

Model variant | AP (split 1) | AP (split 2) | AP (split 3) | Avg
Our full model | 68.5 | 73.8 | 71.9 | 71.4
Remove interaction graph | 65.0 | 70.6 | 69.3 | 68.3
Remove global descriptor | 66.0 | 71.7 | 70.3 | 69.3
Remove self attention | 4.2 | 5.3 | 5.8 | 5.1
TABLE II: Results of ablation studies.

Interaction graph. We remove the interaction graph along with the graph convolution layers and directly concatenate the pooled features of each object proposal with the global descriptor. The concatenated features are fed into the shared MLP for the final estimation. The results are in Tab. II. The AP on each split drops by 3.5, 3.2, and 2.6 points, respectively, and the average AP falls below that of the Visual Model in [12], indicating that the interaction graph is important for the performance improvement.

Global descriptor. We remove the global descriptor and let the model estimate importance based only on the node features updated through the GCN. Performance drops on all three splits, as shown in Tab. II, which implies that the global descriptor is helpful as it provides useful global context. We also observe that the AP drops less when removing the global descriptor than when removing the interaction graph, suggesting that the interaction graph contributes more to the performance.

Self attention. The identity matrix in Eq. 2 is removed and the model is trained with the new graph. We found that the model is not able to learn without the forced self attention, as shown in Tab. II. Self attention is crucial because it ensures that each node retains its own characteristics while interacting with others during graph convolution.

V Conclusion

We propose a novel framework for online object importance estimation in on-road driving videos with interaction graphs. The graph edges are learned by the network itself based on the nodes’ features and reflect how closely the connected nodes interact with each other. Through graph convolutional layers, object nodes are able to interact with each other in the graph and update each other’s node features. Experiments show that our model outperforms the state-of-the-art with much less input and much easier pre-processing, and ablation studies demonstrate the effectiveness of the interaction graph as well as our other model design choices.

VI Acknowledgments

Part of this work was done while Zehua Zhang was an intern at Honda Research Institute, USA. This work was also partially supported by the National Science Foundation (CAREER IIS-1253549) and by the Indiana University Office of the Vice Provost for Research, the College of Arts and Sciences, and the School of Informatics, Computing, and Engineering through the Emerging Areas of Research Project Learning: Brains, Machines, and Children.


  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng (2016) TensorFlow: a system for large-scale machine learning. In USENIX Conference on Operating Systems Design and Implementation, pp. 265–283. Cited by: 3rd item.
  • [2] G. Bertasius, H. S. Park, S. X. Yu, and J. Shi (2016) First person action-object detection with egonet. arXiv preprint arXiv:1603.04908. Cited by: §II-B.
  • [3] G. Bertasius, H. Soo Park, S. X. Yu, and J. Shi (2017) Unsupervised learning of important objects from first-person videos. In IEEE International Conference on Computer Vision (ICCV), pp. 1956–1964. Cited by: §II-B.
  • [4] A. Borji, D. N. Sihite, and L. Itti (2012) Probabilistic learning of task-specific visual attention. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §II-B.
  • [5] M. C. Bowman, R. S. Johannson, and J. R. Flanagan (2009) Eye–hand coordination in a sequential target contact task. Experimental brain research 195 (2), pp. 273–283. Cited by: §I.
  • [6] J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6299–6308. Cited by: §I, §III.
  • [7] Y. Chen, M. Rohrbach, Z. Yan, Y. Shuicheng, J. Feng, and Y. Kalantidis (2019) Graph-based global reasoning networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 433–442. Cited by: §II-C.
  • [8] F. Chollet, J. Allaire, et al. (2017) R interface to keras. GitHub. Cited by: 3rd item.
  • [9] M. Defferrard, X. Bresson, and P. Vandergheynst (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3844–3852. Cited by: §II-C.
  • [10] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255. Cited by: 3rd item.
  • [11] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. Cited by: §IV-A.
  • [12] M. Gao, A. Tawari, and S. Martin (2019) Goal-oriented object importance estimation in on-road driving videos. arXiv preprint arXiv:1905.02848. Cited by: §I, §I, §II-A, 1st item, 2nd item, §III, §III, Fig. 3, §IV-A, §IV-A, §IV-A, §IV-C, TABLE I.
  • [13] R. Girshick, I. Radosavovic, G. Gkioxari, P. Dollár, and K. He (2018) Detectron. Cited by: §III.
  • [14] M. Hayhoe and D. Ballard (2005) Eye movements in natural behavior. Trends in cognitive sciences 9 (4), pp. 188–194. Cited by: §I.
  • [15] K. He, G. Gkioxari, P. Dollár, and R. Girshick (2017) Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV), pp. 2961–2969. Cited by: §III, §III.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §III.
  • [17] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §II-A, §IV-A.
  • [18] Q. Hou, M. Cheng, X. Hu, A. Borji, Z. Tu, and P. H. Torr (2017) Deeply supervised salient object detection with short connections. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3203–3212. Cited by: §II-B.
  • [19] X. Huang, C. Shen, X. Boix, and Q. Zhao (2015) SALICON: reducing the semantic gap in saliency prediction by adapting deep neural networks. In IEEE International Conference on Computer Vision (ICCV), pp. 262–270. Cited by: §II-B.
  • [20] Y. Huang, M. Cai, Z. Li, and Y. Sato (2018) Predicting gaze in egocentric video by learning task-dependent attention transition. In European Conference on Computer Vision (ECCV), pp. 754–769. Cited by: §II-B.
  • [21] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML). Cited by: 3rd item.
  • [22] L. Itti, C. Koch, and E. Niebur (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 20 (11), pp. 1254–1259. Cited by: §II-B.
  • [23] T. Judd, K. Ehinger, F. Durand, and A. Torralba (2009) Learning to predict where humans look. In IEEE International Conference on Computer Vision (ICCV), pp. 2106–2113. Cited by: §II-B.
  • [24] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. (2017) The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Cited by: 3rd item.
  • [25] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §II-C.
  • [26] S. Lazzari, D. Mottet, and J. Vercher (2009) Eye-hand coordination in rhythmical pointing. Journal of motor behavior 41 (4), pp. 294–304. Cited by: §I.
  • [27] Y. J. Lee, J. Ghosh, and K. Grauman (2012) Discovering important people and objects for egocentric video summarization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1346–1353. Cited by: §II-B.
  • [28] Y. Li, A. Fathi, and J. M. Rehg (2013) Learning to predict gaze in egocentric video. In IEEE International Conference on Computer Vision (ICCV). Cited by: §I, §II-B.
  • [29] Y. Li and A. Gupta (2018) Beyond grids: learning graph representations for visual recognition. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9225–9235. Cited by: §II-C.
  • [30] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In European Conference on Computer Vision (ECCV), pp. 740–755. Cited by: §III.
  • [31] N. Liu and J. Han (2016) DHSNet: deep hierarchical saliency network for salient object detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 678–686. Cited by: §II-B.
  • [32] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440. Cited by: §II-A.
  • [33] M. Ma, H. Fan, and K. M. Kitani (2016) Going deeper into first-person activity recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1894–1903. Cited by: §II-B.
  • [34] M. C. Mozer and M. Sitton (1998) Computational modeling of spatial attention. Attention 9, pp. 341–393. Cited by: §I.
  • [35] A. Palazzi, D. Abati, F. Solera, R. Cucchiara, et al. (2018) Predicting the driver’s focus of attention: the DR(eye)VE project. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 41 (7), pp. 1720–1733. Cited by: §I, §II-A.
  • [36] S. Perone, K. L. Madole, S. Ross-Sheehy, M. Carey, and L. M. Oakes (2008) The relation between infants’ activity with objects and attention to object appearance. Developmental psychology 44 (5), pp. 1242. Cited by: §I.
  • [37] H. Pirsiavash and D. Ramanan (2012) Detecting activities of daily living in first-person camera views. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2847–2854. Cited by: §II-B.
  • [38] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 91–99. Cited by: §III.
  • [39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9. Cited by: §III.
  • [40] A. Tawari and B. Kang (2017) A computational framework for driver’s visual attention using a fully convolutional architecture. In IEEE Intelligent Vehicles Symposium (IV), pp. 887–894. Cited by: §I, §II-A.
  • [41] A. Tawari, P. Mallela, and S. Martin (2018) Learning to attend to salient targets in driving videos using fully convolutional RNN. In International Conference on Intelligent Transportation Systems (ITSC), pp. 3225–3232. Cited by: §I, §II-A.
  • [42] E. D. Vidoni, J. S. McCarley, J. D. Edwards, and L. A. Boyd (2009) Manual and oculomotor performance develop contemporaneously but independently during continuous tracking. Experimental brain research 195 (4), pp. 611–620. Cited by: §I.
  • [43] X. Wang and A. Gupta (2018) Videos as space-time region graphs. In European Conference on Computer Vision (ECCV), pp. 399–417. Cited by: §II-C, §III.
  • [44] Y. Xia, D. Zhang, J. Kim, K. Nakayama, K. Zipser, and D. Whitney (2018) Predicting driver attention in critical situations. In Asian Conference on Computer Vision, pp. 658–674. Cited by: §I, §II-A.
  • [45] K. Yamada, Y. Sugano, T. Okabe, Y. Sato, A. Sugimoto, and K. Hiraki (2012) Attention prediction in egocentric video using motion and visual saliency. In Advances in Image and Video Technology, Y. Ho (Ed.), pp. 277–288. Cited by: §II-B.
  • [46] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh (2018) Graph r-cnn for scene graph generation. In European Conference on Computer Vision (ECCV), pp. 670–685. Cited by: §II-C.
  • [47] M. Zhang, K. T. Ma, J. H. Lim, Q. Zhao, and J. Feng (2017) Deep future gaze: gaze anticipation on egocentric videos using adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Cited by: §II-B.
  • [48] P. Zhang, D. Wang, H. Lu, H. Wang, and X. Ruan (2017) Amulet: aggregating multi-level convolutional features for salient object detection. In IEEE International Conference on Computer Vision (ICCV), pp. 202–211. Cited by: §II-B.
  • [49] Z. Zhang, S. Bambach, C. Yu, and D. J. Crandall (2018) From coarse attention to fine-grained gaze: a two-stage 3d fully convolutional network for predicting eye gaze in first person video. In British Machine Vision Conference (BMVC). Cited by: §II-B.
  • [50] Z. Zhang, C. Yu, and D. Crandall (2019) A self validation network for object-level human attention estimation. In Advances in Neural Information Processing Systems (NeurIPS), pp. 14702–14713. Cited by: §II-B.