Object Detection in Video with Spatial-temporal Context Aggregation

07/11/2019 ∙ by Hao Luo, et al. ∙ Horizon Robotics ∙ Huazhong University of Science & Technology

Recent cutting-edge feature aggregation paradigms for video object detection rely on inferring feature correspondence. The feature correspondence estimation problem is fundamentally difficult due to poor image quality, motion blur, etc., and its results are unstable. To avoid this problem, we propose a simple but effective feature aggregation framework that operates at the object proposal level. It learns to enhance each proposal's feature by modeling semantic and spatio-temporal relationships among object proposals both within a frame and across adjacent frames. Experiments are carried out on the ImageNet VID dataset. Without any bells and whistles, our method obtains 80.3% mAP on the ImageNet VID dataset, surpassing the previous state-of-the-art methods. The proposed feature aggregation mechanism improves the single-frame Faster R-CNN baseline by 5.8% mAP. Under the setting of no temporal post-processing, our method outperforms the previous state of the art by 1.4% mAP.


1 Introduction

Tremendous progress has been made on object detection in static images [2, 3, 4, 5], thanks to the success of deep convolutional neural networks (CNNs). However, object detection in video remains a much more challenging problem, since videos often suffer from image quality degeneration such as motion blur and video defocus, which has not been addressed adequately by state-of-the-art image-level detectors. On the other hand, videos contain richer context information in both the spatial and temporal domains, making it crucial to incorporate spatio-temporal context to improve detection accuracy. Feature aggregation is one important technique that has been proven effective in many video-related tasks, e.g., video action recognition [6, 7].

Following this direction, several previous approaches [8, 9, 10] apply feature aggregation to video object detection by learning a pixel-level aggregation strategy that improves the feature of a frame based on information from its adjacent frames. In addition, feature calibration is performed using motion information that is explicitly estimated by optical flow (e.g., FlowNet [11]) or implicitly modeled by deformable convolutional networks (DCN) [12].

Figure 1: Illustration of the differences between pixel/instance-level feature correspondence (the top and middle rows) and the proposed proposal-level spatio-temporal context (the bottom row). The former easily suffers from the emergence of new objects, video defocus, motion blur, occlusion, etc., while the latter captures the dependency among proposals of intra- and inter-frames.

However, a range of problems such as motion blur, illumination variation, occlusion, drastic translation and scale change can make pixel-wise motion estimation error-prone and jeopardize the above pixel-level feature aggregation methods, as illustrated in the top and middle rows of Fig. 1. The feature calibration strategies in pixel-level feature propagation methods attempt to address these issues but bring problems of their own: motion information predicted by CNNs (e.g., FlowNet and DCN) or other non-deep methods may be inaccurate, which makes the feature calibration unreliable, and the resulting ambiguous features confuse the detection network. With a task-specific loss in training, these problems are alleviated to some extent but not solved fundamentally. Besides, the above feature calibration methods usually focus only on the same object instance, while overlooking contextual relations among different object instances or categories. For example, pixel-wise feature aggregation cannot capture the fact that a mouse is likely to sit beside a computer and a boat is rarely on a lawn, which is no doubt beneficial to detection accuracy. From this perspective, capturing feature correspondence between frames is suboptimal for feature aggregation.

In this paper, we focus on improving video object detection accuracy with object-level spatial-temporal context aggregation (STCA), as illustrated in the bottom row of Fig. 1. A novel network is proposed to enhance the feature of each object proposal by aggregating the features of other object proposals within the same frame and across neighboring frames based on spatio-temporal relations. More precisely, we utilize the region proposal network (RPN) of [4] to generate proposals for each frame separately and extract a corresponding semantic feature for each proposal via ROI-Pooling [13]. These semantic features are fed into a proposal-level feature aggregation unit, which automatically learns the dependency between proposal features and aggregates them to form new proposal representations for more accurate recognition. Within the aggregation unit, the proposal features are regarded as tokens and the self-attention mechanism [14] is utilized to capture the dependency among them. Such feature-based dependency modeling focuses on the visual content of image regions only and ignores the position information among proposals. Hence, we explicitly add a spatial position representation to the attention to model the geometric relations among region proposals in one image. Besides the spatial position information, the temporal position information among proposals across time is also captured in our method. The proposed context dependency model is thus both content- and position-aware, making full use of spatial-temporal context. Proposal-level feature aggregation is performed according to the learned dependency. To make the feature expressive enough, we stack STCA units as in Fig. 2. Finally, the enhanced object feature is fed as input to the subsequent object detection module, i.e., the head net of Faster R-CNN. All steps are integrated in the proposed aggregation network, which can be trained in an end-to-end manner.

Compared with existing pixel-wise feature aggregation methods [8, 9, 10], our proposal-level STCA has the following merits: (1) STCA does not involve any hand-crafted design, e.g., the feature warping process in pixel-level feature propagation; it is fully trainable. (2) STCA circumvents the challenging problem of accurate feature correspondence estimation across video frames, which makes it robust to low-quality image frames. (3) STCA can capture both intra-frame contextual relations among different objects and inter-frame spatio-temporal relations, which has not been achieved by any previous video object detection method. (4) In the experiments, based on the single-frame Faster R-CNN, a two-tier STCA obtains state-of-the-art performance on ImageNet VID without any temporal post-processing.

2 Related Work

Object detection in static images.

Deep CNN based methods dominate quite a few computer vision tasks, including object detection [15, 13, 16, 4]. State-of-the-art image-level object detection algorithms fall into two types: one-stage detectors and two-stage detectors. Two-stage detectors mainly follow the R-CNN pipeline. R-CNN [15] first proposed to use a CNN to extract region features that are then fed into a subsequent recognition network, achieving a remarkable performance gain over traditional methods; however, it is time- and space-consuming. To address this, Fast R-CNN [13] shares computation across all proposals and directly conducts ROI-Pooling on the CNN feature maps of the whole image. To integrate all stages and make detector training end-to-end, RPN [4] was proposed to replace the hand-crafted proposal generation stage. Lately, a host of improved approaches have emerged, e.g., Mask R-CNN [3] and FPN [17], which further boost recognition accuracy. By contrast, one-stage detectors are usually faster and simpler, at some cost in accuracy. YOLO [2] is one of the classical one-stage detectors; it divides the image into grids and outputs a prediction at each grid cell. Without a separate proposal generation and region refinement stage, YOLO can run in real time. To handle object scale variation, SSD [18] performs detection on feature maps of various resolutions. RetinaNet [19] concentrates on the class imbalance problem. Equipped with well-designed techniques, modern one-stage detectors can obtain accuracy similar to two-stage detectors. On the whole, object detection in static images has made great progress. Besides, modeling contextual information within an image for better object detection is also an active topic. For example, the relation network [20] uses self-attention [14] to explore the relations between detection hypotheses for duplicate removal and obtains better detection accuracy at the same time.

However, in real-life scenarios such as autonomous driving, video usually suffers from image quality degeneration. If temporal context information is disregarded, even state-of-the-art image-level detectors perform poorly on challenges particular to video; taking temporal context into consideration makes the task considerably easier. This is the focus of our work: we take Faster R-CNN as our basic detector and extend it for video object detection.

Object detection in video.

Lacking large-scale annotated video datasets, video object detection benefited little from deep learning until the emergence of the ImageNet VID dataset. Since then, there has been increasing work exploring how to exploit temporal context for video object detection. Roughly, it falls into two categories: box score propagation and feature propagation [21]. Box score propagation concentrates on suppressing false positives and boosting the confidence of true positives. T-CNN [22] first obtains region proposals of the whole video clip with an image-level detector, then exploits optical flow to propagate proposals across neighboring frames and generates tubelets with a tracker; finally, box score propagation is carried out within each tubelet. Differently, in [21], short tubelets are first generated via a CNN and a high-quality linking technique is proposed to obtain better tubelets, within which box score propagation is again applied. Seq-NMS [23] performs box score propagation through dynamic programming. On the other hand, DFF [24] utilizes optical flow, predicted by FlowNet [11], to propagate features across frames and avoid costly feature extraction for non-key frames. FGFA [8] propagates features in the same way and furthermore learns a pixel-wise feature aggregation. Sharing the same spirit, STSN [9] substitutes deformable convolution for optical flow prediction. Based on FGFA, MANet [10] adds an extra object-level calibration for feature propagation, but additional instance id annotation is required: it takes optical flow and proposal coordinates as input, outputs the relative movements of identical instances, and eventually directly averages the predictions of identical instances across consecutive frames. All these methods utilize temporal information to regularize detection results via temporal post-processing techniques.

Our method belongs, to some extent, to the latter category, feature propagation. MANet [10] is partly similar to our work in that proposal-level aggregation is adopted. Yet, unlike MANet, we propose to enhance proposal-level features via a learnable spatial-temporal aggregation unit, which takes the features of all context regions intra- and inter-frame into account and regularizes the detection results during training. Moreover, no additional data annotations, such as instance id or optical flow, are required.

Attention for sequence modeling.

In the past few years, attention mechanisms have brought many distinct advances in sequence modeling and have become a standard component of sequence models. Self-attention (i.e., scaled dot-product attention [14]) reveals notable advantages over its RNN counterpart, including longer-range dependency modeling, better parallelism, and lower computational complexity. Considering self-attention's powerful capacity for context embedding and the similarity between general sequences and video, we adapt self-attention [14] to model the dependency among context regions within a video clip for the video object detection task. Self-attention has also been applied to video classification as the non-local network in [25].

Figure 2: Proposed spatial-temporal feature aggregation framework. For each input video frame, ResNet-101 is used to extract the feature map, followed by RPN which generates object proposals. Each dot represents the semantic feature of the corresponding proposal. The STCA operator is then applied in two stages to enhance each proposal's feature, using the semantic features, spatial coordinates and temporal positions of multiple related proposals as input, as elaborated in Eq. (8).

3 Method

3.1 Architecture Overview

We aim to make full use of spatial and temporal context for the video object detection task. The overall architecture is shown in Fig. 2. Faster R-CNN [4] is a classical static-image detector comprised of a feature extractor, a region proposal network (RPN) and a region-based detector. Our approach is built on it, sharing the feature extractor and RPN. In more detail, the proposed framework takes a set of three neighboring frames of a video as input. Each frame is first forwarded through the feature extractor (e.g., ResNet-101 [26]) to obtain convolutional feature maps. Then RPN is applied to generate proposals for each frame separately. Based on these proposals and feature maps, we obtain a semantic feature for each proposal of each frame via ROI-Pooling. For object detection in images, classification and regression for each proposal are conducted independently by two subsequent fully connected layers in Faster R-CNN. For object detection in video, however, ignoring temporal context is not rational, as discussed earlier. Accordingly, we take the proposals from all the input frames into consideration and augment this architecture with a spatial-temporal context aggregation (STCA) unit, which explicitly models the relationships between proposals from intra- and inter-frames. This unit takes semantic features, spatial positions and temporal positions as input and outputs an enhanced feature for each proposal. The enhanced feature is context-aware in both the spatial and temporal domains, which is of great importance for classification and regression; as a result, recognition accuracy is improved. In the following sections, we give a detailed description of the spatial-temporal context aggregation unit.
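To make the data flow concrete, the following is a minimal Python sketch of the pipeline described above, under stated assumptions: `backbone_and_rpn`, `stca_unit` and all shapes are hypothetical stand-ins (the stub ignores pixel content and replaces the attention of Secs. 3.2-3.5 with simple mean mixing), not the released implementation.

```python
import numpy as np

# Hypothetical shapes, not the paper's exact configuration.
N_PROPOSALS, FEAT_DIM = 300, 1024

def backbone_and_rpn(frame):
    """Stub for ResNet-101 + RPN + ROI-Pooling: returns per-proposal
    semantic features and box coordinates (this stub ignores the pixels)."""
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((N_PROPOSALS, FEAT_DIM)).astype(np.float32)
    boxes = rng.uniform(0, 600, (N_PROPOSALS, 4)).astype(np.float32)
    return feats, boxes

def stca_unit(target_feats, cand_feats):
    """Stub aggregation: in the real unit, attention over candidate
    proposals (Secs. 3.2-3.5) replaces this simple mean mixing."""
    return target_feats + cand_feats.mean(axis=0, keepdims=True)

# A training sample is a triplet: two key frames and one supporting frame.
frames = [np.zeros((600, 1000, 3), np.float32) for _ in range(3)]
key1, key2, support = [backbone_and_rpn(f) for f in frames]

# First STCA stage: key-frame proposals attend to key + supporting proposals.
enhanced_key1 = stca_unit(key1[0], np.concatenate([key1[0], support[0]]))
enhanced_key2 = stca_unit(key2[0], np.concatenate([key2[0], support[0]]))

# Second STCA stage: proposals of both key frames attend to each other,
# then the refined features go to the Faster R-CNN head for cls/reg.
all_keys = np.concatenate([enhanced_key1, enhanced_key2])
refined = stca_unit(all_keys, all_keys)
print(refined.shape)  # (600, 1024)
```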

3.2 Self-attention for Semantic Dependency Modeling

The effectiveness of self-attention in capturing short- and long-range dependencies by calculating attention weights between each pair of tokens has been proven in NLP tasks [14]. In computer vision, [25] and [20] have successfully applied it to video classification and to object detection in static images, respectively. We further extend it to model the intra- and inter-frame semantic dependencies among proposals for object detection in video.

Given two key frames $I_{k_1}$, $I_{k_2}$ and one supporting frame $I_s$, the corresponding semantic features of the $i$-th proposal are denoted as $f^{k_1}_i$, $f^{k_2}_i$ and $f^{s}_i$, respectively. In the initial spatial-temporal aggregation unit, we divide them into two groups, $\{f^{k_1}_i\}_{i=1}^{N} \cup \{f^{s}_j\}_{j=1}^{N}$ and $\{f^{k_2}_i\}_{i=1}^{N} \cup \{f^{s}_j\}_{j=1}^{N}$, where $N$ is the total number of proposals per frame. Below we explain how attention is calculated for the first group (the second group is handled in the same way). The content-based attention weight between the $i$-th target proposal in key frame $I_{k_1}$ and the $j$-th candidate proposal in the group is calculated as follows:

$$ w_{ij} = \frac{\langle W_Q f^{k_1}_i,\; W_K f_j \rangle}{\sqrt{d}}, \qquad (1) $$

where $d$ is the dimension of the semantic feature. To enhance the expressive power, the semantic features $f^{k_1}_i$ and $f_j$ are linearly transformed via the parameter matrices $W_Q$ and $W_K$, respectively. A softmax function is applied to normalize across all candidate proposals. The normalized attention weight is

$$ \tilde{w}_{ij} = \frac{\exp(w_{ij})}{\sum_{j'} \exp(w_{ij'})}. \qquad (2) $$

A residual connection is adopted, and eventually the enhanced feature $\hat{f}^{k_1}_i$ for each target proposal is

$$ \hat{f}^{k_1}_i = f^{k_1}_i + \sum_{j} \tilde{w}_{ij}\, W_V f_j, \qquad (3) $$

where $W_V$ is a third learnable transformation matrix.

In this aggregation unit, target proposals come only from the key frame, while candidate proposals come from both key and supporting frames. In the same way, the enhanced features for proposals in frame $I_{k_2}$ can be generated as well. In the next aggregation unit, we mix up all the proposals from the two key frames and take their enhanced features as input. At this stage, every proposal serves as a candidate proposal for every other proposal, and all parameter matrices are separate from those of the first unit. The semantic features of proposals from the key frames are thus updated twice, and these refined features are used for regression and classification.
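For concreteness, below is a minimal NumPy sketch of Eqs. (1)-(3) as reconstructed above. The names `W_q`, `W_k`, `W_v` and the toy shapes are illustrative assumptions; the actual implementation is in MXNet and is not reproduced here.

```python
import numpy as np

def semantic_attention(target_feats, cand_feats, W_q, W_k, W_v):
    """Minimal NumPy sketch of Eqs. (1)-(3): scaled dot-product attention
    over proposal features with a residual connection. Shapes:
    target_feats (M, d), cand_feats (K, d), W_* (d, d)."""
    d = target_feats.shape[1]
    q = target_feats @ W_q                      # queries from target proposals
    k = cand_feats @ W_k                        # keys from candidate proposals
    v = cand_feats @ W_v                        # values to be aggregated
    w = (q @ k.T) / np.sqrt(d)                  # Eq. (1): content-based weights
    w = np.exp(w - w.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)        # Eq. (2): softmax over candidates
    return target_feats + w @ v                 # Eq. (3): residual aggregation

# Toy usage: 300 key-frame proposals attend to key + supporting proposals.
rng = np.random.default_rng(0)
d = 1024
key = rng.standard_normal((300, d)).astype(np.float32)
sup = rng.standard_normal((300, d)).astype(np.float32)
W_q, W_k, W_v = (rng.standard_normal((d, d)).astype(np.float32) * 0.01 for _ in range(3))
enhanced = semantic_attention(key, np.concatenate([key, sup]), W_q, W_k, W_v)
print(enhanced.shape)  # (300, 1024)
```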

3.3 Spatial Position Representation

In [20], spatial position plays a key role in modeling the spatial dependency between objects for image recognition. Video has richer spatial context than a still image, so spatial position should be of great importance for video object detection as well. Consequently, we add spatial constraints to the attention in a similar way as [20]. Concretely, given a target proposal $i$ with box $(x_i, y_i, w_i, h_i)$ and a candidate proposal $j$ with box $(x_j, y_j, w_j, h_j)$, the relative spatial position is first transformed into

$$ g_{ij} = \left( \log\frac{|x_i - x_j|}{w_i},\; \log\frac{|y_i - y_j|}{h_i},\; \log\frac{w_j}{w_i},\; \log\frac{h_j}{h_i} \right) $$

to make the representation invariant to scale and translation. Then sine and cosine functions of varying wavelengths are applied to each element $g$ of $g_{ij}$, as

$$ \mathcal{E}_{2k}(g) = \sin\!\left(\frac{g}{10000^{2k/d_{pos}}}\right), \quad \mathcal{E}_{2k+1}(g) = \cos\!\left(\frac{g}{10000^{2k/d_{pos}}}\right), \qquad (4) $$

where $k$ ranges from $0$ to $d_{pos}/2 - 1$, so each embedding vector is of dimension $d_{pos}$. The embedding vector of $g_{ij}$, i.e., the concatenation $\mathcal{E}(g_{ij})$ of the embeddings of its four elements, is then linearly transformed by a parameter matrix $W_S$. Eventually, the spatial position representation between target proposal $i$ and candidate proposal $j$ is

$$ s_{ij} = W_S\, \mathcal{E}(g_{ij}). \qquad (5) $$

3.4 Temporal Position Representation

Although the spatial position representation imposes spatial constraints on the proposed network, it may also confuse the network if temporal position is disregarded. Suppose there are two proposals with identical spatial positions but from two different video frames: the spatial position representations between them and any other target proposal would be identical; in other words, these two proposals would be treated as the same one. Hence, we also incorporate temporal constraints in the attention. The number of frames in a video is usually not fixed, which makes absolute frame indices unsuitable for modeling temporal position. Thus, we use sine and cosine functions (as in Eq. (4)) to encode the relative temporal distance $\Delta t_{ij}$ between the frame indices of two proposals $i$ and $j$. The encoded vector $\mathcal{E}(\Delta t_{ij})$ is linearly transformed by a parameter matrix $W_T$, and the temporal position representation is

$$ t_{ij} = W_T\, \mathcal{E}(\Delta t_{ij}). \qquad (6) $$
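The position encodings of Secs. 3.3 and 3.4 can be sketched as follows. This is a minimal NumPy illustration under assumptions: the wavelength constant of 10000 follows [14], the small epsilon inside the logarithms and the helper names are ours, and $W_S$, $W_T$ are drawn randomly only to show the shapes.

```python
import numpy as np

def sincos_embed(x, d_pos=16, wavelength=10000.0):
    """Eq. (4)-style sinusoidal embedding of a scalar (or array of scalars);
    the wavelength constant follows [14] and is an assumption here."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float32))
    k = np.arange(d_pos // 2, dtype=np.float32)
    freq = 1.0 / (wavelength ** (2.0 * k / d_pos))           # (d_pos/2,)
    angles = x[..., None] * freq                             # (..., d_pos/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def spatial_embed(box_i, box_j):
    """Relative geometry of two boxes (x, y, w, h), embedded per element and
    concatenated; scale- and translation-invariant as in Sec. 3.3.
    A small epsilon avoids log(0) for overlapping centers."""
    xi, yi, wi, hi = box_i
    xj, yj, wj, hj = box_j
    g = np.array([np.log(abs(xi - xj) / wi + 1e-3),
                  np.log(abs(yi - yj) / hi + 1e-3),
                  np.log(wj / wi), np.log(hj / hi)], dtype=np.float32)
    return sincos_embed(g).reshape(-1)        # dimension 4 * d_pos

def temporal_embed(frame_i, frame_j):
    """Embedding of the relative temporal distance between two proposals."""
    return sincos_embed(float(frame_i - frame_j)).reshape(-1)

# Eqs. (5) and (6): linear projections W_S and W_T map the embeddings
# to scalar position terms for each target/candidate pair.
rng = np.random.default_rng(0)
W_S = rng.standard_normal(64).astype(np.float32) * 0.01   # 4 * d_pos = 64
W_T = rng.standard_normal(16).astype(np.float32) * 0.01   # d_pos = 16
s_ij = spatial_embed((100, 80, 50, 40), (120, 90, 60, 45)) @ W_S
t_ij = temporal_embed(10, 7) @ W_T
print(float(s_ij), float(t_ij))
```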

3.5 Spatial-temporal Context Aggregation

As introduced earlier, our STCA is responsible for the dependency modeling and feature aggregation among proposals; we unify its mathematical description here. For clarity, we denote the semantic feature and spatial-temporal position of a target proposal as $f^i$ and $p^i$, and those of the set of corresponding candidate proposals as $\{f_j\}$ and $\{p_j\}$, respectively. The superscript $i$ represents the $i$-th proposal in a key frame.

Concretely, we can calculate the content-based weight $w_{ij}$ following Eq. (1), the spatial position representation $s_{ij}$ following Eq. (5), and the temporal position representation $t_{ij}$ following Eq. (6). Then, the attention weight, or dependency, within STCA between $f^i$ and $f_j$ is

$$ \tilde{w}_{ij} = \frac{\max(0, s_{ij}) \cdot \exp(w_{ij} + t_{ij})}{\sum_{j'} \max(0, s_{ij'}) \cdot \exp(w_{ij'} + t_{ij'})}, \qquad (7) $$

in which a $\max(0, \cdot)$ function is exploited to balance the spatial position representation against the other terms, following [20]. After obtaining the attention weight between $f^i$ and each proposal in $\{f_j\}$, the enhanced feature of the target proposal, denoted as $\hat{f}^i$, can be calculated following the formulation in Eq. (3). In summary, the proposed STCA operation can be written as

$$ \hat{f}^i = \mathrm{STCA}\big(f^i, p^i, \{f_j\}, \{p_j\}\big) = f^i + \sum_{j} \tilde{w}_{ij}\, W_V f_j. \qquad (8) $$
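Putting the pieces together, the following sketch implements one STCA unit under the reconstruction of Eqs. (1)-(8) given above; the parameter shapes, epsilon terms and max-subtraction for numerical stability are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def stca(target_feats, target_boxes, target_frames,
         cand_feats, cand_boxes, cand_frames,
         W_q, W_k, W_v, W_s, W_t, d_pos=16):
    """Sketch of one STCA unit: content attention (Eq. 1) modulated by the
    spatial (Eq. 5) and temporal (Eq. 6) position terms, combined as in the
    reconstruction of Eq. (7) and aggregated with a residual (Eq. 8)."""
    d = target_feats.shape[1]
    # Eq. (1): content-based weights.
    w = (target_feats @ W_q) @ (cand_feats @ W_k).T / np.sqrt(d)     # (M, K)

    def embed(x):  # Eq. (4)-style sinusoidal embedding along the last axis
        k = np.arange(d_pos // 2)
        ang = x[..., None] / (10000.0 ** (2.0 * k / d_pos))
        return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

    # Eq. (5): spatial term from relative box geometry, per pair.
    xi, yi, wi, hi = [target_boxes[:, c][:, None] for c in range(4)]
    xj, yj, wj, hj = [cand_boxes[:, c][None, :] for c in range(4)]
    g = np.stack([np.log(np.abs(xi - xj) / wi + 1e-3),
                  np.log(np.abs(yi - yj) / hi + 1e-3),
                  np.log(wj / wi), np.log(hj / hi)], axis=-1)         # (M, K, 4)
    s = embed(g).reshape(*g.shape[:2], -1) @ W_s                      # (M, K)

    # Eq. (6): temporal term from relative frame distance.
    t = embed(target_frames[:, None] - cand_frames[None, :]) @ W_t    # (M, K)

    # Eq. (7): combined, normalized dependency.
    num = np.maximum(s, 0.0) * np.exp(w + t - (w + t).max(axis=1, keepdims=True))
    att = num / (num.sum(axis=1, keepdims=True) + 1e-6)

    # Eq. (8): residual aggregation of transformed candidate features.
    return target_feats + att @ (cand_feats @ W_v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, M, K_c = 1024, 4, 6
    out = stca(rng.standard_normal((M, d)), rng.uniform(10, 500, (M, 4)),
               np.zeros(M), rng.standard_normal((K_c, d)),
               rng.uniform(10, 500, (K_c, 4)), rng.integers(0, 3, K_c),
               *(rng.standard_normal((d, d)) * 0.01 for _ in range(3)),
               rng.standard_normal(64) * 0.01, rng.standard_normal(16) * 0.01)
    print(out.shape)  # (4, 1024)
```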

3.6 Training & Inference

Training. During training, we take Faster R-CNN [4] as our basic detector. We observe similar detection accuracy (74.51% mAP vs. 74.50% mAP) between Faster R-CNN and R-FCN [16] (we use the pre-trained model released on GitHub). The stride of the convolutional layers in Res5 is set to 1, which changes the effective feature stride from 32 to 16, and dilated convolutions are used in Res5 to maintain the receptive field. A convolutional layer with 256 channels is added on the Res5 feature maps to reduce the dimension. The proposed network is directly trained on a mixture of DET and VID data; see Section 4.1 for details. To get a triplet of images for training, one key frame is first sampled, and then the other key frame and the supporting frame are randomly sampled near the first key frame within a range of [-9, 9]. Because DET only has still images, we cannot sample sequential images strictly; in such cases, we copy the image repeatedly and treat the copies as sequential frames. The dimension $d$ of the semantic feature is 1024 and $d_{pos}$ is 16.
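As a small illustration of the triplet sampling described above, the sketch below draws one key frame and then two more frames from its [-9, 9] neighborhood; the function name and the boundary clipping are our assumptions.

```python
import random

def sample_training_triplet(video_frames, max_offset=9):
    """Sample (key1, key2, support) frame indices for one training example.
    For still images (e.g., DET), pass a single-frame list: the same index
    is then returned three times, mimicking the copy strategy in Sec. 3.6."""
    n = len(video_frames)
    key1 = random.randrange(n)
    if n == 1:
        return key1, key1, key1
    # The second key frame and the supporting frame are drawn uniformly
    # from the temporal neighborhood [-max_offset, +max_offset] of key1,
    # clipped to the video boundary.
    def near(idx):
        lo, hi = max(0, idx - max_offset), min(n - 1, idx + max_offset)
        return random.randint(lo, hi)
    return key1, near(key1), near(key1)

# Toy usage with a 300-frame video represented by its indices.
print(sample_training_triplet(list(range(300))))
```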

Input: sequence of video frames $\{I_t\}_{t=1}^{T}$, inference window size $K$.
Output: bounding boxes $B_t$ of key frame $I_t$.
1: for each frame $I_{t'}$ in the window of size $K$ centered at $I_t$ (indices clamped to the video boundary) do    ▷ stage 1: fill feature buffer $\mathcal{F}_1$
2:     Generate proposals for frame $I_{t'}$ with RPN.
3:     Extract semantic features for the proposals via ROI-Pooling.
4:     Assign temporal positions to the proposals and store them in $\mathcal{F}_1$.
5: end for
6: for $i = 1$ to $N$ do    ▷ stage 2: fill feature buffer $\mathcal{F}_2$
7:     Prepare the $i$-th target proposal of the key frame from $\mathcal{F}_1$.
8:     Prepare its candidate proposals from $\mathcal{F}_1$.
9:     Compute the enhanced feature with the first STCA unit, Eq. (8), and store it in $\mathcal{F}_2$.
10: end for
11: for $i = 1$ to $N$ do    ▷ update features with the second STCA unit
12:     Compute the refined feature of the $i$-th key-frame proposal using the candidates in $\mathcal{F}_2$.
13: end for
14: ▷ stage 3: detection
15: Run the detection head on the refined features.
16: Add the detection results to $B_t$.
Algorithm 1 Inference algorithm of the STCA network for video object detection

Inference. Consistent with training, during inference we take the proposals of a key frame and its $K-1$ adjacent supporting frames (resulting in $K$ frames in total) as candidate proposals to assist the target proposals of the key frame. More details of the inference phase are summarized in Algorithm 1. The temporal receptive field for a key frame is thus $K$ frames. The procedure can be implemented easily; for efficiency, we decouple it into three submodules and adopt two feature buffers to store the shared features of the first two stages, separately. For frames around the video boundary, we pad the buffer with the boundary frame for convenience. Unless otherwise specified, $K$ is set to 31 by default. The number of proposals $N$ of each frame is set to 300. We investigate how performance varies with the hyper-parameters $K$ and $N$ in the ablation studies. The NMS threshold is 0.7 for RPN and 0.5 for the final detection results.
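The window construction with boundary padding can be sketched as follows; `window_indices` is a hypothetical helper, and in the real pipeline the buffered proposal features of these frames would serve as candidate proposals for the key frame.

```python
import numpy as np

def window_indices(t, num_frames, K=31):
    """Frame indices that feed the STCA for key frame t: a window of size K
    centered at t, clamped to the video boundary (boundary padding)."""
    half = K // 2
    idx = np.arange(t - half, t + half + 1)
    return np.clip(idx, 0, num_frames - 1)

# Toy usage: with a 300-frame video, the window for the first key frame
# repeats frame 0, mimicking the boundary padding described above.
print(window_indices(0, 300))      # [0 0 ... 0 1 2 ... 15]
print(window_indices(150, 300))    # [135 ... 165]
```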

4 Experiments

4.1 Experiment Setup

Dataset. Following most previous state-of-the-art methods, we conduct all experiments on the ImageNet VID dataset [1] (VID). VID is a large-scale video dataset containing 5354 videos in total, split into training, validation, and testing subsets of 3862, 555, and 937 videos, respectively. Each video has about 300 frames on average, and bounding box and category annotations are provided for each frame. There are 30 object classes in ImageNet VID, all of which are included in the 200 classes of the ImageNet DET dataset (DET). Following the protocols in [22, 24], images in DET whose classes overlap with VID are also used as training data, which is a common practice. The classical detection evaluation metric, mean average precision (mAP), is used, and all results are reported on the VID validation set because the annotations of the test set are not public.

Implementation details.

Parameters of the backbone network (ResNet-101) are initialized from a model pre-trained on ImageNet, and all new layers are randomly initialized by drawing weights from a zero-mean Gaussian distribution with standard deviation 0.01. For anchors in RPN, we use 3 scales and 3 aspect ratios of 1:1, 1:2, and 2:1, resulting in 9 anchors for each spatial location. Images are resized to a shorter side of 600 pixels for both training and inference. We adopt the SGD optimizer with momentum 0.9 and train for 120K iterations. The learning rate is 1e-3 for the first 80K iterations and 1e-4 for the last 40K iterations. Weight decay is 5e-4 and the batch size is set to 4, with each GPU holding one mini-batch. OHEM [27] with 128 ROIs is used. All our implementations are based on the MXNet [28] deep learning framework.
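For reference, a sketch of the 9-anchor generation is given below. The three box areas (128^2, 256^2, 512^2 pixels) are the common Faster R-CNN defaults and are an assumption here, since the exact values are not recoverable from the text.

```python
import numpy as np

def make_anchors(scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Generate the 3x3 = 9 base anchors (w, h) for one spatial location.
    scales are sqrt(area); ratios are height/width (1:1, 1:2, 2:1).
    The scale values are assumed defaults, not confirmed by the paper."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s / np.sqrt(r)
            h = s * np.sqrt(r)
            anchors.append((w, h))
    return np.array(anchors)

print(make_anchors().round(1))   # 9 anchors per spatial location
```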

4.2 Ablation Study

Methods   Semantic dep.   Spatial pos.   Temporal pos.   mAP (%)
(a)       -               -              -               74.5
(b)       yes             -              -               77.4 (+2.9)
(c)       yes             -              -               79.3 (+4.8)
(d)       yes             yes            -               79.8 (+5.3)
(e)       yes             yes            yes             80.3 (+5.8)
Table 1: Accuracy (mAP in %) for the baseline and various model variants, checking semantic dependency, spatial position and temporal position. (a) is the single-frame baseline. The inference window size of variant (b) is 1; for variants (c)-(e), the default inference window size is 31. Relative gains over the single-frame baseline (a) are given in parentheses.

Model variation.

As shown in Table 1, to inspect the effectiveness of intra-frame and inter-frame context aggregation, as well as the proposed feature components (semantic dependency, spatial position and temporal position), we measure the performance (mAP) of several variants of our method on the VID validation set. The detailed discussion is as follows.

  • (a) This is the basic Faster R-CNN single-frame baseline, which treats all video frames as static images. It is a strong baseline, obtaining 74.5% mAP with the ResNet-101 backbone. To verify the effectiveness of each component, we do not exploit techniques such as temporal post-processing to improve the recognition accuracy.

  • (b) This setting is the basic framework of the proposed method without the spatial and temporal position representations. The training process is as described in Section 3.6. For the testing phase, the inference window size is set to 1, i.e., only one frame is taken as input, the same as the single-frame baseline (a). The corresponding mAP of method (b) is 77.4%, which surpasses the single-frame baseline (a) by a large margin. This notable improvement (about 3%) shows the validity of the proposed basic framework (intra-frame context aggregation with semantic features only) for video object detection.

  • (c) This method shares the same settings as method (b), but the inference window size is the default of 31. Compared with method (b), there is a 1.9% improvement in mAP. This significant improvement shows that more candidate proposals from adjacent frames boost the performance; it also implies that inter-frame context contributes substantially, which we explore further in a subsequent subsection.

  • (d) This variant is built on the basic framework (c) and equipped with the spatial position representation. Despite the strong performance achieved by method (c), method (d) further improves the result to 79.8%.

  • (e) This is the full framework of our method. All contexts, including semantic features and spatial-temporal positions, are involved. It achieves an additional increase of 0.5% compared with method (d); relative to the single-frame baseline, the increase is about 6% (5.8% mAP), which indicates the effectiveness of our method.

Figure 3: Detection results (mAP) vs. inference window size for state-of-the-art pixel-wise aggregation methods [9, 10] and ours. We use the results reported in the corresponding papers.
Figure 4: Example detection results of the single-frame baseline, MANet [10] and our method. Each row shows the results on a sampled sequential video clip (only the top-scoring box around each object is shown). Red and brown boxes represent correct and incorrect detections, respectively. Our method outperforms the single-frame baseline and MANet in the presence of motion blur and video defocus. Besides, referring to the detection confidences in the figure, superior spatial-temporal consistency can be observed for our method as well.
Figure 5: Examples of the dependency (in Eq. (8)) between target proposals and candidate proposals from adjacent frames in the last aggregation unit. Red and blue boxes are target proposals in a key frame and candidate proposals in its neighboring frames, respectively. Each row shows one of the target proposals and its corresponding candidate proposals with the highest dependency in both the key frame and the neighboring frames.

Inference window size.

The inference window size $K$ is a key hyper-parameter, and we investigate its effect on detection results in Fig. 3. In addition, Fig. 3 shows the results of the pixel-wise aggregation methods [9, 10] for comparison. First, we can clearly see that the mAP of our method keeps rising as the inference window size increases, which can be observed for the other methods as well. However, MANet [10] and STSN [9] stop improving at window sizes of 13 and 27, respectively, whereas our method shows no obvious sign of saturating until around 40 and reaches 80.5% mAP when $K$ = 47. Besides, our method has the best performance throughout. This indicates that our method captures both short- and long-term dependencies among proposals in the spatial-temporal domain well. Unless otherwise stated, the default value of $K$ is 31 and the corresponding mAP is 80.3%.

Number of proposals.

We investigate how the number of proposals ($N$) for each frame affects the performance of our proposed method. The mAP is 80.3% for $N$ = 300. When $N$ is 128, the mAP is 79.7%, slightly reduced by 0.6%. With more proposals, more context can be mined; hence, the number of proposals matters in our framework.

K         0      1      7      13     19     25     31
N = 128   93.6   105.7  117.3  128.5  138.8  150.4  161.0
N = 300   96.0   113.7  153.7  195.9  238.0  279.5  322.2
Table 2: Inference window size K vs. runtime (in ms) for our STCA method. All data loading and post-processing time is included. K = 0 denotes the basic image detector, namely Faster R-CNN. The runtime evaluation is carried out on a single workstation with a Titan X (Maxwell) GPU.

Computational overhead.

Compared with the single-frame baseline, the additional computational overhead of the proposed method comes from the STCA unit and is mainly affected by the inference window size and the number of proposals. For a detailed analysis, we evaluate the runtime of the proposed method under various settings in Table 2. On average, the added time cost of the STCA units is about 7.3 ms. For previous state-of-the-art methods, e.g., MANet [10] and STMN [29], the additional computation costs are 9.6 ms and 28 ms, respectively. Also note that the runtimes of STMN [29] and our STCA are evaluated on a Titan X Maxwell GPU, while MANet [10] uses the more powerful Titan X Pascal GPU. Thus, our STCA is also superior to MANet [10] and STMN [29] in speed. In addition, decreasing $N$ from 300 to 128 greatly reduces the time cost, while the mAP decline (0.6%) is acceptable. Thus, the hyper-parameters $K$ and $N$ can be adjusted within reason to achieve a good trade-off.

Methods            Optical Flow / ID   mAP (%) w/o Temp. Post-Proc.   mAP (%) w/ Temp. Post-Proc.
D&T [30]           ID                  75.8                           79.8
ST-Lattice [31]    -                   -                              79.6
FGFA [8]           Flow                76.3                           78.4
MANet [10]         Flow + ID           78.1                           80.3
STSN [9]           -                   78.9                           80.4
STMN [29]          -                   -                              80.5
STCA (Ours)        -                   80.3                           80.6
Table 3: Comparison of our proposed method with state-of-the-art methods on the VID validation set. Detection results (mAP in %) with and without temporal post-processing are reported. Temporal post-processing includes box-level techniques such as Seq-NMS and tubelet rescoring. For STMN [29] and ST-Lattice [31], only the result with temporal post-processing is reported in their papers. The temporal post-processing adopted for our method is Seq-NMS. ID is short for instance id.

4.3 Comparison with State-of-the-art Methods

We compare our method with the state-of-the-art pixel-wise feature aggregation methods [8, 9, 10, 29] and other relevant methods [30, 31] for video object detection in Table 3. For clarity, we analyze the results in terms of temporal post-processing and extra data annotation.

Temporal post-processing. As is well known, objects in videos exhibit spatial and temporal consistency; objects in neighboring video frames do not change dramatically in either the spatial or temporal domain. Hence, the detection scores of boxes within the same tubelet should be smooth. Temporal post-processing techniques (e.g., Seq-NMS [23]) are proposed to exploit this. Without temporal post-processing, STSN [9] achieves the best performance, 78.9% mAP, among all previous state-of-the-art methods; our method obtains 80.3% mAP, about 1.4% higher. When temporal post-processing is applied, the gain ranges from 1.5% to 4% for the previous state of the art. D&T [30] and ST-Lattice [31] adopt well-designed tubelet rescoring techniques, and the others use Seq-NMS. For our method, Seq-NMS pushes the result to a new state of the art, but the improvement is only 0.3% mAP, and in our experiments we observe that this gap continues to decrease as the inference window size increases. To some degree, this demonstrates that our method learns temporal consistency during training and inference, which is consistent with our design; see Fig. 4 for example detection results. In some realistic situations, e.g., crowded scenes, the heuristic Seq-NMS or other hand-crafted techniques may fail, whereas our method makes full use of spatial-temporal context and is trainable, which may be more promising and robust.

Extra data annotation. Data annotation is usually costly, especially subtle annotation such as optical flow. D&T [30] obtains 79.8% mAP using instance id only. With flow annotation, FGFA [8] merely obtains 78.4% mAP, which is 1.4% lower than D&T [30]. MANet [10] achieves 80.3% mAP with both instance id and flow annotation. Without the need for any additional annotation, our proposed method obtains 80.6% mAP and exceeds the best of them, which shows the superiority of our method. STSN [9] and STMN [29] achieve 80.4% and 80.5% mAP, respectively, under the same condition; compared with them, our method is also better, albeit by a marginal gain. Note that the significant gain of our method comes not from temporal post-processing but from the proposed framework.

A visualization of examples of the dependency among proposals from different video frames is given in Fig. 5. It can be observed that our STCA learns to find context regions that boost the feature quality across the spatial-temporal domain, regardless of the spatial or temporal distance.

5 Discussions

Why does Seq-NMS bring less gain? In video, low image quality often causes weak detections, and what Seq-NMS [23] does is to raise the confidence of the weak detections according to the high-scoring boxes within the same tracklet, which can also be viewed as enforcing temporal consistency. But imagine a case where weak detections appear rarely; then Seq-NMS [23] brings no gain. STCA makes exactly that happen. Specifically, the feature aggregation strategy in STCA is learnable, which means features can be aggregated adaptively and robustly. After aggregation, a proposal's feature becomes discriminative enough, especially in low image quality cases, and the classification gets easier. Hence, it is natural that Seq-NMS [23] on top of STCA obtains less gain as $K$ gets larger: the gain of Seq-NMS [23] is 1.4% when $K$ is 1 and 0.3% when $K$ is 31. In the extreme case where $K$ equals the video length, STCA obtains global contextual information in both the spatial and temporal dimensions, which has a similar effect to Seq-NMS [23]. Finally, in Fig. 4, we can easily observe that the box scores are confident and the temporal consistency is well maintained by STCA.
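To illustrate the rescoring idea referred to above, the sketch below lifts weak detections in an already-linked tracklet using its high-scoring boxes; it is a simplification of Seq-NMS [23], which additionally builds the tracklet itself via dynamic programming.

```python
import numpy as np

def rescore_tracklet(scores, mode="avg"):
    """Simplified illustration of the rescoring step discussed above:
    given the detection scores of one already-linked tracklet, weak
    detections inherit confidence from strong ones. The full Seq-NMS [23]
    additionally finds the best-scoring link via dynamic programming."""
    scores = np.asarray(scores, dtype=np.float32)
    new_score = scores.mean() if mode == "avg" else scores.max()
    return np.full_like(scores, new_score)

# A blurred middle frame gets lifted by its confident neighbors.
print(rescore_tracklet([0.92, 0.35, 0.90]))   # -> [0.72 0.72 0.72] (avg)
```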

6 Conclusion and Future Work

In this work, we propose a novel spatial-temporal context aggregation network for video object detection, which is conceptually simple yet effective. In comparison with pixel-wise aggregation methods, our method operates at the proposal level and is naturally robust. Besides, our method can learn the dependency among proposals across the spatial-temporal domain, which shows the potential to be end-to-end without extra temporal post-processing. More complex designs of the framework (e.g., adopting more units or the transformer mechanism [32]) may be investigated in future work.

References

  • [1] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2009.
  • [2] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779–788, 2016.
  • [3] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
  • [4] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
  • [5] Lichao Huang, Yi Yang, Yafeng Deng, and Yinan Yu. Densebox: Unifying landmark localization with end to end object detection. arXiv preprint arXiv:1509.04874, 2015.
  • [6] Amlan Kar, Nishant Rai, Karan Sikka, and Gaurav Sharma. Adascan: Adaptive scan pooling in deep convolutional neural networks for human action recognition in videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3376–3385, 2017.
  • [7] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014.
  • [8] Xizhou Zhu, Yujie Wang, Jifeng Dai, Lu Yuan, and Yichen Wei. Flow-guided feature aggregation for video object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 408–417, 2017.
  • [9] Gedas Bertasius, Lorenzo Torresani, and Jianbo Shi. Object detection in video with spatiotemporal sampling networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 331–346, 2018.
  • [10] Shiyao Wang, Yucong Zhou, Junjie Yan, and Zhidong Deng. Fully motion-aware network for video object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 542–557, 2018.
  • [11] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick Van Der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 2758–2766, 2015.
  • [12] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 764–773, 2017.
  • [13] Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015.
  • [14] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
  • [15] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014.
  • [16] Jifeng Dai, Yi Li, Kaiming He, and Jian Sun. R-fcn: Object detection via region-based fully convolutional networks. In Advances in neural information processing systems, pages 379–387, 2016.
  • [17] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117–2125, 2017.
  • [18] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
  • [19] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
  • [20] Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. Relation networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3588–3597, 2018.
  • [21] Peng Tang, Chunyu Wang, Xinggang Wang, Wenyu Liu, Wenjun Zeng, and Jingdong Wang. Object detection in videos by high quality object linking. arXiv preprint arXiv:1801.09823, 2018.
  • [22] Kai Kang, Hongsheng Li, Junjie Yan, Xingyu Zeng, Bin Yang, Tong Xiao, Cong Zhang, Zhe Wang, Ruohui Wang, Xiaogang Wang, et al. T-cnn: Tubelets with convolutional neural networks for object detection from videos. IEEE Transactions on Circuits and Systems for Video Technology, 28(10):2896–2907, 2018.
  • [23] Wei Han, Pooya Khorrami, Tom Le Paine, Prajit Ramachandran, Mohammad Babaeizadeh, Honghui Shi, Jianan Li, Shuicheng Yan, and Thomas S Huang. Seq-NMS for video object detection. arXiv preprint arXiv:1602.08465, 2016.
  • [24] Xizhou Zhu, Yuwen Xiong, Jifeng Dai, Lu Yuan, and Yichen Wei. Deep feature flow for video recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2349–2358, 2017.
  • [25] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7794–7803, 2018.
  • [26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [27] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 761–769, 2016.
  • [28] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.
  • [29] Fanyi Xiao and Yong Jae Lee. Video object detection with an aligned spatial-temporal memory. In Proceedings of the European Conference on Computer Vision (ECCV), pages 485–501, 2018.
  • [30] Christoph Feichtenhofer, Axel Pinz, and Andrew Zisserman. Detect to track and track to detect. In Proceedings of the IEEE International Conference on Computer Vision, pages 3038–3046, 2017.
  • [31] Kai Chen, Jiaqi Wang, Shuo Yang, Xingcheng Zhang, Yuanjun Xiong, Chen Change Loy, and Dahua Lin. Optimizing video object detection via a scale-time lattice. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7814–7823, 2018.
  • [32] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers. In ICLR, 2019.