A large part of the real world almost never moves. This may be surprising, since moving entities easily attract our attention, and since we ourselves spend most of our waking hours continuously in motion. Objects like roads and buildings, however, stay put. Can we leverage this property of the world to learn visual features suitable for interpreting complex scenes with many moving objects?
In this paper, we hypothesize that a correspondence module learned in static scenes will also work well in dynamic scenes. This is motivated by the fact that the content of dynamic scenes is the same as the content of static scenes. We would like our hypothesis to be true, because correspondences are far cheaper to obtain in static scenes than in dynamic ones. In static scenes, one can simply deploy a Simultaneous Localization and Mapping (SLAM) module to obtain a 3D reconstruction of the scene, and then project reconstructed points back into the input imagery to obtain multiview correspondence labels. Obtaining correspondences in dynamic scenes would require taking into account the motion of the objects (i.e., tracking).
We propose to leverage multiview data of static points in arbitrary scenes (static or dynamic), to learn a neural 3D mapping module which produces features that are correspondable across viewpoints and timesteps. The neural 3D mapper consumes RGB-D (color and depth) data as input, and produces a 3D voxel grid of deep features as output. We train the voxel features to be correspondable across viewpoints, using a contrastive loss. At test time, given an RGB-D video with approximate camera poses, and given the 3D box of an object to track, we track the target object by generating a map of each timestep and locating the object’s features within each map.
In contrast to models that represent video streams in 2D [48, 49, 46], our model’s 3D scene representation is disentangled from camera motion and projection artifacts. This provides an inductive bias that scene elements maintain their size and shape across changes in camera viewpoint, and reduces the need for scale invariance in the model’s parameters. Additionally, the stability of the 3D map under camera motion allows the model to constrain correspondence searches to the 3D area where the target was last seen, which is a far more reliable cue than 2D pixel coordinates. In contrast to models that use 2.5D representations (e.g., scene flow ), our neural 3D maps additionally provide features for partially occluded areas of the scene, since the model can infer their features from context. This provides an abundance of additional 3D correspondence candidates at test time, as opposed to being limited to the points observed by a depth sensor.
Our work builds on geometry-aware recurrent neural networks (GRNNs). GRNNs are modular differentiable neural architectures that take as input RGB-D streams of a static scene under a moving camera and infer 3D feature maps of the scene, estimating and stabilizing against camera motion. The work of Harley et al. showed that training GRNNs for contrastive view prediction in static scenes helps semi-supervised 3D object detection, as well as moving object discovery. In this paper, we extend those works to learn from dynamic scenes with independently moving objects, and simplify the GRNN model by reducing the number of losses and modules. Our work also builds on the work of Vondrick et al., which showed that 2D pixel trackers can emerge without any tracking labels, through self-supervision on a colorization task. In this work, we show that 3D voxel trackers can emerge without tracking labels, through contrastive self-supervision on static data. In learning features from correspondences established through triangulation, our work is similar to Dense Object Nets, though that work used object-centric data and background subtraction to apply its loss on static objects, whereas we perform moving-object subtraction to apply our loss on anything static. We do not assume a priori that we know which objects to focus on.
We test the proposed architectures in simulated and real datasets of urban driving scenes (CARLA  and KITTI ). We evaluate the learned 3D visual feature representations on their ability to track objects over time in 3D. We show that the learned visual feature representations can accurately track objects in 3D, simply supervised by observing static data. Furthermore, our method outperforms 2D and 2.5D baselines, demonstrating the utility of learning a 3D representation for this task instead of a 2D one.
The main contribution of this paper is to show that learning feature correspondences from static 3D points causes 3D object tracking to emerge. We additionally introduce a neural 3D mapping module which simplifies prior works on 3D inverse graphics, and learns from a simpler objective than considered in prior works. Our code and data are publicly available: https://github.com/aharley/neural_3d_tracking.
2 Related Work
2.1 Learning to see by moving
Both cognitive psychology and computational vision have realised the importance of motion for the development of visual perception [16, 50]. Predictive coding theories [36, 14] suggest that the brain predicts observations at various levels of abstraction; temporal prediction is thus of central interest. These theories currently have extensive empirical support: stimuli are processed more quickly if they are predictable [26, 33], prediction error is reflected in increased neural activity [36, 4], and disproven expectations lead to learning. Several computational models of frame prediction have been proposed [36, 10, 40, 43, 31]. Alongside future frame prediction, predicting some form of contextual or missing information has also been explored, such as predicting frame ordering, temporal instance-level associations, color from grayscale, egomotion [19, 1], and motion trajectory forecasting. Most of these unsupervised methods are evaluated as pre-training mechanisms for object detection or classification [28, 47, 48, 17].
Video motion segmentation literature explores the use of videos in unsupervised moving object discovery. Most motion segmentation methods operate in 2D image space, and cluster 2D optical flow vectors or 2D flow trajectories to segment moving objects [29, 5, 12], or use low-rank trajectory constraints [7, 41, 8]. Our work differs in that we address object detection and segmentation in 3D as opposed to 2D, by estimating 3D motion of the “imagined” (complete) scene, as opposed to 2D motion of the pixel observation stream.
2.2 Vision as inverse graphics
Earlier works in computer vision proposed casting visual recognition as inverse rendering [30, 53], as opposed to feedforward labelling. The “blocks world” of Roberts had the goal of reconstructing the 3D scene depicted in the image in terms of 3D solids found in a database. A key question to be addressed is: what representations should we use for the intermediate latent 3D structures? Most works seek to map images to explicit 3D representations, such as 3D pointclouds [51, 44, 54, 45], 3D meshes [24, 21], or binary 3D voxel occupancies [42, 20, 52]. These manually designed 3D representations (meshes, keypoints, pointclouds, voxel grids) may not be general enough to express the rich 3D world, which contains liquids, deformable objects, clutter, dirt, wind, etc., and at the same time may be over-descriptive when detail is unnecessary. In this work, we opt for learning-based 3D feature representations extracted end-to-end from the RGB-D input stream, as proposed by Tung et al. and Harley et al. We extend the architectures of those works to handle and learn from videos of dynamic scenes, as opposed to only videos of static scenes.
3 Learning to Track with Neural 3D Mapping
We consider a mobile agent that can move around in a 3D scene, and observe it from multiple viewpoints. The scene can optionally contain dynamic (moving and/or deforming) objects. The agent has an RGB camera with known intrinsics, and a depth sensor registered to the camera’s coordinate frame. It is reasonable to assume that a mobile agent who moves at will has access to its approximate egomotion, since it chooses where to move and what to look at. In simulation, we use ground truth camera poses; in real data, we use approximate camera poses provided by an inertial navigation system (GPS and IMU). In simulation, we use random viewpoints; in real data, we use just one forward-facing camera (which is all that is available). Note that a more sophisticated mobile agent might attempt to select viewpoints intelligently at training time.
Given the RGB-D and egomotion data, our goal is to learn 3D feature representations that can correspond entities across time, despite variations in pose and appearance. We achieve this by training inverse graphics neural architectures that consume RGB-D videos and infer 3D feature maps of the full scenes, as we describe in Section 3.1. To make use of data where some parts are static and other parts are moving, we learn to identify static 3D points by estimating a reliability mask over the 3D scene, as we describe in Section 3.2. Finally, we track in 3D, by re-locating the object within each timestep’s 3D map, as described in Section 3.3. Figure 1 shows an overview of the training and testing setups.
3.1 Neural 3D Mapping
Our model learns to map an RGB-D (RGB and depth) image to a 3D feature map of the scene in an end-to-end differentiable manner. The basic architecture is based on prior works [43, 17], which proposed view prediction architectures with a 3D bottleneck. In our case, the 3D feature map is the output of the model, rather than an intermediate representation.
Let $M \in \mathbb{R}^{W \times H \times D \times C}$ denote the 3D feature map representation, where $W, H, D, C$ denote the width, height, depth and number of feature channels, respectively. The map corresponds to a large cuboid of world space, placed at some pose of interest (e.g., surrounding a target object). Every location in the 3D feature map holds a $C$-length feature vector that describes the semantic and geometric content of the corresponding location of the world scene. To denote the feature map of timestep $t$, we write $M_t$. We denote the function that maps RGB-D inputs to 3D feature maps as $f$. To implement this function, we voxelize the inputs into a 3D grid, then pass this grid through a 3D convolutional network, and $L_2$-normalize the outputs.
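As a rough illustration of the voxelization and normalization steps, here is a minimal numpy sketch. The function names, pooling choice (mean per voxel), and grid conventions are our own assumptions for illustration, not the paper's implementation (which uses a learned 3D convolutional network between these two steps):

```python
import numpy as np

def voxelize(points, feats, grid_min, grid_max, resolution):
    """Scatter per-point features into a 3D grid (mean-pooled per voxel).

    points: (N, 3) world coordinates, e.g., from an unprojected depth map.
    feats:  (N, C) per-point features (e.g., RGB values).
    Returns a (resolution, resolution, resolution, C) feature grid.
    """
    grid = np.zeros((resolution, resolution, resolution, feats.shape[1]))
    counts = np.zeros((resolution, resolution, resolution, 1))
    # map world coordinates to integer voxel indices
    idx = ((points - grid_min) / (grid_max - grid_min) * resolution).astype(int)
    valid = np.all((idx >= 0) & (idx < resolution), axis=1)
    for (i, j, k), f in zip(idx[valid], feats[valid]):
        grid[i, j, k] += f
        counts[i, j, k] += 1
    return grid / np.maximum(counts, 1)

def l2_normalize(grid, eps=1e-8):
    # normalize each voxel's feature vector to unit length,
    # as done to the encoder's outputs
    norm = np.linalg.norm(grid, axis=-1, keepdims=True)
    return grid / np.maximum(norm, eps)
```

In practice the scatter loop would be vectorized, and the voxelized input would be passed through the 3D CNN before normalization.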
Tung et al. learned the parameters of $f$ by predicting RGB images of unseen viewpoints and applying a regression loss; Harley et al. demonstrated that this can be outperformed by contrastive prediction objectives, in 2D and 3D. Here, we drop the view prediction task altogether, and focus entirely on a 3D correspondence objective: if a static point is observed in two views $i$ and $j$, the corresponding features $M_i(x_i)$ and $M_j(x_j)$ should be similar to each other, and distinct from other features. We achieve this with a cross entropy loss [39, 31, 18, 6]:

$\mathcal{L} = -\log \frac{\exp(M_i(x_i)^\top M_j(x_j)/\tau)}{\exp(M_i(x_i)^\top M_j(x_j)/\tau) + \sum_k \exp(M_i(x_i)^\top M_j(x_k)/\tau)},$

where $\tau$ is a temperature hyperparameter, and the sum over $k$ iterates over non-corresponding features. Note that indexing correctly into the 3D scene maps to obtain the correspondence pair requires knowledge of the relative camera transformation across the input viewpoints; we encapsulate this registration and indexing in the notation above. Following He et al., we obtain a large pool of negative correspondences through the use of an offline dictionary, and stabilize training with a “slow” copy of the encoder, which is learned via high-momentum updates from the main encoder parameters.
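A minimal numpy sketch of this loss and of the momentum update of the slow encoder follows. The temperature value is not specified in this excerpt; 0.07 is a common choice that we assume here purely for illustration, and the function names are ours:

```python
import numpy as np

def contrastive_loss(q, pos, negs, temperature=0.07):
    """Cross-entropy loss over one correspondence pair.

    q:    (C,) query feature from view i (unit-normalized).
    pos:  (C,) corresponding feature from view j.
    negs: (K, C) non-corresponding features (e.g., from the dictionary).
    """
    logits = np.concatenate([[q @ pos], negs @ q]) / temperature
    logits -= logits.max()           # for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])         # the positive sits at index 0

def momentum_update(slow_params, fast_params, m=0.999):
    """High-momentum update of the 'slow' encoder from the main encoder."""
    return [m * s + (1.0 - m) * f for s, f in zip(slow_params, fast_params)]
```

A well-matched positive yields a low loss; a mismatched one yields a high loss, which is what drives the features toward view invariance.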
Since the neural mapper (a) does not know a priori which voxels will be indexed for a loss, and (b) is fully convolutional, it learns to generate view-invariant features densely in its output space, even though the supervision is sparse. Furthermore, since (a) the model is encouraged to generate correspondable features invariant to viewpoint, and (b) varying viewpoints provide varying contextual support for 3D locations, the model learns to infer correspondable features from limited context, which gives it robustness to partial occlusions.
3.2 Inferring static points for self-supervision in dynamic scenes
The training objective in the previous subsection requires the location of a static point observed in two or more views. In a scenario where the data is made up entirely of static scenes, as can be achieved in simulation or in controlled environments, obtaining these static points is straightforward: any point on the surface of the scene will suffice, provided that it projects into at least two camera views.
To make use of data where some parts are static and other parts are moving, we propose to simply discard data that appears to be moving. We achieve this by training a neural module $h$ to take the difference of two scene features as input, and output a “reliability” mask indicating a per-voxel confidence that the scene cube within the voxel is static: $r = h(\mathrm{sg}(M_i) - \mathrm{sg}(M_j))$, where $\mathrm{sg}$ stops gradients from flowing from the loss on $r$ into the encoder which produces the maps. We implement $h$ as a per-voxel classifier, with a 2-layer fully-connected network applied fully convolutionally. We do not assume to have true labels of moving/static voxels, so we generate synthetic labels using static data: given two maps of the same scene, $M_i$ and $M_j$, we generate positive-label inputs with $M_i - M_j$ (as normal), and generate negative-label inputs with $M_i - \mathrm{shuffle}(M_j)$, where the shuffle operation ruins the correspondence between the two tensors. After this training, we deploy this network on pairs of frames from dynamic scenes, and use it to select high-confidence static data to further train the encoder.
Our training procedure is then: (1) learn the encoder on static data; (2) learn the reliability function; (3) in dynamic data, finetune the encoder on data selected by the reliability function. Steps 2-3 can be repeated a number of times. In practice we find that results do not change substantially after the first pass.
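The synthetic-label generation for the reliability classifier can be sketched as follows; the function name, batch layout, and use of row-wise shuffling are our own assumptions for illustration:

```python
import numpy as np

def make_reliability_batch(map_i, map_j, rng):
    """Build synthetic static/as-if-moving training pairs from two
    registered maps of the same *static* scene.

    map_i, map_j: (X, Y, Z, C) feature grids of the same scene.
    Returns (inputs, labels): the aligned difference gets label 1
    (static), the shuffled difference gets label 0 (as-if-moving).
    """
    X, Y, Z, C = map_i.shape
    pos = map_i - map_j                    # correspondence intact
    flat = map_j.reshape(-1, C)
    perm = rng.permutation(flat.shape[0])  # ruin the correspondence
    neg = map_i - flat[perm].reshape(X, Y, Z, C)
    inputs = np.stack([pos, neg])
    labels = np.stack([np.ones((X, Y, Z)), np.zeros((X, Y, Z))])
    return inputs, labels
```

The per-voxel classifier is then trained on these inputs and labels with an ordinary binary cross-entropy objective.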
3.3 Tracking via point correspondences
Using the learned neural 3D mapper, we track an object of interest over long temporal horizons by re-locating it in a map produced at each time step. Specifically, we re-locate each voxel of the object, and use these new locations to form a motion field. We convert these voxel motions into an estimate for the entire object by fitting a rigid transformation to the correspondences, via RANSAC.
We assume we are given the target object’s 3D box on the zeroth frame of a test video. Using the zeroth RGB-D input, we generate a 3D scene map centered on the object. We then convert the 3D box into a set of coordinates which index into the map. Let $v_0$ denote the coordinate of a voxel feature that belongs to the object. On any subsequent frame $t$, our goal is to locate the new coordinate of this feature, denoted $v_t$. We do so via a soft spatial argmax, using the learned feature space to provide correspondence confidences:

$v_t = \sum_{x \in S} \frac{\exp(M_0(v_0)^\top M_t(x))}{\sum_{x' \in S} \exp(M_0(v_0)^\top M_t(x'))} \, x,$

where $S$ denotes the set of coordinates in the search region. We then compute the motion of the voxel as $\Delta v = v_t - v_0$. After computing the motion of every object voxel in this way, we use RANSAC to find a rigid transformation that explains the majority of the correspondences. We apply this rigid transform to the input box, yielding the object’s location in the next frame.
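The two steps above, the soft argmax and the RANSAC rigid fit, can be sketched in numpy as follows. The function names, the minimal sample size of 3, and the inlier threshold are our own illustrative assumptions; the rigid solve inside RANSAC uses the standard Kabsch (SVD) algorithm:

```python
import numpy as np

def soft_argmax_3d(obj_feat, scene_feats, coords):
    """Expected 3D coordinate of the match for one object voxel.

    obj_feat:    (C,) feature of the object voxel from frame 0.
    scene_feats: (N, C) features at the search-region coordinates.
    coords:      (N, 3) the 3D coordinates of those features.
    """
    logits = scene_feats @ obj_feat
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ coords              # attention-weighted average position

def kabsch(src, dst):
    """Least-squares rigid transform: dst ~= src @ R.T + t."""
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((dst - dc).T @ (src - sc))
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt   # enforce a proper rotation
    return R, dc - sc @ R.T

def fit_rigid(src, dst, iters=100, thresh=0.5, rng=None):
    """RANSAC over minimal samples to find the rigid motion that
    explains the majority of the voxel correspondences."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), -1
    for _ in range(iters):
        sample = rng.choice(len(src), size=3, replace=False)
        R, t = kabsch(src[sample], dst[sample])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = (err < thresh).sum()
        if inliers > best_inliers:
            best_R, best_t, best_inliers = R, t, inliers
    return best_R, best_t
```

Applying the recovered transform to the corners of the input box yields the propagated box.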
Vondrick et al. computed a similar attention map during tracking (in 2D), but instead of taking its soft argmax and explaining the object’s motion with a single transformation, that work propagated a “soft mask” to the target frame, which is liable to grow or shrink. Our method takes advantage of the fact that all coordinates are 3D, assumes that the objects are rigid, and propagates a fixed-size box from frame to frame.
We empirically found that it is critical to constrain the search region of the tracker in 3D. In particular, on each time step we create a search region centered on the object’s last known position. The search region is large, spanning half of the typical full-scene resolution. This serves three purposes. First, it limits the number of spurious correspondences that the model can make, since it puts a cap on the metric range of the correspondence field, and thereby reduces errors. Second, it “re-centers” the model’s field of view onto the target, which allows the model to incorporate maximal contextual information surrounding the target. Even if the bounds were sufficiently narrow to reduce spurious correspondences, an object at the edge of the field of view will have less-informed features than an object at the middle, due to the model’s convolutional architecture. Third, searching locally is computationally cheaper: even 2D works struggle with the expense of the large matrix multiplications involved in this type of soft attention, and in 3D the expense is higher. Searching locally instead of globally makes Eq. 2 tractable.
4 Experiments
We test our model on the following two datasets:
Synthetic RGB-D videos of urban scenes rendered in the CARLA simulator. CARLA is an open-source photorealistic simulator of urban driving scenes. It permits moving the camera to any desired viewpoint in the scene. We obtain data from the simulator as follows. We begin by generating 10000 autopilot episodes of 16 frames each, at 10 FPS. We define 18 viewpoints along a 40m-radius hemisphere anchored to the ego-car (i.e., it moves with the car). In each episode, we sample 6 random viewpoints from the 18 and randomly perturb their pose, and then capture each timestep of the episode from these 6 viewpoints. We discard episodes that do not have an object in-bounds for the duration of the episode.
We treat the Town1 data as the “training” set, and the Town2 data as the “test” set, so there is no overlap between the train and test sets. This yields 4313 training videos, and 2124 test videos.
Real RGB-D videos of urban scenes, from the KITTI dataset . This data was collected with a sensor platform mounted on a moving vehicle, with a human driver navigating through a variety of areas in Germany. We use the “left” color camera, and LiDAR sweeps synced to the RGB images.
For training, we use the “odometry” subset of KITTI; it includes egomotion information accurate to within 10cm. The odometry data includes ten sequences, totalling 23201 frames.
We test our model in the validation set of the “tracking” subset of KITTI, which has twenty labelled sequences, totalling 8008 frames. For supervised baselines, we split this data into 12 training sequences and 8 test sequences. For evaluation, we create 8-frame subsequences of this data, in which a target object has a valid label for all eight frames. This subsequencing is necessary since objects are only labelled when they are within image bounds. The egomotion information in the “tracking” data is only approximate.
We evaluate our model on its ability to track objects in 3D. On the zeroth frame, we receive the 3D box of an object to track. On each subsequent frame, we estimate the object’s new 3D box, and measure the intersection over union (IOU) of the estimated box with the ground truth box. We report IOUs as a function of timesteps.
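The IOU metric above can be sketched for the axis-aligned case as follows. The function name and box encoding are our own assumptions; evaluating oriented 3D boxes would additionally require a polyhedral intersection:

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IOU of two axis-aligned 3D boxes, each given as
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    a = np.asarray(box_a, dtype=float)
    b = np.asarray(box_b, dtype=float)
    lo = np.maximum(a[:3], b[:3])            # intersection lower corner
    hi = np.minimum(a[3:], b[3:])            # intersection upper corner
    inter = np.prod(np.maximum(hi - lo, 0.0))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)
```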
We evaluate the following baselines. We provide additional implementation details for each baseline (and for our own model) in the supplementary file.
Unsupervised 3D flow . This model uses an unsupervised architecture similar to ours, but with a 2-stage training procedure, in which features are learned first from static scenes and frozen, then a 3D flow module is learned over these features in dynamic scenes. We extend this into a 3D tracker by “chaining” the flows across time, and by converting the trajectories into rigid motions via RANSAC.
2.5D dense object nets . This model learns to map input images into dense 2D feature maps, and uses a contrastive objective at known correspondences across views. We train this model using static points for correspondence labels (like our own model). We extend this model into a 3D tracker by “unprojecting” the learned embeddings into sparse 3D scene maps, then applying the same tracking pipeline as our own model.
2.5D tracking by colorization [46, 22]. This model learns to map input images into dense 2D feature maps, using an RGB reconstruction objective. The model trains as follows: given two RGB frames, the model computes a feature map for each frame; for each pixel of the first frame’s feature map, we compute that feature’s similarity with all features of the second frame, and then use that similarity matrix to take a weighted combination of the second frame’s colors; this color combination at every pixel is used as the reconstruction of the first frame, which yields an error signal for learning. We extend this model into a 3D tracker in the same way that we extended the “dense object nets” baseline.
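The colorization objective just described can be sketched in numpy as follows; the array layout, function name, and temperature parameter are our own illustrative assumptions:

```python
import numpy as np

def colorize_from_reference(feats_ref, colors_ref, feats_tgt, temperature=1.0):
    """Reconstruct the target frame's colors as attention-weighted
    combinations of the reference frame's colors.

    feats_ref:  (N, C) per-pixel features of the reference frame.
    colors_ref: (N, 3) per-pixel colors of the reference frame.
    feats_tgt:  (M, C) per-pixel features of the frame to reconstruct.
    """
    sim = feats_tgt @ feats_ref.T / temperature    # (M, N) similarities
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over reference pixels
    return attn @ colors_ref                       # (M, 3) predicted colors
```

The reconstruction error between the predicted and true colors of the target frame is the only learning signal; correspondence is never supervised directly, which is why this objective is weaker than the multi-view correspondence losses used by the other methods.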
3D neural mapping with random features. This model is equivalent to our proposed model but with randomly-initialized network parameters. This model may be expected to perform at better-than-chance levels due to the power of random features  and due to the domain knowledge encoded in the architecture design.
3D fully convolutional siamese tracker (supervised). This is a straightforward 3D upgrade of a fully convolutional 2D siamese tracker, which uses the object’s feature map as a cross correlation template, and tracks the object by taking an argmax of the correlation heatmap at each step. It is necessary to supervise this model with ground-truth box trajectories. We also evaluate a “cosine windowing” variant of this model, which suppresses correlations far from the search region’s centroid.
3D siamese tracker with random features. This model is equivalent to the 3D siamese supervised tracker, but with randomly-initialized network parameters. Similar to the random version of 3D neural mapping, this model measures whether random features and the implicit biases are sufficient to track in this domain.
Zero motion. This baseline simply uses the input box as its final estimate for every time step. This baseline provides a measure for how quickly objects tend to move in the data, and serves as a lower bound for performance.
All of these are fairly simple trackers. A more sophisticated approach might incrementally update the object template, but we leave that for future work.
We compare our own model against the following ablated versions. First, we consider a model without search regions, which attempts to find correspondences for each object point in the entire 3D map at test time. This model is far more computationally expensive, since it requires taking the dot product of each object feature with the entire scene. Second, we consider a similar “no search region” model but at half resolution, which brings the computation into the range of the proposed model. This ablation is intended to reveal the effect of resolution on accuracy. Third, we omit the “static point selection” (via the reliability function). This is intended to evaluate how correspondence errors caused by moving objects (violating the static scene assumption) can weaken the model.
4.2 Quantitative results
We evaluate the mean 3D IOU of our trackers over time. Figure 4-left shows the results of this evaluation in CARLA. As might be expected, the supervised 3D trackers perform best, and cosine windowing improves results.
Our model outperforms all other unsupervised models, and nearly matches the supervised performance. The 2.5D dense object net performs well also, but its accuracy is likely hurt by the fact that it is limited exclusively to points observed in the depth map. Our own model, in contrast, can match against both observed and unobserved (i.e., hallucinated or inpainted) 3D scene features. The colorization model underperforms the 2.5D dense object net approach, likely because it only indirectly encourages correspondence via the colorization task, and therefore provides weaker supervision than the multi-view correspondence objectives used in the other methods.
Random features perform worse than the zero-motion baseline, both with a neural mapping architecture and a siamese tracking architecture. Inspecting the results qualitatively, it appears that these models quickly propagate the 3D box off of the object and onto other scene elements. This suggests that random features and the domain knowledge encoded in these architectures are not enough to yield 3D trackers in this data.
We perform the same evaluation in KITTI, and show the results in Figure 4-right. On this benchmark, accuracies are lower for all models, indicating that the task here is more challenging. This is likely related to the fact that (1) the egomotion is imperfect in this data, and (2) the tracking targets are frequently farther away than they are in CARLA. Nonetheless, the ranking of methods is the same, with our 3D neural mapping model performing best. One difference is that cosine windowing actually worsens siamese tracking results in KITTI. This is likely related to the fact that egomotion stabilization is imperfect in KITTI: a zero-motion prior is only helpful in frames where the target is stationary and camera motion is perfectly accounted for; otherwise it is detrimental.
We additionally split the evaluation on stationary vs. moving objects in CARLA. We use a threshold on total distance travelled (in world coordinates) across the 8-frame trajectories to split these categories. At the last timestep, the mean 3D IOU for all objects together is as shown in Figure 4-left; splitting by category, static objects score higher than moving objects. This suggests that the model tracks stationary objects more accurately than moving objects, likely because their appearance changes less with respect to the camera and the background.
Finally, we evaluate the top two models in CARLA using the standard 2D tracking metrics, multi-object tracking accuracy (MOTA) and multi-object tracking precision (MOTP), though we note that our task only has one target object per video. We find that the supervised 3D siamese + cosine model achieves the higher MOTA, while our model achieves the higher MOTP. This suggests that the supervised tracker makes fewer misses, but our method delivers slightly better precision.
4.3 Qualitative results
We visualize our tracking results, along with inputs and colorized visualizations of our neural 3D scene maps, in Figure 2. To visualize the neural 3D maps, we take a mean along the vertical axis of the grid (yielding a “bird’s eye view” 2D grid), compress the deep features in each grid cell to 3 channels via principal component analysis, normalize, and treat these values as RGB intensities. Comparing the 3D features learned in CARLA vs. those learned in KITTI reveals an obvious difference: the KITTI features appear blurred and imprecise in comparison with the CARLA features. This is likely due to the imperfect egomotion information, which leads to slightly inaccurate correspondence data at training time (i.e., occasional failures by the static point selector).
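The PCA colorization described above can be sketched in a few lines of numpy; the axis conventions and function name are our own assumptions:

```python
import numpy as np

def bev_pca_rgb(feature_map):
    """Visualize a (X, Y, Z, C) feature grid as an RGB bird's-eye-view
    image: mean over the vertical axis, project features to 3 channels
    with PCA, then normalize each channel to [0, 1]."""
    bev = feature_map.mean(axis=1)          # assume axis 1 is vertical
    X, Z, C = bev.shape
    flat = bev.reshape(-1, C)
    flat = flat - flat.mean(axis=0)         # center before PCA
    # PCA via SVD: the top-3 right singular vectors give the projection
    _, _, Vt = np.linalg.svd(flat, full_matrices=False)
    proj = flat @ Vt[:3].T                  # (X*Z, 3)
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    rgb = (proj - lo) / np.maximum(hi - lo, 1e-8)
    return rgb.reshape(X, Z, 3)
```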
In Figure 3, we visualize the features learned by Dense Object Nets in this data. From the PCA colorization it appears that objects are typically colored differently from their surroundings, which is encouraging, but the embeddings are not as clear as those in the original work , likely because this domain does not have the benefit of background image subtraction and object-centric losses.
In the supplementary file, we include video visualizations of the learned features and 3D tracking results.
We evaluate ablated versions of our model, to reveal the effect of (1) search regions, (2) resolution, and (3) static point selection in dynamic scenes. Results are summarized in Table 1.
Without search regions, the accuracy of our model drops by 20 IOU points, which is a strong impact. We believe this drop in performance comes from the fact that search regions take advantage of 3D scene constancy, by reducing spurious correspondences in far-away regions of the scene.
Resolution seems to have a strong effect as well: halving the resolution of the wide-search model reduces its performance by 15 IOU points. This may be related to the fact that fewer points are then available for RANSAC to find a robust estimate of the object’s rigid motion.
Since static point selection is only relevant in data with moving objects, we perform this experiment in KITTI (as opposed to CARLA, where the training domain is all static by design). The results show that performance degrades substantially without the static point selection. This result is to be expected, since this ablation causes erroneous correspondences to enter the training objective, and thereby weakens the utility of the self-supervision.
Table 1: Ablations (mean 3D IOU).
Ours in CARLA: 0.61
…without search regions: 0.40
…without search regions, at half resolution: 0.25
Ours in KITTI: 0.46
…without static point selection: 0.39
The proposed model has three important limitations. First, our work assumes access to RGB-D data with accurate egomotion data at training time, with a wide variety of viewpoints. This is easy to obtain in simulators, but real-world data of this sort typically lies along straight trajectories (as it does in KITTI), which limits the richness of the data. Second, our model architecture requires a lot of GPU memory, due to its third spatial dimension. This severely limits either the resolution or the metric span of the latent map. On 12G Titan X GPUs, with a batch size of 4, iteration time is 0.2s/iter. Sparsifying our feature grid, or using points instead of voxels, are clear areas for future work. Third, our test-time tracking algorithm makes two strong assumptions: (1) a tight box is provided in the zeroth frame, and (2) the object is rigid. For non-rigid objects, merely propagating the box with the RANSAC solution would be insufficient, but the voxel-based correspondences might still be helpful.
We propose a model which learns to track objects in dynamic scenes just from observing static scenes. We show that a multi-view contrastive loss allows us to learn rich visual representations that are correspondable not only across views, but across time. We demonstrate the robustness of the learned representation by benchmarking the learned features on a tracking task in real and simulated data. Our approach outperforms prior unsupervised 2D and 2.5D trackers, and approaches the accuracy of supervised trackers. Our 3D representation benefits from denser correspondence fields than 2.5D methods, and is invariant to the artifacts of camera projection, such as apparent scale changes of objects. Our approach opens new avenues for learning trackers in arbitrary environments, without requiring explicit tracking supervision: if we can obtain an accurate pointcloud reconstruction of an environment, then we can learn a tracker for that environment too.
This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. We also acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), AiDTR, the DARPA Machine Common Sense program, and the AWS Cloud Credits for Research program.
-  Agrawal, P., Carreira, J., Malik, J.: Learning to see by moving. In: ICCV (2015)
-  Bernardin, K., Elbs, A., Stiefelhagen, R.: Multiple object tracking performance metrics and evaluation in a smart room environment. In: Sixth IEEE International Workshop on Visual Surveillance, in conjunction with ECCV. vol. 90, p. 91. Citeseer (2006)
-  Bertinetto, L., Valmadre, J., Henriques, J.F., Vedaldi, A., Torr, P.H.: Fully-convolutional siamese networks for object tracking. In: European conference on computer vision. pp. 850–865. Springer (2016)
-  Brodski, A., Paasch, G.F., Helbling, S., Wibral, M.: The faces of predictive coding. Journal of Neuroscience 35(24), 8997–9006 (2015)
-  Brox, T., Malik, J.: Object segmentation by long term analysis of point trajectories. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV. pp. 282–295 (2010)
-  Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709 (2020)
-  Cheriyadat, A., Radke, R.J.: Non-negative matrix factorization of partial track data for motion segmentation. In: ICCV (2009)
-  Costeira, J., Kanade, T.: A multi-body factorization method for motion analysis. ICCV (1995)
-  Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V.: CARLA: An open urban driving simulator. In: CoRL. pp. 1–16 (2017)
-  Eslami, S.M.A., Jimenez Rezende, D., Besse, F., Viola, F., Morcos, A.S., Garnelo, M., Ruderman, A., Rusu, A.A., Danihelka, I., Gregor, K., Reichert, D.P., Buesing, L., Weber, T., Vinyals, O., Rosenbaum, D., Rabinowitz, N., King, H., Hillier, C., Botvinick, M., Wierstra, D., Kavukcuoglu, K., Hassabis, D.: Neural scene representation and rendering. Science 360(6394), 1204–1210 (2018). https://doi.org/10.1126/science.aar6170
-  Florence, P.R., Manuelli, L., Tedrake, R.: Dense object nets: Learning dense visual object descriptors by and for robotic manipulation. In: CoRL (2018)
-  Fragkiadaki, K., Shi, J.: Exploiting motion and topology for segmenting and tracking under entanglement. In: CVPR (2011)
-  Franconeri, S.L., Simons, D.J.: Moving and looming stimuli capture attention. Perception & psychophysics 65(7), 999–1010 (2003)
-  Friston, K.: Learning and inference in the brain. Neural Networks 16(9), 1325–1352 (2003)
-  Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR) (2013)
-  Gibson, J.J.: The Ecological Approach to Visual Perception. Houghton Mifflin (1979)
-  Harley, A.W., Lakshmikanth, S.K., Li, F., Zhou, X., Tung, H.Y.F., Fragkiadaki, K.: Learning from unlabelled videos using contrastive predictive neural 3d mapping. In: ICLR (2020)
-  He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. In: CVPR (2020)
-  Jayaraman, D., Grauman, K.: Learning image representations tied to ego-motion. In: ICCV (2015)
-  Kar, A., Häne, C., Malik, J.: Learning a multi-view stereo machine. In: NIPS (2017)
-  Kato, H., Ushiku, Y., Harada, T.: Neural 3d mesh renderer. In: CVPR (2018)
-  Lai, Z., Lu, E., Xie, W.: MAST: A memory-augmented self-supervised tracker. In: CVPR (2020)
-  Lee, H.Y., Huang, J.B., Singh, M., Yang, M.H.: Unsupervised representation learning by sorting sequences. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 667–676 (2017)
-  Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M.J.: Smpl: A skinned multi-person linear model. ACM Trans. Graph. 34(6), 248:1–248:16 (Oct 2015). https://doi.org/10.1145/2816795.2818013, http://doi.acm.org/10.1145/2816795.2818013
-  Matthews, L., Ishikawa, T., Baker, S.: The template update problem. IEEE transactions on pattern analysis and machine intelligence 26(6), 810–815 (2004)
-  McClelland, J.L., Rumelhart, D.E.: An interactive activation model of context effects in letter perception: I. an account of basic findings. Psychological review 88(5), 375 (1981)
-  Menze, M., Geiger, A.: Object scene flow for autonomous vehicles. In: CVPR (2015)
-  Misra, I., Zitnick, C.L., Hebert, M.: Unsupervised learning using sequential verification for action recognition. In: ECCV (2016)
-  Ochs, P., Brox, T.: Object segmentation in video: A hierarchical variational approach for turning point trajectories into dense regions. In: ICCV (2011)
-  Olshausen, B.: Perception as an inference problem. In: The Cognitive Neurosciences. MIT Press (2013)
-  Oord, A.v.d., Li, Y., Vinyals, O.: Representation learning with contrastive predictive coding. arXiv:1807.03748 (2018)
-  Patla, A.E.: Visual control of human locomotion. In: Advances in psychology, vol. 78, pp. 55–97. Elsevier (1991)
-  Pinto, Y., van Gaal, S., de Lange, F.P., Lamme, V.A., Seth, A.K.: Expectations accelerate entry of visual stimuli into awareness. Journal of Vision 15(8), 13–13 (2015)
-  Pont-Tuset, J., Perazzi, F., Caelles, S., Arbeláez, P., Sorkine-Hornung, A., Van Gool, L.: The 2017 davis challenge on video object segmentation. arXiv:1704.00675 (2017)
-  Rahimi, A., Recht, B.: Random features for large-scale kernel machines. In: Advances in neural information processing systems. pp. 1177–1184 (2008)
-  Rao, R.P., Ballard, D.H.: Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature neuroscience 2(1), 79 (1999)
-  Roberts, L.: Machine perception of three-dimensional solids. Ph.D. thesis, MIT (1965)
-  Schultz, W., Dayan, P., Montague, P.R.: A neural substrate of prediction and reward. Science 275(5306), 1593–1599 (1997)
-  Sohn, K.: Improved deep metric learning with multi-class N-pair loss objective. In: NIPS. pp. 1857–1865 (2016)
-  Tatarchenko, M., Dosovitskiy, A., Brox, T.: Single-view to multi-view: Reconstructing unseen views with a convolutional network. In: ECCV (2016)
-  Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: A factorization method. Int. J. Comput. Vision 9(2), 137–154 (Nov 1992). https://doi.org/10.1007/BF00129684, http://dx.doi.org/10.1007/BF00129684
-  Tulsiani, S., Zhou, T., Efros, A.A., Malik, J.: Multi-view supervision for single-view reconstruction via differentiable ray consistency. In: CVPR (2017)
-  Tung, H.Y.F., Cheng, R., Fragkiadaki, K.: Learning spatial common sense with geometry-aware recurrent networks. In: CVPR (2019)
-  Tung, H.F., Harley, A.W., Seto, W., Fragkiadaki, K.: Adversarial inverse graphics networks: Learning 2d-to-3d lifting and image-to-image translation with unpaired supervision. In: ICCV (2017)
-  Vijayanarasimhan, S., Ricco, S., Schmid, C., Sukthankar, R., Fragkiadaki, K.: Sfm-net: Learning of structure and motion from video. arXiv:1704.07804 (2017)
-  Vondrick, C., Shrivastava, A., Fathi, A., Guadarrama, S., Murphy, K.: Tracking emerges by colorizing videos. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 391–408 (2018)
-  Walker, J., Doersch, C., Gupta, A., Hebert, M.: An uncertain future: Forecasting from static images using variational autoencoders. In: ECCV (2016)
-  Wang, X., Gupta, A.: Unsupervised learning of visual representations using videos. In: ICCV (2015)
-  Wang, X., Jabri, A., Efros, A.A.: Learning correspondence from the cycle-consistency of time. In: CVPR (2019)
-  Wiskott, L., Sejnowski, T.J.: Slow feature analysis: Unsupervised learning of invariances. Neural computation 14(4), 715–770 (2002)
-  Wu, J., Xue, T., Lim, J.J., Tian, Y., Tenenbaum, J.B., Torralba, A., Freeman, W.T.: Single image 3d interpreter network. In: ECCV. pp. 365–382 (2016)
-  Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., Xiao, J.: 3d shapenets: A deep representation for volumetric shapes. In: CVPR. pp. 1912–1920. IEEE Computer Society (2015)
-  Yuille, A., Kersten, D.: Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences 10, 301–308 (2006)
-  Zhou, T., Brown, M., Snavely, N., Lowe, D.G.: Unsupervised learning of depth and ego-motion from video. In: CVPR (2017)
6 Implementation details
6.1 Neural 3D mapping
Our 3D convolutional network has the following architecture. The input is a 4-channel 3D grid; the four channels are RGB and occupancy, and "unoccupied" voxels have zeros in all channels. This input is passed through a 3D convolutional encoder-decoder with skip connections. The encoder has three layers, each with a stride-2 3D convolution. The decoder also has three layers: the first two are stride-2 transposed 3D convolutions, and after each of these we concatenate the same-resolution layer from the encoder half; the last layer is a convolution producing the output feature channels.
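As a concrete illustration of this input format, the following minimal numpy sketch builds the 4-channel (RGB + occupancy) grid from a set of already-voxelized points. The function name, grid size, and point/color layout are our own illustrative choices, not from the paper:

```python
import numpy as np

def build_input_grid(points, colors, grid_shape=(64, 64, 64)):
    """Build the 4-channel (RGB + occupancy) voxel input described above.

    points: (N, 3) integer voxel indices, already quantized into the grid.
    colors: (N, 3) per-point RGB values in [0, 1].
    Unoccupied voxels are left all-zero, as in the paper.
    """
    Z, Y, X = grid_shape
    grid = np.zeros((4, Z, Y, X), dtype=np.float32)
    z, y, x = points[:, 0], points[:, 1], points[:, 2]
    grid[0:3, z, y, x] = colors.T  # RGB channels
    grid[3, z, y, x] = 1.0         # occupancy channel
    return grid
```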
At training time, the grid encodes a fixed-size metric volume of the scene. At test time we "zoom in" on the area of interest (i.e., the search region), encoding a smaller metric volume.
We train with the Adam optimizer.
Unsupervised 3D flow. The original work by Harley et al. only estimated flow for pairs of frames. We extend this into a 3D tracker by deploying the flow module in an object-centric manner, and by "chaining" the flows together across time. We effectively compute the flow from the first frame directly to every other frame in the sequence, but with an alignment step designed to keep the object within the field of view of the flow module. Specifically, given the object location in the first frame, we extract a volume of flow vectors in the object region, and run RANSAC to find a rigid transformation that explains the majority of the flow field. We apply this rigid transform to the box, yielding the object's location in the next frame. Then, we back-warp the next frame according to the box transformation, to re-center the voxel grid onto the object. Estimating the flow between the original frame and the newly backwarped frame yields the residual flow field; adding this residual to the original flow provides the new cumulative motion of the object. We repeat these steps across the length of the input video, back-warping according to the cumulative flow and estimating the residual.
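The RANSAC step above can be sketched as follows, using a Kabsch least-squares rigid fit on minimal samples of flow correspondences. The sample size, iteration count, and inlier threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= P @ R.T + t (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def ransac_rigid(P, flow, iters=100, thresh=0.05, rng=None):
    """Find a rigid motion explaining the majority of a 3D flow field.

    P: (N, 3) 3D coordinates in the object region; flow: (N, 3) flow vectors.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    Q = P + flow
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), -1
    for _ in range(iters):
        idx = rng.choice(len(P), size=4, replace=False)
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_R, best_t, best_inliers = R, t, inliers
    return best_R, best_t
```

Applying `(best_R, best_t)` to the box corners gives the object's location in the next frame, as described above.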
2.5D dense object nets. While the original work used a margin loss for contrastive learning, we update this to a cross entropy loss with a large offline dictionary and a coupled "slow" encoder, consistent with state-of-the-art contrastive learning. We extend this model into a 3D tracker as follows. First, we use the available depth data to "unproject" the learned embeddings into sparse 3D scene maps. Then, we use the same tracking pipeline as our own model (with search regions and soft argmaxes and RANSAC). The differences between this model and our own are (1) it learns a 2D CNN instead of a 3D one, and (2) its 3D output is constrained to the locations observed in the depth map, instead of producing features densely across the 3D grid.
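The "unprojection" step can be sketched with a standard pinhole camera model; the function and parameter names below are ours, and the intrinsics are assumed known:

```python
import numpy as np

def unproject_embeddings(emb, depth, fx, fy, cx, cy):
    """Lift a 2D embedding map into a sparse 3D scene map using depth.

    emb: (C, H, W) per-pixel embeddings; depth: (H, W) metric depth (0 = invalid).
    (fx, fy, cx, cy) are pinhole intrinsics. Returns (N, 3) camera-frame
    points and their (N, C) features, one per valid depth pixel.
    """
    C, H, W = emb.shape
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    feats = emb[:, valid].T
    return points, feats
```

The resulting sparse pointcloud of features can then be voxelized and fed to the same search-region tracking pipeline.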
2.5D tracking by colorization [46, 22]. We use the latest iteration of this method, which uses the LAB colorspace at input and output, and color dropout at training time; this outperforms the quantized-color cross entropy loss of the original work. We extend this model into a 3D tracker in the same way that we extended the "dense object nets" baseline. Therefore, the only difference between the two 2.5D methods is in the training: this method can train on arbitrary data but has an objective that only indirectly encourages correspondence (through an RGB reconstruction loss), while "2.5D dense object nets" requires static scenes and directly encourages correspondences (through a contrastive loss).
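A minimal sketch of the color dropout augmentation, under the assumption that it means zeroing each LAB channel independently with some probability (the probability value and function name are illustrative):

```python
import numpy as np

def color_dropout(lab, p=0.5, rng=None):
    """Randomly zero whole channels of a (3, H, W) LAB image during training.

    Each of the three channels is dropped independently with probability p.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep = (rng.random(3) > p).astype(lab.dtype)
    return lab * keep[:, None, None]
```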
3D neural mapping with random features. We use the same inputs, architecture, and outputs, and the same hyperparameters for resolution and search regions, but do not train the parameters.
7 Detailed results
In Tables 2 and 3, we provide the numerical values of the data plotted in the main paper. The notation IOU@N denotes 3D intersection over union at the Nth frame of the video. Note that IOU@0 = 1 in this task, since tracking is initialized with a ground-truth box in the zeroth frame.
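For reference, 3D IoU can be computed as below for the axis-aligned case (the evaluated boxes may in fact be oriented; this axis-aligned simplification is ours):

```python
import numpy as np

def iou_3d(box_a, box_b):
    """3D intersection-over-union for axis-aligned boxes.

    Each box is (xmin, ymin, zmin, xmax, ymax, zmax).
    """
    lo = np.maximum(box_a[:3], box_b[:3])
    hi = np.minimum(box_a[3:], box_b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))  # zero if the boxes are disjoint
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter)
```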
Table 2: IOU@N at four increasing values of N.

| Method | IOU@t1 | IOU@t2 | IOU@t3 | IOU@t4 |
|---|---|---|---|---|
| Random 3D neural mapping | 0.13 | 0.07 | 0.05 | 0.03 |
| Random 3D siamese | 0.05 | 0.02 | 0.01 | 0.00 |
| Random 3D siamese + cosine | 0.48 | 0.22 | 0.10 | 0.05 |
| 2.5D colorization [46, 22] | 0.41 | 0.29 | 0.25 | 0.19 |
| 2.5D dense object nets | 0.66 | 0.47 | 0.39 | 0.33 |
| 3D flow | 0.58 | 0.47 | 0.38 | 0.29 |
| 3D siamese (supervised) | 0.69 | 0.65 | 0.63 | 0.61 |
| 3D siamese + cosine (supervised) | 0.75 | 0.71 | 0.69 | 0.65 |
Table 3: IOU@N at four increasing values of N.

| Method | IOU@t1 | IOU@t2 | IOU@t3 | IOU@t4 |
|---|---|---|---|---|
| Random 3D neural mapping | 0.10 | 0.04 | 0.03 | 0.02 |
| Random 3D siamese | 0.00 | 0.01 | 0.00 | 0.00 |
| Random 3D siamese + cosine | 0.40 | 0.11 | 0.03 | 0.01 |
| 2.5D colorization [46, 22] | 0.30 | 0.20 | 0.11 | 0.10 |
| 2.5D dense object nets | 0.68 | 0.55 | 0.40 | 0.31 |
| 3D flow | 0.55 | 0.42 | 0.39 | 0.22 |
| 3D siamese (supervised) | 0.61 | 0.60 | 0.58 | 0.58 |
| 3D siamese + cosine (supervised) | 0.60 | 0.56 | 0.55 | 0.55 |