Functionally Modular and Interpretable Temporal Filtering for Robust Segmentation

10/09/2018 · Jörg Wagner et al.

The performance of autonomous systems heavily relies on their ability to generate a robust representation of the environment. Deep neural networks have greatly improved vision-based perception systems but still fail in challenging situations, e.g. sensor outages or heavy weather. These failures are often introduced by data-inherent perturbations, which significantly reduce the information provided to the perception system. We propose a functionally modularized temporal filter, which stabilizes an abstract feature representation of a single-frame segmentation model using information of previous time steps. Our filter module splits the filter task into multiple less complex and more interpretable subtasks. The basic structure of the filter is inspired by a Bayes estimator consisting of a prediction and an update step. To make the prediction more transparent, we implement it using a geometric projection and estimate its parameters. This additionally enables the decomposition of the filter task into static representation filtering and low-dimensional motion filtering. Our model can cope with missing frames and is trainable in an end-to-end fashion. Using photorealistic, synthetic video data, we show the ability of the proposed architecture to overcome data-inherent perturbations. The experiments especially highlight advantages introduced by an interpretable and explicit filter module.


1 Introduction

The performance of autonomous systems, such as mobile robots or self-driving cars, is heavily influenced by their ability to generate a robust representation of the current environment. Errors in the environment representation are propagated to subsequent processing steps and are hard to recover from. A common error, for example, is a missed detection of an object, which might lead to a fatal crash. In order to increase the reliability and safety of autonomous systems, robust methods for observing and interpreting the environment are required.

Deep learning based methods have greatly advanced the state of the art of perception systems. Especially vision-based perception benchmarks (e.g. Cityscapes [Cordts et al. (2016)] or Caltech [Dollár et al. (2009)]) are dominated by approaches utilizing deep neural networks. From a safety perspective, a major disadvantage of such benchmarks is that they are recorded at daytime under idealized environment conditions. To deploy autonomous systems in an open-world scenario without any human supervision, one not only has to guarantee their reliability in good conditions, but also has to make sure that they still work in challenging situations (e.g. sensor outages or heavy weather). One source of such challenges are perturbations inherent in the data, which significantly reduce the information provided to the perception system. In accordance with the classification of uncertainties [Kiureghian and Ditlevsen (2009), Kendall and Gal (2017)], we denote failures originating from data-inherent perturbations as aleatoric failures. These failures cannot be resolved using a more powerful model or additional training data. To solve aleatoric failures, one has to enhance the information provided to the perception system. This can be achieved by fusing the information of multiple sensors, utilizing context information, or by considering temporal information. A second class of failures are epistemic failures, which are model or dataset dependent. They can be mitigated by using more training data and/or a more powerful model [Kiureghian and Ditlevsen (2009)].

In this work, we focus on tackling aleatoric failures of a single frame semantic segmentation model using temporal consistency. Temporal integration is achieved by recurrently filtering a representation of the model using a functionally modularized filter (Fig. 1). In contrast to other available approaches, our filter consists of multiple submodules, decomposing the filter task into less complex and more transparent subtasks. The basic structure of the filter is inspired by a Bayes estimator, consisting of a prediction step and an update step.

Figure 1: Overview of the functionally modularized representation filter with its subcomponents. For visualization purposes we use images to represent feature maps.

We model the prediction of the representation as an explicit geometric projection given estimates of the scene geometry and the scene dynamics. The scene geometry and dynamics are represented as a per-pixel depth and a 6-DoF camera motion. Both parameters are estimated within the filter using two task-specific subnetworks.

The decomposition of the prediction task into a model-based transformation as well as a depth and a motion estimation introduces several advantages. Instead of having to learn the dynamics of a high-dimensional representation, we can now model motion separately in a low-dimensional space. The overall filter can therefore be subdivided into two subfilters: a motion filter, which predicts and integrates low-dimensional camera motion, and a feature filter, which handles the integration and prediction of abstract scene features.

An advantage of our approach is its improved transparency, interpretability, and explicitness. Within the filter, we estimate two human-interpretable representations: a depth map and a camera motion (Fig. 1, blue boxes). These representations can be used to inspect the functionality of the model, to split the filter into pre-trainable subnetworks, or to debug and validate network behavior. Besides its modularity, our model is trainable in an end-to-end fashion. In contrast to other methods, the proposed filter also works in cases when the current image is not available. Methods that, for example, rely on optical flow fail in such situations due to their inability to compute a meaningful warping.

2 Related Work

In this section, we give an overview of approaches that use temporal information to make segmentation models more robust against aleatoric failures.

Feature-level temporal filtering. A common approach to temporally stabilize network predictions is the use of feature-level filters. These filters are applied to one or several feature representations, which are integrated using information of previous time steps. Several works implement such a filter using fully learned, model-free architectures. Fayyaz et al. (2016) and Valipour et al. (2017) generate a feature representation for each image in a sequence and use recurrent neural networks to temporally filter them. Jin et al. (2016) utilize a sequence of previous images to predict a feature representation of the current image. The predicted representation is fused with the one of the current image and propagated through a decoder network. The Recurrent Fully Convolutional DenseNet Wagner et al. (2018) utilizes a hierarchical filter concept to increase the robustness of a segmentation model. Being model-free, these filters require many parameters and are therefore harder to train. Due to their low interpretability, it is quite difficult to include constraints and to inspect or debug their behavior.

A second class of feature-level filters utilizes a partially model-based approach to integrate features. These approaches use an explicit model to implement the temporal propagation of features and learn a subnetwork to fuse the propagated features with features of the current time step. A common model to implement the propagation is optical flow. The replacement field parametrizing the flow can be predicted in the model Vu et al. (2018) or computed using classical methods Gadde et al. (2017); Nilsson and Sminchisescu (2016). These models are well suited to reduce epistemic failures, but often fail to resolve aleatoric failures. This is due to their dependence on the availability of the current frame. More sophisticated feature propagation models exist Zhou et al. (2017); Mahjourian et al. (2016, 2018); Yin and Shi (2018), which additionally constrain the transformation. Such a model was recently used to temporally aggregate learned features within a multi-task model Radwan et al. (2018). Our model is also partially model-based and utilizes a more sophisticated propagation model similar to Radwan et al. (2018). In contrast to all presented model-based approaches, our filter is not dependent on the availability of the current frame.

Post-processing based temporal integration. Some approaches use post-processing steps to integrate the predictions of single-frame segmentation models. Lei and Todorovic (2016) propose the Recurrent-Temporal Deep Field model for video segmentation, which combines a convolutional neural network, a recurrent temporal restricted Boltzmann machine, as well as a conditional random field. Kundu et al. (2016) propose a long-range spatio-temporal regularization using a conditional random field operating on a feature space, optimized to minimize the distance between features associated with corresponding points.

Our temporal integration approach differs from these post-processing methods in that it integrates rich feature representations instead of segmentations. The modular structure of our filter, with its human-interpretable representations, also makes it more transparent.

Spatio-temporal fusion. Other approaches build semantic video segmentation networks using spatio-temporal features. Tran et al. (2016) and Zhang et al. (2014) use 3D convolutions to compute such features. The Recurrent Convolutional Neural Network of Pavel et al. (2017) is another spatio-temporal architecture. This method uses layer-wise recurrent self-connections as well as top-down connections to stabilize representations. These approaches require a large number of parameters. Additionally, it is quite difficult to integrate physical constraints.

3 Functionally Modularized Temporal Filtering

The aim of this work is to improve the robustness of a deep neural network that receives a measurement $\mathbf{x}_t$ and produces a pixel-wise semantic segmentation $\mathbf{s}_t$. We assume the model consists of two parts: a feature encoder $f_{\mathrm{enc}}$ and a semantic decoder $f_{\mathrm{dec}}$. The feature encoder generates an abstract feature representation $\mathbf{z}_t$ of the image $\mathbf{x}_t$. This representation is up-sampled and refined by the semantic decoder to produce a dense segmentation $\mathbf{s}_t$:

$\mathbf{z}_t = f_{\mathrm{enc}}(\mathbf{x}_t), \qquad \mathbf{s}_t = f_{\mathrm{dec}}(\mathbf{z}_t)$   (1)

Due to data-inherent perturbations, the representation $\mathbf{z}_t$ is only an approximation of the true feature representation that would be obtained without perturbations. Using a temporal filter, we try to improve the estimate $\tilde{\mathbf{z}}_t$ of the features and, as a result, the estimate of the semantic decoder:

$\tilde{\mathbf{z}}_t,\, \mathbf{h}_t = f_{\mathrm{filter}}(\mathbf{z}_t, \mathbf{h}_{t-1}), \qquad \tilde{\mathbf{s}}_t = f_{\mathrm{dec}}(\tilde{\mathbf{z}}_t)$   (2)

All prior knowledge about scene features and dynamics, aggregated from previous time steps, is encoded in the hidden state $\mathbf{h}_{t-1}$.

Framing Eq. 2 in the context of a Bayesian estimator, the recurrent filter module has to propagate the belief about the hidden state one time step into the future, update the belief using the current filter input $\mathbf{z}_t$, and compute an improved estimate of the true feature representation. To make our filter module more transparent, we adopt the basic structure of a Bayesian estimator and split the filter into a prediction and an update module:

$\hat{\mathbf{h}}_t = f_{\mathrm{pred}}(\mathbf{h}_{t-1}), \qquad \tilde{\mathbf{z}}_t,\, \mathbf{h}_t = f_{\mathrm{update}}(\hat{\mathbf{h}}_t, \mathbf{z}_t)$   (3)

The prediction module propagates the hidden state, while the update module refines it using the information in $\mathbf{z}_t$ to derive an improved estimate $\tilde{\mathbf{z}}_t$ of the encoder representation. The prediction module therefore has to learn the complex dynamics of a high-dimensional hidden state. To increase explainability and divide the prediction task into easier subtasks, we split the hidden state into a high-dimensional static state $\mathbf{h}^s_t$ encoding all scene features and a low-dimensional dynamic state $\mathbf{h}^d_t$ encoding scene dynamics (Fig. 1).

The prediction of the static state can now be performed fully model-based using a geometric projection. This is possible since the prediction only has to account for spatial feature displacements. To compute a valid projection, estimates of the scene geometry and the scene dynamics are required. We encode the scene geometry as a per-pixel depth $\mathbf{d}_t$ and derive it from the static hidden state by means of a depth decoder $f_{\mathrm{depth}}$. A 3D rigid transformation $(\mathbf{R}_t, \mathbf{t}_t)$ is used to characterize the scene dynamics, assuming the dynamics are dominated by camera motion. The predicted static hidden state $\hat{\mathbf{h}}^s_t$ is updated in a second module using the new information of the input $\mathbf{z}_t$. These two modules form the static feature filter.

Scene dynamics, represented by the low-dimensional state $\mathbf{h}^d_t$, are filtered in a second subfilter. A motion estimation module is used to project from the high-dimensional scene feature space into a low-dimensional motion feature space. This transformation is fully learned, enabling the model to generate a representation well-suited for motion integration.

By decoupling motion and scene features, it is much easier to incorporate auxiliary information such as acceleration data of the sensor. This kind of information can now be fused in a much more targeted way with the appropriate motion features derived from image pairs (see the fusion and motion estimation modules in Fig. 1). An additional advantage of the decoupling is the global modelling of camera motion: the motion is guaranteed to be consistent across spatial scene features and can be estimated using correlations across full image pairs.
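To make this decomposition concrete, the following minimal PyTorch-style sketch outlines one possible forward pass of such a filter. It illustrates the structure described above and is not the authors' implementation: all layer widths, the single gate convolution (which merges the two kernels of Eq. 9 into one convolution over the concatenated inputs), and the externally supplied projection function are assumptions.

import torch
import torch.nn as nn

class ModularTemporalFilter(nn.Module):
    """Sketch of the functionally modularized filter: a static feature filter
    (geometric prediction + gated update) and a low-dimensional motion filter.
    Layer sizes are illustrative assumptions, not the paper's values."""

    def __init__(self, feat_ch=128, motion_feat=128, motion_state=128):
        super().__init__()
        # depth decoder f_depth: static state -> inverse depth (Sec. 3.1)
        self.depth_decoder = nn.Sequential(
            nn.Conv2d(feat_ch, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 64, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 1, 1), nn.ReLU(),
        )
        # motion estimator: pair of encoder features -> motion features (Sec. 3.2)
        self.motion_estimator = nn.Sequential(
            nn.Conv2d(2 * feat_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, motion_feat), nn.ReLU(),
        )
        # model-free temporal motion integration (GRU)
        self.motion_gru = nn.GRUCell(motion_feat, motion_state)
        self.pose_head = nn.Linear(motion_state, 6)   # translation + rotation sines
        # data-dependent update gate w_t (cf. Eq. 9), one value per pixel
        self.gate = nn.Conv2d(2 * feat_ch, 1, 3, padding=1)

    def forward(self, z_t, z_prev, h_s, h_d, project):
        """project(h_s, inv_depth, pose) stands for the geometric projection of Eq. 8."""
        # ---- motion filter ----
        m_t = self.motion_estimator(torch.cat([z_prev, z_t], dim=1))
        h_d = self.motion_gru(m_t, h_d)
        pose = self.pose_head(h_d)            # (tx, ty, tz, sin a, sin b, sin c)
        # ---- static feature filter ----
        inv_depth = self.depth_decoder(h_s)
        h_s_pred = project(h_s, inv_depth, pose)          # prediction step
        w = torch.sigmoid(self.gate(torch.cat([z_t, h_s_pred], dim=1)))
        h_s = w * z_t + (1.0 - w) * h_s_pred              # update step (cf. Eq. 10)
        return h_s, h_d, inv_depth, pose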

Another way of looking at our model is that it builds on an underlying multi-task model:

$(\mathbf{s}_t,\, \mathbf{d}_t,\, \mathbf{R}_t,\, \mathbf{t}_t) = f_{\mathrm{mt}}(\mathbf{x}_{t-1}, \mathbf{x}_t),$   (4)

predicting a segmentation, a depth map, and a 3D rigid transformation. The encoder representation is integrated over time using an additional filter module, which utilizes the decoder outputs to propagate previous knowledge. As a result, the decoders either operate on a filtered encoder representation or are filtered separately (see the motion filter in Section 3.2), meaning that the functionality of our model does not depend on the availability of new inputs $\mathbf{x}_t$. This property sets our filter apart from other approaches.

The overall filter is set up to increase transparency and interpretability by modularizing functionalities, using model-based computations, and introducing human-interpretable representations. Compared to other architectures, it is hence much easier to debug and validate the model, inspect intermediate results, and pre-train subnetworks. These properties are also particularly relevant with regard to safety analysis. From a multi-task perspective, the two auxiliary tasks may also benefit segmentation, due to the implicit regularization Ruder (2017).

3.1 Feature Filter

Depth Estimation. We compute a per-pixel depth using a decoder network operating on the filtered static representation. The depth decoder consists of three convolutional layers with kernel sizes 3×3, 1×1, and 1×1, respectively. We apply batch normalization and use ReLU nonlinearities in each layer. The predicted depth is therefore always positive and valid. The first two layers share a fixed number of features, and the last layer predicts one value per pixel. Instead of directly predicting depth values, we let the decoder provide the inverse depth $\xi = 1/d$, which puts less focus on wrong predictions at larger distances. For supervision during training, we use two losses: an L1 loss on the inverse depth,

$\mathcal{L}_{\mathrm{depth}} = \sum_{i,j} \bigl| \xi_{i,j} - \xi^{gt}_{i,j} \bigr|,$   (5)

and a scale-invariant gradient loss Ummenhofer et al. (2017) to take dependencies of depths into account:

$g_h[\xi](i,j) = \left( \dfrac{\xi_{i+h,j} - \xi_{i,j}}{|\xi_{i+h,j}| + |\xi_{i,j}|},\; \dfrac{\xi_{i,j+h} - \xi_{i,j}}{|\xi_{i,j+h}| + |\xi_{i,j}|} \right)^{\!\top}$   (6)
$\mathcal{L}_{\mathrm{grad}} = \sum_{h \in \{1,2,4,8,16\}} \sum_{i,j} \bigl\| g_h[\xi](i,j) - g_h[\xi^{gt}](i,j) \bigr\|_2$   (7)
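A possible implementation of the two depth losses is sketched below, assuming dense ground-truth inverse depth maps of shape B x 1 x H x W. The set of gradient spacings follows the scale-invariant gradient loss of Ummenhofer et al. (2017) and, like the small epsilon in the denominator and the averaging over pixels, should be treated as an assumption.

import torch

def l1_inverse_depth_loss(pred, gt):
    """Eq. 5: L1 loss on the inverse depth (tensors of shape B x 1 x H x W)."""
    return (pred - gt).abs().mean()

def scale_invariant_gradient(x, h, eps=1e-6):
    """Discrete, normalized gradient g_h (Eq. 6) with spacing h (in pixels)."""
    dx = (x[..., :, h:] - x[..., :, :-h]) / (x[..., :, h:].abs() + x[..., :, :-h].abs() + eps)
    dy = (x[..., h:, :] - x[..., :-h, :]) / (x[..., h:, :].abs() + x[..., :-h, :].abs() + eps)
    return dx, dy

def scale_invariant_gradient_loss(pred, gt, spacings=(1, 2, 4, 8, 16)):
    """Eq. 7: L2 distance between normalized gradients of prediction and ground truth."""
    loss = 0.0
    for h in spacings:
        pdx, pdy = scale_invariant_gradient(pred, h)
        gdx, gdy = scale_invariant_gradient(gt, h)
        # crop both components to the common valid region so they align
        pdx, gdx = pdx[..., :-h, :], gdx[..., :-h, :]
        pdy, gdy = pdy[..., :, :-h], gdy[..., :, :-h]
        diff = torch.stack([pdx - gdx, pdy - gdy], dim=-1)
        loss = loss + diff.norm(dim=-1).mean()
    return loss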

Prediction / Geometric Projection. To make the prediction of features more explicit, we use a geometric projection Zhou et al. (2017); Mahjourian et al. (2016). Let $\mathbf{p}_{t-1}$ be the homogeneous coordinates of a pixel at time step $t-1$, $\mathbf{K}$ the camera intrinsic matrix, and $\mathbf{T}_{t-1 \rightarrow t}$ the rigid transformation defined by $(\mathbf{R}_t, \mathbf{t}_t)$. The projection can be implemented as:

$\hat{\mathbf{p}}_t \sim \mathbf{K}\, \mathbf{T}_{t-1 \rightarrow t}\, d_{t-1}(\mathbf{p}_{t-1})\, \mathbf{K}^{-1}\, \mathbf{p}_{t-1}$   (8)

To keep the notation short, we avoided all conversions related to homogeneous coordinates. The resulting coordinates $\hat{\mathbf{p}}_t$ are continuous and have to be discretized. Additionally, it is necessary to account for ambiguities in cases where multiple pixels at time step $t-1$ are assigned to the same pixel at time step $t$. We resolve these ambiguities by using the transformed pixels with the smaller depth (objects closer to the sensor). The projection is differentiable with respect to the scene features. In contrast to other methods, our implementation does not depend on information of time step $t$. This is an important property for resolving aleatoric failures.
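The projection and the depth-based resolution of ambiguities could be implemented roughly as follows. This is an illustrative, single-image sketch with nearest-neighbor discretization and a scatter-based z-buffer; tensor shapes, the z-buffer construction, and the tie-breaking are assumptions and not the paper's code.

import torch

def forward_project(features, depth, K, R, t):
    """Sketch of Eq. 8: scatter features from time step t-1 into the frame at
    time step t and resolve collisions by keeping the point closest to the camera.
    Shapes: features (C, H, W), depth (H, W), K and R (3, 3), t (3,).
    The value assignment is differentiable w.r.t. the features; the computed
    indices are not. Ties in the z-buffer are resolved arbitrarily."""
    C, H, W = features.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(3, -1).float()
    # back-project to 3D at t-1, apply the rigid motion, project with K
    cam = R @ (torch.linalg.inv(K) @ pix * depth.reshape(1, -1)) + t.reshape(3, 1)
    proj = K @ cam
    z = proj[2]
    u = (proj[0] / z.clamp(min=1e-6)).round().long()
    v = (proj[1] / z.clamp(min=1e-6)).round().long()
    src = torch.arange(H * W)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    u, v, z, src = u[valid], v[valid], z[valid], src[valid]
    tgt = v * W + u
    # z-buffer: keep, per target pixel, the transformed point with the smallest depth
    zbuf = torch.full((H * W,), float("inf"))
    zbuf.scatter_reduce_(0, tgt, z, reduce="amin")
    keep = z <= zbuf[tgt]
    warped = torch.zeros_like(features).reshape(C, -1)
    warped[:, tgt[keep]] = features.reshape(C, -1)[:, src[keep]]
    return warped.reshape(C, H, W)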

Update / Feature Fusion. The update module enables the network to weight the predicted representation $\hat{\mathbf{h}}^s_t$ and the input representation $\mathbf{z}_t$ depending on the information contained in these two representations (data-dependent weighting). For each pixel position, a weighting value is estimated that indicates whether one can rely on prior knowledge (value zero) or on the information of new inputs (value one). This weight matrix $\mathbf{w}_t$ is calculated similarly to the gates of a convolutional LSTM, but contains only one value per pixel instead of one value per pixel and feature:

$\mathbf{w}_t = \sigma\!\left( \mathbf{W}_z * \mathbf{z}_t + \mathbf{W}_h * \hat{\mathbf{h}}^s_t + b \right)$   (9)

The convolutional operator is indicated by $*$, $\mathbf{W}_z$ and $\mathbf{W}_h$ are 3×3 kernels, and $b$ is a bias. Using $\mathbf{w}_t$ and element-wise multiplications $\odot$, the update module computes:

$\mathbf{h}^s_t = \mathbf{w}_t \odot \mathbf{z}_t + (1 - \mathbf{w}_t) \odot \hat{\mathbf{h}}^s_t$   (10)
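A minimal sketch of this update module is given below, with the kernel sizes taken from the text; everything else (channel count, bias placement) is an assumption.

import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Data-dependent fusion of the input features z_t and the predicted
    static state (cf. Eqs. 9 and 10): one gate value per pixel."""

    def __init__(self, feat_ch):
        super().__init__()
        self.w_z = nn.Conv2d(feat_ch, 1, kernel_size=3, padding=1, bias=False)
        self.w_h = nn.Conv2d(feat_ch, 1, kernel_size=3, padding=1, bias=True)

    def forward(self, z_t, h_pred):
        w = torch.sigmoid(self.w_z(z_t) + self.w_h(h_pred))      # Eq. 9
        return w * z_t + (1.0 - w) * h_pred                      # Eq. 10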

3.2 Motion Filter

The motion filter consists of two components: a motion estimator and a motion integration module. Both modules are model-free and learned during training (see Section 4.1).

Motion Estimation. Using the decoder network $f_{\mathrm{motion}}$, the motion estimation module learns a projection from the high-dimensional scene feature space to the low-dimensional motion feature space. To stabilize the motion estimates, the projected features are combined in a fusion module with acceleration data of the camera. The motion estimation module is depicted in Fig. 2. Pairs of encoder representations $(\mathbf{z}_{t-1}, \mathbf{z}_t)$, concatenated along the feature dimension, are used as the input of the motion decoder. We apply batch normalization and utilize ReLU nonlinearities in each convolutional and fully connected layer.

Figure 2: Motion estimation module. Conv(f, s): convolutional layer with f filters and a stride of s; FC: fully connected layer. Inputs are concatenated along the feature dimension.
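Since the exact layer configuration is defined by Fig. 2, the following sketch only illustrates the general structure of the motion estimation module: strided convolutions reduce the concatenated feature pair to low-dimensional motion features, which are then fused with acceleration data. All layer counts, widths, and the dimensionality of the acceleration input are assumptions.

import torch
import torch.nn as nn

class MotionEstimator(nn.Module):
    """Sketch of the motion estimation module; the real layer configuration
    is given in Fig. 2 and is only approximated here."""

    def __init__(self, feat_ch=128, motion_feat=128, accel_dim=6):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU())
        self.conv = nn.Sequential(block(2 * feat_ch, 128, 2),
                                  block(128, 128, 2),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc = nn.Sequential(nn.Linear(128, motion_feat),
                                nn.BatchNorm1d(motion_feat), nn.ReLU())
        # fusion of image-based motion features with acceleration data
        self.fuse = nn.Sequential(nn.Linear(motion_feat + accel_dim, motion_feat),
                                  nn.BatchNorm1d(motion_feat), nn.ReLU())

    def forward(self, z_prev, z_t, accel):
        m = self.fc(self.conv(torch.cat([z_prev, z_t], dim=1)))
        return self.fuse(torch.cat([m, accel], dim=1))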

Temporal Motion Integration. If the input is noisy, the motion features $\mathbf{m}_t$ computed in the motion estimator contain only limited information. In order to still obtain a meaningful motion estimate, we integrate motion features over time in a model-free filter. This filter is based on a gated recurrent unit (GRU) Cho et al. (2014) and is defined by:

$\mathbf{r}_t = \sigma\!\left( \mathbf{W}_r \mathbf{m}_t + \mathbf{U}_r \mathbf{h}^d_{t-1} + \mathbf{b}_r \right)$   (11)
$\mathbf{u}_t = \sigma\!\left( \mathbf{W}_u \mathbf{m}_t + \mathbf{U}_u \mathbf{h}^d_{t-1} + \mathbf{b}_u \right)$   (12)
$\mathbf{h}^d_t = \mathbf{u}_t \odot \tanh\!\left( \mathbf{W} \mathbf{m}_t + \mathbf{U} (\mathbf{r}_t \odot \mathbf{h}^d_{t-1}) + \mathbf{b} \right) + (1 - \mathbf{u}_t) \odot \mathbf{h}^d_{t-1}$   (13)

To infer the 3D rigid camera transformation from the filtered hidden state $\mathbf{h}^d_t$, we propagate it through two additional fully connected layers. The output layer predicts the translation vector $\mathbf{t}_t$ and the sines of the rotation angles. The first layer applies batch normalization and uses a ReLU nonlinearity; the output layer uses no nonlinearity for the translation vector and clips the sine estimates to $[-1, 1]$. Based on the clipped values, we compute the rotation matrix $\mathbf{R}_t$.
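A compact sketch of the motion filter head is given below: a GRU cell for Eqs. 11-13, followed by the two fully connected layers and the construction of a rotation matrix from the clipped sines. The hidden sizes, the recovery of the cosines via sqrt(1 - sin^2), and the composition order of the elementary rotations are assumptions.

import torch
import torch.nn as nn

def rotation_from_sines(sines):
    """Build a rotation matrix from clipped sines of the Euler angles (batched).
    Assumes angles in [-90, 90] degrees so the cosines are non-negative."""
    s = sines.clamp(-1.0, 1.0)
    c = torch.sqrt(1.0 - s ** 2)
    sx, sy, sz = s[:, 0], s[:, 1], s[:, 2]
    cx, cy, cz = c[:, 0], c[:, 1], c[:, 2]
    zeros, ones = torch.zeros_like(sx), torch.ones_like(sx)
    Rx = torch.stack([ones, zeros, zeros, zeros, cx, -sx, zeros, sx, cx], dim=1).view(-1, 3, 3)
    Ry = torch.stack([cy, zeros, sy, zeros, ones, zeros, -sy, zeros, cy], dim=1).view(-1, 3, 3)
    Rz = torch.stack([cz, -sz, zeros, sz, cz, zeros, zeros, zeros, ones], dim=1).view(-1, 3, 3)
    return Rz @ Ry @ Rx                       # assumed composition order

class MotionFilter(nn.Module):
    """GRU-based motion integration (Eqs. 11-13) followed by the pose head."""

    def __init__(self, motion_feat=128, state_dim=128, hidden=64):
        super().__init__()
        self.gru = nn.GRUCell(motion_feat, state_dim)
        self.fc1 = nn.Sequential(nn.Linear(state_dim, hidden),
                                 nn.BatchNorm1d(hidden), nn.ReLU())
        self.out = nn.Linear(hidden, 6)       # 3 translations + 3 angle sines

    def forward(self, motion_feat, h_d):
        h_d = self.gru(motion_feat, h_d)
        p = self.out(self.fc1(h_d))
        translation, sines = p[:, :3], p[:, 3:]
        return translation, rotation_from_sines(sines), h_d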

Motion Supervision. All parameters of the motion filter are trained using ground-truth camera translation vectors $\mathbf{t}^{gt}_t$ and rotation matrices $\mathbf{R}^{gt}_t$. The losses are based on the relative transformation between the predicted and ground-truth motion, as defined by Vijayanarasimhan et al. (2017):

$\mathcal{L}_{\mathrm{trans}} = \bigl\| (\mathbf{R}^{gt}_t)^{\top} (\mathbf{t}_t - \mathbf{t}^{gt}_t) \bigr\|_2$   (14)
$\mathcal{L}_{\mathrm{rot}} = \arccos\!\left( \dfrac{\mathrm{tr}\bigl( \mathbf{R}_t (\mathbf{R}^{gt}_t)^{\top} \bigr) - 1}{2} \right)$   (15)
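The two motion losses can be computed from batched rotation matrices and translation vectors as sketched below; the convention used for the relative transformation follows the reconstruction above and should be treated as an assumption.

import torch

def motion_losses(R_pred, t_pred, R_gt, t_gt, eps=1e-6):
    """Translation norm (Eq. 14) and rotation angle (Eq. 15) of the relative
    transformation between predicted and ground-truth camera motion (batched)."""
    # relative transformation: ground-truth inverse composed with the prediction
    R_rel = R_gt.transpose(1, 2) @ R_pred
    t_rel = torch.einsum("bij,bj->bi", R_gt.transpose(1, 2), t_pred - t_gt)
    loss_trans = t_rel.norm(dim=1).mean()
    # rotation angle from the trace, clamped for numerical stability of arccos
    cos_angle = (R_rel.diagonal(dim1=1, dim2=2).sum(dim=1) - 1.0) / 2.0
    loss_rot = torch.acos(cos_angle.clamp(-1.0 + eps, 1.0 - eps)).mean()
    return loss_trans, loss_rot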

4 Experiments

4.1 Implementation Details

Dataset. We evaluate our filter using the SceneNet RGB-D McCormac et al. (2017) dataset, which consists of 5M photorealistically rendered RGB-D images recorded from 15K indoor trajectories. Besides the camera motion, all scenes are assumed static. Due to its simulated nature, the dataset provides labels for semantic segmentation, depth estimation, and camera motion estimation. We split the training data into a training and validation set and use the provided validation data to set up the test set. For training, we use all non-overlapping sequences of length 7 generated from the training trajectories. The test set is constructed by sampling 5 non-overlapping sequences of length 7 from each test trajectory, resulting in 5,000 test sequences.

To add aleatoric uncertainty, all sequences are additionally perturbed with noise, clutter, and changes in lighting conditions. Noise is simulated by adding zero-mean Gaussian noise to each pixel. Clutter is introduced by setting subregions of each image to the pixel mean, computed on a per sequence basis. The clutter is generated once per sequence and applied to each frame. Thus, the resulting clutter pattern is the same in each frame, comparable to dirt on the camera lens. To simulate rapid changes in lighting conditions, we increase or decrease the intensity of frames by a random value and let this offset decay over time. Such a noise pattern occurs, for example, when the light is suddenly switched off in a room. We include a more detailed description of the used perturbations in the supplementary material.

Unfiltered Baseline. We use the Pyramid Scene Parsing Network (PSPNet) Zhao et al. (2017) as the basis for all architectures (Fig. 3, highlighted in green). The used PSPNet is comparatively small to keep the computational effort and the required memory of the filtered models manageable.

To train our filter module, we additionally need ground-truth depth maps and camera motions. In order to make the comparison of the resulting filtered architecture with the unfiltered baseline fairer, we use a multi-task version of the PSPNet (MPSPNet) in the evaluation. This model operates on image pairs and additionally predicts camera motion and per-pixel depth maps. It can thus also take advantage of all the benefits of multi-task learning Ruder (2017). The full MPSPNet (see Fig. 3) uses the depth decoder introduced in Section 3.1 as well as the motion decoder introduced in Section 3.2. To predict a valid rigid transformation, we reuse the last fully connected layer of the motion filter.

Figure 3: Multi-task PSPNet. Conv(f,s): convolutional layer with f filters and a stride of s; Pool(s): Pooling level with kernel size s producing 32 features.

Filtered models. Building upon MPSPNet, we set up our filtered version using the functionally modularized filter concept introduced in Section 3. We call our filtered model FMTNet.

As an additional temporally filtered baseline, we use a model-free, feature-level filter. Such a filter is well suited to solve aleatoric failures, as it does not necessarily require information of the current frame. We use a filter module (denoted by MFF) similar to the one introduced in Wagner et al. (2018) (Fig. 4). This filter module receives the representation of MPSPNet as input and generates an improved estimate (cf. Eq. 2).

Figure 4: Structure of the filter module MFF.

In the following, we will refer to the MPSPNet with model-free filter MFF as MFF-MPSPNet. To be comparable with respect to the filter complexity, the number of parameters in the filter MFF matches the number of parameters in our modularized filter. In the case of our filter, we count the parameters of the depth and motion decoders as part of the filter, since these decoders are required for filtering. The use of all three decoders in MFF-MPSPNet guarantees comparable training signals, but is not necessary. Hence, we do not assign the depth and motion decoder weights to the filter MFF, resulting in MFF-MPSPNet having 1.4 times the parameters of FMTNet.

Training Procedure. All models MPSPNet, FMTNet, and MFF-MPSPNet are trained using the multi-task loss introduced by Kendall and Gal (2017), which learns the optimal weighting between the cross-entropy segmentation loss, the two depth losses, as well as the two motion losses. We train using Adam Kingma and Ba (2014) with a weight decay of 0.0001 and apply dropout with probability 0.1 in the decoders. All components of FMTNet and MFF-MPSPNet that do not belong to the filter are initialized with the corresponding weights of the trained MPSPNet.
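As a rough sketch, the learned loss weighting and the optimizer setup could look as follows. The log-variance parameterization is a common simplification of the weighting of Kendall and Gal (2017) and, like the grouping into five losses, an assumption.

import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Homoscedastic-uncertainty weighting for the segmentation, depth, and
    motion losses; one learned log-variance per task (simplified form)."""

    def __init__(self, num_losses=5):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_losses))

    def forward(self, losses):
        # sum_i exp(-s_i) * L_i + s_i, with s_i = log sigma_i^2
        total = 0.0
        for loss, s in zip(losses, self.log_vars):
            total = total + torch.exp(-s) * loss + s
        return total

# optimizer setup as described in the text (weight decay from the paper):
# criterion = UncertaintyWeightedLoss(num_losses=5)
# params = list(model.parameters()) + list(criterion.parameters())
# optimizer = torch.optim.Adam(params, weight_decay=1e-4)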

Due to its modularity, we can additionally pre-train two components of our filter. First, we pre-train all weights of the motion filter, while keeping the encoder weights fixed. Second, we pre-train the weights of the feature update module as well as the encoder, while keeping all decoders fixed. The second training is performed with sequences containing the same image, perturbed with aleatoric noise. Finally, we fine-tune the overall architecture.

4.2 Evaluation

To evaluate the segmentation performance, we use the Mean Intersection over Union (Mean IoU) on test sequences, computed with 13 classes: bed, books, ceiling, chair, floor, furniture, objects, painting, sofa, table, TV, wall, and window. In the first two experiments, we evaluate the motion filter and the update module of the feature filter on toy-like data. In the third experiment, we compare our approach with the unfiltered and filtered baselines using the test dataset described in Section 4.1.
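For reference, the Mean IoU over the 13 classes can be computed from a confusion matrix as sketched below; the ignore label and the averaging over classes present in the data are assumptions, not the paper's evaluation code.

import numpy as np

def mean_iou(pred, gt, num_classes=13, ignore_index=255):
    """Mean IoU over all classes, given integer label maps of the same shape.
    ignore_index marks unlabeled pixels (assumed convention)."""
    mask = gt != ignore_index
    conf = np.bincount(num_classes * gt[mask].astype(int) + pred[mask].astype(int),
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    return float(np.mean(iou[union > 0]))     # average only over classes present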

Static Feature Integration. To test the functionality of the feature update module, we use a separate static toy-dataset with sequences of length four (see Fig. 5a).

Table 1: Mean IoU of FMTNet on static toy-data, computed for each frame (Frame 1 to Frame 4) in the sequence.
Figure 5: Static toy-data. a): Occluded input sequence; b): Ground-truth semantic segmentation; c): Semantic prediction of FMTNet; d): Update gate, white corresponds to a value of one (use new information).

Each frame of a sequence contains the same clean image (a random image of the SceneNet RGB-D dataset without any of the aleatoric perturbations introduced in Section 4.1), 50% of which is replaced by Gaussian noise. We fine-tune the encoder network and feature update module of FMTNet on the toy-data. Due to the static nature of the sequences, we remove the motion filter and use the identity transformation (static camera) in the feature filter.

As shown in Fig. 5, FMTNet integrates information over time. It has learned a meaningful data-dependent weighting between previous information stored in the hidden filter state and information provided by new frames (see the weights in Fig. 5d). The same behavior can be seen in Tab. 1, which reports the Mean IoU on a per-frame basis, computed using 300,000 test sequences. The performance of our model increases over time as new information becomes available.

Temporal Motion Integration. In order to obtain a meaningful motion estimate for images that do not contain any information, it is essential to propagate and aggregate dynamics over time. Using a dynamic toy-dataset, we evaluate the ability of our motion filter to perform these two tasks. The dataset contains sequences of length 10 for which we have replaced the last five frames with Gaussian noise. In Fig. 6 and Tab. 2, we report the performance of our motion filter, which has been fine-tuned on the dynamic toy-data. We use the translation norm (Eq. 14) and the rotation angle (Eq. 15) of the relative transformation between predicted and ground-truth motion as evaluation metrics.

Table 2: Translation norm and rotation angle of FMTNet on partially temporally occluded dynamic toy-data, computed for each frame pair (Frame 1-2 to Frame 9-10) using a test set of 30,000 sequences.

In the first four computation steps, the translation norm and rotation angle decrease, as the filter integrates information. In the next five steps, the filter still delivers meaningful predictions, which slowly get worse due to accumulating errors. In Fig. 6, we show the successive projection of the first frame, computed with ground-truth motions and predicted motions, respectively. We use the ground-truth depth maps for both successive projections.

Figure 6: Dynamic toy-data. a): Input sequence; b): Ground-truth projection of frame one; c): Geometric projection of frame one using motion estimates of FMTNet.

Comparison with baselines. To compare our model with the introduced baselines, we use the test set described in Section 4.1. In Tab. 3, we report the Mean IoU of all models on a per-frame basis. The results show a clear superiority of the filtered models (MFF-MPSPNet, FMTNet) over the unfiltered baseline (MPSPNet). Only for the first frame does MPSPNet outperform the filtered architectures, most likely due to not yet well-initialized hidden filter states. Our model surpasses the other filtered baseline and appears less strongly affected by poorly initialized hidden states. Unexpectedly, the performance of our model decreases again from frame 5 onward. We suspect that this is due to the fairly simple design of our feature update module; a more sophisticated fusion approach could counter this behavior. We plan to further investigate this deficiency in the future. An example prediction of FMTNet is included in the supplementary material.

Table 3: Mean IoU of all models (MPSPNet, MFF-MPSPNet, FMTNet (ours)), per frame (Frame 1 to Frame 7), on test sequences which are perturbed by aleatoric noise. MPSPNet: unfiltered baseline; MFF-MPSPNet: filtered baseline; FMTNet: our model.

5 Conclusion

In this paper, we have introduced a functionally modularized temporal representation filter to tackle aleatoric failures of a single-frame segmentation model. The main idea behind the filter is to decompose the filter task into less complex and more transparent subtasks. The resulting filter consists of multiple submodules, which can be pre-trained, debugged, and evaluated independently. In contrast to many other approaches in the literature, our filter also works in challenging situations, e.g. brief sensor outages. Using a simulated dataset, we showed the superiority of our model compared to classical baselines. In the future, we plan to extend our filter to explicitly model dynamic objects in the scene.

References

1 Simulation of Aleatoric Perturbations

Aleatoric failures originate from perturbations inherent in the data. To simulate such perturbations, we add noise, clutter, and changes in lighting conditions to all sequences. In the following, we give a detailed description of the process used to generate these perturbations. After applying the perturbations to the clean sequences generated from the SceneNet RGB-D dataset, we clip the pixel values to the valid image range. Example sequences are shown in Fig. 1.

Noise is simulated by adding independent zero-mean Gaussian noise to each pixel. The variance of the noise is independently sampled for each sequence from a fixed interval.

Clutter is introduced by setting subregions of each image to the per-sequence pixel mean $\bar{\mathbf{x}}$. The clutter is generated once per sequence and applied to each frame. Thus, the resulting clutter pattern is the same in each frame, comparable to dirt on the camera lens. The perturbed images are calculated by:

$\tilde{\mathbf{x}}_t = (1 - \mathbf{m}) \odot \mathbf{x}_t + \mathbf{m} \odot \bar{\mathbf{x}}$   (1)

where $\mathbf{m}$ is the per-sequence clutter mask, $\bar{\mathbf{x}}$ the per-sequence pixel mean, and $\mathbf{x}_t$ the clean image. The clutter mask is generated by summing Gaussian kernels whose centers are randomly placed (uniformly sampled) within the image dimensions. Each kernel is normalized to a maximum value of one. The number of kernels is uniformly sampled for each sequence from a fixed interval. In addition, we uniformly sample the standard deviation of each kernel dimension independently from a fixed interval. The kernels are truncated at three times the standard deviation.
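A possible way to generate such a clutter mask is sketched below; the sampling intervals for the number of kernels and the standard deviations are placeholders, since the original values are not recoverable from this text.

import numpy as np

def clutter_mask(height, width, rng, n_kernels_range=(1, 5), sigma_range=(10.0, 50.0)):
    """Sum of randomly placed, individually max-normalized 2D Gaussian kernels,
    truncated at three standard deviations (sampling ranges are placeholders)."""
    mask = np.zeros((height, width), dtype=np.float32)
    n_kernels = rng.integers(n_kernels_range[0], n_kernels_range[1] + 1)
    for _ in range(n_kernels):
        cy, cx = rng.uniform(0, height), rng.uniform(0, width)
        sy, sx = rng.uniform(*sigma_range), rng.uniform(*sigma_range)
        y, x = np.mgrid[0:height, 0:width]
        kernel = np.exp(-0.5 * (((y - cy) / sy) ** 2 + ((x - cx) / sx) ** 2))
        kernel[(np.abs(y - cy) > 3 * sy) | (np.abs(x - cx) > 3 * sx)] = 0.0
        mask += kernel / kernel.max()
    return np.clip(mask, 0.0, 1.0)

# usage, with seq_mean the per-sequence pixel mean (Eq. 1):
# perturbed = (1 - mask[..., None]) * image + mask[..., None] * seq_mean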

Changes in lighting conditions are simulated by increasing or decreasing the intensity of frames. For each sequence, we uniformly sample one frame $t_0$ and a scaling factor $s$ from a fixed interval. In addition, we draw a multiplier $k$ which, with a probability of 0.5, is either 1 or -1. The perturbed images are calculated by:

$\tilde{\mathbf{x}}_t = \mathbf{x}_t + k \cdot s_t \quad \text{for } t \geq t_0$   (2)
$s_t = s \cdot \gamma^{\,t - t_0}$   (3)
where $\gamma \in (0, 1)$ is the decay factor of the intensity offset.
Figure 1: Example sequences of the data used for training and in the evaluation. One sequence of length 7 is shown per row. Each sequence is perturbed with noise, clutter, and changes in lighting conditions.

2 Example Prediction of FMTNet

In Fig. 2, we show an example prediction of our FMTNet. In addition to visualizing the predicted semantic segmentation (Fig. 2c), we also show the predicted depth map (Fig. 2e) and the update gate (Fig. 2f), which are two of the human interpretable representations computed within our functionally modularized temporal filter. The model is able to predict a meaningful depth map as well as camera motion, which are required to propagate information over time. This is especially visible in the last frame of the sequence – although the last frame is missing, the model is still able to produce a meaningful semantic segmentation. In Fig. 2f, we show the gate of our update module. A white pixel corresponds to a gate value of one, which means that the model uses information provided by the current input frame. A black pixel, on the other hand, corresponds to a gate value of zero – the model relies on prior knowledge of previous frames. As expected, the gate of the first frame is fully white, since the filter has to rely on new information. In the last frame, the gate is mainly black, since no meaningful information is provided in that frame. The gate values at the right border of all frames are more white, as the model has never seen these areas before due to camera motion.

Figure 2: Example prediction of our FMTNet. a): Input sequence; b): Ground-truth semantic segmentation; c): Predicted semantic segmentation; d): Ground-truth depth; e): Predicted depth; f): Update gate, white corresponds to a value of one (use new information).