Semantic segmentation, which provides pixel-wise labels, has witnessed tremendous progress recently. As shown in , it outputs dense predictions and partitions the image into semantically meaningful parts. It has numerous applications including autonomous driving, augmented reality, and robotics . The work in  presented a fully convolutional network and provided a method for end-to-end training of semantic segmentation. It yields a coarse heat map followed by in-network upsampling to produce dense predictions. Following the work on fully convolutional networks, many attempts were made to improve single-image semantic segmentation. In  a full deconvolution network with stacked deconvolution layers is presented. The work in 
provided a method to incorporate contextual information using recurrent neural networks. However, one missing element is that the real world is not a set of still images. In a real-time camera feed or a recorded video, much information is perceived from temporal cues. For example, the difference between a walking and a standing person is hardly recognizable in still images, but it is obvious in a video.
Video segmentation has been extensively investigated using classical approaches. The work in  reviews the literature on binary video segmentation. It mainly focuses on semi-supervised approaches  that propagate the labels in one or more annotated frames to the entire video. In  a method that combines Recurrent Neural Networks (RNN) and CNN for RGB-D video segmentation is presented. However, their proposed architecture is difficult to train because of the vanishing gradient problem. It does not utilize pre-trained networks, and it cannot process large images since the number of its parameters is quadratic with respect to the input size.
Gated recurrent architectures, such as the Long Short Term Memory (LSTM) [10], were proposed to alleviate such training difficulties. They have been successfully employed in many tasks, captioning and text processing in particular . The Gated Recurrent Unit (GRU) is a more recent gated architecture. It has been shown that GRU achieves performance similar to LSTM but with a reduced number of gates and thus fewer parameters . The main bottleneck of these previous architectures is that they only work with vectors and therefore do not preserve spatial information in images or feature maps. In  a convolutional GRU is introduced for learning spatio-temporal features from videos and is used for video captioning and action recognition.
Inspired by these methods, we design a gated recurrent FCN architecture that addresses many of the shortcomings of previous approaches. Our contributions include:
A novel architecture that incorporates temporal data into FCN for video segmentation. A convolutional gated recurrent FCN architecture is designed to efficiently utilize spatiotemporal information.
An end-to-end training method for online video segmentation.
An experimental analysis on video binary segmentation and video semantic segmentation is presented on recent benchmarks.
An overview of the suggested method is shown in Figure 1, where a sliding window of input video frames is fed to the recurrent fully convolutional network (RFCN), resulting in the segmentation of the last frame. The paper is structured as follows. Section 2 discusses the necessary background. The proposed method is presented in detail in Section 3. Section 4 presents experimental results and discussion on recent benchmarks. Finally, Section 5 summarizes the article and presents potential future directions.
This section reviews FCN and RNN, which are referred to repeatedly throughout the article.
2.1 Fully Convolutional Networks (FCN)
Convolutional neural networks were initially designed with image classification tasks in mind. Later, it became apparent that CNNs can also be used for segmentation by performing pixel-wise classification. However, dense pixel-wise labeling is extremely inefficient with a regular CNN. In  the idea of a fully convolutional network trained for pixel-wise semantic segmentation is presented. In this approach, all the fully connected layers of the CNN are replaced with convolutional layers. This design allows the network to accommodate any input size, since it is no longer restricted to the fixed input size of fully connected layers. More importantly, a coarse segmentation output (called a heat map) can now be obtained with only one forward pass of the network.
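The equivalence between fully connected and convolutional layers can be illustrated with a small NumPy sketch (the dimensions here are illustrative, not those of any network in the paper): a dense layer over a flattened feature map computes the same values as convolution kernels spanning the whole map.

```python
import numpy as np

rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 32))        # H x W x C feature map
W_fc = rng.standard_normal((7 * 7 * 32, 10))  # fully connected: 1568 -> 10

# Fully connected layer applied to the flattened map.
fc_out = fmap.reshape(-1) @ W_fc              # shape (10,)

# The same weights viewed as ten 7x7x32 convolution kernels applied at one
# position; on larger inputs this "convolutionalized" layer slides spatially,
# producing a heat map instead of a single vector.
kernels = W_fc.T.reshape(10, 7, 7, 32)
conv_out = np.array([(fmap * k).sum() for k in kernels])

assert np.allclose(fc_out, conv_out)
```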
This coarse map needs to be up-sampled to the original size of the input image. Simple bilinear interpolation can be used; however, an adaptive up-sampling has been shown to give better results. In  a new layer with learnable filters that applies up-sampling within the network is presented. It is an efficient way to learn the up-sampling weights through back-propagation. These layers are commonly known as deconvolution layers. Their filters can be seen as a basis to reconstruct the input image, or simply as a way to increase the spatial size of feature maps. A skip architecture can be used for an even finer segmentation: heat maps from earlier pooling layers are merged with the final heat map for an improved segmentation. This architecture is termed FCN-16s or FCN-8s depending on the pooling layers used.
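A naive transposed convolution can be sketched as follows (a toy single-channel example with a uniform kernel; in the network, the kernel weights are learned by back-propagation):

```python
import numpy as np

def deconv2d(x, k, stride):
    """Naive transposed convolution: each input pixel 'stamps' a scaled
    copy of the kernel onto the (larger) output map."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * k
    return out

# A 4x4 heat map upsampled with stride 4 and an 8x8 kernel -> 20x20 map.
heat = np.ones((4, 4))
kernel = np.ones((8, 8)) / 64.0   # a bilinear-like kernel would be learned
up = deconv2d(heat, kernel, stride=4)
assert up.shape == (20, 20)
```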
2.2 Recurrent Neural Networks
Recurrent neural networks can be applied to a sequence of inputs and are able to capture the temporal relations between them. A hidden unit in each recurrent cell gives it a dynamic memory that changes according to what it held before and the new input. The simplest recurrent unit can be modeled as in Equation 1:

$$h_t = \sigma(W_x x_t + W_h h_{t-1}), \qquad y_t = \sigma(W_y h_t) \qquad (1)$$

where $h_t$ is the hidden unit, $x_t$ is the input, $y_t$ is the output, $t$ is the current time step, and $\sigma$ is the activation function.
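A minimal NumPy sketch of this recurrent update (the tanh activation and the dimensions are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, W_y):
    h_t = np.tanh(W_x @ x_t + W_h @ h_prev)  # hidden-state update
    y_t = W_y @ h_t                           # output (pre-activation)
    return h_t, y_t

rng = np.random.default_rng(0)
d_in, d_h, d_out = 3, 4, 2
W_x, W_h, W_y = (rng.standard_normal(s)
                 for s in [(d_h, d_in), (d_h, d_h), (d_out, d_h)])
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):      # unroll over a 5-step sequence
    h, y = rnn_step(x, h, W_x, W_h, W_y)
assert h.shape == (d_h,) and y.shape == (d_out,)
```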
When propagating the error in recurrent units, the chain rule shows that the derivative at each node depends on all earlier nodes. This chain of dependencies can be arbitrarily long, depending on the length of the input sequence. It was observed that this causes the vanishing gradient problem, especially for longer input sequences. Gated recurrent architectures have been proposed as a solution and have been empirically successful in many tasks. Two popular choices of these architectures are presented in this section.
2.2.1 Long Short Term Memory (LSTM)
LSTM utilizes three gates to control the flow of signal within the cell. These gates are the input, output, and forget gates, each with its own set of weights that can be learned with back-propagation. At the inference stage, the values in the hidden unit change based on the sequence of inputs it has seen and can be roughly interpreted as a memory, which is used for the prediction of the current state. Equations 2 show how the gate values and hidden states are computed:

$$i_t = \sigma(W_i x_t + U_i h_{t-1}), \quad f_t = \sigma(W_f x_t + U_f h_{t-1}), \quad o_t = \sigma(W_o x_t + U_o h_{t-1})$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1}), \qquad h_t = o_t \odot \tanh(c_t) \qquad (2)$$

Here $i_t$, $f_t$, and $o_t$ are the gates, and $c_t$ and $h_t$ are the internal and the hidden state, respectively.
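One LSTM step following Equations 2 can be sketched in NumPy as follows (bias terms omitted, shapes illustrative):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U):
    i = sigmoid(W["i"] @ x + U["i"] @ h)              # input gate
    f = sigmoid(W["f"] @ x + U["f"] @ h)              # forget gate
    o = sigmoid(W["o"] @ x + U["o"] @ h)              # output gate
    c = f * c + i * np.tanh(W["c"] @ x + U["c"] @ h)  # internal state
    h = o * np.tanh(c)                                # hidden state
    return h, c

rng = np.random.default_rng(1)
d_in, d_h = 3, 4
W = {k: rng.standard_normal((d_h, d_in)) for k in "ifoc"}
U = {k: rng.standard_normal((d_h, d_h)) for k in "ifoc"}
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.standard_normal((6, d_in)):
    h, c = lstm_step(x, h, c, W, U)
assert h.shape == (d_h,) and c.shape == (d_h,)
```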
2.2.2 Gated Recurrent Unit (GRU)
GRU uses the same gating principle as LSTM but with a simpler architecture. Therefore, it is not as computationally expensive as LSTM and uses less memory. Equations 3 describe the mathematical model of the GRU:

$$z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1})$$
$$\tilde{h}_t = \tanh(W x_t + U (r_t \odot h_{t-1})), \qquad h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \qquad (3)$$

Here $z_t$ and $r_t$ are the gates and $h_t$ is the hidden state.
GRU is simpler than LSTM since the output gate is removed from the cell, and the output flow is controlled indirectly by the two remaining gates. The cell memory is also updated differently: LSTM updates its hidden state by summation over the flow after the input gate and the forget gate, whereas GRU assumes a correlation between memorizing and forgetting and controls both with a single gate .
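A corresponding sketch of one GRU step (Equations 3), again with biases omitted and illustrative shapes:

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, W, U):
    z = sigmoid(Wz @ x + Uz @ h)            # update gate
    r = sigmoid(Wr @ x + Ur @ h)            # reset gate
    h_tilde = np.tanh(W @ x + U @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde      # one gate trades memory vs. update

rng = np.random.default_rng(2)
d_in, d_h = 3, 4
Wz, Wr, W = (rng.standard_normal((d_h, d_in)) for _ in range(3))
Uz, Ur, U = (rng.standard_normal((d_h, d_h)) for _ in range(3))
h = np.zeros(d_h)
for x in rng.standard_normal((6, d_in)):
    h = gru_step(x, h, Wz, Uz, Wr, Ur, W, U)
assert h.shape == (d_h,)
```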
An overview of the method is presented in Figure 1. A recurrent fully convolutional network (RFCN) is designed to utilize spatiotemporal information for video segmentation. The recurrent unit in the network can be LSTM, GRU, or Conv-GRU (explained in Section 3.2). A sliding window over the video frames is used as input to the network, which allows on-line video segmentation as opposed to off-line batch processing. The window data is forwarded through the RFCN to yield a segmentation for the last frame in the sliding window. Note that the recurrent unit can be applied to the coarse segmentation (heat map) or to intermediate feature maps. The network is trained end-to-end using a pixel-wise classification logarithmic loss. Two main approaches are explored in our method: (1) conventional recurrent units, and (2) convolutional recurrent units. Specifically, four different network architectures under these two approaches are used, as detailed in the following sections.
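The sliding-window inference loop described above can be sketched as follows (the `rfcn` callable is a stand-in for the trained network, faked here so the loop is runnable):

```python
from collections import deque

def segment_stream(frames, rfcn, window=3):
    """Yield a segmentation once `window` frames have been buffered."""
    buf = deque(maxlen=window)
    for frame in frames:
        buf.append(frame)
        if len(buf) == window:
            # The RFCN consumes the whole window and predicts a mask for
            # the LAST frame only: on-line, no future frames are needed.
            yield rfcn(list(buf))

fake_rfcn = lambda win: ("mask_for", win[-1])
out = list(segment_stream(range(5), fake_rfcn, window=3))
assert out == [("mask_for", 2), ("mask_for", 3), ("mask_for", 4)]
```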
3.1 Conventional Recurrent Architecture for Segmentation
RFC-Lenet is a fully convolutional version of Lenet, a well-known shallow network. Because it is common, we use it for baseline comparisons on synthetic data. We embed this model in a recurrent node to capture temporal data; the final network is named RFC-Lenet in Table 1.
The output of the deconvolution is a 2D map of dense predictions, which is then flattened into a 1D vector as the input to a conventional recurrent unit. The recurrent unit takes this vector for each frame in the sliding window and outputs the segmentation of the last frame (Figure 1).
| RFC-Lenet | RFC-12s | RFC-VGG |
| --- | --- | --- |
| input: 28×28 | input: 120×180 | input: 240×360 |
| Conv: F(5), P(10), D(20) | Conv: F(5), S(3), P(10), D(20) | Conv: F(11), S(4), P(40), D(64) |
| Pool 2×2 | Pool 2×2 | Pool 3×3 |
| Conv: F(5), D(50) | Conv: F(5), D(50) | Conv: F(5), P(2), D(256) |
| Conv: F(3), D(500) | Conv: F(3), D(500) | Conv: F(3), P(1), D(256) |
| Conv: F(1), D(1) | Conv: F(1), D(1) | Conv: F(3), P(1), D(256) |
| | | Conv: F(3), P(1), D(256) |
| | | Conv: F(3), D(512) |
| | | Conv: F(3), D(128) |
| DeConv: F(10), S(4) | Flatten | ConvGRU: F(3), D(128) |
| Flatten | GRU: W(100×100) | Conv: F(1), D(1) |
| GRU: W(784×784) | DeConv: F(10), S(4) | DeConv: F(20), S(8) |

Table 1: Network architectures. F denotes the filter size, S the stride in the convolution layer, P the padding, and D the number of feature maps generated by the layer (it is the same as in the previous layer if not mentioned).
RFC-12s is another architecture used for baseline comparisons, specifically to compare end-to-end and decoupled training as detailed in Section 4. The RFC-Lenet architecture requires a large weight matrix in the recurrent unit, since it processes vectors of the flattened full-sized image. One way to overcome this problem is to apply the recurrent layer on the down-sampled heat map before deconvolution. This leads to the second architecture, termed RFC-12s in Table 1. In this network, vectorized coarse output maps are given to the recurrent unit, which operates on a sequence of these coarse maps and produces a coarse map corresponding to the last frame in the sequence. The deconvolution layer then generates dense predictions from the output of the recurrent unit. In this way, the recurrent unit works on much smaller vectors, which reduces the variance in the network.
3.2 Convolutional Gated Recurrent Architecture (Conv-GRU) for Segmentation
Conventional recurrent units are designed for processing text, not images. Using them on images without modification therefore causes two main issues: 1) the weight matrices become very large, since vectorized images are large; 2) spatial connectivity between pixels is ignored. For example, using a recurrent unit on a feature map with spatial size $H \times W$ and $C$ channels requires on the order of $(H \times W \times C)^2$ weights. This causes a memory bottleneck and inefficient computation. It also creates a larger search space for the optimizer, making the network harder to train.
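The scale of the problem is easy to see with a back-of-the-envelope count (the feature-map size below is hypothetical, chosen only for illustration):

```python
# Parameter count: a dense recurrent unit on a flattened feature map
# versus a convolutional one.
H, W, C = 60, 90, 128          # feature-map height, width, channels
k, F = 3, 128                  # conv kernel size and number of filters

dense_weights = (H * W * C) ** 2          # one (HWC x HWC) recurrent matrix
conv_weights = k * k * C * F              # one k x k x C_in x C_out kernel

assert dense_weights == 477_757_440_000   # hundreds of billions: infeasible
assert conv_weights == 147_456            # a few hundred thousand: tractable
```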
Convolutional recurrent units, akin to regular convolutional layers, convolve three-dimensional weights with their input. Therefore, to convert a gated architecture to a convolutional one, the dot products are replaced with convolutions. Equations 4 show this modification for the GRU, where $*$ denotes convolution:

$$z_t = \sigma(W_z * x_t + U_z * h_{t-1}), \qquad r_t = \sigma(W_r * x_t + U_r * h_{t-1})$$
$$\tilde{h}_t = \tanh(W * x_t + U * (r_t \odot h_{t-1})), \qquad h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \qquad (4)$$

The weights are of size $k_h \times k_w \times C_{in} \times C_{out}$, where $k_h$, $k_w$, $C_{in}$, and $C_{out}$ are the kernel's height and width, the number of input channels, and the number of filters, respectively. Learning filters that convolve with the entire image, instead of learning individual weights for each pixel, is much more efficient. This layer can be applied to either the final heat map or intermediate feature maps.
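A single-channel sketch of one Conv-GRU step following Equations 4 (the unit in the paper convolves multi-channel kernels; one channel keeps the example short):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def conv_same(x, k):
    """'Same'-padded 2D convolution (single channel, loop-based for clarity)."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def conv_gru_step(x, h, K):
    z = sigmoid(conv_same(x, K["Wz"]) + conv_same(h, K["Uz"]))   # update gate
    r = sigmoid(conv_same(x, K["Wr"]) + conv_same(h, K["Ur"]))   # reset gate
    h_tilde = np.tanh(conv_same(x, K["W"]) + conv_same(r * h, K["U"]))
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(3)
K = {n: 0.1 * rng.standard_normal((3, 3))
     for n in ["Wz", "Uz", "Wr", "Ur", "W", "U"]}
h = np.zeros((8, 8))
for x in rng.standard_normal((4, 8, 8)):   # four-frame sequence of 8x8 maps
    h = conv_gru_step(x, h, K)
assert h.shape == (8, 8)                   # spatial layout is preserved
```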
RFC-VGG in Table 1 is an example of this approach: intermediate feature maps are fed into a convolutional gated recurrent unit, and a convolutional layer then converts its output to a heat map. It is based on the VGG-F network. The reason for using the RFC-VGG architecture is to exploit pre-trained weights from VGG-F. Initializing our filters with VGG-F trained weights alleviates over-fitting, as these weights are the result of extensive training on the ImageNet dataset. The last two pooling layers are dropped from VGG-F to allow a finer segmentation with a reduced network. Figure 2 shows the detailed architecture of RFC-VGG.
RFCN-8s is the recurrent version of the FCN-8s architecture and is used in our semantic segmentation experiments. The FCN-8s network is commonly used in many state-of-the-art segmentation methods, as it provides more detailed segmentation. It is loaded with pre-trained VGG-16 weights and employs the skip architecture that combines the pool3 and pool4 layers with the final layer for a finer segmentation. In RFCN-8s, the convolutional gated recurrent unit is placed before the pool3 layer, where the skip connections start branching.
This section describes the experimental analysis and results. First, the datasets are presented, followed by the training method and hyper-parameters. Then both quantitative and qualitative analyses are presented. All experiments are performed with our open-source library, which supports convolutional gated recurrent architectures. The implementation is based on Theano and supports using different FCN architectures as a recurrent node. The key features of this implementation are: (1) the ability to use any arbitrary CNN or FCN architecture as a recurrent node in order to utilize temporal information; (2) support for three gated recurrent architectures: LSTM, GRU, and Conv-GRU; (3) a deconvolution layer for in-network upsampling and support for the skip architecture for finer segmentation. A public version of the code and the trained models will be published after the anonymous review.
Moving MNIST is synthesized from the original MNIST dataset by moving the characters in random but consistent directions. The segmentation labels are generated by thresholding the input images after translation. We consider each translated image a new frame; therefore, we can have image sequences of arbitrary length.
Change Detection Dataset provides a realistic, diverse set of videos with pixel-wise labeling of moving objects. The dataset includes both indoor and outdoor scenes and focuses on moving-object segmentation. For foreground detection, videos with similar objects, such as humans or cars, were selected. Accordingly, we chose six videos: Pedestrians, PETS2006, Badminton, CopyMachine, Office, and Sofa.
SegTrack V2 is a collection of fourteen video sequences with manually segmented objects of interest. The dataset has sequences with either a single object or multiple objects. In the latter case, we consider all the segmented objects as one and perform foreground segmentation.
Davis includes fifty densely annotated videos with pixel-accurate ground truth for the most salient object. The dataset covers multiple challenges such as occlusions, illumination changes, fast motion, motion blur, and nonlinear deformation.
Synthia is a synthetic semantic segmentation dataset for urban scenes. It contains pixel-level annotations for thirteen classes and over 200,000 images with different weather conditions (rainy, sunset, winter) and seasons (summer, fall). Since the dataset is large, only a portion of it, from the Highway sequence under the summer condition, is used in our experiments.
CityScapes is a real dataset focused on urban scenes, gathered by capturing videos while driving in different cities. It contains 5000 finely annotated and 20000 coarsely annotated images for thirty classes. The coarse annotation covers all frames in each video, and every twentieth image in a video sequence is finely annotated. It provides various locations (fifty cities) and weather conditions throughout different seasons.
The main experimental setup uses Adadelta  for optimization, as it gave much faster convergence than stochastic gradient descent. The loss function used throughout the experiments is the logistic loss, and the maximum number of training epochs is 500. The evaluation metrics used for binary video segmentation are precision, recall, F-measure, and IoU. Their formulation is shown in Equations 5, 6, and 7, where tp, fp, and fn denote true positives, false positives, and false negatives, respectively, and $\text{IoU} = tp / (tp + fp + fn)$:

$$\text{precision} = \frac{tp}{tp + fp} \qquad (5)$$
$$\text{recall} = \frac{tp}{tp + fn} \qquad (6)$$
$$\text{F-measure} = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} \qquad (7)$$

For multi-class segmentation, mean class IoU, per-class IoU, mean category IoU, and per-category IoU are used. Note that category IoU considers only the category of classes instead of the specific classes when computing tp, fp, and fn.
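The binary-segmentation metrics can be sketched directly from their definitions (toy masks, plain Python):

```python
def seg_metrics(pred, gt):
    """Precision, recall, F-measure, and IoU from flattened binary masks."""
    tp = sum(p and g for p, g in zip(pred, gt))          # true positives
    fp = sum(p and not g for p, g in zip(pred, gt))      # false positives
    fn = sum(g and not p for p, g in zip(pred, gt))      # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f_measure, iou

# Flattened 1/0 masks for a toy 3x3 frame.
pred = [1, 1, 0, 1, 0, 0, 0, 0, 1]
gt   = [1, 1, 1, 1, 0, 0, 0, 0, 0]
p, r, f, iou = seg_metrics(pred, gt)
assert (p, r) == (0.75, 0.75)
assert abs(f - 0.75) < 1e-9 and abs(iou - 0.6) < 1e-9
```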
| FC-VGG Extra Conv | 0.7519 | 0.7466 | 0.7493 | 0.7813 |
In the first set of experiments, a fully convolutional VGG, denoted FC-VGG, is used as a baseline and compared against its recurrent version RFC-VGG. To avoid over-fitting, the first five layers of the network are initialized with the weights of a pre-trained network and only lightly tuned. Table 2 shows the results of the experiments on the SegTrack V2 and DAVIS datasets. In these experiments, the data is split into one half for training and the other half as a held-out test set. RFC-VGG outperforms the FC-VGG architecture on both datasets, by about 3% on DAVIS and 5% on SegTrack. A comparison between RFC-VGG and FC-VGG with an extra convolutional layer of the same filter size (FC-VGG Extra Conv) is also presented. This result confirms that the boost in performance comes from using the recurrent network to exploit temporal data, not merely from adding extra convolutional filters.
Figure 3 shows the qualitative analysis of RFC-VGG against FC-VGG. It shows that utilizing temporal information through the recurrent unit gives a better segmentation of the object. This can be attributed to the implicit learning of the motion of segmented objects in the recurrent units. It also shows that using Conv-GRU as the recurrent unit enables the extraction of temporal information from feature maps. Note that the performance of an RFCN network depends on its baseline fully convolutional network. Thus, RFCN networks can be seen as a method to improve a baseline segmentation network by embedding it in a recurrent module that utilizes temporal data.
The same architecture was used for semantic segmentation on the Synthia dataset after modifying it to support the thirteen classes. A comparison between FC-VGG and RFC-VGG is presented in terms of mean class IoU and per-class IoU for some of the classes. Table 3 presents the results on the Synthia dataset. RFC-VGG shows a 5.7% improvement over FC-VGG in mean class IoU, and per-class IoU generally improves as well. Interestingly, the largest improvement is for the car and pedestrian classes, which benefit the most from a learned motion pattern, in contrast to the sky or buildings, which are mostly static. The first row of Figure 4 shows the qualitative analysis on Synthia; the second image shows the car's enhanced segmentation with RFC-VGG.
Finally, experimental results on the Cityscapes dataset using FCN-8s and its recurrent version RFCN-8s are shown in Table 4, using mean category IoU and per-category IoU for evaluation. The results clearly demonstrate that RFCN-8s outperforms FCN-8s by 3.5% in mean category IoU. RFCN-8s generally improves the per-category IoU, with the highest improvement in the vehicle category. Hence, again, the largest improvement is in the category most affected by temporal data. The bottom row of Figure 4 shows the qualitative evaluation on Cityscapes, comparing FCN-8s with RFCN-8s; the third image clearly shows that the moving bus is better segmented with the recurrent version. Note that the experiments were conducted on images of lower resolution than the original data and with a reduced version of FCN-8s due to memory constraints; therefore, finer categories such as human and object are poorly segmented. Using the original resolution should alleviate this problem, and the recurrent version should improve accordingly.
4.3 Additional Experiments
In this section, experiments using conventional recurrent layers for segmentation are presented. These experiments provide further analysis of different recurrent units and their effect on the RFCN. A comparison between end-to-end and decoupled training is also presented. The moving MNIST and change detection datasets are used for this part. Images in the MNIST dataset are relatively small (28×28), which allows us to test our RFC-Lenet network. A fully convolutional Lenet is compared against RFC-Lenet; Table 5 shows the results. RFC-Lenet with GRU outperforms FC-Lenet with a 2% improvement. Note that GRU gave better results than LSTM in  as well.
We used real data from the change detection benchmark for the second set of experiments. Throughout these experiments, the training and test splits are 70% and 30% of each sequence. The baseline FC-12s is compared against its recurrent version RFC-12s, and also against decoupled training of FC-12s and the recurrent unit, where the GRU is trained on the heat-map output of FC-12s. Table 6 shows the results of these experiments, where the RFC-12s network had a 1.4% improvement over FC-12s. We observe less relative improvement than with Conv-GRU because regular GRU ignores spatial connectivity. However, incorporating the temporal data still helped the segmentation accuracy.
5 Conclusion and Future Work
We presented a novel method that exploits implicit temporal information in videos to improve segmentation. This approach utilizes a convolutional gated recurrent network, which allows it to use preceding frames when segmenting the current frame. We performed extensive experiments on six datasets with different segmentation objectives. We showed that embedding FCN networks as a recurrent module consistently improved the results across datasets: specifically, a 5% improvement on SegTrack and a 3% improvement on DAVIS in F-measure, a 5.7% improvement on Synthia in mean IoU, and a 3.5% improvement on CityScapes in mean category IoU, all over a plain fully convolutional network. Our suggested architecture can be applied to any FCN-like single-frame segmentation network, which can then process videos in an online fashion with improved performance.
For future work, we would like to enhance the results of the semantic segmentation and apply our recurrent method to more single-image segmentation networks, for a more complete comparison with the state of the art. Another direction is to explore the potential of incorporating shape constraints from depth data within the network, thus combining motion and shape cues for better video segmentation.
-  V. Badrinarayanan, F. Galasso, and R. Cipolla. Label propagation in video sequences. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3265–3272. IEEE, 2010.
-  N. Ballas, L. Yao, C. Pal, and A. Courville. Delving deeper into convolutional networks for learning video representations. arXiv preprint arXiv:1511.06432, 2015.
-  F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
-  Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.
-  K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
-  J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. arXiv preprint arXiv:1604.01685, 2016.
-  J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634, 2015.
-  N. Goyette, P.-M. Jodoin, F. Porikli, J. Konrad, and P. Ishwar. Changedetection. net: A new change detection benchmark dataset. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pages 1–8. IEEE, 2012.
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
-  J. Johnson, A. Karpathy, and L. Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. arXiv preprint arXiv:1511.07571, 2015.
-  F. Li, T. Kim, A. Humayun, D. Tsai, and J. M. Rehg. Video segmentation by tracking many figure-ground segments. In Proceedings of the IEEE International Conference on Computer Vision, pages 2192–2199, 2013.
-  G. Lin, C. Shen, I. Reid, et al. Efficient piecewise training of deep structured models for semantic segmentation. arXiv preprint arXiv:1504.01013, 2015.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
-  O. Miksik, V. Vineet, M. Lidegaard, R. Prasaath, M. Nießner, S. Golodetz, S. L. Hicks, P. Pérez, S. Izadi, and P. H. Torr. The semantic paintbrush: Interactive 3d mapping and recognition in large outdoor spaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3317–3326. ACM, 2015.
-  H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520–1528, 2015.
-  M. S. Pavel, H. Schulz, and S. Behnke. Recurrent convolutional neural networks for object-class segmentation of rgb-d video. In Neural Networks (IJCNN), 2015 International Joint Conference on, pages 1–8. IEEE, 2015.
-  F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
-  F. Perazzi, O. Wang, M. Gross, and A. Sorkine-Hornung. Fully connected object proposals for video segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3227–3234, 2015.
-  S. A. Ramakanth and R. V. Babu. Seamseg: Video object segmentation using patch seams. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 376–383. IEEE, 2014.
-  G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3234–3243, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  V. Vineet, O. Miksik, M. Lidegaard, M. Nießner, S. Golodetz, V. A. Prisacariu, O. Kähler, D. W. Murray, S. Izadi, P. Perez, and P. H. S. Torr. Incremental dense semantic stereo fusion for large-scale semantic scene reconstruction. In IEEE International Conference on Robotics and Automation (ICRA), 2015.
-  O. Vinyals, S. V. Ravuri, and D. Povey. Revisiting recurrent neural networks for robust asr. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 4085–4088. IEEE, 2012.
-  F. Visin, K. Kastner, A. Courville, Y. Bengio, M. Matteucci, and K. Cho. Reseg: A recurrent neural network for object segmentation. arXiv preprint arXiv:1511.07053, 2015.
-  D. Wolf, J. Prankl, and M. Vincze. Enhancing semantic segmentation for robotics: The power of 3-d entangled forests. IEEE Robotics and Automation Letters, 1(1):49–56, 2016.
-  M. D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
-  H. Zhang, A. Geiger, and R. Urtasun. Understanding high-level semantics by modeling traffic patterns. In Proceedings of the IEEE International Conference on Computer Vision, pages 3056–3063, 2013.
-  S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1529–1537, 2015.