Convolutional Gated Recurrent Networks for Video Segmentation

11/16/2016 ∙ by Mennatullah Siam, et al. ∙ University of Alberta

Semantic segmentation has recently witnessed major progress, where fully convolutional neural networks have been shown to perform well. However, most previous work has focused on improving single-image segmentation; to our knowledge, no prior work has made use of temporal video information in a recurrent network. In this paper, we introduce a novel approach to implicitly utilize temporal data in videos for online semantic segmentation. The method relies on a fully convolutional network that is embedded into a gated recurrent architecture. This design receives a sequence of consecutive video frames and outputs the segmentation of the last frame. Convolutional gated recurrent networks are used for the recurrent part to preserve spatial connectivity in the image. Our proposed method can be applied in both online and batch segmentation. This architecture is tested for both binary and semantic video segmentation tasks. Experiments are conducted on the recent benchmarks SegTrack V2, Davis, CityScapes, and Synthia. Using recurrent fully convolutional networks improved the baseline network performance in all of our experiments: namely, 5% and 3% improvements in F-measure on SegTrack and Davis respectively, a 5.7% improvement in mean class IoU on Synthia, and a 3.5% improvement in mean category IoU on CityScapes. The performance of the RFCN network depends on its baseline fully convolutional network. Thus, the RFCN architecture can be seen as a method to improve its baseline segmentation network by exploiting spatiotemporal information in videos.


1 Introduction

Semantic segmentation, which provides pixel-wise labels, has witnessed tremendous progress recently. As shown in [14, 16, 29, 25], it outputs dense predictions, partitioning the image into semantically meaningful parts. It has numerous applications, including autonomous driving [28, 21, 7], augmented reality [15], and robotics [23, 26]. The work in [14] presented a fully convolutional network (FCN) and provided a method for end-to-end training of semantic segmentation: it yields a coarse heat map followed by in-network upsampling to obtain dense predictions. Following the work on fully convolutional networks, many attempts were made to improve single-image semantic segmentation. In [16], a full deconvolution network with stacked deconvolution layers is presented. The work in [25] provided a method to incorporate contextual information using recurrent neural networks. However, one missing element is that the real world is not a set of still images: in live camera feeds or recorded video, much information is perceived from temporal cues. For example, the difference between a walking and a standing person is hardly recognizable in a still image, but obvious in a video.

Figure 1: Overview of the proposed recurrent FCN method. The recurrent part is unrolled for better visualization.

Video segmentation has been extensively investigated using classical approaches. The work in [18] reviews the literature on binary video segmentation, focusing mainly on semi-supervised approaches [1, 19, 20] that propagate the labels of one or more annotated frames to the entire video. In [17], a method that combines Recurrent Neural Networks (RNN) and CNNs for RGB-D video segmentation is presented. However, their proposed architecture is difficult to train because of the vanishing gradient problem, does not utilize pre-trained networks, and cannot process large images, as its number of parameters grows quadratically with the input size.

Gated recurrent architectures alleviate the vanishing and exploding gradient problems in recurrent networks. The Long Short Term Memory (LSTM) architecture [10] is one of the earliest attempts to design such networks. LSTMs have been successfully employed in many tasks, captioning and text processing in particular [11, 8]. The Gated Recurrent Unit (GRU) [5] is a more recent gated architecture. It has been shown that GRU performs similarly to LSTM but with fewer gates and thus fewer parameters [6]. The main bottleneck of these architectures is that they operate only on vectors and therefore do not preserve spatial information in images or feature maps. In [2], a convolutional GRU is introduced for learning spatio-temporal features from videos and is used for video captioning and action recognition.

Inspired by these methods, we design a gated recurrent FCN architecture to address many of the shortcomings of previous approaches. Our contributions include:

  • A novel architecture that can incorporate temporal data into FCN for video segmentation. A Convolutional Gated Recurrent FCN architecture is designed to efficiently utilize spatiotemporal information.

  • An end-to-end training method for online video segmentation.

  • An experimental analysis on video binary segmentation and video semantic segmentation is presented on recent benchmarks.

An overview of the suggested method is shown in Figure 1. A sliding window of input video frames is fed to the recurrent fully convolutional network (RFCN), which outputs the segmentation of the last frame. The paper is structured as follows. Section 2 discusses the necessary background. The proposed method is presented in detail in Section 3. Section 4 presents experimental results and discussion on recent benchmarks. Finally, Section 5 summarizes the article and presents potential future directions.

2 Background

This section reviews FCNs and RNNs, which are referred to repeatedly throughout the article.

2.1 Fully Convolutional Networks (FCN)

Convolutional neural networks (CNNs) were initially designed with image classification tasks in mind. Later, it became apparent that CNNs can also be used for segmentation by performing pixel-wise classification. However, dense pixel-wise labeling is extremely inefficient with a regular CNN. In [13], the idea of a fully convolutional network trained for pixel-wise semantic segmentation is presented. In this approach, all the fully connected layers of the CNN are replaced with convolutional layers. This design allows the network to accommodate any input size, since it is no longer restricted by the fixed input size of fully connected layers. More importantly, a coarse segmentation output (called a heat map) can now be obtained with only one forward pass of the network.

This coarse map needs to be up-sampled to the original size of the input image. Simple bilinear interpolation can be used; however, adaptive up-sampling has been shown to give better results. In [13], a new layer with learnable filters that performs up-sampling within the network is presented, providing an efficient way to learn the up-sampling weights through back-propagation. These layers are commonly known as deconvolution layers. The filters of deconvolution layers can be seen as a basis for reconstructing the input image, or simply as a way to increase the spatial size of the feature maps. A skip architecture can be used for even finer segmentation: heat maps from earlier pooling layers are merged with the final heat map for improved segmentation. The resulting architecture is termed FCN-16s or FCN-8s, based on which pooling layers are used.
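As an illustration of how such a deconvolution layer is commonly initialized, the following numpy sketch builds a bilinear interpolation kernel; the kernel-size convention (2·factor − factor mod 2) is an assumption taken from widely used FCN implementations, not something specified in this paper:

```python
import numpy as np

def bilinear_kernel(factor):
    """2D bilinear interpolation kernel for a deconvolution (transposed
    convolution) layer with stride `factor`. The kernel side length
    2*factor - factor % 2 is the convention used in common FCN code."""
    size = 2 * factor - factor % 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    # separable product of two 1D triangle (bilinear) profiles
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

# 4x4 kernel that a stride-2 deconvolution could be initialized with
k = bilinear_kernel(2)
```

Initializing the deconvolution filters this way makes the layer start out as plain bilinear up-sampling, which back-propagation can then refine.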

2.2 Recurrent Neural Networks

Recurrent Neural Networks [24] can be applied to a sequence of inputs and are able to capture the temporal relations between them. A hidden unit in each recurrent cell gives it a dynamic memory that changes according to what it has seen before and the new input. The simplest recurrent unit can be modeled as in Equation 1.

(1a)  h_t = σ(W_h x_t + U_h h_{t−1} + b_h)
(1b)  y_t = σ(W_y h_t + b_y)

where h_t is the hidden unit, x_t is the input, y_t is the output, t is the current time step, and σ is the activation function.
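A minimal numpy sketch of one step of this simple recurrent unit follows; the weight names (W_h, U_h, W_y) match Equation 1, and tanh is used as the activation function σ, which is an assumption for illustration:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_h, U_h, W_y, b_h, b_y):
    """One step of the simplest recurrent unit (Equation 1)."""
    h_t = np.tanh(W_h @ x_t + U_h @ h_prev + b_h)  # new hidden state
    y_t = np.tanh(W_y @ h_t + b_y)                 # output read-out
    return h_t, y_t

# tiny demo: run a random 3-step sequence through the cell
rng = np.random.default_rng(0)
W_h, U_h = rng.standard_normal((4, 3)), rng.standard_normal((4, 4))
W_y = rng.standard_normal((2, 4))
b_h, b_y = np.zeros(4), np.zeros(2)
h = np.zeros(4)
for _ in range(3):
    h, y = rnn_step(rng.standard_normal(3), h, W_h, U_h, W_y, b_h, b_y)
```

Note that the same hidden state h is carried from step to step; this is the "dynamic memory" of the cell.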

When propagating the error through recurrent units, the chain rule makes the derivative at each node dependent on all earlier nodes. This chain of dependencies can be arbitrarily long, depending on the length of the input sequence, and it causes the vanishing gradient problem, especially for longer inputs [4]. Gated recurrent architectures have been proposed as a solution and have been empirically successful in many tasks. Two popular choices of these architectures are presented in this section.

2.2.1 Long Short Term Memory (LSTM)

LSTM [10] utilizes three gates to control the flow of the signal within the cell. These gates are the input, output, and forget gates, each with its own set of weights that can be learned with back-propagation. At inference time, the values in the hidden unit change based on the sequence of inputs it has seen, and can roughly be interpreted as a memory used for predicting the current state. Equations 2 show how the gate values and hidden states are computed: i_t, f_t, and o_t are the gates, and c_t and h_t are the internal memory and the hidden state, respectively.

(2a)  i_t = σ(W_i x_t + U_i h_{t−1} + b_i)
(2b)  f_t = σ(W_f x_t + U_f h_{t−1} + b_f)
(2c)  o_t = σ(W_o x_t + U_o h_{t−1} + b_o)
(2d)  c̃_t = tanh(W_c x_t + U_c h_{t−1} + b_c)
(2e)  c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t
(2f)  h_t = o_t ⊙ tanh(c_t)

where ⊙ denotes element-wise multiplication.
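A direct numpy transcription of these equations is sketched below; the dict keys ('i', 'f', 'o', 'c') are illustrative names for the per-gate weights, not part of the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step (Equations 2). W, U, b are dicts keyed by
    'i' (input gate), 'f' (forget gate), 'o' (output gate),
    'c' (candidate memory)."""
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])
    c_cand = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])
    c_t = f * c_prev + i * c_cand   # forget old memory, admit new
    h_t = o * np.tanh(c_t)          # expose gated memory
    return h_t, c_t

# with all-zero weights every gate is sigmoid(0) = 0.5,
# so the previous memory simply decays by half
n = 2
W = {k: np.zeros((n, n)) for k in 'ifoc'}
b = {k: np.zeros(n) for k in 'ifoc'}
h, c = lstm_step(np.zeros(n), np.zeros(n), np.ones(n), W, W, b)
```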

2.2.2 Gated Recurrent Unit (GRU)

GRU uses the same gating principle as LSTM but with a simpler architecture; therefore, it is less computationally expensive than LSTM and uses less memory. Equations 3 describe the mathematical model of the GRU. Here, z_t and r_t are the update and reset gates, and h_t is the hidden state.

(3a)  z_t = σ(W_z x_t + U_z h_{t−1} + b_z)
(3b)  r_t = σ(W_r x_t + U_r h_{t−1} + b_r)
(3c)  h̃_t = tanh(W_h x_t + U_h (r_t ⊙ h_{t−1}) + b_h)
(3d)  h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t

GRU is simpler than LSTM: the output gate is removed from the cell, and the output flow is controlled indirectly by the two remaining gates. The cell memory is also updated differently. LSTM updates its hidden state by summing the flows after the input and forget gates; GRU, on the other hand, assumes a correlation between memorizing and forgetting and controls both with the single gate z_t.
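The single-gate update is easy to see in code. The sketch below transcribes Equations 3 in numpy; the dict keys ('z', 'r', 'h') are illustrative names for the per-gate weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU step (Equations 3). W, U, b are dicts keyed by
    'z' (update gate), 'r' (reset gate), 'h' (candidate state)."""
    z = sigmoid(W['z'] @ x_t + U['z'] @ h_prev + b['z'])
    r = sigmoid(W['r'] @ x_t + U['r'] @ h_prev + b['r'])
    h_cand = np.tanh(W['h'] @ x_t + U['h'] @ (r * h_prev) + b['h'])
    # the single gate z interpolates between keeping and overwriting memory
    return (1 - z) * h_prev + z * h_cand

# with all-zero weights z = 0.5 and the candidate is 0, so h halves
n = 3
W = {k: np.zeros((n, n)) for k in 'zrh'}
b = {k: np.zeros(n) for k in 'zrh'}
h = gru_step(np.zeros(n), np.ones(n), W, W, b)
```

Compared to the LSTM step, there is no separate internal memory c_t: the hidden state itself is the memory, interpolated by z.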

3 Method

An overview of the method is presented in Figure 1. A recurrent fully convolutional network (RFCN) is designed that utilizes spatiotemporal information for video segmentation. The recurrent unit in the network can be an LSTM, a GRU, or a Conv-GRU (explained in Section 3.2). A sliding window over the video frames is used as input to the network, which allows online video segmentation as opposed to offline batch processing. The window of frames is forwarded through the RFCN to yield a segmentation for the last frame in the sliding window. Note that the recurrent unit can be applied to the coarse segmentation (heat map) or to intermediate feature maps. The network is trained in an end-to-end fashion using a pixel-wise classification logarithmic loss. Two main approaches are explored in our method: (1) conventional recurrent units, and (2) convolutional recurrent units. Four different network architectures under these two approaches are used, as detailed in the following sections.
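The sliding-window inference loop can be sketched as follows; `WINDOW` and `dummy_rfcn` are hypothetical stand-ins for the trained network and its input length, which the paper does not fix here:

```python
import numpy as np
from collections import deque

WINDOW = 4  # number of consecutive frames per RFCN input (assumed)

def segment_stream(frames, rfcn):
    """Online segmentation: keep a sliding window of the most recent
    frames and emit a mask for the last frame of each full window."""
    buf = deque(maxlen=WINDOW)
    for frame in frames:
        buf.append(frame)
        if len(buf) == WINDOW:
            yield rfcn(np.stack(buf))  # mask for the newest frame

# stand-in "network": threshold the last frame of the window
dummy_rfcn = lambda clip: (clip[-1] > 0.5).astype(np.uint8)
frames = [np.full((4, 4), i / 10.0) for i in range(10)]
masks = list(segment_stream(frames, dummy_rfcn))
```

Because the window advances one frame at a time, a mask is produced for every frame after the initial warm-up, which is what makes the method usable online.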

3.1 Conventional Recurrent Architecture for Segmentation

RFC-Lenet is a fully convolutional version of Lenet, a well-known shallow network. Because it is common, we used it for baseline comparisons on synthetic data. We embed this model in a recurrent node to capture temporal data; the final network is named RFC-Lenet in Table 1.

The output of the deconvolution is a 2D map of dense predictions, which is then flattened into a 1D vector and given to a conventional recurrent unit. The recurrent unit takes this vector for each frame in the sliding window and outputs the segmentation of the last frame (Figure 1).

Network Architectures

| RFC-Lenet                | RFC-12s                        | RFC-VGG                         |
| input: 28×28             | input: 120×180                 | input: 240×360                  |
| Recurrent Node:          | Recurrent Node:                | Recurrent Node:                 |
| Conv: F(5), P(10), D(20) | Conv: F(5), S(3), P(10), D(20) | Conv: F(11), S(4), P(40), D(64) |
| Relu                     | Relu                           | Relu                            |
| Pool 2×2                 | Pool 2×2                       | Pool 3×3                        |
| Conv: F(5), D(50)        | Conv: F(5), D(50)              | Conv: F(5), P(2), D(256)        |
| Relu                     | Relu                           | Relu                            |
| Pool 2×2                 | Pool 2×2                       | Pool 3×3                        |
| Conv: F(3), D(500)       | Conv: F(3), D(500)             | Conv: F(3), P(1), D(256)        |
| Relu                     | Relu                           | Relu                            |
| Conv: F(1), D(1)         | Conv: F(1), D(1)               | Conv: F(3), P(1), D(256)        |
| -                        | -                              | Relu                            |
| -                        | -                              | Conv: F(3), P(1), D(256)        |
| -                        | -                              | Relu                            |
| -                        | -                              | Conv: F(3), D(512)              |
| -                        | -                              | Conv: F(3), D(128)              |
| DeConv: F(10), S(4)      | Flatten                        | ConvGRU: F(3), D(128)           |
| Flatten                  | GRU: W(100×100)                | Conv: F(1), D(1)                |
| GRU: W(784×784)          | DeConv: F(10), S(4)            | DeConv: F(20), S(8)             |

Table 1: Proposed networks in detail. F(x) denotes a filter of size x×x, P(x) zero padding of x, S(x) the stride of the convolution layer, and D(x) the number of feature maps generated by the layer (same as the previous layer if not mentioned).

RFC-12s is another architecture used for baseline comparisons, in particular to compare end-to-end and decoupled training as detailed in Section 4. The RFC-Lenet architecture requires a large weight matrix in the recurrent unit, since it processes vectors of the flattened full-sized image. One way to overcome this problem is to apply the recurrent layer on the down-sampled heat map before deconvolution, which leads to the second architecture, termed RFC-12s in Table 1. In this network, vectorized coarse output maps are given to the recurrent unit, which operates on a sequence of these coarse maps and produces a coarse map corresponding to the last frame in the sequence. The deconvolution layer then generates dense predictions from the output of the recurrent unit. In this way, the recurrent unit works on much smaller vectors, which reduces the variance in the network.

3.2 Convolutional Gated Recurrent Architecture (Conv-GRU) for Segmentation

Figure 2: RFC-VGG architecture. A sequence of images is given as input to the network. The output of the FCN embedded inside the recurrent unit is given to a Conv-GRU layer. One last convolutional layer maps the output of the recurrent unit to a coarse segmentation map; the deconvolution layer then up-samples the coarse segmentation into a dense segmentation.

Conventional recurrent units are designed for processing text, not images. Therefore, using them on images without modification causes two main issues: 1) the weight matrices become very large, since vectorized images are long; 2) the spatial connectivity between pixels is ignored. For example, using a recurrent unit on a feature map of spatial size H × W with C channels requires on the order of (H · W · C)² weights. This causes a memory bottleneck and inefficient computation; it also creates a larger search space for the optimizer, making the network harder to train.

Convolutional recurrent units, akin to regular convolutional layers, convolve three-dimensional weights with their input. Therefore, to convert a gated architecture into a convolutional one, the dot products are replaced with convolutions. Equations 4 show this modification for the GRU. The weights are of size k_h × k_w × C_in × C_out, where k_h, k_w, C_in, and C_out are the kernel's height and width, the number of input channels, and the number of filters, respectively. Learning filters that convolve over the entire image, instead of individual weights for each pixel, makes the layer much more efficient. This layer can be applied to either the final heat map or intermediate feature maps.

(4a)  z_t = σ(W_z ∗ x_t + U_z ∗ h_{t−1} + b_z)
(4b)  r_t = σ(W_r ∗ x_t + U_r ∗ h_{t−1} + b_r)
(4c)  h̃_t = tanh(W_h ∗ x_t + U_h ∗ (r_t ⊙ h_{t−1}) + b_h)
(4d)  h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t

where ∗ denotes convolution.
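A minimal single-channel numpy sketch of Equations 4 follows; real Conv-GRU layers operate on multi-channel feature maps, so the single-channel maps, the 3×3 kernels, and the dict key names here are simplifying assumptions for illustration:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2D convolution (single channel, zero padding)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_gru_step(x_t, h_prev, K):
    """One Conv-GRU step (Equations 4): each dot product of the GRU is
    replaced by a convolution, so spatial layout is preserved.
    K holds one kernel per term (illustrative key names)."""
    z = sigmoid(conv2d_same(x_t, K['zx']) + conv2d_same(h_prev, K['zh']))
    r = sigmoid(conv2d_same(x_t, K['rx']) + conv2d_same(h_prev, K['rh']))
    h_cand = np.tanh(conv2d_same(x_t, K['hx']) +
                     conv2d_same(r * h_prev, K['hh']))
    return (1 - z) * h_prev + z * h_cand

# with all-zero kernels the gates are 0.5 and the candidate is 0,
# so the hidden map halves while keeping its spatial shape
K = {k: np.zeros((3, 3)) for k in ['zx', 'zh', 'rx', 'rh', 'hx', 'hh']}
h = conv_gru_step(np.zeros((5, 6)), np.ones((5, 6)), K)
```

The key property is visible in the last line: the hidden state stays a 2D map of the same spatial size, rather than a flattened vector.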

RFC-VGG in Table 1 is an example of this approach, where intermediate feature maps are fed into a convolutional gated recurrent unit, and a convolutional layer then converts its output to a heat map. It is based on the VGG-F [22] network. The reason for switching to the RFC-VGG architecture is to use pre-trained weights from VGG-F: initializing our filters with VGG-F trained weights alleviates over-fitting, as these weights are the result of extensive training on the Imagenet dataset. The last two pooling layers are dropped from VGG-F to allow finer segmentation with a reduced network. Figure 2 shows the detailed architecture of RFC-VGG.

RFCN-8s is the recurrent version of the FCN-8s architecture and is used in our semantic segmentation experiments. FCN-8s is commonly used in many state-of-the-art segmentation methods, as it provides more detailed segmentation. It is initialized with pre-trained VGG-16 weights and employs the skip architecture that combines the pool3 and pool4 layers with the final layer for finer segmentation. In RFCN-8s, the convolutional gated recurrent unit is placed before the pool3 layer, where the skip connections start branching.

4 Experiments

This section describes the experimental analysis and results. First, the datasets are presented, followed by the training method and hyper-parameters used; then both quantitative and qualitative analyses are presented. All experiments are performed with our open-source implementation, which supports convolutional gated recurrent architectures. The implementation is based on Theano [3] and supports using different FCN architectures as a recurrent node. Its key features are: (1) the ability to use any arbitrary CNN or FCN architecture as a recurrent node in order to utilize temporal information; (2) support for three gated recurrent architectures: LSTM, GRU, and Conv-GRU; (3) a deconvolution layer for in-network upsampling, and support for the skip architecture for finer segmentation. A public version of the code and the trained models will be published after the anonymous review.

4.1 Datasets

In this paper six datasets are used: 1) Moving MNIST; 2) Change Detection [9]; 3) SegTrack V2 [12]; 4) Densely Annotated VIdeo Segmentation (Davis) [18]; 5) Synthia [21]; 6) CityScapes [7].

The Moving MNIST dataset is synthesized from the original MNIST by moving the digits in random but consistent directions. The segmentation labels are generated by thresholding the input images after translation. We consider each translated image a new frame, so image sequences of arbitrary length can be generated.
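The synthesis procedure can be sketched in a few lines of numpy; the per-frame shift and threshold values below are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def make_sequence(digit, steps, shift=(1, 2), thresh=0.5):
    """Synthesize a short clip by translating `digit` by a fixed
    per-frame shift (wrapping at the borders); the segmentation label
    for each frame is the thresholded frame itself."""
    frames, labels = [], []
    img = digit.astype(float)
    for _ in range(steps):
        img = np.roll(img, shift=shift, axis=(0, 1))  # consistent motion
        frames.append(img)
        labels.append((img > thresh).astype(np.uint8))
    return np.stack(frames), np.stack(labels)

# a single bright pixel stands in for a digit
digit = np.zeros((6, 6))
digit[2, 2] = 1.0
frames, labels = make_sequence(digit, 3)
```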

Change Detection Dataset [9]: this dataset provides a realistic, diverse set of videos with pixel-wise labeling of moving objects, covering both indoor and outdoor scenes, and focuses on moving-object segmentation. For foreground detection, videos with similar objects, such as humans or cars, were selected. Accordingly, we chose six videos: Pedestrians, PETS2006, Badminton, CopyMachine, Office, and Sofa.

SegTrack V2 [12] is a collection of fourteen video sequences with manually segmented objects of interest. The dataset has sequences with either a single object or multiple objects; in the latter case, we consider all segmented objects as one and perform foreground segmentation.

Davis[18] dataset includes fifty densely annotated videos with pixel accurate groundtruth for the most salient object. Multiple challenges are included in the dataset such as occlusions, illumination changes, fast motion, motion blur and nonlinear deformation.

Synthia [21] is a synthetic semantic segmentation dataset for urban scenes. It contains pixel-level annotations for thirteen classes and has over 200,000 images with different weather conditions (rainy, sunset, winter) and seasons (summer, fall). Since the dataset is large, only a portion of it, the Highway sequence in the summer condition, is used in our experiments.

CityScapes [7] is a real-world dataset focused on urban scenes, gathered by capturing videos while driving in different cities. It contains 5,000 finely annotated and 20,000 coarsely annotated images for thirty classes. The coarse annotation includes segmentation for all frames in each video, and every twentieth image in a video sequence is finely annotated. It covers various locations (fifty cities) and weather conditions across different seasons.

4.2 Results

The main experimental setup uses Adadelta [27] for optimization, as it gave much faster convergence than stochastic gradient descent. The loss function used throughout the experiments is the logistic loss, and the maximum number of training epochs is 500. The evaluation metrics used for binary video segmentation are precision, recall, F-measure, and IoU. Their formulations are shown in Equations 5, 6, and 7, where tp, fp, and fn denote true positives, false positives, and false negatives respectively. For multi-class segmentation, mean class IoU, per-class IoU, mean category IoU, and per-category IoU are used. Note that category IoU considers only the category of classes, instead of the specific classes, when computing tp, fp, and fn.

(5)  precision = tp / (tp + fp)
(6)  recall = tp / (tp + fn)
(7)  F-measure = 2 · precision · recall / (precision + recall)
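These metrics, together with IoU computed as tp / (tp + fp + fn), can be implemented directly from boolean masks; a small sketch:

```python
import numpy as np

def binary_metrics(pred, gt):
    """Precision, recall, F-measure (Eqs. 5-7) and IoU from 0/1 masks."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f_measure, iou

# toy example: one true positive, one false positive, one false negative
p, r, f, iou = binary_metrics(np.array([1, 1, 0, 0]),
                              np.array([1, 0, 1, 0]))
```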
Figure 3: Qualitative results of experiments with the SegTrack V2 and Davis datasets, where network predictions are overlaid on the input. The top row is for FC-VGG and the bottom row is for RFC-VGG.
Precision Recall F-measure IoU
SegTrack V2 FC-VGG 0.7759 0.6810 0.7254 0.7646
RFC-VGG 0.8325 0.7280 0.7767 0.8012
FC-VGG Extra Conv 0.7519 0.7466 0.7493 0.7813
DAVIS FC-VGG 0.6834 0.5454 0.6066 0.6836
RFC-VGG 0.7233 0.5586 0.6304 0.6984
Table 2: Comparison of RFC-VGG with its baseline counterpart on DAVIS and SegTrack

In the first set of experiments, a fully convolutional VGG, denoted FC-VGG, is used as a baseline and compared against the recurrent version, RFC-VGG. To avoid overfitting, the first five layers of the network are initialized with the weights of a pre-trained network and only lightly tuned. Table 2 shows the results on the SegTrack V2 and DAVIS datasets. In these experiments, the data is split into half for training, with the other half kept as a held-out test set. RFC-VGG outperforms the FC-VGG architecture on both datasets, by about 3% on DAVIS and 5% on SegTrack. A comparison between using RFC-VGG and adding an extra convolutional layer with the same filter size (FC-VGG Extra Conv) is also presented. This result confirms that using the recurrent network to exploit temporal data, and not merely adding extra convolutional filters, is the reason for the performance boost.

Figure 3 shows the qualitative comparison of RFC-VGG against FC-VGG. Utilizing temporal information through the recurrent unit gives better segmentation of the object, which can be attributed to the implicit learning of the motion of segmented objects in the recurrent units. It also shows that using Conv-GRU as the recurrent unit enables the extraction of temporal information from feature maps. Note that the performance of an RFCN network depends on its baseline fully convolutional network; thus, RFCN networks can be seen as a method to improve their baseline segmentation network by embedding it into a recurrent module that utilizes temporal data.

Figure 4: Qualitative results of experiments with the Synthia and CityScapes datasets, where network predictions are overlaid on the input. First row: Synthia with FC-VGG. Second row: Synthia with RFC-VGG. Third row: CityScapes with FCN-8s. Fourth row: CityScapes with RFCN-8s.
Mean Class IoU Per-Class IoU
Car Pedestrian Sky Building Road Sidewalk Fence Vegetation Pole
FC-VGG 0.755 0.504 0.275 0.946 0.958 0.840 0.957 0.762 0.883 0.718
RFC-VGG 0.812 0.566 0.487 0.964 0.961 0.907 0.968 0.865 0.909 0.742
Table 3: Semantic Segmentation Results on Synthia Highway Summer Sequence for RFC-VGG compared to FC-VGG
Category IoU Per-category IoU
Flat Nature Sky Construction Vehicle
FCN-8s 0.53 0.917 0.710 0.792 0.683 0.585
RFCN-8s 0.565 0.928 0.739 0.814 0.719 0.652
Table 4: Semantic Segmentation Results on CityScapes for RFCN-8s compared to FCN-8s

The same architecture was used for semantic segmentation on the Synthia dataset after modifying it to support the thirteen classes. A comparison between FC-VGG and RFC-VGG is presented in terms of mean class IoU and per-class IoU for some of the classes. Table 3 presents the results: RFC-VGG improves over FC-VGG by 5.7% in mean class IoU, and the per-class IoU generally improves as well. Interestingly, the largest improvements are on the car and pedestrian classes, which benefit the most from a learned motion pattern, compared to mostly static classes such as sky or buildings. The first row of Figure 4 shows the qualitative results on Synthia; the second image shows the car's enhanced segmentation with RFC-VGG.

Finally, experimental results on the CityScapes dataset using FCN-8s and its recurrent version RFCN-8s are shown in Table 4, evaluated with mean category IoU and per-category IoU. RFCN-8s clearly outperforms FCN-8s, by 3.5% in mean category IoU, and generally improves the per-category IoU, with the largest gain in the vehicle category. Hence, again, the largest improvement is in the category most affected by temporal data. The bottom rows of Figure 4 compare FCN-8s and RFCN-8s on CityScapes; the third image clearly shows that the moving bus is better segmented by the recurrent version. Note that the experiments were conducted on images of lower resolution than the original data, and with a reduced version of FCN-8s, due to memory constraints. Therefore, finer categories such as human and object are poorly segmented; using the original resolution should alleviate this problem, and the recurrent version should improve accordingly.

4.3 Additional Experiments

In this section, experiments using conventional recurrent layers for segmentation are presented. These experiments provide further analysis of different recurrent units and their effect on the RFCN, along with a comparison between end-to-end and decoupled training. The Moving MNIST and Change Detection datasets are used for this part. Images in the MNIST dataset are relatively small (28×28), which allows us to test our RFC-Lenet network (Table 1). A fully convolutional Lenet (FC-Lenet) is compared against RFC-Lenet. Table 5 shows the results: RFC-Lenet with GRU outperforms FC-Lenet by 2%. Note that GRU gave better results than LSTM as well.

Precision Recall F-measure
FC-Lenet 0.868 0.922 0.894
LSTM 0.941 0.786 0.856
GRU 0.955 0.877 0.914
RFC-Lenet 0.96 0.877 0.916
Table 5: Precision, Recall, and F-measure on FC-Lenet, LSTM, GRU, and RFC-Lenet tested on the moving MNIST dataset

We used real data from the Change Detection benchmark for the second set of experiments; the training and test splits are 70% and 30% of each sequence. The baseline FC-12s is compared against its recurrent version, RFC-12s, and against decoupled training of FC-12s and the recurrent unit, where the GRU is trained on the heat-map output of FC-12s. Table 6 shows the results of these experiments: the RFC-12s network gives a 1.4% improvement over FC-12s. We observe a smaller relative improvement than with Conv-GRU because regular GRU ignores spatial connectivity; however, incorporating the temporal data still helped the segmentation accuracy.

Precision Recall F-measure
FC-12s 0.827 0.585 0.685
RFC-12s (D) 0.835 0.587 0.69
RFC-12s (EE) 0.797 0.623 0.7
Table 6: Precision, Recall, and F-measure on the change detection dataset. FC-12s is the baseline FCN network and RFC-12s is its recurrent counterpart. RFC-12s is trained in two ways: decoupled (D), where the GRU layer is first trained with the rest of the network fixed and then the whole network is fine-tuned together; and end-to-end (EE), where the whole network is trained at once.

5 Conclusion and Future Work

We presented a novel method that exploits implicit temporal information in videos to improve segmentation. The approach utilizes a convolutional gated recurrent network, which allows it to use preceding frames when segmenting the current frame. We performed extensive experiments on six datasets with different segmentation objectives and showed that embedding FCN networks in a recurrent module consistently improved the results across datasets: specifically, a 5% improvement on SegTrack and a 3% improvement on Davis in F-measure over a plain fully convolutional network, a 5.7% improvement on Synthia in mean class IoU, and a 3.5% improvement on CityScapes in mean category IoU. Our suggested architecture can be applied to any FCN-like single-frame segmentation network, which can then process videos in an online fashion with improved performance.

For future work, we would like to improve the semantic segmentation results and apply our recurrent method to more single-image segmentation networks, for a more complete comparison with the state of the art. Another direction is to explore incorporating shape constraints from depth data within the network, thus combining motion and shape cues for better video segmentation.

References