Dense Optical Flow based Change Detection Network Robust to Difference of Camera Viewpoints

12/08/2017
by Ken Sakurada, et al.

This paper presents a novel method for detecting scene changes from a pair of images captured from different camera viewpoints, using a dense optical flow based change detection network. When the camera poses of the input images are fixed or known, as with surveillance and satellite cameras, the pixel correspondence between images captured at different times is known. Hence, it is possible to detect scene changes between the images comparatively accurately by modeling the appearance of the scene. On the other hand, in the case of cameras mounted on a moving object, such as ground and aerial vehicles, we must consider the spatial correspondence between the images captured at different times. However, it can be difficult to accurately estimate the camera pose or a 3D model of the scene, owing to the scene changes themselves or to a lack of imagery. To solve this problem, we propose a change detection convolutional neural network that utilizes the dense optical flow between input images to improve robustness to the difference between camera viewpoints. Our evaluation on the panoramic change detection dataset shows that the proposed method outperforms state-of-the-art change detection algorithms.



1 Introduction

This paper addresses the problem of detecting temporal scene changes from a pair of images captured at two different time points. In most change detection problems, it is assumed that the input images are accurately registered, especially for surveillance and satellite camera systems [1, 2, 3, 4]. For moving cameras, however, such as vehicle-mounted cameras and mobile devices, the difference between the camera poses of the input images must be considered, because it is difficult to capture a scene from a similar viewpoint every time, owing to the high degrees of freedom in camera pose and shutter timing.

Although structure-from-motion (SfM) [5, 6] is utilized in many change detection methods to estimate the camera poses of the input images, the estimated poses can be unreliable because scene changes reduce the feature correspondences between the images. Furthermore, at city scale, the computational costs of pixel-level image registration via batch optimization (e.g., bundle adjustment) [7] and of dense 3D reconstruction based on multi-view stereo [8, 9, 10] are prohibitively high. Thus, it is preferable to detect scene changes from images roughly registered using embedded location information, such as Global Positioning System (GPS) metadata.

Several methods based on deep neural networks have been proposed for scene change detection [11, 12, 13]. However, they assume that pixel correspondences, or relative camera poses, between the input images are known. The work of [14] proposed a change detection method that differentiates feature maps extracted from the input images by convolutional neural networks (CNNs) [15] trained on the ImageNet [16] or SUN [17] datasets. Although an advantage of this method is the high generalizability of the trained networks (i.e., scene independence) owing to the large-scale image datasets, the networks are not optimized for change detection.

This paper presents a novel method for detecting scene changes from a roughly registered image pair using a dense optical flow based change detection network (DOF-CDNet). To the best of our knowledge, the proposed network is the first CNN that accounts for the difference of camera viewpoints of an input image pair for the kind of change detection task defined in this work. Specifically, we improve robustness to the difference of camera viewpoints by adding the optical flow estimated between the image pair as an input to the proposed change detection network (Figure 1).

2 Related Work

In the fields of computer vision and remote sensing, change detection between images captured at different times has been studied extensively, for tasks such as scene anomaly detection from surveillance camera images and the monitoring of urbanization and deforestation by detecting land-surface changes in satellite images. In recent years, several methods for ground-level, wide-area scene change detection from images captured by vehicle-mounted cameras and mobile devices have been proposed, for purposes such as updating 3D maps for autonomous driving, infrastructure inspection, disaster response, and agricultural automation [11, 18].

In the case of surveillance and satellite cameras, the pixel correspondences between images taken at different time points are almost always known. The work of [19] proposed a background subtraction method for obstacle detection on railway tracks. Although a camera mounted on a railway train moves, it always follows the same path, and the difference between camera viewpoints, which depends on shutter timing, is small. Hence, scene changes can be detected reasonably accurately by modeling the appearance of the scene [20].

On the other hand, in the case of cameras mounted on moving objects such as cars and mobile robots, it is necessary to consider spatial correspondences between images captured at different time points. If the scenes at different time points share enough common parts that are useful as visual feature points, it is possible to estimate the relative camera pose using SfM and to detect structural scene changes based on multi-view geometry [21, 22]. Furthermore, a deconvolutional network for scene change detection has been proposed that learns the change mask from input RGB images aligned using their estimated depths [12].

Figure 2: Flowchart of the proposed change detection method

However, in the case of a drastic scene change or an insufficient number of images, it is difficult to accurately estimate the camera pose owing to the lack of common feature points between images. Additionally, the computational cost of city-scale SfM is prohibitively high.

In this study, scene changes are detected from an image pair roughly aligned with GPS data. For the same purpose, a method has previously been proposed that extracts grid features using CNNs trained on large-scale image classification datasets, such as ImageNet [16] and SUN [17], and detects scene changes by differentiating the grid features [14]. That method has high generalization capability and little dependence on the target scene, since the networks are trained on a large-scale image dataset; however, the networks are not optimized for scene change detection.

Although several convolutional networks for change detection have been proposed [11, 13], they require additional pixel-level correspondence information between the input images, such as a 3D model or depth imagery of the scene. Therefore, we propose a novel change detection method that is robust to the difference of camera viewpoints, in which the CNN is trained on not only an RGB image pair but also the estimated dense optical flow (Figure 1).

3 Scene Change Detection Dataset

There are several publicly available change detection datasets acquired by satellite, surveillance, and vehicle-mounted cameras [23]. In most of these datasets, either the pixel-level correspondence between input images is known, or it is possible to densely reconstruct the scenes from a sufficient number of multi-view images.

However, for city- or regional-scale modeling from ground-level images, capturing an entire city with surveillance cameras is infeasible. Additionally, detecting structural scene changes of an entire city from videos captured by vehicle-mounted cameras based on multi-view geometry requires a large amount of image data and computational resources [21, 22]. For applications such as autonomous driving and pedestrian navigation, which need frequent and sequential updating of 3D maps, it is necessary to monitor whole-city changes with low-cost change detection methods and then to accurately remeasure the changed areas using autonomous agents, such as self-driving cars.

The objective of this study is to detect scene changes from an image pair roughly aligned with GPS information, rather than one accurately aligned with methods such as relative pose estimation, in order to reduce computational costs. Thus, we evaluate our method on the panoramic change detection (PCD) dataset [14], in which image pairs are roughly aligned. This dataset consists of two subsets, named “TSUNAMI” and “GSV,” each comprising 100 panoramic image pairs and a hand-labeled change mask for each pair. The camera viewpoints of each image pair differ, because the images were captured by a vehicle-mounted camera every few months or years. Therefore, scene changes must be detected while accounting for the difference between camera viewpoints.

Figure 3: Example of feature matching based on DeepMatching before (top) and after (bottom) outlier rejection by RANSAC with the five-point algorithm as its model.

4 Dense Optical Flow based ConvNet

To improve robustness against the difference of camera viewpoints, we propose DOF-CDNet, which estimates the scene change probability of each pixel between input images utilizing not only the RGB images but also the estimated dense optical flow. Figure 2 shows the flowchart of the proposed method. The details of the dense optical flow estimation and the network architecture are described below.

4.1 Dense Optical Flow Estimation

There are various types of dense optical flow estimation methods, based on image features, learning algorithms, etc. [24]. The method proposed in this study exploits DeepFlow [25], which is not based on a learning algorithm, extended with additional geometric constraints (Figure 2).

More concretely, tentative matching points between the input images $I_{t_0}$ and $I_{t_1}$, captured at times $t_0$ and $t_1$, are calculated by DeepMatching. From these tentative matches, outliers are removed by random sample consensus (RANSAC) with the model defined by the five-point algorithm [26] (Figure 3). The optical flow of each pixel is then estimated using only the inliers. This outlier removal, which exploits the epipolar constraint, improves the estimation accuracy of the optical flow, because scene changes produce regions without correspondence between the input images.
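As a rough illustration of this outlier-rejection step (not the authors' implementation), the sketch below uses OpenCV's essential-matrix estimation, which runs the five-point algorithm inside a RANSAC loop, on a set of tentative matches; the camera intrinsic matrix K and the match arrays are assumed to be given.

```python
import cv2
import numpy as np

def reject_outliers(pts_t0, pts_t1, K):
    """Epipolar outlier rejection for tentative matches.

    pts_t0, pts_t1: (N, 2) float32 arrays of tentative matching points
    (e.g., from DeepMatching) in the two input images.
    K: 3x3 camera intrinsic matrix (assumed to be available here).
    """
    # RANSAC with the five-point algorithm as its model: estimates an
    # essential matrix and marks matches inconsistent with it as outliers.
    E, inlier_mask = cv2.findEssentialMat(
        pts_t0, pts_t1, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    keep = inlier_mask.ravel().astype(bool)
    return pts_t0[keep], pts_t1[keep]

# The surviving inlier matches would then seed the dense interpolation
# stage of DeepFlow to produce a per-pixel optical flow field.
```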

Encoder Decoder
CR (64, 3, 1) CBRD (512, 4, 2)
CBR (128, 4, 2) CBRD (512, 4, 2)
CBR (256, 4, 2) CBRD (512, 4, 2)
CBR (512, 4, 2) CBR (512, 4, 2)
CBR (512, 4, 2) CBR (256, 4, 2)
CBR (512, 4, 2) CBR (128, 4, 2)
CBR (512, 4, 2) CBR (64, 4, 2)
CBR (512, 4, 2) C (1, 3, 1)
Table 1: Network architecture of DOF-CDNet

Figure 4: Estimation accuracy of change detection. The proposed methods, CDNet and DOF-CDNet, are compared with the Dense-SIFT-, CNN-feature- [14], and DeconvNet-based [12] methods using the $F_1$ score, and with the weakly supervised networks (WS-Net) and fully supervised networks (FS-Net) [13] using mIoU.

4.2 Network Architecture

The network architecture of DOF-CDNet is based on U-net [27, 28], one of the state-of-the-art segmentation networks (Table 1). C, B, R, and D represent convolution, batch normalization, ReLU, and dropout layers, respectively. From left to right, the numbers in parentheses indicate the number of output channels, the spatial filter size, and the stride of the convolution filters. All ReLUs in the encoder are leaky ReLUs.
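For concreteness, below is a minimal PyTorch sketch of an encoder-decoder following Table 1. It assumes a pix2pix-style U-net: transposed convolutions in the decoder, skip connections concatenating encoder and decoder features, and a sigmoid on the single-channel output; these choices beyond the layer list in Table 1 are assumptions, not confirmed by the text.

```python
import torch
import torch.nn as nn

def enc_block(c_in, c_out, k, s, bn=True):
    # "C(B)R": convolution + (optional) batch normalization + leaky ReLU.
    layers = [nn.Conv2d(c_in, c_out, k, stride=s, padding=(k - 1) // 2)]
    if bn:
        layers.append(nn.BatchNorm2d(c_out))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def dec_block(c_in, c_out, dropout=False):
    # "CBR(D)": transposed convolution + batch normalization + ReLU (+ dropout).
    layers = [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
              nn.BatchNorm2d(c_out),
              nn.ReLU(inplace=True)]
    if dropout:
        layers.append(nn.Dropout(0.5))
    return nn.Sequential(*layers)

class DOFCDNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder, per Table 1; the input is the 8-channel concatenation
        # of the two RGB images and the 2-channel optical flow image.
        enc_chans = [64, 128, 256, 512, 512, 512, 512, 512]
        self.enc = nn.ModuleList()
        c_prev = 8
        for i, c in enumerate(enc_chans):
            k, s = (3, 1) if i == 0 else (4, 2)
            self.enc.append(enc_block(c_prev, c, k, s, bn=(i > 0)))
            c_prev = c
        # Decoder, per Table 1; skip connections double the input channels.
        dec_chans = [512, 512, 512, 512, 256, 128, 64]
        drops = [True, True, True, False, False, False, False]
        self.dec = nn.ModuleList()
        c_prev = 512
        for c, d in zip(dec_chans, drops):
            self.dec.append(dec_block(c_prev, c, dropout=d))
            c_prev = c * 2  # the next block also receives skip features
        self.out = nn.Conv2d(64 * 2, 1, 3, padding=1)  # final C (1, 3, 1)

    def forward(self, x):
        skips = []
        for e in self.enc:
            x = e(x)
            skips.append(x)
        skips = skips[:-1][::-1]  # all but the bottleneck, deepest first
        for d, s in zip(self.dec, skips):
            x = torch.cat([d(x), s], dim=1)
        return torch.sigmoid(self.out(x))  # per-pixel change score
```

With a 256x256 input (divisible by 2^7), the bottleneck is a 2x2 feature map and the output mask returns to full resolution.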

It is difficult to generate ground-truth optical flow for real-scene imagery. Moreover, the optical flow estimated in Section 4.1 contains errors. Therefore, in this study, the estimated optical flow vector is exploited as the input, and its estimation error is modeled from the training data.

The images $I_{t_0}$ and $I_{t_1}$, captured at times $t_0$ and $t_1$, and the optical flow image $I_f$ are concatenated in the channel direction and input to the network as an eight-channel image (3 + 3 + 2 channels). Each pixel value is normalized to a fixed range. The ground-truth change mask $G$ is given to the output of the network as training data, as a grayscale image whose values range in $[g_{\min}, 1]$ (the value of $g_{\min}$ is fixed through all the experiments in this paper). The loss function is defined in Eq. (1), where $\hat{g}_i$ is the pixel value of the change mask estimated by the trained network. In the prediction step, the change probability $p_i$ of each pixel $i$ is calculated by Eq. (2).
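Purely as a hedged sketch, assuming a per-pixel regression of the grayscale mask and a linear rescaling of the network output to a probability, Eqs. (1) and (2) might take forms such as:

```latex
% Hypothetical reconstruction, not the paper's verified equations.
% A per-pixel squared-error loss over the N pixels of a patch:
\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \left( \hat{g}_i - g_i \right)^2
% A linear rescaling of the output \hat{g}_i \in [g_{\min}, 1]
% to a change probability p_i \in [0, 1]:
p_i = \frac{\hat{g}_i - g_{\min}}{1 - g_{\min}}
```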

Figure 5: Example of scene change detection of TSUNAMI.

Figure 6: Example of scene change detection of GSV.

Figure 7: Failure cases of the proposed method.

5 Evaluation

To evaluate the effectiveness of the proposed method, we conducted experiments on the PCD dataset. We compare two variants: the change detection network (CDNet), whose input is only the scene's image pair, and DOF-CDNet, whose input is the image pair together with the optical flow image.

The PCD dataset is composed of panoramic image pairs $I_{t_0}$ and $I_{t_1}$, taken at two different time points $t_0$ and $t_1$, and the change mask $G$ of each pair. The subsets, TSUNAMI and GSV, each contain 100 image pairs. First, the optical flow image $I_f$ is estimated from $I_{t_0}$ and $I_{t_1}$ by the method described in Section 4.1. Next, from the image set $\{I_{t_0}, I_{t_1}, I_f, G\}$, patch images are cropped with a sliding window at a 56-pixel stride and resized to the network input resolution. Furthermore, data augmentation was performed by rotating the patches. In total, 10,400 sets of image patches are generated.
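A rough sketch of this patch-generation step follows. Only the 56-pixel stride comes from the text; the patch size and the rotation set are hypothetical placeholders, and resizing to the network input resolution is omitted.

```python
import numpy as np

# Only STRIDE = 56 is stated in the text; PATCH and the 90-degree
# rotation set are hypothetical placeholders.
PATCH = 224
STRIDE = 56

def make_patches(i_t0, i_t1, i_flow, mask):
    """Crop aligned patch sets from one (I_t0, I_t1, I_f, G) tuple.

    All inputs are H x W (x C) numpy arrays with identical H and W.
    """
    h, w = mask.shape[:2]
    sets = []
    for x in range(0, w - PATCH + 1, STRIDE):
        for y in range(0, h - PATCH + 1, STRIDE):
            crop = [a[y:y + PATCH, x:x + PATCH]
                    for a in (i_t0, i_t1, i_flow, mask)]
            # Rotation-based augmentation (90-degree steps assumed).
            # NB: a faithful version would also rotate the flow *vectors*,
            # not just the flow image; that step is omitted for brevity.
            for k in range(4):
                sets.append([np.rot90(c, k) for c in crop])
    return sets  # each entry: [patch_t0, patch_t1, patch_flow, patch_mask]
```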

Estimation accuracies of the proposed methods are evaluated through five-fold cross-validation (the image patches are generated after the 100 image pairs of each of TSUNAMI and GSV are split with the same ratio, training : test = 4 : 1). Figure 4 shows the $F_1$ scores and mean intersection-over-union (mIoU) of each method. Both CDNet and DOF-CDNet outperform the CNN grid-feature-based method [14], the DeconvNet-based method [12], the weakly supervised networks, and the fully supervised networks [13]. Furthermore, on the GSV subset, whose changes are comparatively small, the optical flow based method (i.e., DOF-CDNet) is effective at reducing errors in areas of large optical flow (see Figure 5 and Figure 6). Figure 7 shows failure cases of the proposed method. These change detection errors can be caused by estimation errors of the optical flow, especially in changed areas, and by the lack of training samples covering camera viewpoint changes. Although CDNet outperforms DOF-CDNet in these cases, the proposed methods consistently outperform existing methods. Figure 8 and Figure 9 show additional results of the proposed method.
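For reference, the two metrics reported in Figure 4 can be computed from binary masks as below. This is a generic sketch, not the authors' evaluation code, and the 0.5 binarization threshold is an assumption.

```python
import numpy as np

def f1_and_iou(pred_prob, gt_mask, thresh=0.5):
    """F1 score and IoU of the 'change' class for one image.

    pred_prob: H x W array of change probabilities p_i in [0, 1].
    gt_mask:   H x W boolean array, True where the scene changed.
    thresh:    binarization threshold (assumed; not given in the text).
    """
    pred = pred_prob >= thresh
    tp = np.logical_and(pred, gt_mask).sum()
    fp = np.logical_and(pred, ~gt_mask).sum()
    fn = np.logical_and(~pred, gt_mask).sum()
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    iou = tp / max(tp + fp + fn, 1)
    return f1, iou

# mIoU over a test split is then the mean of the per-image IoUs
# (or of the per-class IoUs, depending on the convention used).
```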

6 Conclusion

This paper proposed a CNN based on dense optical flow to introduce robustness against the difference of camera viewpoints between input images. To estimate the optical flow between images with scene changes, the proposed method exploits DeepFlow extended with geometric constraints. The estimated optical flow is exploited as an input, and its estimation error is modeled from the training data. The experimental results verified the effectiveness of the proposed methods.

The improvement in accuracy from utilizing optical flow is small, especially for the TSUNAMI subset, because the appearance changes owing to the difference of camera viewpoints are not large, and the ratio of detection errors caused by such appearance changes to the entire change region is small. (It should be noted that the standard deviations of the $F_1$-scores of CDNet and DOF-CDNet are 0.114 and 0.103 for TSUNAMI, and 0.131 and 0.128 for GSV, respectively. These results indicate that optical flow information can make scene change detection more stable.) Furthermore, the proposed method still produces errors over the wide variety of scene and camera viewpoint changes, because of the lack of training data. Therefore, we plan to create a large-scale change detection dataset that contains a wide variety of camera viewpoint changes. We will also improve and evaluate the proposed method under more severe viewpoint conditions.