A Multi-Domain Feature Learning Method for Visual Place Recognition

02/26/2019 · Peng Yin, et al. · Carnegie Mellon University, USTC

Visual Place Recognition (VPR) is an important component in both computer vision and robotics applications, thanks to its ability to determine whether a place has been visited before and, if so, which one it is. A major challenge in VPR is handling changes in environmental conditions, including weather, season and illumination. Most VPR methods try to improve place recognition performance by ignoring the environmental factors, leading to decreased accuracy when environmental conditions change significantly, such as day versus night. To this end, we propose an end-to-end conditional visual place recognition method. Specifically, we introduce the multi-domain feature learning method (MDFL) to capture multiple attribute-descriptions for a given place, and then use a feature-detaching module to separate the environmental condition-related features from those that are not. The only label required within this feature learning pipeline is the environmental condition. Evaluation of the proposed method is conducted on the multi-season NORDLAND dataset and the multi-weather GTAV dataset. Experimental results show that our method improves feature robustness against varying environmental conditions.


I Introduction

In the last decade, the robotics community has achieved numerous breakthroughs in vision-based simultaneous localization and mapping (SLAM) [17] that have enhanced the navigation abilities of unmanned ground vehicles (UGV) and unmanned aerial vehicles (UAV) in complex environments. Visual place recognition (VPR) [7], or loop closure detection (LCD), helps robots find loop closures in the SLAM framework and is an essential element for accurate mapping and localization. Although many methods have been proposed in recent years, VPR remains a challenging problem under varying environmental conditions. Traditional VPR approaches that use handcrafted features to learn place descriptors for local scene description often fail to extract valid features when encountering significant changes [9] in environmental conditions, such as changes in season, weather, illumination, as well as viewpoint.

Fig. 1: The pipeline of our proposed conditional visual place recognition method. In summary, there are three core modules: 1) a CapsuleNet [12] based feature extraction module that is responsible for extracting condition-related and condition-invariant features from the raw visual inputs; 2) a condition-enhanced feature separation module that further separates the condition-related features within the joint feature distribution; 3) a trajectory searching mechanism for finding the best matches based on the feature differences of the query trajectory.

Ideally, a place recognition method should be able to capture condition-invariant features for robust loop closure detection, since the appearance of scene objects (e.g., roads, terrain and houses) is often highly related to environmental conditions, and each object has its own appearance distribution under varying conditions. To the best of our knowledge, few VPR methods have explored how to improve place recognition performance against varying environmental conditions [4]. A major drawback of these methods is that changes in environmental conditions affect the local features, resulting in decreased VPR accuracy. In this paper, we propose a condition-directed visual place recognition method to address this issue. Our work consists of two parts: feature extraction and feature separation.

Firstly, in the feature extraction step, we utilize a CapsuleNet-based network [12] to extract multi-domain place features, as shown in Fig. 1. Traditional convolutional neural networks (CNN) are efficient in object detection, regression and segmentation, but as pointed out by Hinton, the inner connections between objects are easily lost through the deep convolutional and max pooling operations. For instance, in face detection tasks, even if the facial parts (nose, eyes, mouth, lips) are laid out incorrectly, a traditional CNN may still consider the image a human face, since it contains all the necessary features of one. This problem also exists in place recognition tasks, since different places may contain similar objects in different arrangements. CapsuleNet uses a dynamic routing method to cluster shallow convolutional layer features in an unsupervised way. In this paper, we demonstrate another application of CapsuleNet: capturing feature distributions under specific conditions.

The main contributions of this work can be summarized as follows:

  • We propose a CapsuleNet-based feature extraction module and show its robustness in conditional feature learning for the visual place recognition task.

  • We propose a feature separation method for the visual place recognition task, in which features are indirectly separated based on the relationship between condition-related and condition-invariant features from an information-theoretic view.

The outline of the paper is as follows: Section II introduces related work on visual place recognition methods. Section III describes our conditional visual place recognition method, which has two components: feature extraction and feature separation. In Section IV, we evaluate the proposed method on two challenging datasets: the NORDLAND [5] dataset, which contains the same trajectory under multiple seasonal conditions, and a GTAV dataset, which is generated on the same trajectory under different weather conditions in a game simulator. Finally, we provide concluding remarks in Section V.

II Related Work

Visual place recognition (VPR) methods have been well studied in the past several years and can be classified into two categories: feature-based and appearance-based. In feature-based VPR, descriptive features are transformed into local place descriptors; place recognition is then achieved by extracting the current place descriptors and searching for similar place indexes in a bag of words. In contrast, appearance-based VPR uses feature descriptors extracted from the entire image and performs place recognition by assessing feature similarities. SeqSLAM [9] describes image similarities by directly using the sum of absolute differences (SAD) between frames, while the vector of locally aggregated descriptors (VLAD) [15] aggregates local invariant features into a single feature vector and uses the Euclidean distance between vectors to quantify image similarities.

Recently, many works have investigated CNN-based features for appearance-based VPR tasks. Sünderhauf et al. [14] first used a pre-trained VGG model to extract middle-layer CNN outputs as image descriptors in a sequence matching pipeline. However, a pre-trained network cannot be further trained for the place recognition task, since data labels are hard to define for VPR. Recently, Chen et al. [2] and Garg et al. [4] addressed condition-invariant VPR as an image classification task and rely on precise but expensive human labeling for semantic labels. Arandjelovic et al. [1] developed NetVLAD, a modified form of the VLAD features combined with CNN networks, to improve feature robustness.

The approach that comes closest to our method is the work of Porav et al. [11], who learn invertible generators based on CycleGAN [8]. The original CycleGAN method can transform an image from one domain to another, but such transformation is limited to only two domains. Thus, for a multi-domain place recognition task, the method of Porav et al. requires a transformation model between each pair of conditions. In contrast, our method can learn more than two conditions within the same structure.

III Proposed Method

In this section, we investigate the details of two core modules in our conditional visual place recognition method.

III-A Feature Extraction

III-A1 VLAD

VLAD is a feature encoding and pooling method, which encodes a set of local feature descriptors extracted from an image by using a clustering method such as K-means. For the feature extraction module, we extract multi-domain place features from the raw image by utilizing a CapsuleNet module. Let $a_k(\mathbf{x}_i)$ be the strength of the association of data vector $\mathbf{x}_i$ to the cluster center $\mathbf{c}_k$, such that $a_k(\mathbf{x}_i) \geq 0$ and $\sum_{k} a_k(\mathbf{x}_i) = 1$, where $K$ is the number of clusters. VLAD encodes the features by considering the residuals

$$\mathbf{v}_k = \sum_{i=1}^{N} a_k(\mathbf{x}_i)\,(\mathbf{x}_i - \mathbf{c}_k), \tag{1}$$

and the joint feature description $V = [\mathbf{v}_1, \dots, \mathbf{v}_K]$, where $N$ is the number of local features.
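For concreteness, below is a minimal NumPy sketch of the hard-assignment form of Eq. 1; the descriptor dimension, the number of clusters, and the random data in the usage example are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of hard-assignment VLAD encoding (Eq. 1).
# Cluster centers would normally come from K-means on training descriptors.
import numpy as np

def vlad_encode(descriptors, centers):
    """descriptors: (N, D) local features; centers: (K, D) cluster centers."""
    # Hard assignment: a_k(x_i) = 1 for the nearest center, 0 otherwise.
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=-1)
    assign = np.argmin(dists, axis=1)                      # (N,)
    K, D = centers.shape
    vlad = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            vlad[k] = (members - centers[k]).sum(axis=0)   # residuals to center k
    vlad = vlad.flatten()
    return vlad / (np.linalg.norm(vlad) + 1e-12)           # L2-normalize the joint description

# Usage example with random data: 100 descriptors of dimension 128, 8 clusters.
desc = np.random.randn(100, 128)
cent = np.random.randn(8, 128)
print(vlad_encode(desc, cent).shape)   # (1024,)
```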

Assume we can extract $N$ lower-level feature descriptors (each denoted as $\mathbf{u}_i$) from the raw image; we can then construct a new VLAD-like module with the following equation,

$$\mathbf{v}_j = \sum_{i=1}^{N} w_{ij}\, r(\mathbf{u}_i, \mathbf{c}_j), \tag{2}$$

where $r(\mathbf{u}_i, \mathbf{c}_j)$ is the residual function measuring the similarity between $\mathbf{u}_i$ and $\mathbf{c}_j$, and $w_{ij}$ is the weighting of the $i$-th capsule vector involved with the $j$-th cluster center.

III-A2 Modified CapsuleNet

In order to transform Eq. 2 into an end-to-end learning block, we consider two aspects:

  1. Constructing the residual function $r(\mathbf{u}_i, \mathbf{c}_j)$;

  2. Assigning the weights $w_{ij}$.

With the $N$ lower-layer features extracted from the shallow convolution layer, we use a matrix $\mathbf{W}_{ij}$ to map the lower-level features into higher-level features, where $N$ is the number of CNN units in the shallow convolution layer.

If we want to integrate the lower-to-higher feature mapping within a single layer, the local lower-level feature $\mathbf{u}_i$ should have a linear mapping layer to represent the residual function

$$r(\mathbf{u}_i, \mathbf{c}_j) = \mathbf{W}_{ij}\,\mathbf{u}_i + \mathbf{b}_j, \tag{3}$$

where $\mathbf{W}_{ij}$ and $\mathbf{b}_j$ are the linear transformation weighting and bias for the $j$-th capsule center.

Furthermore, to estimate the local capsule feature weighting $w_{ij}$, we apply a soft assignment estimation defined as

$$w_{ij} = \frac{\exp(b_{ij})}{\sum_{k}\exp(b_{ik})}, \tag{4}$$

where $w_{ij}$ is the probability that the $i$-th local capsule feature belongs to capsule cluster $j$, and $b_{ij}$ is the corresponding routing logit. Therefore, Eq. 2 can be written in the following format,

$$\mathbf{v}_j = \sum_{i=1}^{N} w_{ij}\,\big(\mathbf{W}_{ij}\,\mathbf{u}_i + \mathbf{b}_j\big). \tag{5}$$

In order to learn the parameters $\mathbf{W}_{ij}$, $\mathbf{b}_j$ and $w_{ij}$, we apply the iterative dynamic routing mechanism described in [12]. For the output of higher-level features, we assume the last dimensions are assigned as the condition features, e.g., the seasonal classes in the case where the condition is season.
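To make Eqs. 3-5 and the routing step concrete, the following PyTorch sketch implements a capsule-style clustering layer; the layer sizes, initialization, squash nonlinearity, and number of routing iterations are assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of a VLAD-like capsule layer (Eqs. 3-5) with agreement-based routing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CapsuleCluster(nn.Module):
    def __init__(self, n_in, d_in, n_out, d_out, routing_iters=3):
        super().__init__()
        # W_ij and b_j from Eq. 3: one linear map per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(n_in, n_out, d_out, d_in))
        self.b = nn.Parameter(torch.zeros(n_out, d_out))
        self.routing_iters = routing_iters

    def forward(self, u):                         # u: (B, n_in, d_in) lower-level features
        # Residuals r(u_i, c_j) = W_ij u_i + b_j  (Eq. 3)
        r = torch.einsum('ijod,bid->bijo', self.W, u) + self.b   # (B, n_in, n_out, d_out)
        logits = torch.zeros(u.size(0), u.size(1), r.size(2), device=u.device)
        for _ in range(self.routing_iters):
            w = F.softmax(logits, dim=2)          # soft assignment w_ij  (Eq. 4)
            v = (w.unsqueeze(-1) * r).sum(dim=1)  # weighted sum over inputs (Eq. 5)
            v = self._squash(v)                   # capsule nonlinearity
            logits = logits + (r * v.unsqueeze(1)).sum(-1)   # agreement update (dynamic routing)
        return v                                  # (B, n_out, d_out) higher-level capsules

    @staticmethod
    def _squash(x, eps=1e-8):
        n = x.norm(dim=-1, keepdim=True)
        return (n ** 2 / (1 + n ** 2)) * x / (n + eps)
```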

III-B Feature Separation

In the previous section, we described the feature extraction module. In this section, we use an additional decoder module, together with two reconstruction modules on the feature and image domains, to achieve the feature separation. Naturally, the condition-invariant feature and the condition-related feature are highly correlated. Fig. 2 shows the relationship between the two in information-theoretic terms: the joint entropy, the mutual entropy, and the two conditional entropies. From the view of information theory, feature separation can be achieved in the following ways:

  • Decrease the conditional entropy of the feature given the data: a lower conditional entropy enforces a unique mapping from the data to the feature;

  • Improve the geometric feature extraction capability: the more accurate the geometry we capture, the higher the LCD accuracy we can achieve;

  • Reduce the mutual entropy: use environmental conditions to direct the feature extraction.

We add these three restrictions in our feature separation module.

Fig. 2: The relationship between condition-related and condition-invariant features from an information-theoretic view.
Fig. 3: The framework of feature separation. The network combines the feature extraction module given in the previous section; a classification module estimating the environmental conditions; a decoder module mapping the extracted feature back to the data domain; a discriminator module distinguishing the generated data from the raw data; and two reconstruction loss modules on the data and feature domains, respectively.

III-B1 Conditional Entropy Reduction

The conditional entropy measures the uncertainty of the feature given the data sample. A conditional entropy of zero is achieved if and only if the feature is a deterministic mapping of the data. Thus, reducing the conditional entropy improves the uniqueness of the mapping from the data to the feature produced by the encoder module. However, reducing the conditional entropy directly is intractable, since we cannot access the data-label pairs directly. An alternative approach is to optimize an upper bound of the conditional entropy, which can be obtained through the following equation,

(6)

where one term is the Kullback-Leibler divergence and the other measures the uncertainty of the predicted feature given a data sample. Since we cannot extract features from the reconstructed data directly, we add an additional feature encoder module after the decoder module (see Fig. 3). Eq. 6 can then be converted into

(7)

where the upper bound corresponds to a Feature Reconstruction Loss between the feature extracted from the raw data and that extracted from the reconstructed data. As shown in Eq. 7, the original conditional entropy is replaced by its upper bound, and this upper bound is reduced only when the feature domain and data domain are well matched.
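A minimal PyTorch sketch of the feature reconstruction term described above; the module names E and D, the reuse of the same encoder for both passes, the detached target, and the L2 penalty are assumptions rather than the paper's exact formulation.

```python
import torch.nn.functional as F

def feature_reconstruction_loss(E, D, x):
    # E: encoder (feature extraction module), D: decoder; both assumed nn.Modules.
    z = E(x)                  # feature extracted from the raw image
    x_hat = D(z)              # image reconstructed from the feature
    z_hat = E(x_hat)          # feature re-extracted from the reconstruction
    # L2 penalty between the two features; detaching the target is a design choice.
    return F.mse_loss(z_hat, z.detach())
```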

III-B2 Feature Extraction Improvement

The conditional entropy reduction sub-module restricts the mapping uncertainty from the data domain to the feature domain, and this restriction is highly related to the generalization ability of the encoder module. For the place recognition task, there will be highly diverse scenes in practice; however, we can only generate a limited number of samples for network training. In theory, the GAN framework, using a decoder and a discriminator module, can learn the underlying feature-to-data transformation with limited samples. Thus, we improve the data generalization ability by applying a GAN with the following objective,

(8)

As demonstrated by Goodfellow et al. [6], with iterative updating of the decoder module and the discriminator module, the GAN can pull the generated data distribution closer to the real data distribution and thereby improve the generalization ability of the decoder module.
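The adversarial term can be sketched as below, assuming the common non-saturating binary cross-entropy form; whether Eq. 8 uses this exact variant or the original minimax formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(dis, x_real, x_fake):
    # dis: discriminator returning raw logits.
    logit_real = dis(x_real)
    logit_fake = dis(x_fake.detach())   # stop gradients into the decoder
    return (F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real)) +
            F.binary_cross_entropy_with_logits(logit_fake, torch.zeros_like(logit_fake)))

def decoder_adversarial_loss(dis, x_fake):
    # Non-saturating generator loss: push generated data toward the "real" decision.
    logit_fake = dis(x_fake)
    return F.binary_cross_entropy_with_logits(logit_fake, torch.ones_like(logit_fake))
```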

III-B3 Mutual Entropy Reduction

The mutual entropy introduced above can be extended by

(9)

where reducing the mutual entropy is equivalent to reducing the right-hand side of the above equation. Since the conditional entropy is non-negative, we can obtain an upper bound of the mutual entropy by ignoring the conditional entropy term,

(10)

For the condition-related features, we apply a softmax-based classification module to reduce the corresponding conditional entropy. Furthermore, we apply an image reconstruction loss to further restrict the uncertainty given a data sample,

(11)

where the loss is computed between the raw image and its reconstruction.
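As a concrete sketch of the classification and image reconstruction terms just described, the following Python snippet is illustrative only; the split of the feature vector into a condition-related part, the cross-entropy classifier, and the L2 image penalty are assumptions about the exact form used in the paper.

```python
import torch
import torch.nn.functional as F

def condition_losses(classifier, decoder, z, cond_label, x, n_cond_dims):
    # z: flattened place feature (batch, feat_dim); the last n_cond_dims entries are
    # taken as the condition-related part, following the assumption in Section III-A2.
    z_cond = z[:, -n_cond_dims:]
    cls_loss = F.cross_entropy(classifier(z_cond), cond_label)   # softmax condition classifier
    x_hat = decoder(z)                                           # reconstruct the image
    img_rec_loss = F.mse_loss(x_hat, x)                          # image reconstruction penalty (L2 assumed)
    return cls_loss, img_rec_loss
```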

By combining Eqs. 7, 8 and 11 with the condition classification loss, the joint loss function can be obtained as

(12)
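A minimal sketch of how the joint objective in Eq. 12 could be assembled from the previous terms; the individual weights are hypothetical placeholders, since the paper's weighting is not reproduced here.

```python
def joint_loss(l_feat_rec, l_adv, l_img_rec, l_cls,
               w_fr=1.0, w_adv=1.0, w_ir=1.0, w_cls=1.0):
    # Weighted sum of the individual terms; equal weights are a placeholder choice.
    return w_fr * l_feat_rec + w_adv * l_adv + w_ir * l_img_rec + w_cls * l_cls
```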

IV Experimental Results

In this section, we analyze the performance of our method on two datasets and compare it with three feature extraction methods for the visual place recognition task. The experiments are conducted on a single NVIDIA 1080Ti card with 64 GB of RAM, running Ubuntu 14.04. For our method, inference on a single place image is fast, and each stored feature is compact.

(a) Nordland dataset
(b) GTAV dataset
Fig. 4: Precision-recall curves of the various VPR methods on the two datasets. A method is considered better when its curve lies closer to the upper-right corner. As shown, MDFL outperforms the other methods in most cases.

IV-A Datasets

The datasets we used here are the Nordland dataset [13] and the GTAV dataset [16]. The Nordland dataset was recorded from a train in Norway during four different seasons, and each sequence follows the same track. In each sequence, we extract frames from the video at a fixed rate; the first part of each sequence is used for training and the last frames for testing. Note that we train on all four Nordland seasonal sequences, using the seasonal labels to find the condition-dependent/invariant features, and then test on the last 1000 frames of each sequence. In the training procedure, we randomly select frames and their corresponding condition labels from the four sequences.

The second dataset, GTAV [16], contains trajectories on the same track under three different weather conditions (sunny, rainy and foggy). This dataset is more challenging than the Nordland dataset, since the viewpoints vary in the GTAV dataset. We generate frames for each sequence, with one part used as training data and the remaining frames for testing.

For each dataset, all images are resized to a fixed resolution in RGB format. The loop closure detection mechanism follows the original SeqSLAM method: sequences of image features are matched instead of single images. For more details about the structure of SeqSLAM, we refer the reader to [9].
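To illustrate the sequence-based matching described above, here is a minimal NumPy sketch that scores candidate matches by summing feature distances along short diagonals of the distance matrix; unlike full SeqSLAM it assumes a single velocity (constant speed), and the sequence length and cosine distance are assumptions.

```python
import numpy as np

def sequence_match(query_feats, ref_feats, seq_len=10):
    # query_feats, ref_feats: (N, D) per-frame feature vectors.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    dist = 1.0 - q @ r.T                      # cosine distance matrix (Nq, Nr)
    matches = np.full(len(q), -1)
    for i in range(seq_len, len(q)):
        best_j, best_score = -1, np.inf
        for j in range(seq_len, len(r)):
            # sum of distances along a short diagonal ending at (i, j)
            score = sum(dist[i - k, j - k] for k in range(seq_len))
            if score < best_score:
                best_j, best_score = j, score
        matches[i] = best_j
    return matches
```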

IV-B Accuracy Analysis

To investigate the place recognition accuracy, we compare our feature extraction method with three alternatives in sequential matching: the original SeqSLAM feature, which uses the sum of absolute differences as the local place description; the convolutional layer feature from the VGG network, which is trained on a large-scale image classification dataset [10]; and the adversarial feature learning-based unsupervised feature obtained from generative adversarial networks [3]. A place is considered matched when the distance between the current frame and the target frame is within 10 frames. We evaluate the performance with precision-recall curves (PR curves), the area under the curve (AUC) index, inference time, and storage requirements.
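The evaluation protocol above (a 10-frame tolerance and a precision-recall sweep) can be sketched as follows; the threshold sweep and the ground-truth indexing convention are assumptions, not the exact evaluation script.

```python
import numpy as np

def precision_recall(dist_matrix, gt_index, tol=10, n_thresh=50):
    # dist_matrix: (Nq, Nr) feature distances; gt_index[i]: ground-truth reference frame of query i.
    best_j = dist_matrix.argmin(axis=1)           # retrieved frame per query
    best_d = dist_matrix.min(axis=1)              # its matching distance
    correct = np.abs(best_j - gt_index) <= tol    # within 10 frames counts as a true match
    curve = []
    for t in np.linspace(best_d.min(), best_d.max(), n_thresh):
        accepted = best_d <= t                    # accept matches below the distance threshold
        tp = np.sum(accepted & correct)
        precision = tp / max(accepted.sum(), 1)
        recall = tp / len(best_d)
        curve.append((precision, recall))
    return curve   # the AUC index can then be computed by integrating precision over recall
```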

(a) AUC on the Nordland dataset
(b) AUC on the GTAV dataset
Fig. 5: AUC index of the various VPR methods, matching images from the two datasets under different conditions.

Figs. 4 and 5 show the precision-recall curves and the AUC index, respectively, for all methods on the Nordland and GTAV datasets. In Fig. 5, labels such as spr-sum and spr-fall refer to the performance of the same network and model on different pairs of testing sequences.

In general, all methods perform better on the Nordland dataset than on the GTAV dataset, since the viewpoints are stable and the geometric changes are smooth due to the constant speed of the train. In contrast, the test sequences in the GTAV dataset exhibit significant viewpoint differences. Furthermore, the limited field of view and multiple dynamic objects in GTAV introduce additional feature noise, which causes significant differences in scene appearance.

VGG features perform well under normal conditions, such as summer-winter in Nordland, but perform poorly under unusual conditions, which indicates that VGG features trained under normal environmental conditions do not generalize. BiGAN does not perform well on either dataset, mainly because it does not take the condition of the scene into account and treats all images as a joint manifold; for example, the same place under different weather conditions will be encoded differently by BiGAN. Since SeqSLAM uses grayscale images to suppress appearance changes under different environmental conditions, its image-based features are robust against changing conditions, as can be seen on the Nordland dataset. However, its matching accuracy decreases greatly on the GTAV dataset, since raw image features are very sensitive to changing viewpoints.

In general, MDFL outperforms the above features in most cases on the NORDLAND and GTAV datasets and handles complex situations well, but it is not the best in some situations, such as spring-summer in NORDLAND and rain-summer in GTAV. One potential reason is that, in each dataset, we only consider one type of environmental condition (season or weather) and do not take illumination changes into account; since illumination changes continuously, it is not easy to encode it as a discrete condition label for training. The geometric features guided by the environment labels can capture more of the geometric details shared across the multiple weather or season conditions. Another advantage lies in the CapsuleNet-like architecture, which enables MDFL to cluster lower-level geometric features into high-level descriptions. The benefit of this mechanism is most visible on the GTAV dataset, where the extracted features are more robust to viewpoint differences. Table I shows the average AUC of the different methods on both datasets, and our MDFL method outperforms all the other methods.

Dataset    Caps (MDFL)  SeqSLAM  VGG16  BiGAN
GTAV       0.790        0.518    0.715  0.627
Nordland   0.912        0.876    0.804  0.345
TABLE I: Average AUC index on the Nordland and GTAV datasets

V Conclusion

In this paper, we propose a novel multi-domain feature learning method for the visual place recognition task. At the core of our framework lies the idea of extracting condition-invariant features for place recognition under various environmental conditions. We use a CapsuleNet-based module to capture multi-domain features from the raw image, and apply a feature separation module to indirectly separate condition-related and condition-invariant features. Experiments with the extracted condition-invariant features on the multi-season NORDLAND dataset and the multi-weather GTAV dataset demonstrate the robustness of our method. The major limitation of our method is that the shallow-layer CapsuleNet-based module can only cluster lower-level features and cannot capture semantic descriptions for place recognition. In future work, we will investigate a hierarchical CapsuleNet module to extract higher-level semantic features for place recognition.

References