STN-Homography: estimate homography parameters directly

06/06/2019
by Qiang Zhou, et al.
Zhejiang University

In this paper, we introduce the STN-Homography model to directly estimate the homography matrix between an image pair. Unlike most CNN-based homography estimation methods, which use an alternative 4-point homography parameterization, we prove that, after coordinate normalization, the variance of the elements of the coordinate-normalized 3×3 homography matrix is very small, so the matrix can be regressed well with a CNN. Based on the proposed STN-Homography, we use a hierarchical architecture that stacks several STN-Homography models and successively reduces the estimation error. The effectiveness of the proposed method is shown through experiments on the MSCOCO dataset, in which it significantly outperforms the state of the art. The average processing time of our hierarchical STN-Homography with 1 stage is only 4.87 ms on the GPU, and the processing time for the hierarchical STN-Homography with 3 stages is 17.85 ms. The code will soon be open sourced.



1 Introduction

A homography is a mapping between two images of a plane taken from different perspectives. Homographies play a vital role in robotics and computer vision applications such as image stitching [1, 2], monocular SLAM [3, 4], 3D camera pose reconstruction [5, 6], and virtual touring [7].

The basic approach to homography estimation is to use two sets of corresponding points in the Direct Linear Transform (DLT) method. However, finding corresponding points between images is not always an easy task, and a significant amount of research has addressed it. Features such as SIFT [8] and ORB [9] are used to find interest points, and point correspondences are obtained with a matching framework. Commonly, RANSAC [10] is applied to the correspondence set to reject incorrect associations, and after an iterative optimization process the best estimate is chosen.

One major problem with methods such as ORB+RANSAC is their reliance on hand-crafted features and an exhaustive matching step. Convolutional neural networks (CNNs) automate feature extraction and provide much stronger features than conventional approaches; their superiority has been shown many times in various tasks [11, 12, 13, 14]. Recently, attempts have been made to solve the matching problem with CNNs. FlowNet [15] achieves optical flow estimation by using a parallel convolutional network to independently extract features from each image. A correlation layer locally matches the extracted features against each other and aggregates the responses; the expanded feature set is then used in further convolutional layers. Finally, a refinement stage consisting of de-convolutions maps the optical flow estimates back to the original image coordinates. FlowNet 2.0 [16] uses FlowNet models as building blocks in a hierarchical framework to solve the same problem.

Recently, attempts have also been made to tackle homography estimation with CNNs, achieving higher accuracy than the ORB+RANSAC method. HomographyNet [17] defined the homography between two images by the relocation of a set of 4 points, also known as the 4-point homography parameterization. Their model is based on the VGG architecture [18], with 8 convolutional layers, a pooling layer after every 2 convolutions, and 2 fully connected layers, trained with an L2 loss on the difference between predicted and true 4-point coordinate values. The work of [19] used a hierarchy of twin convolutional regression networks to estimate the homography between a pair of images and improved the prediction accuracy of the 4-point homography compared with [17]. The work of [20] proposed an unsupervised learning algorithm that trains a deep convolutional neural network to estimate planar homographies and was also based on the 4-point homography parameterization. These works all chose the 4-point parameterization because the $3\times3$ homography matrix $H$ mixes the rotation, translation, scale, and shear components of the homography transformation. The rotation and shear components tend to have a much smaller magnitude than the translation component; as a result, although an error in their values can greatly impact $H$, it has only a small effect on an L2 loss over the elements of $H$, which is detrimental for training the neural network. The 4-point homography parameterization does not suffer from these problems.

In this paper, we prove that we can directly estimate the $3\times3$ homography matrix with a CNN, rather than the 4-point homography, and achieve more accurate results. Specifically, we use an STN [21] to estimate the pixel-coordinate-normalized homography matrix $\hat{H}$, and the small variance in the magnitude of the elements of the normalized $\hat{H}$ makes it easy to train with a CNN.

Our contributions are as follows: (1) We prove that the homography matrix can be directly learned well with a CNN after pixel coordinate normalization. (2) We propose a hierarchical STN-Homography model and achieve more accurate results than the state of the art. (3) We propose a sequence STN-Homography model which can be trained end to end and obtains better results than the hierarchical STN-Homography model.

2 Dataset

As in [17, 19], we use the COCO 2014 dataset (Microsoft Common Objects in Context) [22]. First, all images are converted to gray-scale and down-sampled to a resolution of $320\times240$. To prepare training and test samples, we choose 118,000 images from the trainval set of COCO 2014 and 10,000 images from its test set. Then, three samples are generated from each image (denoted image_a) in order to increase the dataset size. To achieve this, three random rectangles of size $128\times128$, excluding a boundary region of 32 pixels, are chosen from each image. For each rectangle, a random perturbation in the range of 32 pixels is added to each corner point, which provides the target 4-point homography values. The target homography is used with the OpenCV library to warp image_a into image_b, where image_b is the same size as image_a. Finally, the original corner point coordinates are used within the warped image pair (image_a and image_b) to extract the warped patches patch_a and patch_b. We then calculate the normalized homography matrix $\hat{H}_{ab}$ with the following equation,

$$\hat{H}_{ab} = M H_{ab} M^{-1}, \qquad M = \begin{bmatrix} 2/w & 0 & -1 \\ 0 & 2/h & -1 \\ 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

where $H_{ab}$ is the homography matrix calculated from the previously generated 4-point homography values, and $w$ and $h$ are the width and height of patch_b (patch_a and patch_b have the same size of $128\times128$ pixels). The matrix $M$ maps pixel coordinates to the normalized $[-1, 1]$ range used by the STN sampling grid [21].
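A minimal NumPy sketch of this normalization (assuming the $[-1, 1]$ grid convention of [21] stated above):

```python
import numpy as np

def normalize_homography(H_ab, w, h):
    """Map a pixel-coordinate homography to STN-normalized coordinates."""
    # M maps pixel coordinates in [0, w] x [0, h] to [-1, 1]^2,
    # the grid convention used by the spatial transformer [21].
    M = np.array([[2.0 / w, 0.0, -1.0],
                  [0.0, 2.0 / h, -1.0],
                  [0.0, 0.0, 1.0]])
    H_hat = M @ H_ab @ np.linalg.inv(M)
    return H_hat / H_hat[2, 2]  # fix scale so the last element is 1.0
```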

We also use the target 4-point homography with the OpenCV library to warp patch_a into patch_a_t, of the same size as patch_a; patch_a_t will be used to calculate the L1 pixel-wise photometric loss. The quintuplet (patch_a, patch_b, $H_{ab}$, $\hat{H}_{ab}$, patch_a_t) is our training sample and is fed as input to the network. Note that the prediction of the network is the normalized $\hat{H}_{ab}$, and we need to use Eq. 1 to transform it back to the non-normalized homography matrix $H_{ab}$.
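The sample generation can be sketched with OpenCV as follows; the patch size and perturbation range follow the description above, while the warp direction is one reasonable reading of it rather than the authors' exact code:

```python
import cv2
import numpy as np

RHO, PATCH = 32, 128  # corner perturbation range and patch size

def make_sample(image_a):
    """Generate (patch_a, patch_b, H_ab) from one gray-scale image."""
    h, w = image_a.shape
    x = np.random.randint(RHO, w - PATCH - RHO)   # keep a 32-pixel border
    y = np.random.randint(RHO, h - PATCH - RHO)
    corners_a = np.float32([[x, y], [x + PATCH, y],
                            [x + PATCH, y + PATCH], [x, y + PATCH]])
    # Random perturbation of each corner gives the 4-point homography target.
    corners_b = corners_a + np.float32(np.random.uniform(-RHO, RHO, (4, 2)))
    H_ab = cv2.getPerspectiveTransform(corners_a, corners_b)
    # Warp image_a so that cropping at the original corners yields patch_b.
    image_b = cv2.warpPerspective(image_a, np.linalg.inv(H_ab), (w, h))
    patch_a = image_a[y:y + PATCH, x:x + PATCH]
    patch_b = image_b[y:y + PATCH, x:x + PATCH]
    return patch_a, patch_b, H_ab
```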

Figure 1: Value histogram of $\hat{H}_{ab}$ in the training dataset.

Since a homography matrix can be multiplied by an arbitrary non-zero scale factor without altering the projective transformation, only the ratios of the matrix elements are significant, leaving $9-1=8$ independent ratios corresponding to eight degrees of freedom. We therefore always set the last element of $\hat{H}_{ab}$ (i.e., $\hat{h}_{33}$) to 1.0. In the quintuplet training sample, we flatten $\hat{H}_{ab}$ and take the first eight elements as the regression target. Fig. 1 shows the value histogram of $\hat{H}_{ab}$ in the training samples after the pixel coordinate normalization of Eq. 1. From Fig. 1 we can clearly see that, after normalization, the variance of the eight independent elements of $\hat{H}_{ab}$ is very small, which means $\hat{H}_{ab}$ can be easily regressed with a CNN.
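In code, the regression target is then simply the truncated flatten (reusing the normalize_homography sketch above):

```python
# First 8 elements of the flattened, scale-fixed H_hat; the 9th is 1.0.
target = normalize_homography(H_ab, PATCH, PATCH).flatten()[:8]
```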

3 STN-Homography Architecture

Figure 2: STN-Homography architecture.

Fig. 2 depicts our STN-Homography architecture for predicting the pixel-coordinate-normalized homography matrix $\hat{H}_{ab}$. Our Regression Model outputs 8 regression values corresponding to the first 8 elements of the flattened $\hat{H}_{ab}$, with the last element fixed to 1. The architecture of our Regression Model is similar to VGG Net [18]. We use 8 convolutional layers with a max pooling layer ($2\times2$, stride 2) after every two convolutions. The 8 convolutional layers have the following numbers of filters per layer: 64, 64, 64, 64, 128, 128, 128, 128. The output features of the last convolutional layer are followed by a global average pooling layer and then two fully connected layers; the first fully connected layer has 1024 units and the second has 8 units. Dropout with a probability of 0.5 is applied after the first fully connected layer. The input to our Regression Model is a two-channel grayscale image of $128\times128$ pixels; in other words, the two input images patch_a and patch_b, which are related by a homography, are stacked channel-wise and fed into the network.
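A tf.keras sketch of such a Regression Model; the $3\times3$ kernel size is an assumption in the VGG style, while the layer counts and widths follow the text:

```python
import tensorflow as tf
from tensorflow.keras import layers

def regression_model(patch=128):
    """VGG-style regressor for the first 8 elements of H_hat."""
    inp = layers.Input(shape=(patch, patch, 2))  # patch_a, patch_b stacked
    x = inp
    for i, filters in enumerate([64, 64, 64, 64, 128, 128, 128, 128]):
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
        if i % 2 == 1:                 # max pool after every two convolutions
            x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(8)(x)           # 8 regression values
    return tf.keras.Model(inp, out)
```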

We use two losses when training STN-Homography. The first is an L2 loss between the regression output and the ground truth $\hat{H}_{ab}$, and the second is an L1 pixel-wise photometric loss between the output of the Spatial Transformer and the ground truth patch_a_t. The same L1 loss is also used in [20]; however, [20] is based on the 4-point homography while we directly estimate the homography matrix. With the regression output $\hat{H}_{ab}$, we use the differentiable grid generator and bilinear sampling (see [21] for details) to warp patch_a into patch_a_warp and then compute the L1 loss between patch_a_warp and patch_a_t. The whole network is differentiable and can be trained with back propagation.
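The combined objective can be written as follows (a sketch, with the loss weights explored in Section 4.4, not the authors' exact code):

```python
import tensorflow as tf

def stn_homography_loss(h8_pred, h8_true, patch_a_warp, patch_a_t,
                        w_l2=1.0, w_l1=1.0):
    """Weighted sum of the L2 regression loss and L1 photometric loss."""
    l2 = tf.reduce_mean(tf.square(h8_pred - h8_true))        # on 8 elements
    l1 = tf.reduce_mean(tf.abs(patch_a_warp - patch_a_t))    # pixel-wise
    return w_l2 * l2 + w_l1 * l1
```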

4 Hierarchical STN-Homography

4.1 Architecture of Hierarchical STN-Homography

Figure 3: Hierarchical STN-Homography architecture.

As in [19], we use a hierarchical model that successively reduces the estimation error, as depicted in Fig. 3. In each module, a new STN-Homography model estimates the homography $H_i$ between patch_a_i and patch_b, and the estimated $H_i$ is used with the OpenCV library to warp image_a_i and prepare image_a_i+1 and patch_a_i+1 for the next module. To calculate the final result that directly transforms one image to the other, the homography estimates of the successive modules are multiplied together:

$$H_{ab} = H_n H_{n-1} \cdots H_1 \qquad (2)$$

Warping with the predicted homography matrix from each module results in a visually more similar patch pair (or image pair image_a_warp_i and image_a). This can be visualized as a geometric morphing process that takes one image and successively makes it look like the other, as shown in Fig. 3.
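Composing the per-stage estimates of Eq. 2 is a plain matrix product (a NumPy sketch; stage 1 is applied first, hence the reversed order):

```python
import numpy as np
from functools import reduce

def compose_stages(H_stages):
    """H_ab = H_n @ ... @ H_2 @ H_1 for a list [H_1, ..., H_n]."""
    H = reduce(lambda acc, Hi: Hi @ acc, H_stages)
    return H / H[2, 2]
```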

4.2 Training

As shown in Fig. 3, when training the hierarchical modules, each stage module's training data depends on the previous stage module's predictions. If these three cascaded modules were trained at the same time, some data processing (e.g., warping image_a to generate image_a_warp) would have to be done on the fly, resulting in very slow training. To speed up training, we adopt a step-by-step training strategy and prepare the training data for each stage module offline.

For the training of all stage modules, we use the same training parameters for simplicity. Specifically, we use the momentum optimizer with a momentum value of 0.9, a batch size of 64, and an initial learning rate of 0.05. During the first 1000 training steps, we linearly increase the learning rate from 0.0 to the initial value of 0.05. We then continue training for 90,000 steps, during which the learning rate is decayed from 0.05 to 0.0 with the cosine decay method [23].
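The schedule can be sketched as follows (a minimal version; the authors presumably used TensorFlow's built-in cosine decay):

```python
import math

def learning_rate(step, base_lr=0.05, warmup_steps=1000, total_steps=91000):
    """Linear warmup to base_lr, then cosine decay to 0 [23]."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```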

4.3 Accuracy Results

Figure 4: Mean corner pixel error comparison of various methods for homography estimation of a center-aligned image pair.

First, we experimentally compare the corner error of our hierarchical STN-Homography network with other reported approaches. The corner error is obtained by calculating the L2 distance between target and estimated corner locations, averaged over the 4 corners and all test samples. The approaches used for comparison consist of one traditional method and two convolutional ones: the traditional approach is the ORB+RANSAC method, and the reference deep convolutional approaches are HomographyNet [17] and the hierarchical method of [19].
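Concretely, the reported metric can be computed per sample as (a sketch; corners are $4\times2$ arrays of pixel coordinates):

```python
import numpy as np

def mean_corner_error(corners_true, corners_pred):
    """Mean L2 distance over the 4 corners of one sample (in pixels)."""
    return float(np.mean(np.linalg.norm(corners_true - corners_pred, axis=-1)))
```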

Using our proposed hierarchical STN-Homography, we report the progressive improvement in Fig. 4. As shown there, the mean corner error of our hierarchical STN-Homography with three stages is only 1.57 pixels. Compared with the 9.2-pixel error of [17], we decrease the mean corner error by 82.9%. Compared with the four-stage hierarchical model of [19], which has a mean corner error of 3.91 pixels, we decrease the corner error by 59.8%.

4.4 Loss weight comparison

We use two losses for the training of STN-Homography, i.e., the L2 loss on the regressed $\hat{H}_{ab}$ and the L1 pixel-wise photometric loss. In the previous experiments, we used the same weight for these two losses. In this section, we explore the impact of the loss weights on network performance. For simplicity, we only use the single STN-Homography model for these experiments, with the same training parameters throughout; the results are given in Table 1.

Model name            | L2 loss weight | L1 loss weight | Mean corner error [pixel]
single STN-Homography | 1.0            | 1.0            | 5.83
single STN-Homography | 1.0            | 10.0           | 21.86
single STN-Homography | 1.0            | 0.1            | 6.21
single STN-Homography | 10.0           | 1.0            | 4.85
single STN-Homography | 0.1            | 1.0            | 6.24
Table 1: Impact of loss weights on the performance of a single STN-Homography model.

As can be seen from Table 1, when the weight of the L2 loss is kept fixed, either increasing or decreasing the weight of the L1 loss degrades the accuracy of STN-Homography, and increasing the L1 weight hurts far more, indicating that the L2 loss has the more important impact on the performance of our model. There are two reasons for retaining the L1 loss in our STN-Homography model. One is that the L1 photometric loss can improve network accuracy: as shown in Table 1, keeping the L2 loss weight at 1.0 and increasing the L1 loss weight from 0.1 to 1.0 decreases the mean corner error from 6.21 pixels to 5.83 pixels. The other is that the L1 photometric loss enables semi-supervised training in which some training samples may lack ground truth (as in [20], which uses only the L1 photometric loss for unsupervised training).

4.5 Time consumption analysis

We used TensorFlow [24] to implement our proposed network model. At test time, we achieve an average processing time of 4.87 milliseconds for a single STN-Homography model on a GPU. When the same STN-Homography model is used in each stage of the hierarchical method, the overall computational complexity is given by

$$T = n\,t + (n-1)\,t_w \qquad (3)$$

where $T$ is the end-to-end delay of the whole hierarchical model, $t$ is the average latency of each STN-Homography model, $t_w$ is the overhead of warping to generate a new image pair, and $n$ is the number of stages used in the framework.
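As a rough check of Eq. 3 against the measurements in Table 2 (an estimate on our part, since $t_w$ is not reported separately):

$$t_w \approx \frac{T_{n=3} - 3t}{2} = \frac{17.85 - 3 \times 4.87}{2} \approx 1.6 \text{ ms}$$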

Model name                              | Time consumption on the GPU [ms]
One-stage hierarchical STN-Homography   | 4.87
Two-stage hierarchical STN-Homography   | 11.46
Three-stage hierarchical STN-Homography | 17.85
Table 2: Time consumption of our hierarchical STN-Homography.

Table 2 shows the time consumption of our hierarchical STN-Homography model on a GPU. Our three-stage hierarchical STN-Homography has an average processing time of 17.85 ms on the GPU, a real-time processing speed that satisfies the requirements of most potential applications.

4.6 Prediction results

Figure 5: Prediction results of our three-stage hierarchical STN-Homography.

Fig. 5 shows the prediction results of our three-stage hierarchical STN-Homography on some test samples. The green boxes in image_a and image_b represent the ground truth corresponding points, and the red boxes in image_a are our predictions. As can be seen from the figure, our model produces very small mean corner errors.

5 Sequence STN-Homography

Although the previously proposed hierarchical STN-Homography achieves a very small mean corner error, the training of the multi-stage hierarchical STN-Homography is not end to end, and it relies on image_a during both training and testing (image_a is warped with the prediction of the current stage to generate the input patch_a for the next stage). In this section, we propose the Sequence STN-Homography, which can be trained end to end and does not rely on image_a during training or testing; i.e., Sequence STN-Homography takes the input image pair patch_a and patch_b and directly outputs the predicted homography values.

5.1 Architecture of Sequence STN-Homography

Figure 6: Architecture of Sequence STN-Homography with 3 stages

The Sequence STN-Homography cascades several STN-Homography models, as depicted in Fig. 6. Its training input is patch_a, patch_b, $\hat{H}_{ab}$, and patch_a_t. Take the three-stage sequence STN-Homography as an example. In stage 1, the STN-Homography model takes the input image pair (patch_a, patch_b) and outputs $\hat{H}_1$ and patch_a_warp_stage_1, where $\hat{H}_1$ is used to compute the L2 loss against the ground truth $\hat{H}_{ab}$ and patch_a_warp_stage_1 is used to compute the L1 loss against the ground truth patch_a_t. In stage 2, the model takes the input pair (patch_a_warp_stage_1, patch_b) and outputs $\hat{H}_2$ and patch_a_warp_stage_2, where the merged $\hat{H}_2\hat{H}_1$ is used to compute the L2 loss against $\hat{H}_{ab}$ and patch_a_warp_stage_2 the L1 loss against patch_a_t. In stage 3, the model takes the input pair (patch_a_warp_stage_2, patch_b) and outputs $\hat{H}_3$ and patch_a_warp_stage_3, where the merged $\hat{H}_3\hat{H}_2\hat{H}_1$ is used to compute the L2 loss against $\hat{H}_{ab}$ and patch_a_warp_stage_3 the L1 loss against patch_a_t.

$$\hat{H}_{a \to 1} = \hat{H}_1 \qquad (4)$$
$$\hat{H}_{a \to 2} = \hat{H}_2 \hat{H}_1 \qquad (5)$$
$$\hat{H}_{a \to 3} = \hat{H}_3 \hat{H}_2 \hat{H}_1 \qquad (6)$$

where $\hat{H}_{a \to i}$ denotes the normalized homography between patch_a and patch_a_warp_stage_i.

The prediction output of stage 1, $\hat{H}_1$, is the normalized homography matrix between patch_a and patch_a_warp_stage_1, as shown in Eq. 4. In stage 2, $\hat{H}_2$ is the normalized homography between patch_a_warp_stage_1 and patch_b, and $\hat{H}_2\hat{H}_1$ is the normalized homography between patch_a and patch_a_warp_stage_2, as shown in Eq. 5. In stage 3, $\hat{H}_3$ is the normalized homography between patch_a_warp_stage_2 and patch_b, and $\hat{H}_3\hat{H}_2\hat{H}_1$ is the normalized homography between patch_a and patch_a_warp_stage_3, as shown in Eq. 6. Combining these equations, we obtain $\hat{H}_{ab} = \hat{H}_3\hat{H}_2\hat{H}_1$ and, via Eq. 1, the non-normalized $H_{ab}$. We develop a Tensor Homography Merge layer to compute this product; the layer is differentiable and can be trained with back propagation, as shown in Fig. 6.
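A sketch of what such a merge layer might look like in TensorFlow (the name Tensor Homography Merge is the paper's; the implementation below is our assumption):

```python
import tensorflow as tf

def homography_merge(h8_outer, h8_inner):
    """Compose two batched 8-element homography predictions.

    Returns the 8 elements of H_outer @ H_inner, staying differentiable
    so the merge can sit inside the end-to-end training graph.
    """
    def to_mat(h8):  # append the fixed h33 = 1 and reshape to 3x3
        ones = tf.ones_like(h8[:, :1])
        return tf.reshape(tf.concat([h8, ones], axis=-1), (-1, 3, 3))
    H = tf.matmul(to_mat(h8_outer), to_mat(h8_inner))
    H = H / H[:, 2:3, 2:3]            # renormalize so h33 = 1
    return tf.reshape(H, (-1, 9))[:, :8]
```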

5.2 Training and Accuracy Results

Figure 7: Mean corner error of our sequence STN-Homography model

For simplicity, when training the sequence STN-Homography model, we use the same training parameters as for the hierarchical STN-Homography model, i.e., a batch size of 64, an initial learning rate of 0.05, and 90,000 total training steps. Fig. 7 compares the mean corner error of our sequence STN-Homography model with other reported approaches. So far we have only trained the two-stage sequence STN-Homography, which achieves a mean corner error of 2.14 pixels, lower than the 2.6 pixels of the two-stage hierarchical STN-Homography model. We believe that a three-stage sequence STN-Homography model will likewise be superior to the three-stage hierarchical model.

6 Conclusion

In this paper, we have proposed a hierarchical STN-Homography model for homography estimation. We showed that, after pixel coordinate normalization of the homography matrix, we can directly regress the homography matrix values rather than estimate the alternative 4-point homography, with results that are significantly better than the state of the art. We use two losses during the training of each STN-Homography model and find that the L2 loss on $\hat{H}_{ab}$ plays a more important role than the L1 photometric loss in the performance of STN-Homography. However, the L1 photometric loss allows our model to be trained in a semi-supervised manner, i.e., some training samples may lack ground truth values.

References

  • [1] Brown M, Lowe DG. Automatic Panoramic Image Stitching using Invariant Features. International Journal of Computer Vision. 2006 dec;74(1):59–73.
  • [2] Li N, Xu Y, Wang C. Quasi-Homography Warps in Image Stitching. IEEE Transactions on Multimedia. 2018 jun;20(6):1365–1375.
  • [3] Mur-Artal R, Montiel JMM, Tardos JD. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics. 2015 oct;31(5):1147–1163.
  • [4] Mur-Artal R, Tardos JD. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras. IEEE Transactions on Robotics. 2017 oct;33(5):1255–1262.
  • [5] Zhang Z, Hanson A. 3D Reconstruction Based on Homography Mapping. ARPA Image Understanding Workshop. 1996 01;.
  • [6] Park HS, Shiratori T, Matthews I, Sheikh Y. 3D Reconstruction of a Moving Point from a Series of 2D Projections. In: Computer Vision – ECCV 2010. Springer Berlin Heidelberg; 2010. p. 158–171.
  • [7] Pan Z, Fang X, Shi J, Xu D. Easy tour. In: Proceedings of the 2004 ACM SIGGRAPH International Conference on Virtual Reality Continuum and Its Applications in Industry. ACM Press; 2004.
  • [8] Lowe DG. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision. 2004 nov;60(2):91–110.
  • [9] Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF. In: 2011 International Conference on Computer Vision. IEEE; 2011.
  • [10] Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM. 1981 jun;24(6):381–395.
  • [11] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017 dec;39(12):2481–2495.
  • [12] He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN;.
  • [13] Cao Z, Hidalgo G, Simon T, Wei SE, Sheikh Y. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields;.
  • [14] Mikolov T, Chen K, Corrado G, Dean J. Efficient Estimation of Word Representations in Vector Space;.

  • [15] Fischer P, Dosovitskiy A, Ilg E, Häusser P, Hazırbaş C, Golkov V, et al. FlowNet: Learning Optical Flow with Convolutional Networks;.
  • [16] Ilg E, Mayer N, Saikia T, Keuper M, Dosovitskiy A, Brox T. FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks;.
  • [17] DeTone D, Malisiewicz T, Rabinovich A. Deep Image Homography Estimation;.
  • [18] Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition;.
  • [19] Nowruzi FE, Laganiere R, Japkowicz N. Homography Estimation from Image Pairs with Hierarchical Convolutional Networks. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). IEEE; 2017.
  • [20] Nguyen T, Chen SW, Shivakumar SS, Taylor CJ, Kumar V. Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model;.
  • [21] Jaderberg M, Simonyan K, Zisserman A, Kavukcuoglu K. Spatial Transformer Networks;.
  • [22] Lin TY, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, et al. Microsoft COCO: Common Objects in Context;.
  • [23] Loshchilov I, Hutter F. SGDR: Stochastic Gradient Descent with Warm Restarts;.

  • [24] Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems;.