DeepMeshFlow: Content Adaptive Mesh Deformation for Robust Image Registration

12/11/2019, by Nianjin Ye, et al.

Image alignment by mesh warps, such as MeshFlow, is a fundamental task that has been widely applied in various vision applications (e.g., multi-frame HDR/denoising, video stabilization). Traditional mesh warp methods detect and match image features, so the quality of alignment depends highly on the quality of the features. However, image features are not robust in low-texture and low-light scenes. Deep homography methods, on the other hand, are free from this problem by learning deep features, but a homography is limited to planar motions. In this work, we present a deep MeshFlow motion model, which takes two images as input and outputs a sparse motion field with motions located at mesh vertices. Deep MeshFlow enjoys the merit of MeshFlow that it can describe nonlinear motions, while sharing the advantage of deep homography that it is robust in challenging textureless scenarios. In particular, we present a new unsupervised network structure with content-adaptive capability. On one hand, image content that cannot be aligned under the mesh representation is rejected by a learned mask, similar to the RANSAC procedure. On the other hand, we learn meshes at multiple resolutions and combine them into a non-uniform mesh division. Moreover, we present a comprehensive dataset covering various scenes for training and testing. Comparisons with both traditional mesh warp methods and deep-based methods show the effectiveness of our deep MeshFlow motion model.




1 Introduction

The problem of image registration is a classic vision topic that has been studied for decades [4, 28, 8]. It remains active not only because of its difficulty under certain circumstances, such as low-textured scenes, parallax and dynamic objects, but also due to its widespread applications, such as panorama creation [3], multi-frame HDR/denoising [33], multi-frame super resolution [30], and video stabilization [20].

Figure 1: Comparison between popular image registration methods: deep homography [23], MeshFlow [19] and our proposed deep MeshFlow. The source images are aligned to the target image, and the two images are blended. Good registration produces less blur and is free from ghosting artifacts. We show zoom-in windows to highlight misalignments.

A variety of motion models have been proposed for image registration, among which the homography [8] is the most popular given its simplicity and efficiency. A homography is classically estimated by matching image features [22] between two images, with false matches rejected by RANSAC [6]. One problem is that the quality of the estimated homography depends highly on the quality of the matched features: an insufficient number of correct matches, or an uneven distribution of them, can easily damage the performance. Recently, deep homography has been proposed, which takes two images as network input and outputs the homography [5, 23]. Compared with feature-based methods, deep homography is more robust in various challenging cases, such as low light, low texture and high noise [32]. The other problem with homography is its limited degrees of freedom: a homography can only describe planar motions or motions caused by pure camera rotations, and violating these assumptions produces incorrect alignments. For images with parallax, a global homography is usually used for an initial alignment before more sophisticated models are applied [7, 18, 31, 15]. Mesh-based image warping can represent spatially varying depth variations [18, 20]: each mesh grid undergoes a local linear homography, and these accumulate into a highly nonlinear representation. Igarashi et al. proposed as-rigid-as-possible image warping to enforce local rigidity of each mesh triangle [10]. Later, Liu et al. extended [10] with content-preserving warps that constrain the rigidity of mesh cells according to the image content [18]. Liu et al. then proposed the MeshFlow motion model, which further simplifies the estimation of the mesh model [19]: MeshFlow is a sparse motion field with motions only located at the mesh vertices. These methods have been proven sufficiently flexible for handling complex scene depth variations [14, 21, 20]. However, one common challenge faced by mesh-based methods is still the quality of image features; both the number of matched features and their distribution influence the performance. Zaragoza et al. [31] proposed an As-Projective-As-Possible (APAP) mesh deformation approach, and Lin et al. [17] proposed a spatially varying affine model that alleviates the feature dependency by interpolating ideal features into non-ideal regions; however, both still require a certain number of qualified features to start with. Optical flow [26], on the other hand, estimates per-pixel motions and can preserve fine motion details compared with mesh-interpolated motion fields, but its estimation is computationally expensive compared with light-weight mesh representations. Synthesis-driven applications do not require physically accurate motion at every pixel, so estimating optical flow overshoots the requirement and is often unnecessary.

Figure 1 shows some examples. Figure 1(a) and (b) show the comparison between our method and the deep homography method [23]. The source image is aligned to the target image, and the two images are blended for illustration. The scene contains multiple planes, e.g., the ground and a building facade. Misalignments (highlighted by the zoom-in window) can be observed at the building facade for deep homography, while our deep MeshFlow aligns multiple planes and is thus free from this problem. Figure 1(c) and (d) show the comparison with the traditional MeshFlow [19]. Feature detection is difficult in this example due to poor texture, causing MeshFlow to fail. In contrast, our deep MeshFlow is robust to textureless scenes.

In this work, we propose an unsupervised approach with a new architecture for content-adaptive deep MeshFlow estimation. We combine the advantage of deep homography, which is robust in textureless regions, with the advantage of MeshFlow, which is a light-weight nonlinear motion representation. Specifically, our network takes the two images to be aligned as input and outputs a sparse motion field, the mesh flow, with motions only located at mesh vertices. We learn a content mask to reject outlier regions, such as moving objects and discontinuous large foregrounds, that cannot be registered by the mesh deformation but would otherwise degrade the overall alignment quality. This content-adaptive capability is similar to the RANSAC procedure used when estimating a homography [8] or mesh warps [20, 19] in traditional approaches, and is realized by a novel triplet loss. Moreover, instead of directly outputting the mesh at the desired resolution, we first generate several intermediate meshes at different resolutions (e.g., 1×1, 4×4 and 16×16), then choose the best combination among them, assembling the final output. This idea is borrowed from HEVC video coding [25], in which the block division of a frame can be non-uniform according to the image content. Here, our mesh division is also non-uniform, based on both image content and motion. For regions that require more degrees of freedom, we choose finer scales for registration accuracy, while for regions that are relatively flat, we choose coarser scales for robustness. This flexibility is realized by the segmentation module in our pipeline, which proves more effective than simply choosing the finest scale.

In addition, we introduce a comprehensive MeshFlow dataset for training, whose testing set contains manually labeled ground-truth point matches for evaluation. We split the dataset into categories according to scene characteristics: scenes with multiple dominant planes, scenes captured at night, scenes with low-textured regions, and scenes with small and large foregrounds. Experiments show that our method outperforms previous leading traditional mesh-based methods [19, 31], as well as recent deep homography methods [5, 23, 32]. Our contributions can be summarized as:

  • A new unsupervised network structure for deep MeshFlow estimation, which outperforms previous state-of-the-art methods.

  • Content-adaptive capability, in terms of both rejecting interfering regions and adaptive mesh scale selection.

  • A comprehensive dataset containing various scene types for training and testing.

Figure 4: Network structure (a) and triplet loss (b) used for our DeepMeshFlow estimation.

2 Related Work

Global parametric models.

Homography is a widely used parametric alignment model: a 3×3 matrix with 8 degrees of freedom, describing either planar motions in space or motions induced by pure camera rotations. Traditional methods require sparse feature matches [22, 1, 24] to estimate a homography; however, image features are unreliable in low-textured regions. Recently, deep solutions have been proposed for improved robustness, such as the supervised approach that trains a homography network under the guidance of random homography proposals [5], and the unsupervised approach that directly minimizes a warping MSE distance [23]. On the other hand, the homography model is restricted by its motion assumptions, violation of which easily introduces misalignments, e.g., in scenes consisting of multiple planes or with discontinuous depth variations.

Mesh warping.

To handle depth parallax, mesh-based image warping is more popular. Liu et al. proposed Content-Preserving Warps (CPW) to encourage mesh cells to undergo a rigid motion [18]. Li et al. proposed dual-feature warping that considers not only image features but also line segments for warping in low-textured regions [14]. Lin et al. incorporated a curve-preserving term to preserve curve structures [16]. Liu et al. introduced MeshFlow, a non-parametric warping method for video stabilization [19]. Compared with dense optical flow, MeshFlow is a sparse motion field with motions only located at mesh vertices; it detects and tracks image features for model estimation. In this work, we propose a deep solution, DeepMeshFlow, for the same purpose, but with largely improved robustness in scenes that suffer from feature detection and matching/tracking problems.

Optical flows.

Optical flow estimates per-pixel dense motion between two images. Compared with global alignment methods, it better preserves motion details. Traditional methods often adopt a coarse-to-fine variational optimization framework for flow estimation [9, 26, 2]. Recently, flow accuracy has been greatly improved by convolutional networks [29, 27, 11]. For some image/video editing applications, however, optical flow requires a series of post-processing steps before use, such as occlusion detection, motion inpainting and outlier filtering. For example, Liu et al. estimated SteadyFlow from raw optical flow by rejecting and inpainting motion-inconsistent foregrounds. Our mesh-based representation, on the other hand, is free from such issues: it is light-weight and flexible for various applications, such as multi-frame HDR [33], burst denoising [21], and video stabilization [18, 20, 19].

3 Algorithm

MeshFlow is a motion model that describes non-linear warping between two image views [19]. It has more degrees of freedom than a homography but is computationally much cheaper than optical flow. It is represented by a mesh of H×W grids, which contains (H+1)×(W+1) vertices. At each vertex a 2D motion vector is defined, so that each grid corresponds to one homography computed from the 4 motion vectors on its 4 corner vertices. With multiple homography matrices computed on the various mesh grids, the entire image can be warped in a non-linear manner so as to fit multiple planes in the scene.
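As a minimal illustration of this per-grid representation, the homography of one grid can be recovered from its 4 corner vertices and their displaced positions via the direct linear transform (DLT). The numpy sketch below (function names are ours, not the paper's) solves it for a single grid:

```python
import numpy as np

def homography_from_vertices(src, dst):
    """Solve the 3x3 homography mapping 4 source vertices to the 4
    destination vertices (source + 2D motion vector) via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A: smallest right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply a homography to a 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

With non-degenerate corners (no three collinear), the 8×9 system has a one-dimensional null space, so the solution is unique up to scale.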

3.1 Network Structure

Our method is built upon a convolutional neural network that takes two images I_a and I_b as input and produces a mesh flow of size (H+1)×(W+1)×2 as output, where H and W are the height and width of the mesh and a 2D motion vector is defined on each vertex. Given a mesh flow of this form, each grid can be represented by a homography matrix, solved from the 4 motion vectors on its 4 corners. The entire network can be divided into four modules: a feature extractor f, a mask predictor m, a scene segmentation network s and a multi-scale mesh flow estimator h. f and m are fully convolutional networks that accept inputs of arbitrary size and produce a concatenation of feature maps. h then serves as a regressor that transfers the features into mesh flows at multiple scales. Finally, the scene segmentation network s produces a fusion mask that fuses the multi-scale mesh flows into one as the final output. Figure 4 illustrates the network structure; in this sub-section we briefly introduce f, m and s, and leave h to the next sub-section.
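The four-module pipeline can be sketched end to end as follows. Every module here is a toy stand-in (fixed functions rather than trained networks), and the names f, m, s, h are our labels for the modules just described:

```python
import numpy as np

def f(img):
    """Toy feature extractor: gradient magnitude of the image."""
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def m(img):
    """Toy mask predictor: feature energy normalized to [0, 1]."""
    e = f(img)
    return e / (e.max() + 1e-8)

def h(ga, gb):
    """Toy multi-scale estimator: a constant shift at 3 mesh scales
    (1x1, 4x4, 16x16 grids -> 2x2, 5x5, 17x17 vertex fields)."""
    shift = np.array([1.0, 0.0])
    return [np.tile(shift, (k + 1, k + 1, 1)) for k in (1, 4, 16)]

def s(flows):
    """Toy segmentation: choose the finest branch at every vertex."""
    return np.full(flows[-1].shape[:2], 2, dtype=int)

def forward(ia, ib):
    ga, gb = f(ia) * m(ia), f(ib) * m(ib)  # mask-weighted features
    flows = h(ga, gb)                      # multi-scale mesh flows
    labels = s(flows)                      # per-vertex branch selection
    return flows, labels
```

The point is only the data flow: features are weighted by masks, regressed into several mesh flows, and a per-vertex label field selects among them.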

Feature extractor.

Unlike previous DNN-based methods that directly use pixel values as features, our network automatically learns a feature representation from the input for robust alignment. To this end, we build an FCN that takes an input image and produces a feature map of matching spatial resolution. For inputs I_a and I_b, the feature extractor shares weights and produces feature maps F_a and F_b, i.e.

F_a = f(I_a),  F_b = f(I_b).
Mask predictor.

In non-planar scenes, especially those with moving objects, no single homography can align the two views. Although a mesh flow contains multiple homography matrices that partially solve the non-planar issue, within a single local region one homography may still fail to align all pixels well. In traditional algorithms, RANSAC is widely applied to find the inliers for homography estimation, so as to solve the matrix that best approximates the scene alignment. Following a similar idea, we build a sub-network m that learns to produce an inlier probability map, or mask, highlighting the content in the feature maps that contributes most to the estimation. The mask has the same size as the feature map. With the masks, we weight the features extracted by f before feeding them to the mesh flow estimator, obtaining two weighted feature maps G_a and G_b as

G_a = F_a ⊙ M_a,  G_b = F_b ⊙ M_b,  with M_a = m(I_a), M_b = m(I_b),

where ⊙ denotes element-wise multiplication.
Scene segmentation.

The weighted feature maps G_a and G_b are concatenated and fed to the MeshFlow estimator h, which produces mesh flows at different scales. These multi-scale mesh flows are then fused into one by a branch-selection scheme: we train a scene segmentation network s that segments the image into K classes, each corresponding to one branch, i.e.

S = s(I_a, I_b),

where S is of the same resolution as the finest-scale mesh flow, i.e. of size (H+1)×(W+1).

Figure 5: Our predicted masks for various scenes. (a) contains a moving car, which is removed in the mask. (b) is a regular static scene, where the mask attends to the entire image equally. (c) contains a large foreground, the fountain, which is successfully rejected by the mask. (d) consists of sea and sky, which provide few textures; the mask concentrates on the horizon. (e) is a night example; the mask attends to the Ferris wheel and buildings.

3.2 MeshFlow Estimator

Multi-scale MeshFlow.

As mentioned above, the output of our network is a mesh flow of size (H+1)×(W+1)×2. Directly regressing the input, i.e. the two weighted feature maps G_a and G_b, to this mesh flow is not straightforward, as too many degrees of freedom (DoF) are involved. To tackle this, we divide the mesh flow regression into K branches, each responsible for one mesh scale. The intuition is that in complex scenes, different planes may differ in scale: a coarse-scale mesh flow can align the two views more rigidly and tends to be easier to train than a fine-scale mesh flow with more DoF. The backbone follows a ResNet-34 structure, containing 34 layers of strided convolutions followed by K branches, each of which starts with an adaptive pooling layer and generates a mesh flow of a specific size through an additional convolutional layer. In our experiments we set K to 3, so the 3 branches correspond to mesh flows of resolution 1×1, 4×4 and 16×16. The coarse-scale mesh flows are then upsampled to the finest scale before fusion; we denote the resulting flows V^1, V^2, V^3. This process is expressed as

{V^1, V^2, V^3} = h(G_a, G_b).
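Upsampling a coarse vertex field to the finest grid before fusion can be done by bilinear interpolation over the vertex lattice. A numpy sketch (function name ours):

```python
import numpy as np

def upsample_meshflow(flow, out_n):
    """Bilinearly resample an (n, n, 2) vertex motion field to (out_n, out_n, 2)."""
    n = flow.shape[0]
    # Target vertex positions expressed in source-grid coordinates.
    t = np.linspace(0.0, n - 1.0, out_n)
    i0 = np.minimum(np.floor(t).astype(int), n - 2)
    w = t - i0
    # Interpolate along rows, then along columns.
    rows = (flow[i0] * (1 - w)[:, None, None]
            + flow[i0 + 1] * w[:, None, None])          # (out_n, n, 2)
    out = (rows[:, i0] * (1 - w)[None, :, None]
           + rows[:, i0 + 1] * w[None, :, None])        # (out_n, out_n, 2)
    return out
```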
MeshFlow fusion.

With {V^k} computed by the previous steps, we finally fuse the mesh flows into the output mesh flow V using the segmentation mask S in the following manner,

V(i, j) = V^{S(i, j)}(i, j),

where (i, j) is a vertex coordinate on the mesh. By this strategy, the output mesh flow conveys homography alignment at an appropriate scale for each local grid: it has enough DoF to align the two views, yet remains easy to train.
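This per-vertex branch selection amounts to an index lookup. A numpy sketch of the fusion step (names ours), assuming all K mesh flows are already upsampled to the finest (N, N) vertex grid:

```python
import numpy as np

def fuse_meshflows(flows_up, labels):
    """Pick, at each vertex (i, j), the motion from branch labels[i, j].
    flows_up: list of K (N, N, 2) mesh flows at the finest grid.
    labels:   (N, N) int array with values in [0, K)."""
    stacked = np.stack(flows_up)                 # (K, N, N, 2)
    idx = labels[None, ..., None]                # broadcast over channels
    return np.take_along_axis(stacked, idx, axis=0)[0]
```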

3.3 Triplet Loss for Training

With the mesh flow V estimated, we obtain a homography matrix for each of its grids. We then warp image I_a to I'_a and extract its feature map F'_a. Intuitively, for a local grid, if the homography is accurate enough, F'_a should be well aligned with F_b, yielding a low loss between them. Considering that in real scenes a single homography cannot fully explain the transformation between the two views, we also normalize the loss by the masks M'_a and M_b, where M'_a is the warped version of M_a. The loss between the warped I'_a and I_b is thus

L_n(I'_a, I_b) = Σ_i M'_a(i) M_b(i) ||F'_a(i) − F_b(i)||_1 / Σ_i M'_a(i) M_b(i),   (7)

where i indexes pixel locations in the masks and feature maps. We use a spatial transformer network [12] to implement the warping operation.
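Under our reading of this normalized loss (symbols and function names are ours), the computation is:

```python
import numpy as np

def masked_alignment_loss(fa_warp, fb, ma_warp, mb, eps=1e-8):
    """L1 distance between warped and target features, weighted and
    normalized by the product of the two inlier masks."""
    w = ma_warp * mb
    return np.sum(w * np.abs(fa_warp - fb)) / (np.sum(w) + eps)
```

Regions where either mask is near zero contribute neither to the numerator nor to the denominator, so outlier content is effectively excluded.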

Directly minimizing Eq. 7 can easily lead to a trivial solution in which the feature extractor produces all-zero maps, i.e. F'_a = F_b = 0. In this case the learned features indeed make I'_a and I_b appear aligned, but fail to reflect that the original images I_a and I_b are misaligned. We therefore introduce another loss between I_a and I_b, i.e.

L(I_a, I_b) = ||F_a − F_b||_1,

and maximize it while minimizing Eq. 7. This strategy avoids the trivial solution and enables the network to learn a discriminative feature map for image alignment.

In practice, we swap the features of I_a and I_b to produce a reversed mesh flow, from which a homography matrix H^k_{ba} is computed for each grid k. Following Eq. 7, we add a loss between the warped I'_b and I_a. We also add a constraint that enforces H^k_{ab} and H^k_{ba} to be inverse to each other. The optimization objective of the network can thus be written as

min  L_n(I'_a, I_b) + L_n(I'_b, I_a) − λ L(I_a, I_b) + μ Σ_k ||H^k_{ab} H^k_{ba} − I||²,

where λ and μ are balancing hyper-parameters, I is the 3×3 identity matrix, and the sum runs over all mesh grids. We set λ and μ empirically in our experiments. We illustrate the loss formulations in Figure 4(b).
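The inverse-consistency term can be sketched as a penalty on the product of the forward and backward per-grid homographies (a sketch under our notation; the function name is ours):

```python
import numpy as np

def identity_constraint(h_ab, h_ba):
    """Penalize deviation of H_ab @ H_ba from the 3x3 identity, summed
    over all G mesh grids (h_ab, h_ba: (G, 3, 3) per-grid homographies)."""
    prod = np.einsum('gij,gjk->gik', h_ab, h_ba)  # batched matrix product
    return np.sum((prod - np.eye(3)) ** 2)
```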

3.4 Unsupervised Content-Awareness Learning

As mentioned above, our network contains a sub-network m to predict an inlier probability map, or mask. The network achieves content-awareness through two effects. First, we use the masks to explicitly weight the features F_a and F_b, so that only highlighted features are fully fed into the MeshFlow estimator h. Second, the masks are implicitly involved in the normalized distance between the warped feature F'_a and its counterpart F_b (and between F'_b and F_a), meaning that only regions genuinely suitable for alignment are taken into account. Areas containing low texture or moving foreground, being non-distinguishable or misleading for alignment, are naturally excluded from the local homography estimation of a grid during optimization of the proposed triplet loss. This content-awareness is achieved by a fully unsupervised learning scheme, without any ground-truth mask as supervision.

To demonstrate the effectiveness of the mask, we illustrate examples in Figures 6 and 5. In Figure 6, we visualize the mask when only one branch of mesh flow is used. For a coarse-scale mesh flow, each grid covers a larger area in which a single homography is less likely to represent the transformation, so fewer pixels are highlighted in the mask. Our DeepMeshFlow solution works at multiple scales, so the highlighted region in its mask is smaller than that of a mask trained with the fine-scale mesh flow, but larger than that of a mask trained with the coarse-scale mesh flow. Figure 5 shows mask examples generated in several scenarios. In Figure 5(a)(c), where the scenes contain dynamic objects, our network successfully rejects the moving objects, even when the movements are subtle, like the water in (c); such cases are very difficult for RANSAC. The most challenging case is Figure 5(a), in which the moving foregrounds are complex, including people and cars; our method successfully locates the useful background for the homography estimation. Figure 5(d) is a low-textured example, in which the sky occupies half of the image. It is challenging for traditional methods: the sky provides no features and the sea causes matching ambiguities. Our predicted mask concentrates on the horizon with sparse weights on the sea waves. Figure 5(e) is a low-light example, where only the visible areas carry weights. We also conduct an ablation study on disabling the mask prediction; as shown in Table 1, accuracy decreases significantly when the mask is removed.

4 Experimental Results

Figure 6: Comparison of masks with respect to different mesh resolutions. (a) the scene; (b) mask produced by the coarse mesh; (c) mask produced by the fine mesh; and (d) mask produced by our adaptive mesh.

4.1 Dataset and Implementation Details

To train and evaluate our deep MeshFlow, we present a comprehensive dataset that contains various scenes as well as marked point correspondences. We split the dataset into several categories to test performance under different scenarios: scenes consisting of a single plane (SP), scenes mainly consisting of multiple dominant planes (MP), scenes with large foreground (LF), scenes with low textures (LT) and scenes captured in low light (LL). The first three categories focus on the motion representation capability of motion models, while the last two concentrate on the capability of feature extraction. Notably, the LT and LL categories contain all scene types SP, MP and LF. Each category contains around 600 image pairs, about 3,000 image pairs in total. Figure 7 shows some examples.

Points annotations.

For the testing set, we mark ground-truth point correspondences for quantitative evaluation. Figure 8 shows several examples of our annotated correspondences. For each pair, we carefully marked around 10 correspondences, evenly distributed over the image. For the multi-plane (MP) category, we distribute points equally across the different planes. For the low-texture (LT) category, we mark points with extra care to ensure correctness. In total, we marked about 3,000 pairs of images and nearly 30k pairs of matching points across all categories.

Figure 7: Examples in our dataset. (a) Examples of single plane (SP), (b) examples of multiple plane (MP), (c) examples of scenes contains large foreground (LF), (d) examples of scenes with low-textures (LT) and (e) examples of scenes with low-light (LL).
Figure 8: Examples of our annotated point correspondences for quantitative evaluation. The first example contains a dominant foreground; we mark points on both foreground and background. The second example captures the blue sky; we still mark some points according to the textures of the clouds. The third example captures a textureless indoor white wall; we mark correspondences according to the subtle textures and corners.
Figure 9: Comparisons with fixed mesh resolutions. We run coarse and fine fixed meshes to compare with our adaptive mesh. A fixed mesh resolution cannot produce results comparable to ours. For example, in the second example of the first row, a denser mesh is needed to align the nearby handrail, which a coarse mesh cannot handle. On the other hand, in the second example of the second row, a sparse mesh is required to align the faraway ship: a dense mesh may not receive sufficient constraints, as the nearby regions (blue sky and sea) contain few textures.

Implementation details.

Our network is trained for 30k iterations with the Adam optimizer [13]. The learning rate is decayed periodically during training. The implementation is based on PyTorch, and training is performed on an NVIDIA RTX 2080 Ti. To augment the training data and avoid black boundaries appearing in the warped images, we randomly crop patches from the original images to form I_a and I_b.
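The crop augmentation can be sketched as follows (pure Python; the patch size and names are ours): the same window is taken from both images so the pair stays registered and no warped-in black border enters the patch.

```python
import random

def random_crop_pair(img_a, img_b, ph, pw):
    """Crop the same (ph, pw) window from two equally sized images,
    given as nested lists (rows of pixels)."""
    h, w = len(img_a), len(img_a[0])
    y = random.randint(0, h - ph)
    x = random.randint(0, w - pw)
    crop = lambda im: [row[x:x + pw] for row in im[y:y + ph]]
    return crop(img_a), crop(img_b)
```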

4.2 Comparison with Existing Methods

Method        SP    MP    LF    LT    LL    Avg.
Eye           6.70  8.99  4.73  7.38  7.83  7.13
w/o Mask      1.82  2.37  2.42  3.16  2.72  2.50
1×1 mesh      1.78  2.24  2.31  2.40  2.72  2.29
4×4 mesh      1.60  1.80  2.11  2.57  2.64  2.14
16×16 mesh    1.64  2.02  2.30  3.26  3.02  2.45
MeshFlow      1.64  2.03  2.26  3.19  3.10  2.44
Unsupervised  1.87  2.63  2.57  2.69  2.46  2.45
Ours          1.57  1.74  1.99  2.20  2.45  1.99

Table 1: Ablation studies on the mask, triplet loss, training strategy and network backbones. Values are average distances (in pixels) between transformed points and marked ground-truth points.

Qualitative comparison.

We compare our method with various methods, including the classic traditional methods MeshFlow [19] and As-Projective-As-Possible mesh warping [31], and deep methods, supervised [5] and unsupervised [23] deep homography. The unsupervised deep homography method was trained on aerial images, which ignores the effect of depth parallax, so for a fairer comparison we fine-tune it on our training data.

The source image is warped to the target image, and the two images are blended for illustration; methods producing clearer blended images indicate better alignment. For each method we show two examples, as shown in Figure 10. The first, second and third rows show comparisons with As-Projective-As-Possible (APAP), MeshFlow and unsupervised deep homography, respectively; our results are shown in the second and fourth columns. We highlight some regions for clearer illustration.

Figure 10: Comparison with existing approaches. We select three methods, APAP [31], Meshflow [19] and Unsupervised deep homography [23], which are mostly related to our method for comparisons. We use zoom-in windows to highlight some regions.

Quantitative comparison.

We verify performance using our annotated points in the testing set, organized by category. Specifically, we use the estimated mesh/homography to transform the source points toward the target points, and record the average L2 distances as the evaluation metric. We report the performance for each category as well as the overall average in Table 1; smaller numbers indicate better alignment. In Table 1, 'Eye' refers to the identity matrix, indicating the original distances if no alignment is performed. As can be seen, the original distances are high, around 7 pixels on average. After alignment, all methods decrease this score, indicating that the alignment takes effect. Among all candidates, our method achieves the best result: an average score of 1.99, surpassing the competitors by a relatively large margin (MeshFlow achieves 2.44 and unsupervised deep homography 2.45 on average).
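The metric itself is straightforward. A numpy sketch for the single-homography case (for a mesh, each annotated point would use the homography of the grid containing it; the function name is ours):

```python
import numpy as np

def mean_point_error(H, src_pts, dst_pts):
    """Average L2 distance between homography-transformed source points
    and ground-truth target points (both given as (N, 2) arrays)."""
    ones = np.ones((len(src_pts), 1))
    p = np.hstack([src_pts, ones]) @ H.T   # to homogeneous, transform
    p = p[:, :2] / p[:, 2:3]               # back to Euclidean coordinates
    return float(np.mean(np.linalg.norm(p - dst_pts, axis=1)))
```

The 'Eye' row of Table 1 corresponds to calling this with H set to the identity matrix.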

4.3 Ablation Studies

To verify the effectiveness of our content-adaptive design, we conduct two experiments: with and without the mask, and with fixed mesh resolutions.

W/o mask.

We exclude the mask component from our pipeline and compare the results; Table 1 'w/o Mask' shows the outcome. Without the mask, the average score degrades from 1.99 to 2.50, so the mask is important to the mesh flow estimation. In particular, for the low-texture (LT) category, the score without the mask is 3.16 while the score with the mask is 2.20, an improvement of 0.96, indicating that the mask is particularly helpful for the LT category. The scores of the other categories also improve to a certain extent.

Mesh resolutions.

We train several fixed mesh resolutions to compare with our adaptive mesh resolution; Table 1 shows the results. In particular, we evaluate the 1×1, 4×4 and 16×16 meshes. None of these fixed resolutions achieves results comparable to our adaptive mesh resolution. We further show visual comparisons of the fixed meshes in Figure 9.

5 Conclusion

We have presented a network architecture for deep mesh flow estimation with content-aware capability. Traditional feature-based methods rely heavily on the quality of image features, which are vulnerable to low-texture and low-light scenes, and large foregrounds also cause trouble for RANSAC outlier removal. Previous deep homography methods pay little attention to the depth disparity issue: they treat the image content equally and can be influenced by non-planar structures and dynamic objects. Our network learns a mask during estimation to reject outlier regions for robust mesh flow estimation. In addition, we compute the loss on our learned deep features instead of directly comparing image contents. Moreover, we have provided a comprehensive dataset for two-view alignment, divided into 5 categories (regular, low-texture, low-light, small-foreground and large-foreground) to evaluate estimation performance under different aspects. Comparisons with previous methods show the effectiveness of our method.


  • [1] H. Bay, T. Tuytelaars, and L. Van Gool (2006) Surf: speeded up robust features. In Proc. ECCV, pp. 404–417. Cited by: §2.
  • [2] J. Wulff and M. J. Black (2015) Efficient sparse-to-dense optical flow estimation using a learned basis and layers. In Proc. CVPR, pp. 120–130. Cited by: §2.
  • [3] M. Brown, D. G. Lowe, et al. (2003) Recognising panoramas. In Proc. ICCV, Vol. 3, pp. 1218. Cited by: §1.
  • [4] D. Capel (2004) Image mosaicing. In Image Mosaicing and super-resolution, pp. 47–79. Cited by: §1.
  • [5] D. DeTone, T. Malisiewicz, and A. Rabinovich (2016) Deep image homography estimation. arXiv preprint arXiv:1606.03798. Cited by: §1, §1, §2, §4.2.
  • [6] M. A. Fischler and R. C. Bolles (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6), pp. 381–395. Cited by: §1.
  • [7] J. Gao, S. J. Kim, and M. S. Brown (2011) Constructing image panoramas using dual-homography warping. In Proc. CVPR, pp. 49–56. Cited by: §1.
  • [8] R. Hartley and A. Zisserman (2003) Multiple View Geometry in Computer Vision. Cambridge University Press. Cited by: §1, §1, §1.
  • [9] B. Horn and B. G. Schunck (1981) Determining optical flow. Artificial Intelligence 17, pp. 185–203. Cited by: §2.
  • [10] T. Igarashi, T. Moscovich, and J. F. Hughes (2005) As-rigid-as-possible shape manipulation. In ACM Trans. Graphics (Proc. of SIGGRAPH), Vol. 24, pp. 1134–1141. Cited by: §1.
  • [11] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox (2017) Flownet 2.0: evolution of optical flow estimation with deep networks. In Proc. CVPR, pp. 2462–2470. Cited by: §2.
  • [12] M. Jaderberg, K. Simonyan, A. Zisserman, et al. (2015) Spatial transformer networks. In Advances in neural information processing systems, pp. 2017–2025. Cited by: §3.3.
  • [13] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.1.
  • [14] S. Li, L. Yuan, J. Sun, and L. Quan (2015) Dual-feature warping-based motion model estimation. In Proc. ICCV, pp. 4283–4291. Cited by: §1, §2.
  • [15] K. Lin, N. Jiang, L. Cheong, M. Do, and J. Lu (2016) Seagull: seam-guided local alignment for parallax-tolerant image stitching. In Proc. ECCV, pp. 370–385. Cited by: §1.
  • [16] K. Lin, S. Liu, L. Cheong, and B. Zeng (2016) Seamless video stitching from hand-held camera inputs. Computer Graphics Forum 35 (2), pp. 479–487. Cited by: §2.
  • [17] W. Lin, S. Liu, Y. Matsushita, T. Ng, and L. Cheong (2011) Smoothly varying affine stitching. In Proc. CVPR, pp. 345–352. Cited by: §1.
  • [18] F. Liu, M. Gleicher, H. Jin, and A. Agarwala (2009) Content-preserving warps for 3d video stabilization. In ACM Trans. Graphics (Proc. of SIGGRAPH), Vol. 28, pp. 44. Cited by: §1, §2, §2.
  • [19] S. Liu, P. Tan, L. Yuan, J. Sun, and B. Zeng (2016) Meshflow: minimum latency online video stabilization. In Proc. ECCV, pp. 800–815. Cited by: Figure 1, §1, §1, §1, §1, §2, §2, §3, Figure 10, §4.2.
  • [20] S. Liu, L. Yuan, P. Tan, and J. Sun (2013) Bundled camera paths for video stabilization. ACM Trans. Graphics (Proc. of SIGGRAPH) 32 (4), pp. 78. Cited by: §1, §1, §1, §2.
  • [21] Z. Liu, L. Yuan, X. Tang, M. Uyttendaele, and J. Sun (2014) Fast burst images denoising. ACM Trans. Graphics (Proc. of SIGGRAPH) 33 (6), pp. 232. Cited by: §1, §2.
  • [22] D. G. Lowe (2004) Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60 (2), pp. 91–110. Cited by: §1, §2.
  • [23] T. Nguyen, S. W. Chen, S. S. Shivakumar, C. J. Taylor, and V. Kumar (2018) Unsupervised deep homography: a fast and robust homography estimation model. IEEE Robotics and Automation Letters 3 (3), pp. 2346–2353. Cited by: Figure 1, §1, §1, §1, §2, Figure 10.
  • [24] E. Rublee, V. Rabaud, K. Konolige, and G. R. Bradski (2011) ORB: an efficient alternative to sift or surf.. In Proc. ICCV, Vol. 11, pp. 2564–2571. Cited by: §2.
  • [25] G. J. Sullivan, J. Ohm, W. Han, and T. Wiegand (2012) Overview of the high efficiency video coding (hevc) standard. IEEE Trans. on circuits and systems for video technology 22 (12), pp. 1649–1668. Cited by: §1.
  • [26] D. Sun, S. Roth, and M. J. Black (2010) Secrets of optical flow estimation and their principles. In Proc. CVPR, pp. 2432–2439. Cited by: §1, §2.
  • [27] D. Sun, X. Yang, M. Liu, and J. Kautz (2018) PWC-net: cnns for optical flow using pyramid, warping, and cost volume. In Proc. CVPR, pp. 8934–8943. Cited by: §2.
  • [28] R. Szeliski et al. (2007) Image alignment and stitching: a tutorial. Foundations and Trends® in Computer Graphics and Vision 2 (1), pp. 1–104. Cited by: §1.
  • [29] P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid (2013) DeepFlow: large displacement optical flow with deep matching. In Proc. CVPR, pp. 1385–1392. Cited by: §2.
  • [30] B. Wronski, I. Garcia-Dorado, M. Ernst, D. Kelly, M. Krainin, C.K. Liang, M. Levoy, and P. Milanfar (2019) Handheld multi-frame super-resolution. ACM Trans. Graphics (Proc. of SIGGRAPH) 38 (4), pp. 28. Cited by: §1.
  • [31] J. Zaragoza, T. Chin, M. S. Brown, and D. Suter (2013) As-projective-as-possible image stitching with moving dlt. In Proc. CVPR, pp. 2339–2346. Cited by: §1, §1, Figure 10, §4.2.
  • [32] J. Zhang, C. Wang, S. Liu, L. Jia, J. Wang, and J. Zhou (2019) Content-aware unsupervised deep homography estimation. arXiv preprint arXiv:1909.05983. Cited by: §1, §1.
  • [33] L. Zhang, A. Deshpande, and X. Chen (2010) Denoising vs. deblurring: hdr imaging techniques using moving cameras. In Proc. CVPR. Cited by: §1, §2.