Learned Multi-Patch Similarity

03/26/2017
by Wilfried Hartmann, et al.

Estimating a depth map from multiple views of a scene is a fundamental task in computer vision. As soon as more than two viewpoints are available, one faces the very basic question of how to measure similarity across more than two image patches. Surprisingly, no direct solution exists; instead, it is common to fall back to more or less robust averaging of two-view similarities. Encouraged by the success of machine learning, and in particular convolutional neural networks, we propose to learn a matching function that directly maps multiple image patches to a scalar similarity score. Experiments on several multi-view datasets demonstrate that this approach has advantages over methods based on pairwise patch similarity.
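To make the core idea concrete, here is a minimal sketch of a multi-patch similarity function. This is an illustrative toy, not the authors' architecture: a shared encoder (a single linear layer standing in for a CNN) maps each patch to a feature vector, features are mean-pooled across views so any number of patches (two or more) can be scored, and a small head reduces the pooled feature to one scalar. All dimensions and weight initializations here are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 8 * 8   # flattened 8x8 grayscale patch (toy size)
FEAT = 16       # feature dimension (arbitrary)

# Shared encoder and scoring-head weights (random, untrained; a real
# system would learn these from matching / non-matching patch sets).
W_enc = rng.standard_normal((PATCH, FEAT)) * 0.1
w_head = rng.standard_normal(FEAT) * 0.1

def encode(patch):
    """Shared encoder: linear map + ReLU, a stand-in for a patch CNN."""
    return np.maximum(patch.flatten() @ W_enc, 0.0)

def similarity(patches):
    """Map a set of >= 2 patches directly to one scalar similarity score."""
    feats = np.stack([encode(p) for p in patches])  # (n_views, FEAT)
    pooled = feats.mean(axis=0)  # pooling makes the score order-invariant
    return float(pooled @ w_head)

# Works for any number of views, e.g. three patches of one surface point.
views = [rng.standard_normal((8, 8)) for _ in range(3)]
print(similarity(views))
```

The mean-pooling step is one simple way to get a function of a *set* of patches rather than a fixed pair; it is what lets a single learned score replace the averaging of two-view similarities that the abstract criticizes.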


Related research:

- 06/09/2018: Sparse Over-complete Patch Matching
  Image patch matching, which is the process of identifying corresponding ...
- 04/14/2015: Learning to Compare Image Patches via Convolutional Neural Networks
  In this paper we show how to learn directly from image data (i.e., witho...
- 11/26/2014: 3D-Assisted Image Feature Synthesis for Novel Views of an Object
  Comparing two images in a view-invariant way has been a challenging prob...
- 06/27/2022: Patch Selection for Melanoma Classification
  In medical image processing, the most important information is often loc...
- 03/05/2015: Jointly Learning Multiple Measures of Similarities from Triplet Comparisons
  Similarity between objects is multi-faceted and it can be easier for hum...
- 12/17/2021: Improving neural implicit surfaces geometry with patch warping
  Neural implicit surfaces have become an important technique for multi-vi...
- 04/14/2015: Sketch-based 3D Shape Retrieval using Convolutional Neural Networks
  Retrieving 3D models from 2D human sketches has received considerable at...
