Figure 1. Dense correspondences between the same semantic content ("smiley")
in different scenes and different scales. Top: Input images. Bottom:
Results visualized by warping the colors of the "Target" photo onto the "Source"
using the estimated correspondences from Source to Target. A good result has the
colors of the Target photo, located in the same position as their matching semantic
regions in the Source. Results show the output of the original SIFT-Flow method,
using DSIFT without local scale selection (bottom left), and our method (bottom right).
Abstract: We seek a practical method
for establishing dense correspondences between two images with similar content,
but possibly different 3D scenes. One of the challenges in designing such a system
is the local scale difference of objects appearing in the two images. Previous
methods often considered only small subsets of image pixels, matching only pixels
for which stable scales may be reliably estimated. More recently, others have considered
dense correspondences, but with substantial costs associated with generating, storing
and matching scale invariant descriptors. Our work here is motivated by the observation
that pixels in the image have contexts -- the pixels around them -- which may be
exploited in order to estimate local scales reliably and repeatably. Specifically,
we make the following contributions. (i) We show that scales estimated in sparse
interest points may be propagated to neighboring pixels where this information cannot
be reliably determined. Doing so allows scale invariant descriptors to be extracted
anywhere in the image, not just in detected interest points. (ii) We present three
different means for propagating this information: using only the scales at detected
interest points, using the underlying image information to guide the propagation
of this information across each image, separately, and using both images simultaneously.
Finally, (iii), we provide extensive results, both qualitative and quantitative,
demonstrating that accurate dense correspondences can be obtained even between very
different images, with little computational cost beyond that required by existing methods.
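The simplest of the three propagation schemes described above, using only the scales measured at detected interest points, can be sketched as follows. This is an illustrative example, not the authors' MATLAB implementation; the function name `propagate_scales` and the use of nearest-neighbor interpolation are our assumptions.

```python
# Hypothetical sketch: propagate scales measured at sparse interest points
# to every pixel, here by nearest-neighbor interpolation. Each pixel simply
# inherits the scale of its closest detected keypoint.
import numpy as np
from scipy.interpolate import griddata

def propagate_scales(keypoints_xy, keypoint_scales, image_shape):
    """Return a dense per-pixel scale map from sparse (x, y) -> scale samples."""
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dense = griddata(
        points=np.asarray(keypoints_xy, dtype=float),   # sparse keypoint positions
        values=np.asarray(keypoint_scales, dtype=float),  # their detected scales
        xi=np.c_[xs.ravel(), ys.ravel()],               # every pixel coordinate
        method="nearest",
    )
    return dense.reshape(h, w)

# Toy example: two keypoints with scales 2.0 and 6.0 on a 10x10 image.
kp_xy = [(1.0, 1.0), (8.0, 8.0)]
kp_scales = [2.0, 6.0]
scale_map = propagate_scales(kp_xy, kp_scales, (10, 10))
```

With the dense scale map in hand, a scale-invariant descriptor (e.g., SIFT at the propagated scale) can be extracted at any pixel, not just at detected interest points; the paper's other two schemes additionally use the image content of one or both images to guide the propagation.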
Scale propagation code: Our MATLAB implementation of the scale propagation method is available for download.
If you find this code useful, please cite our paper.
April 18, 2016, New! Yuval Nirkin has shared a project which uses SIFT flow and our
scale propagation method for 3D reconstruction from multiple views. In doing so,
both SIFT flow and our scale propagation methods were ported to OpenCV-compatible code.
The 3D reconstruction code is available from a dedicated project page.
A pending OpenCV contribution with a port of SIFT flow and our scale
propagation is also available.
Other related papers / projects / code
T. Hassner, V. Mayzels, and L.
Zelnik-Manor, On SIFTs and their Scales, IEEE Conf. on Computer Vision
and Pattern Recognition (CVPR), Rhode Island, June 2012 (project and code).
Copyright 2014, Moria Tau and Tal Hassner
The SOFTWARE ("scalemaps" and all included files) is provided "as is", without
any guarantee made as to its suitability or fitness for any particular use. It may
contain bugs, so use of this tool is at your own risk. We take no responsibility
for any damage that may unintentionally be caused through its use.