Book Description
This dissertation, "Robust Feature-point Based Image Matching" by Sze, Wui-fung, 施會豐, was obtained from The University of Hong Kong (Pokfulam, Hong Kong) and is being sold pursuant to the Creative Commons: Attribution 3.0 Hong Kong License. The content of this dissertation has not been altered in any way. We have altered only the formatting to facilitate printing and reading. All rights not granted by the above license are retained by the author.

Abstract:

Abstract of thesis entitled "Robust Feature-Point Based Image Matching", submitted by SZE Wui Fung for the degree of Master of Philosophy at The University of Hong Kong in August 2006.

Establishing reliable correspondences across images is essential for many computer vision applications. Besides feature extraction, a general point-based image-matching framework comprises two stages, namely putative matching and robust model estimation. The first stage involves establishing putative matches based on photometric similarity measures or some local constraints. The second stage involves recovering the fundamental matrix that describes the geometric model relating the two views, while simultaneously eliminating the mismatched point-pairs generated in the first stage. This thesis proposes novel methods for both putative matching and robust model estimation, yielding a more reliable matching result than has hitherto been possible.

Two different approaches to putative matching, namely correlation matching (intensity-based) and SVD matching (proximity-based), are investigated. For correlation matching, a new similarity measure invariant to image rotation and scaling is proposed. The weakness of SVD matching in handling image rotation and occluded data is overcome by preceding the algorithm with a simple transformation (consisting of a translation and a scaling) on the coordinates of the extracted image-points.
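The proximity-based SVD-matching stage described above can be sketched as follows. This is a minimal illustration assuming the proximity-matrix formulation in the style of Scott and Longuet-Higgins that "SVD matching" commonly refers to, preceded by a simple centroid translation and scale normalization of the point coordinates; the function names, the Gaussian width `sigma`, and the exact normalization are illustrative assumptions, not the thesis's precise method.

```python
import numpy as np

def normalize(pts):
    # Simple translation-plus-scaling pre-transformation: move each point
    # set to its centroid and rescale to unit RMS distance. The thesis's
    # exact normalization may differ; this is an illustrative stand-in.
    centered = pts - pts.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

def svd_match(pts1, pts2, sigma=0.5):
    # Proximity-based matching: build a Gaussian proximity matrix between
    # the two (normalized) point sets, take its SVD, replace the singular
    # values with ones, and accept entries of the resulting "pairing"
    # matrix that are maximal in both their row and their column.
    p1, p2 = normalize(pts1), normalize(pts2)
    d2 = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G)
    k = min(G.shape)
    P = U[:, :k] @ Vt[:k, :]
    matches = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if i == int(np.argmax(P[:, j])):
            matches.append((i, j))
    return matches
```

Because the coordinates are normalized first, a pure translation and scaling of one point set leaves the proximity matrix unchanged, which is precisely the weakness of plain SVD matching that the pre-transformation addresses.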
Intertwining these two modified matching algorithms, a new image-matching algorithm is developed to relate images resulting from zooming and from photographing with large position offsets. Experimental results on real images demonstrate that the proposed algorithm provides a high proportion of correct matches even when the scaling variation between the two images is larger than 3.

Regarding the robust estimation stage, the 8-point algorithm, which is extensively used for computing the fundamental matrix, is analyzed in detail. Simple but effective approaches are incorporated into the 8-point algorithm to minimize a newly derived cost function. The resulting estimate of the fundamental matrix is found to be more accurate than that of the standard 8-point algorithm. The improvement is justified by theory and verified by experiments on both real and synthetic data. We also derive a condition number in terms of the singular values of the pivotal matrix of the 8-point algorithm, and show empirically how the condition number describes the quality of the fundamental matrix for detecting mismatches, which is a key step in the RANSAC approach to robust estimation of the fundamental matrix. It is shown that the condition number can be improved by using more data-points in the 8-point algorithm. Based on this finding, a nested RANSAC algorithm is proposed that selectively increases the size of the sample set when a potential solution is detected. Examples using real images show that the nested RANSAC approach produces results superior to those of standard RANSAC.

DOI: 10.5353/th_b3715326
Subjects: Computer vision; Image processing; Algorithms
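The 8-point algorithm the abstract analyzes can be sketched as below. This is a minimal version of the standard normalized 8-point algorithm (Hartley normalization, SVD solution, rank-2 projection), not the thesis's improved variant; the condition measure shown, the ratio of the largest to the eighth singular value of the design matrix, illustrates the idea of a singular-value-based condition number but is an assumption, not necessarily the quantity derived in the thesis.

```python
import numpy as np

def hartley_normalize(pts):
    # Standard Hartley normalization: translate to the centroid and scale so
    # the mean distance from the origin is sqrt(2). Returns homogeneous
    # points and the 3x3 normalizing transform.
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(pts1, pts2):
    # Normalized 8-point algorithm: each correspondence (x, x') contributes
    # one row x'_i x_j of the design matrix A; F is the right singular
    # vector of A with the smallest singular value, projected to rank 2.
    p1, T1 = hartley_normalize(pts1)
    p2, T2 = hartley_normalize(pts2)
    A = np.einsum('ni,nj->nij', p2, p1).reshape(len(pts1), 9)
    _, s, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value of F.
    U, sf, Vft = np.linalg.svd(F)
    F = U @ np.diag([sf[0], sf[1], 0.0]) @ Vft
    F = T2.T @ F @ T1                 # undo the normalizations
    cond = s[0] / s[7]                # illustrative condition measure only
    return F / np.linalg.norm(F), cond
```

With more than eight correspondences the system is solved in a least-squares sense, which is consistent with the abstract's observation that the conditioning improves as more data-points are used.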
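The nested-RANSAC idea, growing the sample set once a promising hypothesis appears, can be sketched generically. For brevity this sketch fits a toy 2-D line model rather than the fundamental matrix, and the growth size `grow`, the inlier threshold, and all function names are illustrative assumptions rather than the thesis's parameters.

```python
import numpy as np

def fit_line(pts):
    # Least-squares fit of y = a*x + b through the given points.
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    return np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]

def residuals(params, pts):
    a, b = params
    return np.abs(a * pts[:, 0] + b - pts[:, 1])

def nested_ransac(pts, n_iter=200, min_sample=2, grow=8, thresh=0.05, seed=0):
    # Standard RANSAC, except that when a minimal-sample hypothesis looks
    # promising (enough inliers), the model is re-estimated from a larger
    # sample drawn from those inliers: the "selectively increase the sample
    # set on a potential solution" idea, here on a toy line model.
    rng = np.random.default_rng(seed)
    best, best_inliers = None, np.zeros(len(pts), bool)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), min_sample, replace=False)]
        params = fit_line(sample)
        inliers = residuals(params, pts) < thresh
        if inliers.sum() > max(best_inliers.sum(), grow):
            # Nested stage: refit from a larger sample of the inlier set.
            idx = np.flatnonzero(inliers)
            bigger = pts[rng.choice(idx, grow, replace=False)]
            params = fit_line(bigger)
            inliers = residuals(params, pts) < thresh
        if inliers.sum() > best_inliers.sum():
            best, best_inliers = params, inliers
    return best, best_inliers
```

The nested refit uses more data-points than the minimal sample, which, per the abstract's finding about the condition number, is what makes the re-estimated model better conditioned than the minimal-sample hypothesis that triggered it.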