next up previous
Next: Extrapolation Up: Procedure Previous: Procedure


Corner Detection

The first step is to take three or more pictures of a model plane, from different angles, using the camera being calibrated; the more pictures are taken, the more accurate the calibration. The plane should carry easily detectable features (in this case, corners, which are useful because they are generally easy to find). (It is still important to take a reasonable number of images to gather homography data; reducing the input needed is beyond the scope of this project.) A routine then searches each saved image for the desired features and writes their locations to a set of output files, one list of coordinates per image. These files may need to be sorted to make sure their lists of points correspond.
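The source does not show how the point lists are actually put into correspondence; one common approach for a planar grid of corners is to sort detected points into row-major order, so that the n-th point in every file refers to the same physical corner. A minimal sketch of that idea, with a hypothetical helper name and a hypothetical row tolerance parameter:

```python
# Hypothetical helper: order detected corner coordinates (x, y) row-major,
# so that corresponding corners in different images of the same planar
# grid end up at the same list index. row_tol is an assumed threshold on
# how much y may vary within one grid row.
def sort_grid_points(points, row_tol=10.0):
    # Sort by y, group into rows whose y values stay within row_tol,
    # then sort each row left-to-right by x.
    pts = sorted(points, key=lambda p: p[1])
    rows, current = [], [pts[0]]
    for p in pts[1:]:
        if abs(p[1] - current[-1][1]) < row_tol:
            current.append(p)
        else:
            rows.append(current)
            current = [p]
    rows.append(current)
    return [p for row in rows for p in sorted(row, key=lambda p: p[0])]

# Four corners detected in scrambled order, two grid rows:
detected = [(5, 1), (0, 20), (6, 21), (1, 0)]
ordered = sort_grid_points(detected)
```

This assumes the grid is roughly axis-aligned in the image; under strong perspective a more robust grouping would be needed.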

This step is the reason this project fits into the category of computer vision. The choice of algorithm for detecting feature points naturally affects the accuracy of the entire calibration. This project uses an algorithm modified from Harris' algorithm (elaborated on page 11 of [[*]]), which uses the change in the image gradient over a window of a given size to estimate the position of a corner to a claimed accuracy of less than a pixel.
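The project's exact modification of Harris' algorithm is not given here, but the standard Harris detector can be sketched as follows: accumulate the products of the image gradients over a window (the structure tensor M), then score each pixel with the response det(M) - k·trace(M)², which is large only where the gradient changes strongly in two directions. The window size and k value below are conventional defaults, not the project's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, window=5, k=0.04):
    # Image gradients via central finite differences.
    Iy, Ix = np.gradient(img.astype(float))
    # Entries of the structure tensor M, averaged over a window.
    Sxx = uniform_filter(Ix * Ix, window)
    Syy = uniform_filter(Iy * Iy, window)
    Sxy = uniform_filter(Ix * Iy, window)
    # Harris corner response: det(M) - k * trace(M)^2.
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# Synthetic test image: a white square on black; its four corners
# should produce the strongest responses.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

Sub-pixel accuracy is typically obtained afterward by interpolating the response surface around each integer-pixel peak, which is presumably where the detector's claimed sub-pixel precision comes from.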

The image gradient is the difference in value between the colors of adjacent pixels. For calibration, black-and-white images are most useful because comparisons need only be made in one dimension (gray level). Corners are easily recognized as spots of very large image gradient, where one color gives way sharply to another in more than one direction. The biggest problem in finding corners in arbitrary images is that some images contain sharp, high-gradient curves defined by lines that are not intended to be the edges of objects, such as in the following picture:
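The distinction between such spurious edges and true corners is exactly what the structure tensor captures: along an edge the gradient is large in only one direction (one large eigenvalue, one near zero), while at a corner it is large in two directions (two large eigenvalues). A small self-contained demonstration of this, using synthetic patches rather than anything from the project:

```python
import numpy as np

def tensor_eigs(patch):
    # Eigenvalues of the structure tensor summed over a whole patch.
    Iy, Ix = np.gradient(patch.astype(float))
    M = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(M)  # sorted ascending

# A patch straddling a vertical edge: gradient in one direction only.
edge = np.zeros((6, 6)); edge[:, 3:] = 1.0
# A patch containing a corner: gradient in two directions.
corner = np.zeros((6, 6)); corner[3:, 3:] = 1.0

edge_eigs = tensor_eigs(edge)      # smaller eigenvalue ~ 0
corner_eigs = tensor_eigs(corner)  # both eigenvalues clearly positive
```

A high-gradient line thus scores poorly under a corner response even though its gradient magnitude is large, which is why Harris-style detectors reject edges that a naive gradient threshold would accept.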


Evan Herbst 2003-06-12