Abstract
We present a theory and algorithms for a generic calibration concept that is based on the following recently introduced general imaging model. An image is considered as a collection of pixels, and each pixel measures the light travelling along a (half-) ray in 3-space associated with that pixel. Calibration is the determination, in some common coordinate system, of the coordinates of all pixels' rays. This model encompasses most projection models used in computer vision or photogrammetry, including perspective and affine models, optical distortion models, stereo systems, and catadioptric systems – central (single viewpoint) as well as non-central ones. We propose a concept for calibrating this general imaging model, based on several views of objects with known structure, acquired from unknown viewpoints. It allows, in principle, cameras of any of the types contained in the general imaging model to be calibrated with one and the same algorithm. We first develop the theory and an algorithm for the most general case: a non-central camera that observes 3D calibration objects. This is then specialized to the case of central cameras and to the use of planar calibration objects. The validity of the concept is demonstrated by experiments with synthetic and real data.
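To make the general imaging model concrete, the following is a minimal illustrative sketch (not the paper's implementation): a camera is represented simply as a table mapping each pixel to a ray in 3-space, stored as an origin point and a unit direction. The `perspective_rays` helper and all parameter choices below are assumptions for illustration only; they show how an ideal perspective camera is one instance of the model, with every ray passing through a single optical center.

```python
import numpy as np

def perspective_rays(width, height, focal, center=np.zeros(3)):
    """Build the pixel -> ray table for an ideal perspective camera.

    Illustrative sketch of the general imaging model: calibration of a
    camera amounts to knowing, for every pixel (u, v), the ray it
    measures light along. For this central camera, all ray origins
    coincide at the single optical center.
    """
    rays = {}
    cu, cv = (width - 1) / 2.0, (height - 1) / 2.0  # principal point
    for v in range(height):
        for u in range(width):
            d = np.array([u - cu, v - cv, focal], dtype=float)
            rays[(u, v)] = (center.copy(), d / np.linalg.norm(d))
    return rays

rays = perspective_rays(5, 5, focal=2.0)
# Central camera: every ray shares the same origin (single viewpoint).
origins = {tuple(o) for o, _ in rays.values()}
print(len(origins))      # 1
# The central pixel's ray points straight down the optical axis.
print(rays[(2, 2)][1])   # [0. 0. 1.]
```

A non-central camera (e.g. a general catadioptric system) fits the same representation by letting the origins differ per pixel; nothing in the table structure assumes a single viewpoint.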