A new generation of digital cameras uses emitted light pulses, more precisely the time between the emission of a pulse and the reception of its reflection, to compute the depth of the viewed object. This "Time-of-Flight" (ToF) principle is replacing other 3D-scanning strategies such as stereo vision and structured light. Although the concept and the possibilities of a ToF camera differ essentially from those offered by "classical" optical cameras, the computer vision community still falls back on proven methods for calibration and structure-from-motion problems.
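The principle above reduces to a simple relation: the pulse travels to the object and back, so the depth is half the distance covered in the measured round-trip time. A minimal sketch (names and values are illustrative, not part of the project):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth in metres from the measured pulse round-trip time:
    depth = c * (t_receive - t_emit) / 2."""
    return C * round_trip_time_s / 2.0

# A round trip of 20 ns corresponds to roughly 3 m of depth.
print(tof_depth(20e-9))
```

The division by two is the essential step: the measured interval covers both the outward and the return path of the pulse.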
We propose new techniques that fully exploit the power of Time-of-Flight data and avoid the detection and recognition of features in the image. In a further step, we intend to design a new camera model, more general than the familiar pinhole model, providing a uniform framework for both the lateral and the depth calibration of ToF cameras.
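For reference, the familiar pinhole model that the proposed framework would generalize maps a 3D point in the camera frame to pixel coordinates by perspective division. A minimal sketch, with illustrative focal lengths and principal point (these parameters are assumptions, not project values):

```python
def pinhole_project(x: float, y: float, z: float,
                    fx: float, fy: float,
                    cx: float, cy: float) -> tuple[float, float]:
    """Standard pinhole projection of a camera-frame point (x, y, z):
    u = fx * x / z + cx,  v = fy * y / z + cy."""
    return fx * x / z + cx, fy * y / z + cy

# Point 1 m in front of the camera, slightly off-axis.
u, v = pinhole_project(0.1, 0.2, 1.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(u, v)
```

Note that the pinhole model only describes the lateral (ray-direction) geometry; a ToF camera additionally measures a depth along each ray, which is what motivates a more general, unified model.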
The theory will be validated by simulations and by real experiments, executed with a computer-driven robot manipulator. Finally, real-life applications will be considered in cooperation with some of our industrial partners.