Chapter 2
Defocusing DPIV
2.1 Introduction
Defocusing DPIV (DDPIV), at its core, is a special subset of photogrammetry. Three sensors are assembled into a common faceplate such that they are coplanar and their fields of view intersect in a predetermined region called the mappable region. In general terms, DDPIV can estimate the depth of features based on the relative locations of their images on the three sensors. In the case of fluid measurements, images of the flow markers are used to reconstruct discrete point clouds at instants in time. Velocity is computed either through tracking algorithms or three-dimensional cross-correlation on the discrete clouds.
The primary difference between the defocusing technique and photogrammetry lies in the optical layout and the calibration scheme. The optical layout affords a reconstruction method simpler than backwards ray-tracing—if the sensors are parallel, the contribution of a point's Z position to the location of its images is decoupled from that of its X, Y position.
The calibration scheme used in DDPIV is based loosely on that developed for SPIV and is thus a completely different paradigm from that used in photogrammetry (commonly based on the methods of Tsai [1987]). Rather than generating a pinhole model of the sensors with coefficients to correct for distortion, multi-plane dewarping, as DDPIV's calibration is called, is given the pinhole model and is made to correct the locations of the images so that they fit this model. This yields more precise results than typical photogrammetry methods, especially for volumes on the order of the instrument size, because it also accounts for differences between the pinhole model and the optical reality that do not manifest as measurable distortion.
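As an illustration, a single dewarping plane can be thought of as a least-squares polynomial fit that maps measured image coordinates onto the coordinates the pinhole model predicts for known calibration-target points. The sketch below is a minimal example under that assumption; the cubic basis, the function names, and the single-plane restriction are illustrative, not the actual implementation described in this work:

```python
import numpy as np

# Hedged sketch of one dewarping plane: fit a 2D polynomial that maps
# measured image coordinates onto the coordinates predicted by the
# pinhole model for known calibration-target points. A multi-plane
# scheme would interpolate such fits across several Z planes.

def poly_terms(x, y):
    """Cubic bivariate polynomial basis evaluated at (x, y)."""
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                     x**2 * y, x * y**2, x**3, y**3], axis=-1)

def fit_dewarp(measured_xy, pinhole_xy):
    """Least-squares coefficients mapping measured -> pinhole coordinates."""
    A = poly_terms(measured_xy[:, 0], measured_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, pinhole_xy, rcond=None)
    return coeffs  # shape (10, 2): one column per output coordinate

def dewarp(coeffs, xy):
    """Apply the fitted correction to a list of image coordinates."""
    return poly_terms(xy[:, 0], xy[:, 1]) @ coeffs
```

Any distortion expressible in the chosen basis is removed exactly; residual lens behavior beyond the basis would motivate a higher polynomial order or, as in the multi-plane scheme, per-plane fits.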
imposed by the software.
2.2.1 Capabilities of the optical system
The optical arrangement of the DDPIV camera allows for the measurement of the position of a point in space that is visible in all of the camera’s sensors—as opposed to, for example, a tomographic system which images depth in a slicing manner (such as the typical ultrasound scanners used in the medical field). Thus DDPIV is able to map the surfaces of optically opaque bodies or reconstruct point clouds of suspended markers in a transparent volume.
In either case, the object in question must be surrounded by a medium of stable index of refraction. For example, in experiments involving compressible flow, the changes in the index of refraction of the fluid must be reproducible during the calibration procedure, so that they can simply be “removed” as if they were distortions in the optical path. As another example, if it is desired to map both surfaces of a thin membrane, then the index of the fluid must match that of the material of the membrane for the measurements to be accurate.
The physical result of this arrangement is that any point within the mappable region will be visible in all three sensors. The images are spaced relative to one another as if the camera were a single lens and the sensors were portions of a single imager near the three off-axis apertures. Thus
Figure 2.2-1: Due to the optical arrangement of the DDPIV camera, the processing of images into three-dimensional point clouds is relatively simple. First, each raw image (a)—one for each aperture—is optionally preprocessed with blurring, normalization, etc., and then a Gaussian fitting algorithm locates each particle image and finds its center to sub-pixel accuracy (b). The list of particle image coordinates for each aperture are then dewarped using the multi-plane dewarping coefficients (c). The actual reconstruction step (d) consists only of looking for particle images that, across all the aperture images, form the same pattern as does the aperture layout. The distance between the particle that generated the particle images and the reference (focal) plane of the camera is proportional to the size of the matching pattern. At the end of the reconstruction step, a pointcloud (e) contains discrete three-dimensional particles. Two such point clouds, reconstructed from image sets exposed at different times, can be used to estimate particle velocities.
a point on the reference plane will have three images which have no relative offset between them, whereas any point off the reference plane will have three offset images as if they were portions of a single blurred image of that point.
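The proportionality between the size of the image pattern and depth can be sketched with simple thin-lens geometry. Assuming two apertures separated by d, a reference (focal) plane at distance L, and magnification M, a standard defocusing relation gives the image separation b = M d L (1/L − 1/Z); the symbols and numbers below are illustrative, not taken from this chapter:

```python
# Illustrative thin-lens geometry, not code from this chapter. For two
# apertures separated by d, a reference (focal) plane at distance L,
# and magnification M, a standard defocusing relation gives the image
# separation b = M * d * L * (1/L - 1/Z), which inverts to Z as below.

def depth_from_separation(b, d, L, M):
    """Invert b = M*d*L*(1/L - 1/Z) for the depth Z."""
    return 1.0 / (1.0 / L - b / (M * d * L))
```

A particle on the reference plane (b = 0) is recovered at Z = L, and the separation grows in magnitude with distance from that plane, matching the behavior described above.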
The locations of these images are of course dependent on the paths that the light rays took from the object to the image. The ability to resolve the relative offset between them correlates directly with the final precision of the position measurement and depends indirectly on the ability to reconstruct the path of the rays. Thus the measurement accuracy is closely related to the multi-plane dewarping calibration.
2.2.2 Capabilities of the software
Generically speaking, the software must be able to perform two tasks: first, it must identify the matching particle images of each single point in space, and second, it must be able to measure the distance between these images, taking into account the optical path of the image-forming rays.
The methods employed in performing these tasks are, in some sense, dependent on the application.
Because DDPIV was developed primarily for fluid flow, DDPIV assumes the points to be reconstructed (tracer particles) have small, Gaussian-like images.1 For a successful measurement, the sensor images must consist of distinguishable particle images at least 4 pixels in diameter so that their sub-pixel locations can be estimated. Because these images are featureless, the only information available to match corresponding images together is the pattern formed by a particle-image triplet relative to the pattern formed by the aperture layout of the camera itself. The locations of the images of the triplet are corrected using multi-plane dewarping to take into account the real optics of the experimental medium and the lenses. The size of the resulting corrected pattern corresponds to the depth location of the particle that produced those three images.
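The sub-pixel localization step can be illustrated with a standard three-point Gaussian fit, one common way to refine a particle-image center; the chapter does not specify its exact estimator, so this is an illustrative stand-in:

```python
import numpy as np

# Hedged sketch of a three-point Gaussian peak fit. For a truly
# Gaussian intensity profile, fitting a parabola to the logarithm of
# the three brightest samples recovers the peak location exactly.

def gaussian_subpixel_1d(i_left, i_peak, i_right):
    """Sub-pixel offset of a Gaussian peak from the brightest pixel."""
    l, c, r = np.log(i_left), np.log(i_peak), np.log(i_right)
    return 0.5 * (l - r) / (l - 2.0 * c + r)

def particle_center(img, row, col):
    """Refine an integer-pixel peak (row, col) to sub-pixel coordinates."""
    dy = gaussian_subpixel_1d(img[row - 1, col], img[row, col], img[row + 1, col])
    dx = gaussian_subpixel_1d(img[row, col - 1], img[row, col], img[row, col + 1])
    return row + dy, col + dx
```

This also motivates the 4-pixel minimum diameter noted above: with fewer illuminated pixels there are not enough samples around the peak for a stable fit.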
DDPIV therefore requires any image it processes to look like a field of particles, so when the experiment includes surface mapping, the surface must be tagged with small dots.
This, in essence, is a limitation imposed by DDPIV and not by the method itself. The three-dimensional measurement is an optical one; thus any object generating distinguishable images can be mapped. For example, if a surface has a texture of its own, a cross-correlation-based algorithm could be used to identify and simultaneously measure the spacing of the three images of an area of the surface. Such a depth map would resemble those seen in Kanade, Yoshida, Oda, Kano, and Tanaka [1996].
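The triplet-matching step described above can be sketched as a search for image triplets that form a scaled copy of the aperture layout. Everything below—the brute-force search, the tolerance, the names—is an illustrative assumption; production code would at least use spatial indexing:

```python
import numpy as np

# Hedged sketch of the triplet search: dewarped particle-image lists
# from the three apertures are scanned for triplets whose pairwise
# offsets are a common scalar multiple of the aperture-layout offsets.
# That scalar encodes the particle's depth (see the geometry above).

def match_triplets(pts0, pts1, pts2, ap0, ap1, ap2, tol=0.5):
    """Return (i, j, k, scale) for triplets matching the aperture pattern."""
    d01, d02 = ap1 - ap0, ap2 - ap0   # aperture-layout offsets
    matches = []
    for i, p0 in enumerate(pts0):
        for j, p1 in enumerate(pts1):
            off01 = p1 - p0
            # The common scale s must satisfy off01 ≈ s * d01.
            s = np.dot(off01, d01) / np.dot(d01, d01)
            if np.linalg.norm(off01 - s * d01) > tol:
                continue
            for k, p2 in enumerate(pts2):
                if np.linalg.norm((p2 - p0) - s * d02) <= tol:
                    matches.append((i, j, k, s))
    return matches
```

Because the particle images are featureless, the tolerance trades missed particles against ghost matches, which is one reason seeding density matters in practice.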
1 Technically, it is assumed that they are point sources of light and thus their images are the point spread function, which can be approximated as a Gaussian.