
4.2 Developing a Ray Tracing Model

4.2.4 Ray-Tracing with Planes

Figure 4.2-2: Schematic for 3D pinhole optics, showing a point in space $(X_P, Y_P, Z_P)$, an aperture at $(c, d, 0)$, the point's image on the sensor at $(X_R, Y_R, Z_R)$, and the sensor's center $(X_C, Y_C, -l)$. Two sets of similar triangles are marked: the pink one allows for calculation of the lateral offset of the sensor relative to the aperture, and the green one can be used to calculate the field of view at the reference plane.


To find the pixel coordinates of a point in space, three planes will be defined. The first two planes’ intersection defines the light ray passing through the aperture. The third plane is that of the sensor itself.

The first two planes will be defined by three points, and thus equation 4.2-6 will be used to calculate their coefficients. Using the standard axis layout for DDPIV ($Z$ being the camera axis direction, $X$ being horizontal, and $Y$ being vertical, with the origin at the intersection of the aperture plane with the optical axis), given a point $P$ in space $(X_P, Y_P, Z_P)$, and given that the aperture in question is located at $(c, d, 0)$ (see figure 4.2-2), then for the first ray plane we choose the points

$$(X_P, Y_P, Z_P) \qquad (c, d, 0) \qquad (c-1, 0, 0) \tag{4.2-12}$$

Note that the third point is arbitrary; we are only interested that the plane contain the aperture and the point in question. Thus to generate a second plane, we can change the arbitrary point; here we take

$$(X_P, Y_P, Z_P) \qquad (c, d, 0) \qquad (0, d-1, 0) \tag{4.2-13}$$

To define a general sensor plane, we will allow for deviation from the ideal defocusing arrangement, that is, misalignment. We will call the linear misalignments $\Delta X$, $\Delta Y$, and $\Delta Z$, so that the quantities in equation 4.2-4 become

$$X_C = \frac{cL}{L-f} + \Delta X \qquad Y_C = \frac{dL}{L-f} + \Delta Y \qquad Z_C = -\frac{fL}{L-f} + \Delta Z \tag{4.2-14}$$

The angular misalignments will be represented as angles defined as follows: $\beta$ is rotation about $X$, with 0 being perfectly vertical; $\gamma$ is rotation about $Y$, with 0 coinciding with the image plane; and $\delta$ is rotation about $Z$, with 0 being the horizontal position.² Commonly when dealing with cameras these angles are referred to as tilt, pan, and roll, respectively. The angular misalignments are factored in by including them in the definition of the sensor plane, for which we will use the normal-point version of the general form (equation 4.2-9).

²In reality, these are treated as Euler angles, with the order of rotation as listed, so only the first rotation is about the real axes; the other two are about the newly rotated axes. Refer to figure 4.2-3.

Figure 4.2-3: Convention for a rotated sensor, showing the angles in order of rotation.


Compound three-dimensional rotations are easy to deal with mathematically if the angles are defined as Euler angles (meaning that the order of rotation matters). Consider the basis of the normal space coordinate system $XYZ$, $\hat{e}_i$. If we take a transformation (for example, a rotation) $T^{(1)}$ and apply it to this basis, we will get a new basis $\hat{f}_i$. Mathematically, we can write the transformation in summation form as

$$\hat{f}_i = \sum_\alpha T^{(1)}_{\alpha i}\,\hat{e}_\alpha \tag{4.2-15}$$

Now we can define another transformation based on the space of $\hat{f}_i$, call it $T^{(2)}$, yielding a basis $\hat{g}_i$, which in turn can be transformed by $T^{(3)}$ to yield $\hat{h}_i$. Thus we can combine the three transformations by substitution to get a relation between $\hat{h}_i$ and $\hat{e}_i$:

$$\hat{f}_i = \sum_\alpha T^{(1)}_{\alpha i}\,\hat{e}_\alpha \qquad \hat{g}_j = \sum_\eta T^{(2)}_{\eta j}\,\hat{f}_\eta \qquad \hat{h}_k = \sum_\zeta T^{(3)}_{\zeta k}\,\hat{g}_\zeta$$
$$\Rightarrow \hat{h}_k = \sum_{\zeta,\eta,\alpha} T^{(3)}_{\zeta k}\,T^{(2)}_{\eta\zeta}\,T^{(1)}_{\alpha\eta}\,\hat{e}_\alpha \tag{4.2-16}$$

Remembering that matrix multiplication can be written as

$$[AB]_{ij} = \sum_k A_{ik} B_{kj} \tag{4.2-17}$$

we can rewrite our sum as

$$\hat{h}_k = \sum_\alpha\left[\sum_\zeta\left(\sum_\eta T^{(1)}_{\alpha\eta}\,T^{(2)}_{\eta\zeta}\right)T^{(3)}_{\zeta k}\right]\hat{e}_\alpha \tag{4.2-18}$$

and thus

$$\hat{h}_k = \sum_\alpha\left[T^{(1)}T^{(2)}T^{(3)}\right]_{\alpha k}\hat{e}_\alpha \tag{4.2-19}$$

Now that we have the basis after our transformation, we can calculate the $i$th component of $\hat{h}_k$ by dotting it with the appropriate basis vector from the reference coordinate space:

$$\hat{h}_k \cdot \hat{e}_i = \left[T^{(1)}T^{(2)}T^{(3)}\right]_{ik} \tag{4.2-20}$$

Again, because our angles are defined as Euler angles, our transformations are rotations about a single axis, so we can write them as

$$T^{(1)} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & \sin\beta \\ 0 & -\sin\beta & \cos\beta \end{pmatrix} \quad
T^{(2)} = \begin{pmatrix} \cos\gamma & 0 & \sin\gamma \\ 0 & 1 & 0 \\ -\sin\gamma & 0 & \cos\gamma \end{pmatrix} \quad
T^{(3)} = \begin{pmatrix} \cos\delta & \sin\delta & 0 \\ -\sin\delta & \cos\delta & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{4.2-21}$$

For reference, the complete 3-axis rotation matrix is

$$T^{(t)} = \begin{pmatrix}
\cos\delta\cos\gamma & \cos\gamma\sin\delta & \sin\gamma \\
-\cos\beta\sin\delta-\cos\delta\sin\beta\sin\gamma & \cos\beta\cos\delta-\sin\beta\sin\delta\sin\gamma & \cos\gamma\sin\beta \\
\sin\beta\sin\delta-\cos\beta\cos\delta\sin\gamma & -\cos\delta\sin\beta-\cos\beta\sin\delta\sin\gamma & \cos\beta\cos\gamma
\end{pmatrix} \tag{4.2-22}$$

In the new basis $\hat{h}_i$ defined by this transformation, $\hat{h}_1$ is the sensor's $x$ axis, $\hat{h}_2$ is its $y$ axis (corresponding to an image's horizontal and vertical axes, respectively), and $\hat{h}_3$ is the direction normal to the sensor plane. We can calculate these vectors using equation 4.2-20, which tells us that they are simply the columns of the matrix in equation 4.2-22:

$$\begin{aligned}
\hat{h}_1 &= (\cos\delta\cos\gamma,\; -\cos\beta\sin\delta-\cos\delta\sin\beta\sin\gamma,\; \sin\beta\sin\delta-\cos\beta\cos\delta\sin\gamma)\\
\hat{h}_2 &= (\cos\gamma\sin\delta,\; \cos\beta\cos\delta-\sin\beta\sin\delta\sin\gamma,\; -\cos\delta\sin\beta-\cos\beta\sin\delta\sin\gamma)\\
\hat{h}_3 &= (\sin\gamma,\; \cos\gamma\sin\beta,\; \cos\beta\cos\gamma)
\end{aligned} \tag{4.2-23}$$
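As a concrete illustration of this construction (not part of the original derivation; the function and variable names are my own), the following Python sketch builds the three Euler rotations of equation 4.2-21, multiplies them in order, and reads off $\hat{h}_1$, $\hat{h}_2$, $\hat{h}_3$ as the columns of the product, per equations 4.2-20 and 4.2-23. The demo angles are the tilt, pan, and roll quoted for figure 4.2-4.

```python
import numpy as np

def sensor_axes(beta, gamma, delta):
    """Return (h1, h2, h3): the sensor x axis, y axis, and normal for tilt beta
    (about X), pan gamma (about Y), and roll delta (about Z), applied in that
    order as Euler rotations (eq. 4.2-21)."""
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    cd, sd = np.cos(delta), np.sin(delta)
    T1 = np.array([[1, 0, 0], [0, cb, sb], [0, -sb, cb]])    # rotation about X
    T2 = np.array([[cg, 0, sg], [0, 1, 0], [-sg, 0, cg]])    # rotation about Y
    T3 = np.array([[cd, sd, 0], [-sd, cd, 0], [0, 0, 1]])    # rotation about Z
    T = T1 @ T2 @ T3                                         # eq. 4.2-22
    # eq. 4.2-20: h_k . e_i = T[i, k], so the h vectors are the columns of T
    return T[:, 0], T[:, 1], T[:, 2]

if __name__ == "__main__":
    h1, h2, h3 = sensor_axes(np.radians(-27), np.radians(15), np.radians(10))
    # the rotated axes should remain orthonormal
    print(np.dot(h1, h2), np.dot(h1, h3), np.linalg.norm(h3))
```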

Using point $C$ as defined in equation 4.2-14 and the normal $\hat{h}_3$ with equation 4.2-9, we can calculate the coefficients of the sensor plane so that it is in the general form of equation 4.2-5 and obtain

$$\begin{aligned}
A_{CCD} &= \sin\gamma\\
B_{CCD} &= \cos\gamma\sin\beta\\
C_{CCD} &= \cos\beta\cos\gamma\\
D_{CCD} &= -\left[\left(\frac{cL}{L-f}+\Delta X\right)\sin\gamma + \left(\frac{dL}{L-f}+\Delta Y\right)\cos\gamma\sin\beta + \left(\Delta Z - \frac{fL}{L-f}\right)\cos\beta\cos\gamma\right]
\end{aligned} \tag{4.2-24}$$
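A small sketch of how equations 4.2-14 and 4.2-24 might be evaluated numerically follows; it is illustrative only, and the helper name and argument list (focal length `f`, aperture-plane distance `L`, aperture offsets `c` and `d`, linear misalignments `dX`, `dY`, `dZ`, and the normal `h3` from the previous sketch) are assumptions made for this example.

```python
import numpy as np

def sensor_plane(f, L, c, d, dX, dY, dZ, h3):
    """Return (A, B, C, D) for the sensor plane A*x + B*y + C*z + D = 0, built
    from the (possibly misaligned) sensor center of eq. 4.2-14 and the sensor
    normal h3 via the point-normal form (eq. 4.2-9)."""
    M0 = L / (L - f)                      # common factor L / (L - f)
    center = np.array([c * M0 + dX,       # X_C
                       d * M0 + dY,       # Y_C
                       -f * M0 + dZ])     # Z_C
    A, B, C = h3                          # plane normal components (eq. 4.2-24)
    D = -np.dot(h3, center)               # D = -(A*X_C + B*Y_C + C*Z_C)
    return A, B, C, D
```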

And, using equation 4.2-6 with the points we defined in equations 4.2-12 and 4.2-13, we can calculate the coefficients of the two planes that define the light ray:

$$\begin{aligned}
A_1 &= -dZ_P & A_2 &= -Z_P\\
B_1 &= Z_P & B_2 &= cZ_P\\
C_1 &= d(X_P - c + 1) - Y_P & C_2 &= -c(Y_P - d + 1) + X_P\\
D_1 &= dZ_P(c-1) & D_2 &= -cZ_P(d-1)
\end{aligned} \tag{4.2-25}$$

Note that if different points are used than those in equations 4.2-12 and 4.2-13, the planes will be different (and correspondingly the coefficients in equation 4.2-6), but the result should be the same: by picking $P$ and the aperture location as two of the points, you are ensuring that the intersection will be the light ray from the particle through the aperture. However, one must be careful in choosing the third points by defining them relative to the aperture location so as to avoid two points coinciding. For example, setting the third point to $(0, 0, 0)$ will make the calculation fail if the aperture is located at $c = 0$, $d = 0$, because then two of the definition points will coincide. Thus it is safest to choose third points which differ from the second point in at least one component by an additive term.
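To make the choice of points concrete, here is a hedged Python sketch of the ray-plane construction. Equation 4.2-6 is not reproduced in this section, so the helper below uses the equivalent cross-product form (the normal of the plane through three points $P$, $Q$, $R$ is $(Q-P)\times(R-P)$), which yields coefficients that agree with equation 4.2-25 up to an overall scale; the names are mine.

```python
import numpy as np

def plane_through(p, q, r):
    """Coefficients (A, B, C, D) of the plane containing points p, q, r, with
    A*x + B*y + C*z + D = 0 (proportional to the eq. 4.2-6 result)."""
    p, q, r = map(np.asarray, (p, q, r))
    normal = np.cross(q - p, r - p)       # (A, B, C)
    D = -np.dot(normal, p)
    return np.append(normal, D)

def ray_planes(P, c, d):
    """The two planes of eqs. 4.2-12 and 4.2-13 whose intersection is the ray
    from point P through the aperture at (c, d, 0).  The arbitrary third points
    are defined relative to the aperture so that no two points coincide."""
    aperture = (c, d, 0.0)
    plane1 = plane_through(P, aperture, (c - 1.0, 0.0, 0.0))   # eq. 4.2-12
    plane2 = plane_through(P, aperture, (0.0, d - 1.0, 0.0))   # eq. 4.2-13
    return plane1, plane2
```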

The intersection of these three planes will yield the space coordinates of point $R$, which must be converted to the coordinate plane of the sensor by taking the inner product of the vector $R - C$ with the axis vectors $\hat{h}_1$ (to yield $x$) and $\hat{h}_2$ (to yield $y$). The coordinates of $R$ are calculated with equation 4.2-10; the resulting closed-form expressions are too long to be worth writing out here (the whole process can instead be programmed in the exact steps shown above). To finalize,

$$x = (R - C)\cdot\hat{h}_1 \qquad y = (R - C)\cdot\hat{h}_2 \tag{4.2-26}$$
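The whole mapping can be assembled by intersecting the three planes and projecting onto the sensor axes, as in equation 4.2-26. The sketch below is illustrative only: it reuses the hypothetical helpers defined above and solves the 3×3 linear system directly rather than evaluating the explicit formula of equation 4.2-10.

```python
import numpy as np

def intersect_three_planes(planes):
    """Solve the 3x3 linear system formed by three planes (A, B, C, D) to obtain
    their common point (equivalent to evaluating eq. 4.2-10)."""
    planes = np.asarray(planes, dtype=float)
    normals, D = planes[:, :3], planes[:, 3]
    return np.linalg.solve(normals, -D)

def image_of_point(P, c, d, f, L, dX=0.0, dY=0.0, dZ=0.0,
                   beta=0.0, gamma=0.0, delta=0.0):
    """Sensor-plane coordinates (x, y) of the image of space point P seen
    through the aperture at (c, d, 0), following the steps of this section."""
    h1, h2, h3 = sensor_axes(beta, gamma, delta)        # eq. 4.2-23
    plane1, plane2 = ray_planes(P, c, d)                # eqs. 4.2-12, 4.2-13
    sensor = sensor_plane(f, L, c, d, dX, dY, dZ, h3)   # eq. 4.2-24
    R = intersect_three_planes([plane1, plane2, sensor])
    M0 = L / (L - f)
    center = np.array([c * M0 + dX, d * M0 + dY, -f * M0 + dZ])   # eq. 4.2-14
    return np.dot(R - center, h1), np.dot(R - center, h2)         # eq. 4.2-26
```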

There are some special cases of these expressions that are worth looking at. First, in the case of a perfectly aligned sensor,

$$x = \frac{f}{L-f}\,\frac{c(L-Z_P)-LX_P}{Z_P} \qquad y = \frac{f}{L-f}\,\frac{d(L-Z_P)-LY_P}{Z_P} \tag{4.2-27}$$

Note that the relationship between $x$ and $X_P$ is linear, as is that between $y$ and $Y_P$, but neither is linear with respect to $Z_P$. This is expected, as the sensors are coplanar with the $XY$ plane; more precisely, the $X$ and $Y$ directions are perpendicular to the normal of the sensor planes. This condition is essential to what makes the defocusing arrangement unique.
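As a quick numerical sanity check (illustrative only, using the hypothetical `image_of_point` helper sketched above and made-up parameter values), the general three-plane procedure should reproduce equation 4.2-27 when all misalignments are zero:

```python
import numpy as np

# made-up parameters for the check
f, L = 0.028, 0.50                  # focal length and reference distance (m)
c, d = 0.02, -0.01                  # aperture location in the aperture plane
P = np.array([0.03, 0.04, 0.45])    # a mappable point in front of the camera

x_gen, y_gen = image_of_point(P, c, d, f, L)      # general three-plane mapping

M = f / (L - f)                                   # optical magnification
x_ref = M * (c * (L - P[2]) - L * P[0]) / P[2]    # eq. 4.2-27
y_ref = M * (d * (L - P[2]) - L * P[1]) / P[2]

print(np.isclose(x_gen, x_ref), np.isclose(y_gen, y_ref))  # expect True True
```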

The quantity $f/(L-f)$ is the optical magnification $M$ of the system. As mentioned above, it is the ratio between any two corresponding sides of the similar triangles depicted in figure 4.2-1. For example,

Figure 4.2-4: The view of a flat dewarping target from a sensor that has $-27°$ of tilt, $15°$ of pan, and $10°$ of roll. (Design parameters for Ian Camera.)


Figure 4.2-5: The view of a flat target at $45°$ to the camera and centered at $0.8L$ from the aperture plane, as seen by the three apertures with perfectly aligned sensors. (Design parameters for Ian Camera.)


$$\frac{l}{L} = \frac{\left(\dfrac{1}{f}-\dfrac{1}{L}\right)^{-1}}{L} = \frac{\dfrac{fL}{L-f}}{L} = \frac{f}{L-f} = M \tag{4.2-28}$$

A subset of the perfectly aligned condition is that of two apertures on the $Y$ axis symmetrically spaced by a distance $2d$ (so that the aperture locations are $(0, d)$ and $(0, -d)$). In this case we have the exact condition under which the equations were originally presented in Pereira and Gharib [2002] (with the exception that they called the total separation $d$)³. Setting $c = 0$, we arrive at their results:

$$x = -M\,\frac{LX_P}{Z_P} \qquad y = M\,\frac{d(L-Z_P)-LY_P}{Z_P} \tag{4.2-29}$$

Another interesting case is that for which we allow linear misalignment in $X$ and $Y$. After some simplification, equations 4.2-27 become

$$x = M\,\frac{c(L-Z_P)-LX_P}{Z_P} - \Delta X \qquad y = M\,\frac{d(L-Z_P)-LY_P}{Z_P} - \Delta Y \tag{4.2-30}$$

which is expected: the linear misalignments should affect nothing but the final location of the particle image. They can thus be measured (most likely through calibration) by imaging a single known point, and removed to recover a perfect-alignment mapping.
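A minimal sketch of that calibration idea follows (hypothetical names; it assumes the ideal prediction of equation 4.2-27 is available for the known point):

```python
# One known point is imaged; by eq. 4.2-30 the offsets between the measured
# image location and the ideal (eq. 4.2-27) prediction are exactly (-dX, -dY),
# so they can be subtracted from every subsequent image.
def estimate_linear_misalignment(measured_xy, ideal_xy):
    """Return (dX, dY) from one measured/predicted image point pair."""
    dX = ideal_xy[0] - measured_xy[0]
    dY = ideal_xy[1] - measured_xy[1]
    return dX, dY

def remove_linear_misalignment(xy, dX, dY):
    """Map a measured image point back to its perfectly aligned location."""
    return xy[0] + dX, xy[1] + dY
```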

If we now also add the possibility of a linear misalignment in Z,

$$x = M\,\frac{c(L-Z_P)-LX_P}{Z_P} - \Delta X + \Delta Z\,\frac{X_P-c}{Z_P} \qquad y = M\,\frac{d(L-Z_P)-LY_P}{Z_P} - \Delta Y + \Delta Z\,\frac{Y_P-d}{Z_P} \tag{4.2-31}$$

Now let’s go back to a case where the $Z$ alignment is perfect, but instead include a rotation by the angle $\delta$ about the $Z$ axis, still maintaining all the sensors on the image plane. Then we arrive at

$$\begin{aligned}
x &= \left(M\,\frac{c(L-Z_P)-LX_P}{Z_P} - \Delta X\right)\cos\delta - \left(M\,\frac{d(L-Z_P)-LY_P}{Z_P} - \Delta Y\right)\sin\delta = x_{\Delta X,\Delta Y}\cos\delta - y_{\Delta X,\Delta Y}\sin\delta\\
y &= \left(M\,\frac{c(L-Z_P)-LX_P}{Z_P} - \Delta X\right)\sin\delta + \left(M\,\frac{d(L-Z_P)-LY_P}{Z_P} - \Delta Y\right)\cos\delta = x_{\Delta X,\Delta Y}\sin\delta + y_{\Delta X,\Delta Y}\cos\delta
\end{aligned} \tag{4.2-32}$$

where $x_{\Delta X,\Delta Y}$ and $y_{\Delta X,\Delta Y}$ are the $x, y$ coordinates as defined by equation 4.2-30.

³Do not confuse the $d$ used here to match previous results with $d$ as the general $Y$ coordinate of an aperture in the derivations in this document.

This is pure rotation about the Z axis, as expected. This type of rotation is easily measurable using a single calibration plane with a minimum of two points.
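A hedged sketch of that measurement (the names and setup are my own): by equation 4.2-32, the measured image points are the aligned-model points rotated by $\delta$, so the signed angle between the vector joining two measured points and the vector joining the corresponding predicted points gives $\delta$ directly.

```python
import numpy as np

def estimate_roll(measured, model):
    """Estimate the roll angle delta (rotation about Z, eq. 4.2-32) from two
    calibration points: 'measured' and 'model' are 2x2 arrays whose rows are
    the measured image coordinates and the eq. 4.2-30 predictions."""
    measured, model = np.asarray(measured), np.asarray(model)
    vm = measured[1] - measured[0]   # vector between the two measured points
    vp = model[1] - model[0]         # vector between the two predicted points
    # signed angle from the model vector to the measured vector
    cross = vp[0] * vm[1] - vp[1] * vm[0]
    return np.arctan2(cross, np.dot(vp, vm))
```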

Now, if all misalignments are zero except for $\gamma$, we have

$$x = M\,\frac{c(L-Z_P)-LX_P}{Z_P\cos\gamma + (X_P-c)\sin\gamma} \qquad y = M\,\frac{\left(d(L-Z_P)-LY_P\right)\cos\gamma + \left(cY_P-dX_P\right)\sin\gamma}{Z_P\cos\gamma + (X_P-c)\sin\gamma} \tag{4.2-33}$$

Nonzero values for $\beta$ and $\gamma$ can be interpreted as changing the magnification locally on a sensor; that is, the magnification becomes a function of $x$ and/or $y$. A sensor with a different $\Delta Z$ from the others experiences the same problem: its magnification differs from that of the other sensors (though in this case it is constant throughout the sensor).

One of the most noticeable types of image distortion in lenses, especially wide angle lenses, is barrel distortion. Mathematically it can be defined by

$$\Delta r = Q_B r^3 \tag{4.2-34}$$

where $r$ is the distance from a particular pixel to the origin of the distortion (so that if the center of the sensor lies on the optical axis of the lens then $r = \sqrt{x^2 + y^2}$) and $Q_B$ is a coefficient of distortion.

If $Q_B$ is negative, the distortion is called “barrel” because the edges of an imaged rectangle bulge beyond the corners, whereas if it is positive, it is called “pincushion” because the corners extend beyond the edges. By the logic above, then, distortions such as pincushion and barrel are of the same type as misalignments like $\Delta Z$, $\beta$, and $\gamma$: they are, in essence, local changes in magnification.
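A minimal sketch of applying this radial model to image coordinates (illustrative only; the distortion center and coefficient are assumed known, and the function name is mine):

```python
import numpy as np

def radial_distort(x, y, QB, x0=0.0, y0=0.0):
    """Displace image points radially by dr = QB * r**3 (eq. 4.2-34) about the
    distortion center (x0, y0).  QB < 0 gives barrel, QB > 0 pincushion."""
    dx, dy = x - x0, y - y0
    r2 = dx * dx + dy * dy        # r squared
    scale = 1.0 + QB * r2         # (r + QB*r**3) / r, with no division needed
    return x0 + dx * scale, y0 + dy * scale
```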

Figure 4.2-6: Pincushion distortion.


Figure 4.2-7: Barrel distortion.


Figure 4.2-8: An example of how lenses with barrel distortion can affect the images. Here Ian’s Camera is simulated with barrel distortion approximately equal to the real distortion induced by its 28 mm lenses. The image is that of the dewarping target at the reference plane, which should result in the three sensors’ points mapping directly on top of each other. However, the triangle arrangement enforces a different amount and orientation of barrel distortion at each sensor, and so the points no longer line up.
