
5.2. Image Co-registration

5.2.1. Introduction

Image registration is the process of transforming different sets of data into one coordinate system. Once images are converted into one coordinate system, they are defined as co-registered. Image co-registration is a useful technique for comparing or integrating data from different measurements, with applications in computer vision, medical imaging, satellite imagery, remote sensing, and target recognition. Figure 74 shows examples of image registration in computer vision, medical imaging, and satellite imagery.

Figure 74: (Left to right) Examples of co-registered images in computer vision [64], medical imaging [65], and satellite imagery [66].

The process of image co-registration improves the overall signal-to-noise ratio of the combined data: the individual properties of each image are enhanced, and the images are then overlaid. For example, in the satellite imagery above, one image might contain information on the geological landscape (i.e., trees, rivers, etc.), while the other contains information on the architectural landscape (i.e., buildings, houses, etc.). Through image co-registration, information from one signal/image can be related to the other, such as where the trees are located relative to the buildings.

Overall, the approach to image registration is situation-dependent and requires a carefully selected transformation model to relate the images.

There are four main algorithm classifications for image registration, each defining a different way to align images: intensity-based, feature-based, spatial-domain, and frequency-domain methods. Regardless of the classification type, one image is designated the reference image and the other the moving image. The reference image is the static image, and the moving image undergoes a spatial transformation to match the reference image. Intensity-based image registration is an iterative process that aligns images based on their relative intensity patterns via correlation metrics. This process consists of the pair of images, a similarity metric, an optimizer, and a transformation type. In this context, the similarity metric evaluates the registration accuracy by measuring how similar the two images are at each step of the transformation.

Based on the similarity metric result, the optimizer will either minimize or maximize the similarity metric. The transformation type defines the 2D geometric transformation necessary to align the misaligned moving image with the reference image. This process is repeated until the two images are best aligned. An example of the process flow of intensity-based automatic image registration is shown in Figure 75. In this example, the registered image was the entire image, with the center of the image and the different intensities/labels serving as the feature points.

Figure 75: Process flow chart of intensity-based automatic image registration classification [67].
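The intensity-based loop described above can be illustrated with a minimal sketch. The following Python example (a toy stand-in for the MATLAB workflow, not the actual scheme used in this work) assumes a translation-only transformation type, a sum-of-squared-differences similarity metric, and an exhaustive-search "optimizer" over integer pixel shifts; out-of-bounds moving-image pixels are treated as zero:

```python
def ssd(ref, mov, dx, dy):
    """Similarity metric: sum of squared differences between the reference
    image and the moving image shifted by (dx, dy).
    Moving-image pixels shifted out of the frame are treated as 0."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(h):
        for x in range(w):
            ym, xm = y - dy, x - dx
            m = mov[ym][xm] if 0 <= ym < h and 0 <= xm < w else 0
            total += (ref[y][x] - m) ** 2
    return total

def register_translation(ref, mov, max_shift=3):
    """Toy 'optimizer': exhaustively search integer shifts and keep the one
    that minimizes the SSD similarity metric."""
    shifts = [(dx, dy)
              for dx in range(-max_shift, max_shift + 1)
              for dy in range(-max_shift, max_shift + 1)]
    return min(shifts, key=lambda s: ssd(ref, mov, *s))

# Reference: a bright 2x2 block; moving image: the same block offset by one
# pixel up and to the left, so the recovered alignment shift is (1, 1).
ref = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
mov = [[9, 9, 0, 0],
       [9, 9, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(register_translation(ref, mov))  # -> (1, 1)
```

A real intensity-based registration replaces the exhaustive search with a gradient-based optimizer and supports richer transformation types (rigid, similarity, affine), but the metric-optimizer-transform structure is the same.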

The second type of image registration algorithm is feature-based classification. This type of registration is based on image features such as points, lines, or contours. These features are isolated in both images and become a set of corresponding distinct points. Between these specific image points, a point-by-point correspondence is calculated, and a geometric transformation is performed to map the moving image onto the reference image. Feature-based automatic image registration is widely used in computer vision and satellite imagery applications. An example of feature-based automated image co-registration is shown in Figure 76.

Figure 76: Examples of feature-based image classification. (Top): Point mapping of aerial photos. (Bottom): Line mapping of the same photo but images of different orientations [66].
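Once point correspondences like those in Figure 76 are established, the geometric transformation can be recovered in closed form. The sketch below (an illustration with hypothetical matched points, not part of the dissertation's scheme) assumes a rigid transformation (rotation plus translation) and uses the standard least-squares 2D Procrustes solution:

```python
import math

def fit_rigid_2d(moving, reference):
    """Least-squares rigid (rotation + translation) transform mapping the
    moving points onto the reference points (closed-form 2D Procrustes)."""
    n = len(moving)
    # Centroids of both point sets.
    mx = sum(p[0] for p in moving) / n
    my = sum(p[1] for p in moving) / n
    rx = sum(q[0] for q in reference) / n
    ry = sum(q[1] for q in reference) / n
    num = den = 0.0
    for (px, py), (qx, qy) in zip(moving, reference):
        ax, ay = px - mx, py - my      # centered moving point
        bx, by = qx - rx, qy - ry      # centered reference point
        num += ax * by - ay * bx       # cross terms -> sin(theta)
        den += ax * bx + ay * by       # dot terms   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation carries the rotated moving centroid onto the reference one.
    tx = rx - (c * mx - s * my)
    ty = ry - (s * mx + c * my)
    return theta, (tx, ty)

# Hypothetical matched feature points: the reference set is the moving set
# rotated by 30 degrees and shifted by (5, -2), so the fit is exact.
theta_true = math.radians(30)
c, s = math.cos(theta_true), math.sin(theta_true)
moving = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
reference = [(c * x - s * y + 5.0, s * x + c * y - 2.0) for x, y in moving]
theta, t = fit_rigid_2d(moving, reference)
print(round(math.degrees(theta), 3), [round(v, 3) for v in t])  # 30.0 [5.0, -2.0]
```

With noisy correspondences the same formula returns the best-fit transform in the least-squares sense, which is why feature-based methods tolerate small localization errors in the detected points.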

The third and fourth types of image registration algorithms are distinguished by the image domain in which they operate. Image enhancements occur either in the spatial or the frequency domain. The spatial domain represents the image plane itself, whereas the frequency domain deals with the rate of change of pixel values.

Any image enhancement in the spatial domain operates through direct manipulation of image pixels. In addition, image modifications in the spatial domain are computationally less demanding than those in the frequency domain.
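As a concrete illustration of direct pixel manipulation (a generic example, not an operation from the dissertation's scheme), a 3x3 mean filter computes each output pixel straight from its neighborhood in the image plane, with edge pixels clamped:

```python
def mean_filter3(img):
    """Spatial-domain smoothing: each output pixel is the mean of its 3x3
    neighborhood, computed directly from the image pixels (edges clamped)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(mean_filter3(img)[1][1])  # -> 1.0: the spike is spread over its neighborhood
```

A frequency-domain approach would instead transform the whole image (e.g., via an FFT), modify the frequency coefficients, and transform back, which is why simple local operations are cheaper in the spatial domain.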

This section of the dissertation presents how image co-registration was applied to imaging with the large-area iQID. The objective was to develop an automated image co-registration scheme in MATLAB to co-register a photograph and an autoradiograph collected from a sample measured with the iQID. An example autoradiograph and photograph of uranium glass shapes are shown in Figure 77. These shapes were used as fiducial markers for the MATLAB program. The darker-colored glass shapes had higher levels of uranium than the lighter-colored pieces. Uranium and its progeny isotopes emit radiation that can be detected by the iQID in beta/gamma or alpha mode.

Figure 77: Examples of images to be co-registered: (left) autoradiograph from the large-area iQID and (right) a photograph of uranium glass shapes.

Some differences between the two images in Figure 77 are the background color, the orientation of the shapes, the pixel size and scale, and the radioactivity colormap present only in the autoradiograph. These differences must be accounted for in the MATLAB scheme to properly co-register the images. Image modifications of the autoradiograph and photograph take place directly in the image domain; therefore, spatial-domain algorithms for image co-registration were explored instead of frequency-domain methods. Because image co-registration with the large-area iQID is a new application, it was unknown in advance whether a workable MATLAB scheme could be achieved, or which image classification method, intensity-based or feature-based, would produce the best transformation model between the two images. Therefore, an assumption was made regarding the images in Figure 77: there are no apparent image features (i.e., points, lines, or contours) except the fiducial markers, so feature-based image classification was not used. Instead, intensity-based image classification was explored to co-register the autoradiograph and photograph.
