MULTI-TEMPORAL IMAGE FUSION BASED ON WAVELET THEORY

S. Rokhsari a, A. Abed-Elmdoust b, M. Karimi c

a GIS Division, Department of Surveying and Geomatic Eng., College of Eng., University of Tehran, Iran - [email protected]
b Research Associate, School of Civil Eng., College of Eng., University of Tehran - [email protected]
c Department of Surveying and Geomatic Eng., Shahid Rajaee University, Tehran - [email protected]

Commission VI, WG VI/4

KEY WORDS: Traffic camera, Image fusion, Wavelet.
ABSTRACT

The development of new monitoring systems and the growing interest of researchers in obtaining reliable measurements have led to the development of automatic monitoring of moving objects. One way to support such monitoring is multi-temporal image fusion. Image fusion is a sub-area of the more general topic of data fusion. It can be roughly defined as the process of combining multiple input images into a single image that contains the relevant information from the inputs. The aim of image fusion is to integrate complementary and redundant information from multiple images to create a composite that is more informative than any of the individual source images; its main purpose is to increase both the spectral and spatial resolution of images by combining them.
In this paper we apply this theory to moving-object tracking: by fusing images acquired at different times, we identify the path of a moving object. This result can help in implementing automatic systems that monitor objects without human intervention. We first discuss the principles of fusion, the well-known wavelet method, and the complete process involved in performing fusion.
1. INTRODUCTION
Many low- and high-level image processing algorithms have to be developed to meet all the requirements of an intelligent monitoring system. The performance of the system depends on the reliable recognition of moving objects, their adequate description, and the knowledge of how to combine these parameters with information from other sources to solve the monitoring problem.
To identify the movement path of a moving object with image fusion, we follow these steps:

- Image acquisition at different times, and image preparation. The images acquired at different times form a sequence that shows the movement of the object; the preparation steps (registration, resampling and histogram matching) make the input images comparable.
- Fusion of the multi-temporal images. We use wavelet theory, an important method for multi-image fusion, to integrate the images. The fusion result reveals the path of the moving object and can thus support automatic monitoring of moving objects.
Figure 1 shows the steps of the process of multi-temporal image fusion for tracking a moving object.

Figure 1: The steps of multi-temporal image fusion for tracking a moving object (flowchart: image acquisition at different times → image preparation (registration, resampling, histogram matching) → image fusion with wavelet theory (selection of the best wavelet form and of the level of decomposition) → identification of the path of movement → tracking of the moving object).
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B3, 2012 XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia
2. DATA FUSION

Data fusion can contribute substantially to the implementation of a good monitoring system. It is analogous to the ongoing cognitive process humans use to integrate data continually from their senses and make inferences about the external world.
Multi-sensor data fusion is a method of combining data from multiple (and possibly diverse) sensors in order to make inferences about a physical event, activity, or situation, including applications such as automatic identification of targets or analysis of battlefield situations [1].
However, the following facts have been described [2]:
1. Combining data from multiple inaccurate sensors, each with an individual probability of correct inference of less than 0.5, does not provide a significant overall advantage.
2. Combining data from multiple highly accurate sensors, each with an individual probability of correct inference of greater than 0.95, does not provide a significant increase in inference accuracy.

Fusion can be performed at several levels. At the feature level, objects are detected and segmented in the individual sensor domains and their features are then combined. Decision-level fusion (also called post-decision or post-detection fusion) combines the decisions of independent sensor detection/classification paths by Boolean (AND, OR) operators or by a heuristic score (e.g., M-of-N, maximum vote, or weighted sum).
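The decision-level combination rules named above can be sketched in a few lines. This is a minimal illustration, not part of the paper's implementation; the function names and example decisions are ours.

```python
# Decision-level fusion of independent sensor detections (sketch).
# Each sensor contributes a boolean "target detected" decision.

def fuse_or(decisions):
    """Declare a detection if ANY sensor fires (high sensitivity)."""
    return any(decisions)

def fuse_and(decisions):
    """Declare a detection only if ALL sensors fire (high specificity)."""
    return all(decisions)

def fuse_m_of_n(decisions, m):
    """Declare a detection if at least m of the n sensors fire."""
    return sum(decisions) >= m

decisions = [True, False, True]   # three independent sensor decisions
print(fuse_or(decisions))         # OR fusion -> True
print(fuse_and(decisions))        # AND fusion -> False
print(fuse_m_of_n(decisions, 2))  # 2-of-3 majority vote -> True
```

The OR rule maximizes detection probability at the cost of false alarms; the AND rule does the opposite; M-of-N voting sits between the two.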
3. IMAGE FUSION

The main purpose of image fusion is to increase both the spectral and spatial resolution of images by combining multiple images. To perform image fusion well, attention should be paid to three preprocessing concepts [3].
3.1. Image registration

Image registration is the process of transforming several images into the same coordinate system. For example, given an image, several copies of it may be deformed by rotation, shearing or twisting; registration corrects these distortions.
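As a minimal sketch of the resampling step inside registration, the following assumes the misalignment is a known pure rotation about the image centre and maps the image back into the reference frame with nearest-neighbour lookup. The function name and the tiny test image are ours, for illustration only.

```python
import math

def register_rotation(img, angle_deg):
    """Resample img (list of rows) into the reference frame, undoing a
    rotation of angle_deg about the image centre (inverse mapping)."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = math.radians(angle_deg)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # inverse mapping: find the source pixel for this output pixel
            dx, dy = x - cx, y - cy
            sx = cx + math.cos(a) * dx + math.sin(a) * dy
            sy = cy - math.sin(a) * dx + math.cos(a) * dy
            i, j = int(round(sy)), int(round(sx))
            if 0 <= i < h and 0 <= j < w:
                out[y][x] = img[i][j]  # nearest-neighbour pick
    return out

img = [[0, 1], [2, 3]]
print(register_rotation(img, 0))  # zero rotation leaves the image unchanged
```

Real registration would first estimate the transform (e.g. from matched control points) rather than assume it is known.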
3.2. Image resampling

Image resampling is the procedure that creates a new version of the original image with a different width and height in pixels. Increasing the size is called upsampling; conversely, decreasing the size is called downsampling.
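Up- and downsampling can both be sketched with a single nearest-neighbour resize; this is an illustrative helper of our own, not from the paper.

```python
def resample(img, new_h, new_w):
    """Nearest-neighbour resize of img (list of rows) to new_h x new_w."""
    h, w = len(img), len(img[0])
    return [[img[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

small = [[1, 2], [3, 4]]
print(resample(small, 4, 4))  # upsampling: each pixel repeated in a 2x2 block
print(resample(small, 1, 1))  # downsampling to a single pixel -> [[1]]
```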
3.3. Histogram matching

Consider two images X and Y. If Y is histogram-matched to X, the pixel values of Y are changed by a nonlinear transform such that the histogram of the new Y is the same as that of X.
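For equal-sized images, exact histogram matching reduces to a rank mapping: the k-th darkest pixel of Y receives the k-th darkest value of X. A minimal sketch on flattened images (our own illustration, assuming both images have the same number of pixels):

```python
def histogram_match(y, x):
    """Return a copy of flat image y whose histogram equals that of flat
    image x, preserving the value ordering of y's pixels."""
    order = sorted(range(len(y)), key=lambda i: y[i])  # y's pixels by rank
    x_sorted = sorted(x)
    out = [0] * len(y)
    for rank, idx in enumerate(order):
        out[idx] = x_sorted[rank]  # k-th darkest pixel of y <- k-th darkest value of x
    return out

print(histogram_match([5, 1, 3], [10, 20, 30]))  # [30, 10, 20]
```

The output has exactly X's histogram ({10, 20, 30}) while keeping Y's spatial ordering (its middle pixel stays the darkest).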
4. IMAGE FUSION METHODS

4.1. Pan-sharpening

The goal of pan-sharpening is to fuse a low-spatial-resolution multispectral image with a higher-resolution panchromatic image to obtain an image with both high spectral and high spatial resolution. The Intensity-Hue-Saturation (IHS) method is a popular pan-sharpening method, used for its efficiency and high spatial resolution; however, the final image suffers from spectral distortion. The IHS method consists of three steps. First, the low-resolution RGB image is upsampled and converted to IHS space. Second, the panchromatic band is histogram-matched to, and substituted for, the intensity band. Third, the IHS image is converted back to RGB space.
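The substitution idea can be sketched per pixel using the simple intensity I = (R+G+B)/3 instead of a full IHS colour-space transform; with that simplification, replacing I by the panchromatic value and converting back is equivalent to scaling each band by pan/I. This is our own simplified illustration, not the exact IHS transform.

```python
def ihs_sharpen_pixel(r, g, b, pan):
    """Simplified intensity-substitution pan-sharpening for one pixel."""
    i = (r + g + b) / 3.0          # crude intensity component
    if i == 0:
        return (pan, pan, pan)     # avoid division by zero on black pixels
    s = pan / i                    # substitute pan for the intensity
    return (r * s, g * s, b * s)

print(ihs_sharpen_pixel(30, 60, 90, 90))  # intensity 60 raised to 90
```

Because every band is scaled by the same factor, the band ratios (hue) are preserved, which is exactly why the distortion the IHS method does introduce comes from the mismatch between the panchromatic band and the true intensity.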
4.2. PC Spectral Sharpening

PC Spectral Sharpening can be used to sharpen spectral image data with high-spatial-resolution data. A principal component transformation is performed on the multispectral data, and the first principal component (PC band 1) is replaced with the high-resolution band, which is scaled to match PC band 1 so that no distortion of the spectral information occurs. An inverse transform is then performed, and the multispectral data are automatically resampled to the high-resolution pixel size using a nearest-neighbour, bilinear or cubic convolution technique.
4.3. Wavelet theory for spatial fusion

In many cases we wish to examine both time and frequency information simultaneously; this leads to the wavelet transform. The wavelet transform is a type of signal representation that can give the frequency content of a signal at a particular instant of time. Moreover, it has advantages over traditional Fourier methods in analysing physical situations where the signal contains discontinuities and sharp spikes, because wavelet algorithms process data at different scales or resolutions.
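In standard notation, the scale-and-shift structure of the transform is captured by the usual definition (a is the scale, b the translation, and ψ the mother wavelet; this is the textbook form, reproduced here for reference):

```latex
C(a,b) \;=\; \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi^{*}\!\left(\frac{t-b}{a}\right) dt
```

Large scales a capture coarse, low-frequency structure; small scales capture fine detail such as edges and spikes.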
The continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the mother wavelet. Because different frequencies are processed differently in this method, it improves the quality of the fused image, making it a good method for fusion at the pixel level.

5. RESULT AND DISCUSSION
Here we focused on multi-temporal image fusion: the input images are captured at different times. The objective of image fusion here is to retain the most desirable characteristics of each image in order to monitor a moving object. We discussed different algorithms for data fusion in this paper, but we concentrated on wavelet analysis for the fusion of temporal images.

The principle of image fusion using wavelets is to merge the wavelet decompositions of the multi-temporal images using fusion rules applied to the approximation and detail coefficients. In the first step, we selected a suitable wavelet form for fusion. In our experiment, seven wavelet families were examined: Haar (HW), Daubechies (db), Symlets, Coiflets, Biorthogonal, Reverse Biorthogonal and Discrete Meyer (dmey). We selected the best form of wavelet based on its correlation with the original image; Daubechies (db1) was selected because it gave the best result.
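This merging scheme can be sketched in pure Python with a one-level 2-D Haar (db1) transform: approximation coefficients are averaged and detail coefficients are merged by maximum absolute value, then the fused coefficients are inverse-transformed. This is a minimal illustration of the technique, not the authors' exact implementation; function names and the fusion rules (average/max-abs are common choices) are our assumptions.

```python
def haar2d(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) coefficient grids.
    Assumes an even-sized image given as a list of rows."""
    h, w = len(img) // 2, len(img[0]) // 2
    ll = [[0.0] * w for _ in range(h)]; lh = [[0.0] * w for _ in range(h)]
    hl = [[0.0] * w for _ in range(h)]; hh = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            ll[i][j] = (a + b + c + d) / 4.0   # approximation (low frequency)
            lh[i][j] = (a - b + c - d) / 4.0   # horizontal detail
            hl[i][j] = (a + b - c - d) / 4.0   # vertical detail
            hh[i][j] = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: rebuild the image from the four coefficient grids."""
    h, w = len(ll), len(ll[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            A, H, V, D = ll[i][j], lh[i][j], hl[i][j], hh[i][j]
            img[2 * i][2 * j]         = A + H + V + D
            img[2 * i][2 * j + 1]     = A - H + V - D
            img[2 * i + 1][2 * j]     = A + H - V - D
            img[2 * i + 1][2 * j + 1] = A - H - V + D
    return img

def fuse(img1, img2):
    """Fuse two same-sized images: average the approximations, keep the
    stronger (max-abs) detail coefficient, then inverse-transform."""
    c1, c2 = haar2d(img1), haar2d(img2)
    avg = lambda x, y: (x + y) / 2.0                 # approximation rule
    mab = lambda x, y: x if abs(x) >= abs(y) else y  # detail rule
    merged = []
    for band, (b1, b2) in enumerate(zip(c1, c2)):
        rule = avg if band == 0 else mab
        merged.append([[rule(x, y) for x, y in zip(r1, r2)]
                       for r1, r2 in zip(b1, b2)])
    return ihaar2d(*merged)
```

Fusing an image with itself reproduces the image exactly, which is a quick sanity check that the forward and inverse transforms are consistent; with two temporal frames, the max-abs detail rule keeps the sharp edges of the moving object from whichever frame they appear in.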
We then selected the level of decomposition. In wavelet theory, the maximum level at which the transform can be applied depends on how many data points the data set contains. We examined candidate levels based on the fusion result and used decomposition at two levels, which gave high-quality fusion. The result is shown in Figure 4, where the yellow circles mark the path of the moving object; fusion thus leads to better extraction of the moving object and helps towards an automatic monitoring system.
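The dependence of the maximum level on the number of data points can be made concrete: each dyadic decomposition level halves the number of approximation coefficients, so a signal of length n supports at most floor(log2(n)) levels. A one-line helper (our own, for illustration):

```python
import math

def max_wavelet_level(n):
    """Maximum useful dyadic decomposition levels for a signal of length n."""
    return int(math.log2(n)) if n > 0 else 0

print(max_wavelet_level(512))  # a 512-sample row supports 9 levels
```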
Figure 2: The first image, acquired at time T1.
Figure 3: The second image, acquired at time T2.
Figure 4: The fused image, based on wavelet theory.
6. CONCLUSION

The main objective of this study was to overcome current problems of automatic monitoring using multi-temporal image fusion. Wavelet theory was used as a method for fusion at the pixel level: it decomposes an image into low-frequency and high-frequency bands at different levels, so that multiple images can be integrated for moving-object tracking. Moreover, during the process we selected the best wavelet form and decomposition level to achieve the best fusion quality.
The result of this analysis will help in implementing an automatic system that can monitor moving objects without human intervention.
7. REFERENCES

1. Hall, D., 2001. Handbook of Multisensor Data Fusion. CRC Press.
2. Nahin, Pokoski, 1980. Multi-sensor Data Fusion.
3. Kale, V., 2010. Performance Evaluation of Various Wavelets for Image Compression of Natural and Artificial Images.
4. Wassai, A., 2011. Arithmetic and Frequency Filtering Methods of Pixel-Based Image Fusion Techniques. India.