International Journal of Recent Advances in Engineering & Technology (IJRAET)
ISSN (Online): 2347-2812, Volume-2, Issue 11-12, 2014
CT and MR Image Fusion Using Graph Cut Method
1 Reena S. Pandav, 2 S. R. Khot
1 D. Y. Patil College of Engg. and Technology, Kolhapur, Maharashtra
2 Prof., D. Y. Patil College of Engg. and Technology, Kolhapur, Maharashtra
Emails: 1 [email protected], 2 [email protected]
Abstract: Medical image fusion is an important tool for clinical applications. This paper studies a novel graph cut based algorithm for CT and MR spine image fusion. The bone and soft tissue structures of the spine are observed more precisely from the CT and MR images, respectively. The term fusion denotes an approach to extracting information acquired in several domains. The graph cut algorithm allows physicians to observe soft tissue and bony detail on a single image, eliminating the alignment and correlation needed when both CT and MR images are required for diagnosis. The fused image is more informative than the source images. Images were registered, pre-processed and then fused. Fusion successfully transfers bone detail and soft tissue detail to the resulting image. To evaluate the result, statistical parameters such as mean square error (MSE), root mean square error (RMSE) and peak signal to noise ratio (PSNR) are used to determine whether the fused image is of better quality than the input images.
Index Terms: Image Fusion, Graph Cut, Medical Imaging.
I. INTRODUCTION
Image fusion is a method that produces a single image from a set of different images. The fused image should contain more complete information and be more useful for human or machine perception. Image fusion is the process of combining images of different modalities viewing the same scene to form a composite image.
The composite image is formed to improve image information and to make it easier for the user to detect, recognize and identify targets, and to increase situational awareness.
The advantages of image fusion include increased situational awareness, more information per video feed, fewer feeds to monitor, reduced data storage and transmission, better recognition and identification, improved reliability, and improved capability. A good fusion method extracts all the useful information from the source images, does not introduce artifacts or inconsistencies that would distract human observers or subsequent processing, and is reliable and robust to imperfections such as mis-registration. Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve imaging quality and reduce randomness and redundancy, in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. In medical diagnosis, treatment planning and evaluation, the complementary information in images of different modalities is often needed. X-ray computed tomography (CT) has become popular because of its ability to visualize dense structures such as bones, blood vessels and implants with little distortion, but it cannot detect physiological change.
Bone (vertebrae), nerves, disks, and normal and pathological soft tissue (such as the spinal cord) are better visualized by magnetic resonance (MR) imaging. The standard data fusion methods may not be satisfactory for merging a high-resolution panchromatic image with a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data; this distortion can be reduced by using multiresolution wavelet decomposition.
There are several situations that simultaneously require high spatial and high spectral resolution in a single image. This is particularly important in remote sensing.
In other cases, such as astronomy, high spatial resolution and a high signal-to-noise ratio may be required. However, in most cases instruments are not capable of providing such data, either by design or because of observational constraints. Identification, localization and the determination of diseases are all important steps that ensure the clinician has sufficient data to make appropriate treatment decisions. CT imaging offers the benefit of a reliable stereotactic image, while MRI offers improved visualization of the disease and the surrounding anatomical structures. By synthesizing both types of images, we can obtain higher quality image data that offer specific advantages over either type of system alone. Image fusion can therefore play a vital role in achieving successful results from image-guided stereotactic radiosurgery. Successful fusion of CT and MR images of spinal diseases is often hampered by differences in the positioning of the patient, by changes in the relative position of the vertebrae, and by difficulty in establishing reference points. The positional discrepancy between CT and MR images often limits the accuracy of fused or synthesized images. Image fusion has also been used in defense applications for situational awareness, surveillance, target tracking, intelligence gathering and personal authentication.
II. GRAPH CUT METHOD
A. Introduction
Graph cut allows the physician to visually assess corresponding soft tissue and bony detail on a single image, eliminating the mental alignment and correlation needed when both CT and MR images are required for diagnosis. Images are registered, pre-processed and then fused. The graph cut result shows better performance than other methods; our method successfully transfers bone detail and soft tissue detail to the resulting fused image.
X-ray computed tomography (CT) has become popular because of its ability to visualize dense structures such as bones and implants with little distortion, but it cannot detect physiological change. Normal and pathological soft tissue are better visualized by magnetic resonance (MR) imaging. In medical diagnosis, treatment planning and evaluation, the complementary information in images of different modalities is often needed. The standard data fusion methods may not be satisfactory for merging a high-resolution panchromatic image with a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data; this distortion can be reduced by using multiresolution wavelet decomposition.
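The paper does not spell out the graph cut energy it minimizes, so the following is only an illustrative sketch of how a binary graph cut labeling could pick, for every pixel of a registered pair, whether the fused value is taken from the CT or from the MR image. The threshold, smoothness weight, function name and the use of networkx's minimum_cut solver are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of graph-cut based fusion (NOT the paper's exact
# formulation): each pixel gets a binary label, "take CT value" vs.
# "take MR value", chosen by a minimum s-t cut over a 4-connected grid.
import numpy as np
import networkx as nx

def graph_cut_fuse(ct, mr, bone_thresh=0.6, smooth=0.5):
    """ct, mr: registered 2-D float arrays scaled to [0, 1].
    bone_thresh and smooth are hand-tuned, illustrative parameters."""
    h, w = ct.shape
    G = nx.DiGraph()
    src, sink = "CT", "MR"

    def node(i, j):
        return i * w + j

    for i in range(h):
        for j in range(w):
            n = node(i, j)
            # Data term: bright CT pixels (bone) prefer the CT label,
            # everything else prefers the MR label.
            cost_ct = max(bone_thresh - ct[i, j], 0.0)  # penalty for choosing CT
            cost_mr = max(ct[i, j] - bone_thresh, 0.0)  # penalty for choosing MR
            G.add_edge(src, n, capacity=cost_mr)
            G.add_edge(n, sink, capacity=cost_ct)
            # Smoothness term: neighbouring pixels prefer the same label.
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    m = node(ni, nj)
                    G.add_edge(n, m, capacity=smooth)
                    G.add_edge(m, n, capacity=smooth)

    _, (ct_side, _) = nx.minimum_cut(G, src, sink)
    fused = mr.copy()
    for i in range(h):
        for j in range(w):
            if node(i, j) in ct_side:
                fused[i, j] = ct[i, j]
    return fused
```

For full-size medical images a dedicated max-flow library would be used; networkx is chosen here only because its API keeps the sketch compact.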
B. Registration and Pre-Processing
Graph cut allows the physician to visually assess corresponding soft tissue and bony detail on a single image. Images are registered, pre-processed and then fused, as shown in the figure below.
Fig.1. Image fusion process
The proposed method consists of the image fusion process depicted in the block diagram of Fig. 1.
Registration of the CT and MR images is carried out first, aligning the soft tissue details present in both images. After registration, thresholding is applied to the aligned soft tissue, which creates image 2. The original CT image is also thresholded to generate the new CT image 1. Both images are then scaled to the same intensity range and fused.
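A minimal sketch of the thresholding and intensity-scaling steps is given below using SimpleITK (the work itself used ITK); the file names and the exact Hounsfield window are assumptions taken from the description in Section III.

```python
# Sketch of the pre-processing steps: threshold the CT to a soft-tissue
# window and rescale both images to a common intensity range.
# File names and parameter values are illustrative assumptions.
import SimpleITK as sitk

ct = sitk.ReadImage("ct_spine.mha", sitk.sitkFloat32)   # hypothetical file
mr = sitk.ReadImage("mr_spine.mha", sitk.sitkFloat32)   # hypothetical file

# Keep mostly soft tissue in the CT by clamping to a Hounsfield Unit window
# (Section III reports -255 to 255 HU); voxels outside the window become 0.
ct_soft = sitk.Threshold(ct, lower=-255, upper=255, outsideValue=0)

# Scale both images to the same intensity range before fusion.
ct_scaled = sitk.RescaleIntensity(ct_soft, outputMinimum=0, outputMaximum=255)
mr_scaled = sitk.RescaleIntensity(mr, outputMinimum=0, outputMaximum=255)
```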
Radiologists currently display MR and CT images side by side when both are available. This provides all of the image information, but its usefulness is limited to visual correlation between the two images. It can be difficult to determine from clinical MR images whether narrowing of the spinal canal is caused by tissue or by bone; hence both CT and MR can be employed. The CT and MR images are fused to form a composite image. The composite image is formed to improve image content and to make it easier for the user to detect, recognize and identify targets.
C. Definition of Registration
Image registration is the process of finding the best alignment to map or transform the points in one image set to the points of another image set. The matching process mainly involves, first, defining a metric to measure how well two images are aligned and, second, applying a transformation to bring the two images into spatial alignment.
Fig. 2. Image registration process between two images.
D. Image Registration Process
Image registration (IR) is the process that transforms several images into the same coordinate system. For example, given an image, several copies of that image may be deformed by shearing, twisting, rotation, and so on. Taking the given image as fixed, IR can align the deformed copies back to the given image. IR is therefore an essential preprocessing operation for image fusion.
Fig.3 Image Registration Process
The components of the registration framework and their interconnections are shown in Fig. 3. The basic input data to the registration process are two images: one is defined as the fixed image f(Y) and the other as the moving image m(Y), where Y represents a position in N-dimensional space. Registration is treated as an optimization problem with the goal of finding the spatial mapping that will bring the moving image into alignment with the fixed image.
The basic components of the registration framework are two input images, a transform, a metric, an interpolator and an optimizer. The transform component T(X) represents the spatial mapping of points from the fixed image space to points in the moving image space.
The interpolator is used to evaluate moving image intensities at non-grid positions. The metric component provides a measure of how well the fixed image is matched by the transformed moving image. This measure forms the quantitative criterion to be optimized by the optimizer over the search space defined by the parameters of the transform.
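A minimal sketch of this fixed/moving framework in SimpleITK is shown below; the metric, optimizer settings and file names are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Generic registration framework: fixed and moving images, transform,
# metric, interpolator and optimizer (settings are illustrative only).
import SimpleITK as sitk

fixed = sitk.ReadImage("ct_fixed.mha", sitk.sitkFloat32)    # hypothetical file
moving = sitk.ReadImage("mr_moving.mha", sitk.sitkFloat32)  # hypothetical file

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)      # metric
reg.SetInterpolator(sitk.sitkLinear)                                  # interpolator
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)  # optimizer
reg.SetInitialTransform(sitk.Euler3DTransform(), inPlace=False)       # rigid transform

# The optimizer searches the transform's parameter space for the mapping
# that best aligns the moving image with the fixed image under the metric.
final_transform = reg.Execute(fixed, moving)
```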
III. EXPERIMENTAL RESULT
The input volumes were registered using a rigid transform in ITK [6]. The optimizer maximized Mutual Information (MI) [7] to align the soft tissue details present in both images (note that the soft tissue detail in the MR is better for diagnosis, while the CT image is more suitable for registration). For the purpose of aligning soft tissue, each CT image was thresholded from -255 to 255 Hounsfield Units (HU), or -255 to 0 HU if needed. This kept many of the soft tissue details but removed most of the bone detail. Both images were then scaled to the same intensity range. The transform was initialized using two corresponding user-selected points, one from the CT and the other from the MR image. After this, MI was calculated from both images, and the transform was iteratively updated based on the MI of the two images at each step. Using the obtained optimal transform, the original MR image was transformed and resampled to the voxel spacing of the CT image. Manual points were selected in the 3D images for the target registration error (TRE) and fiducial localization error (FLE) evaluation. Most of the soft-tissue detail was subsequently removed from the CT before fusion, because the MR presents the tissue detail with more clarity, so the CT tissue detail is undesirable in the fused image.
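Two of the steps reported above, resampling the registered MR onto the CT grid and evaluating the registration from manually picked points, could look roughly as follows; the landmark coordinates are hypothetical, and fixed, moving and final_transform are assumed to come from the registration sketch in Section II.

```python
# Resample the registered MR image onto the CT grid and compute a simple
# target registration error (TRE) from manually selected landmark pairs.
import numpy as np
import SimpleITK as sitk

# fixed (CT), moving (MR) and final_transform come from the earlier sketch.
mr_on_ct_grid = sitk.Resample(moving, fixed, final_transform,
                              sitk.sitkLinear, 0.0, sitk.sitkFloat32)

# Landmarks in physical (world) coordinates, in millimetres; the transform
# maps points from the fixed (CT) space into the moving (MR) space.
ct_points = [(10.0, 22.5, 30.0), (15.2, 40.1, 28.7)]   # hypothetical values
mr_points = [(10.8, 23.0, 29.4), (15.9, 40.6, 28.1)]   # hypothetical values

mapped = np.array([final_transform.TransformPoint(p) for p in ct_points])
tre = np.mean(np.linalg.norm(mapped - np.array(mr_points), axis=1))
print("mean TRE (mm):", tre)
```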
A. Result Evaluation
For quantitative analysis, we use statistical parameters to evaluate the fusion result: peak signal to noise ratio, mean square error and root mean square error.
1. Peak signal to noise ratio
The peak signal to noise ratio is given as follows:

PSNR = 10 \log_{10} \left( \frac{MAX_I^2}{MSE} \right)    (1)

where MAX_I is the maximum possible pixel value of the image.
The higher the value of the PSNR, the better the fusion result.
2. Mean square error
The mathematical expression of the MSE is given as follows:

MSE = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left[ I(i,j) - F(i,j) \right]^2    (2)

where I is the perfect (reference) image, F is the fused image to be assessed, i is the pixel row index, j is the pixel column index, and m and n are the numbers of rows and columns. The smaller the value of the MSE, the better the fusion performance.
3. Root mean square error
The root mean square error is given by:

RMSE = \sqrt{ \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left[ R(i,j) - F(i,j) \right]^2 }    (3)

where R(i,j) is the original image (or one of the source images), F(i,j) is the fusion result, and m and n are the dimensions of the images to be fused. The smaller the value of the RMSE, the better the fusion performance.
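As a quick reference, the three measures above can be computed directly with NumPy; this is a generic sketch, with MAX_I taken as 255 for 8-bit images.

```python
# Evaluation metrics used in this paper: MSE, RMSE and PSNR (NumPy sketch).
import numpy as np

def mse(reference, fused):
    """Mean square error between a reference image and the fused image, Eq. (2)."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return np.mean(diff ** 2)

def rmse(reference, fused):
    """Root mean square error, Eq. (3)."""
    return np.sqrt(mse(reference, fused))

def psnr(reference, fused, max_i=255.0):
    """Peak signal to noise ratio, Eq. (1); max_i assumes 8-bit images."""
    m = mse(reference, fused)
    return float("inf") if m == 0 else 10.0 * np.log10(max_i ** 2 / m)
```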
Fig. 4 below shows some of the image sets, and Table I shows their respective performance parameters.
Fig. 4. Medical image fusion data sets: (a) CT1, (b) MRI1, (c) fused image 1; (d) CT2, (e) MRI2, (f) fused image 2; (g) CT3, (h) MRI3, (i) fused image 3.
TABLE I
EVALUATION FOR CT, MR AND FUSED MEDICAL IMAGES

Input              CT                           MRI                          Fused
image    PSNR    MSE       RMSE      PSNR    MSE       RMSE      PSNR    MSE       RMSE
1        18.08   1018.06   31.90     17.69   1111.87   33.39     18.22   986.72    31.41
2        17.80   1057.56   32.35     17.99   1040.67   32.25     18.37   953.07    30.81
3        17.52   1159.87   34.05     17.02   1299.28   36.04     18.14   1165.81   34.00
IV. CONCLUSION
The graph cut method successfully fuses CT and MR images to create a single fused image, providing an effective new combined modality for diagnosis. The fused image, carrying more information, improves the performance of image analysis.
V. REFERENCES
[1] D. L. G. Hill, "Combination of 3D Medical Images from Multiple Modalities", PhD thesis, Image Processing Group, Radiological Sciences, UMDS, University of London, 1993.
[2] J. B. A. Maintz and M. A. Viergever, "A Survey of Medical Image Registration", Medical Image Analysis, vol. 2, pp. 1-36, 1998.
[3] P. A. van den Elsen, E. J. D. Pol, and M. A. Viergever, "Medical image matching - a review with classification", IEEE Engineering in Medicine and Biology Magazine, vol. 12, pp. 26-39, 1993.
[4] B. M. Dawant, "Non-rigid registration of medical images: purpose and methods, a short survey", in Proceedings of the IEEE International Symposium on Biomedical Imaging, pp. 465-468, 2002.
[5] K. Knerek, M. Ivanovic, J. Machac, and D. A. Weber, "Medical image registration", Europhysics News, vol. 31, 2000.
[6] Insight Toolkit (ITK). [Online]. Available: http://www.itk.org
[7] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellen, and W. Eubank, "PET-CT Image Registration in the Chest Using Free-form Deformations", IEEE Transactions on Medical Imaging, vol. 22, no. 1, pp. 120-128, January 2003.
[8] J. M. Fitzpatrick, J. B. West, and C. R. Maurer, "Predicting Error in Rigid-Body Point-Based Registration", IEEE Transactions on Medical Imaging, vol. 17, no. 5, pp. 694-702, 1998.
[9] Y. Hu, S. K. Mirza, J. G. Jarvik, P. J. Heagerty, and D. R. Haynor, "MR and CT image fusion of the cervical spine: a noninvasive alternative to CT-Myelography", in Proceedings of SPIE, vol. 5744, Bellingham, WA: SPIE, 2005.
[10] C. A. Karlo, I. Steurer-Dober, M. Leonardi, C. W. A. Pfirrmann, M. Zanetti, and J. Hodler, "MR/CT image fusion of the spine after spondylodesis: a feasibility study", European Spine Journal, vol. 19, pp. 1771-1775, 2010.