Fusion of Medical Images by Empirical Wavelet Transform

G. Sai Sravanthi1, B. Venkateswarlu2, B. Sravan kumar3, N. Sai kiran4

Assistant Professor1, 2, 3, Department of ECE, QISCET

Assistant Professor4, Department of IT, QISCET

Article Info

Page Number: 309-317
Publication Issue: Vol. 69 No. 1 (2020)

Article History
Article Received: 12 September 2020
Revised: 16 October 2020
Accepted: 20 November 2020
Publication: 25 December 2020

Abstract

Image fusion combines the crucial elements from several source images into a single image. Image fusion is widely useful in remote sensing, target tracking, medical imaging, and satellite imaging. Using the simple average fusion rule, this work demonstrates how the Empirical Wavelet Transform performs when applied to the fusion of multi-focus images. The proposed approach is tested on common datasets used for fusing images with various foci. The main function of the Empirical Wavelet Transform is to decompose a signal into a multi-resolution analysis using an adaptive method. Several measures are used to estimate how effective the proposed system is: visual perception is used to compare its performance, and conventional quality metrics including entropy, peak signal-to-noise ratio, and root mean squared error are also evaluated. According to the examination of the experimental data, the suggested approach, which is based on the Empirical Wavelet Transform (EWT), performs better than the established methods.

According to the suggested criteria, the fused image should have a higher entropy than the component images, because lower entropy indicates a less effective fusion. In this procedure, MRI and CT scans are taken into account.

Keywords: Empirical Wavelet Transform, Multi-Focus Images, Root Mean Square Error, Entropy, Peak Signal-to-Noise Ratio, Multi-Resolution Analysis, Magnetic Resonance Imaging (MRI), Computed Tomography (CT).

Ⅰ. INTRODUCTION

The process of combining several images from various sensors to extract features into a single image is known as image fusion. The combined output image includes all pertinent data from the input images. The objective of multi-focus image fusion is to integrate numerous images with different focal points into a single image, and various methods are available in the literature to do this. Image fusion is essential for object detection and recognition, military and civilian surveillance, satellite imaging for remote sensing, navigation guidance, and other applications. Image fusion algorithms can be categorized into three levels: pixel, feature, and decision. Image fusion techniques can also be separated into two categories, the spatial domain and the transform domain, with the Discrete Wavelet Transform and Fast Intensity Hue Saturation (FIHS) being commonly used examples.

Image fusion has recently been employed in a variety of tasks including remote sensing, surveillance, medical diagnosis, and photography. Multi-focus image fusion and multi-exposure image fusion are two of the most important fusion tasks in photography. The central concept of image fusion is the collection of the critical and necessary information from the input images into a single image that faithfully contains all of the information of the inputs. Research on image fusion has been documented in more than 30 works. In general, there are two components to image fusion: subjective evaluation standards and fusion methods. In visual sensor networks (VSNs), the sensors are cameras that record images and video sequences. In many VSN applications, a single camera cannot provide a depiction that includes all of the details of the scene. This is caused by the limited depth of field of camera lenses.

Thus, only the object within the camera's depth of field is sharp and in focus, while other areas of the image are blurry. A VSN therefore uses several cameras to take pictures with various depths of field. Because cameras generate a considerable amount of data compared to other sensors, such as pressure and temperature sensors, and because there are restrictions on bandwidth, energy consumption, and processing speed, fusing the original input images is crucial to reduce the amount of transmitted data. Recent research on multi-focus image fusion can be divided into two groups: spatial-domain and transform-domain methods.

For image fusion, the discrete cosine transform (DCT) and multi-scale transforms (MST) are often utilized. Recent years have seen Deep Learning (DL) flourish in a number of image processing and computer vision applications.

Ⅱ. RESEARCH ELABORATION

The purpose of our research is to provide a quick and effective diagnosis of diseases by giving the doctor an efficient fused scan of the patient's organs. The medical images considered for this process are MRI and CT scan images; these two modalities differ in the information they provide. In real medical diagnosis, it takes medical experts a large amount of time and manpower to perform this task, and it is costly. A previous approach to image fusion is based on the Discrete Wavelet Transform, which works on frequency coefficients; however, for any discrete process, the major issues to deal with are the noise and distortions that occur during processing. To overcome these issues, we implement image fusion by the Empirical Wavelet Transform, using the average fusion rule, which returns the average of the resolution coefficients.

ⅰ. ALGORITHM

Algorithm to read CT Image:

Step 1: Start the program.

Step 2: Create the path to read the CT image from the location stored in the device so that the image can be read easily.

Step 3: Read the CT image from the stored location in the device using the path provided before.

Step 4: Check the image for RGB components and if it possesses them then convert the image to gray level and store the image.

Step 5: Convert the greyscale image into an RGB image, compress the image to 256 bits, and store the image.

Algorithm to read MRI Image

Step 1: Create the path to read the MRI image from the location stored in the device so that the reading of the image can be done easily.

Step 2: Read the MRI image from a stored location in the device using the path provided before.

Step 3: Check the image for RGB components and, if it possesses them, convert the image to gray level and store the image.

Step 4: Convert the greyscale image into an RGB image, compress the image to 256 bits, and store the image.
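Both reading algorithms follow the same steps. As a rough illustration only (the paper itself used MATLAB tools), a minimal Python/OpenCV sketch of these steps might look as follows; the file names are placeholders, not the authors' actual data paths.

```python
# Minimal sketch of the CT/MRI reading steps above (Python/OpenCV,
# standing in for the MATLAB tooling used in the paper).
import cv2

def read_scan(path, size=(256, 256)):
    """Read a scan, convert it to grey level if it has RGB components, and resize."""
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
    if img is None:
        raise FileNotFoundError(path)
    # Check for RGB components and convert to grey level.
    if img.ndim == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Bring both modalities to a common 256x256, 8-bit representation.
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    return img

ct_img = read_scan("ct.png")    # hypothetical CT image path
mri_img = read_scan("mri.png")  # hypothetical MRI image path
```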

Algorithm to perform Empirical Wavelet Transform:

Step 1: Create the subplots to store the CT and MRI images.

Step 2: Apply the wavelet filter to obtain the low-pass and high-pass components present in the image for the application of the Empirical Wavelet Transform.

Step 3: Apply the EWT to the obtained coefficients and apply the average fusion rule to them.

Step 4: Apply the Inverse Empirical Wavelet Transform to the resultant coefficients.

Step 5: Subplot the resultant image and label it as the fused image. Apply the entropy metric to the MRI and fused images.

Step 6: Calculate the PSNR and MSE for the fused image and write the resultant output images and values.

Step 7: End.
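As a sketch of Steps 2-6 only, the code below assumes hypothetical ewt2d / iewt2d helpers that return and invert a list of 2-D EWT sub-band (mode) coefficients; the paper's actual MATLAB implementation is not shown. Only the average fusion rule and the overall flow are demonstrated.

```python
# Sketch of the EWT fusion steps, assuming hypothetical ewt2d / iewt2d
# helpers (a real implementation would follow the empirical wavelet
# construction of Gilles et al.; it is not reproduced here).
import numpy as np

def fuse_average(coeffs_a, coeffs_b):
    """Average fusion rule: average corresponding sub-band coefficients."""
    return [(ca + cb) / 2.0 for ca, cb in zip(coeffs_a, coeffs_b)]

def ewt_fusion(ct_img, mri_img, ewt2d, iewt2d, n_modes=4):
    a = ct_img.astype(np.float64)
    b = mri_img.astype(np.float64)
    # Decompose each modality into multi-resolution modes (Step 3).
    coeffs_ct = ewt2d(a, n_modes)
    coeffs_mri = ewt2d(b, n_modes)
    # Apply the average fusion rule to the coefficients (Step 3).
    fused_coeffs = fuse_average(coeffs_ct, coeffs_mri)
    # Reconstruct the fused image with the inverse transform (Step 4).
    return iewt2d(fused_coeffs)
```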

ⅱ. FLOW CHART REPRESENTATION OF EWT:


Figure 1: Flow chart of EWT

ⅲ. BLOCK DIAGRAM FOR EWT

Figure 2: Block diagram of EWT

In this paper, we read the medical images using MATLAB tools and perform the Empirical Wavelet Transform on both of them. Both images should be in the same format, such as JPEG or PNG. On application of the EWT, the resolution coefficients present in the image are split into different modes because of the multi-resolution analysis in EWT, as shown in the diagram. To these coefficients of different modes we apply the average fusion rule, which averages the coefficients; we then perform the IEWT on the result, which leads to the fused image.

ⅳ. METHODOLOGY

With the aid of a technique called the Empirical Wavelet Transform, we suggest a new approach to fusing images in this work. In this process, we read medical images such as MRI and CT images. We consider these modalities because MRI images show only the non-bony parts of the body, such as tissues and cells, whereas CT images show the bony parts; in other words, a CT scan represents the outline of the organs, which helps in finding tumours, swelling, and other irregularities in the body. The main advantage of the EWT over the existing processes is that it works on resolution coefficients: it possesses a feature known as multi-resolution analysis that partitions the coefficients on the basis of resolution.

The images that are read should be checked for RGB components; if any are present, the image should be converted to gray level, because RGB components can cause information loss, the colour channels have different spectra, and the analysis of the image becomes more complex. For an efficient fusion, the image should therefore be converted from RGB to grey level. After conversion, we apply the EWT to these images and apply the average rule to them.

We then apply the Inverse EWT to the fused coefficients, and the resultant image after this inversion is the fused image. This image is the final result of the image fusion performed using the Empirical Wavelet Transform.

Ⅲ. RESULTS

The initial step in any image fusion is reading the images. Since we are using the MATLAB tool, we need to create a path for these images. The first image we read is the CT image, as displayed in the figure.


Figure 3: Selecting CT Image

The second image considered is an MRI image; as with the previous one, we read the image from the stored location using the path provided. The size of the image should not be too large.

Figure 4: Selecting MRI Image

If the two images show different parts of the body, the fusion cannot be performed. This is how the fused image, the final product, is obtained.

Figure 5: Multimodal Image Fusion

The metrics calculated are the entropy for the MRI image, the CT image, and the fused image; in addition, we also computed the PSNR and MSE values for the fused image. For a good fusion result, the MSE should be low and the entropy should be high.

Figure 6: Metrics of EWT
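For reference, a minimal NumPy sketch of the reported metrics (entropy, MSE, PSNR) on 8-bit greyscale images follows; the fused, mri_img and ct_img arrays are assumed to come from the earlier sketches.

```python
# Sketch of the quality metrics discussed above, computed with NumPy
# on 8-bit greyscale images.
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image, in bits per pixel."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mse(a, b):
    """Mean squared error between two images of the same size."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

# A well-fused image should show higher entropy than its inputs and a
# low MSE / high PSNR against the reference modality, e.g.:
# print(entropy(fused), psnr(mri_img, fused), mse(mri_img, fused))
```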


Ⅳ. CONCLUSION

Herein, we suggested a practical method for building wavelets adapted to represent the signals being processed. We first demonstrated how to create a wavelet frame set that is as compact as possible, before defining the empirical wavelet transform and its inverse. The main goal is to diagnose diseases by completely examining the organs and tissues, and to provide the diagnosis quickly so that immediate treatment can be given to patients. This study proposes a new fusion technique based on the Empirical Wavelet Transform. Using the average fusion rule, we performed the fusion of the images by the Empirical Wavelet Transform. Standard multi-focus image datasets are used in the experiments, and well-known quality measures are used to assess the efficacy of the fusion process. The Empirical Wavelet Transform performs the fusion on the basis of resolution coefficients, so the noise and distortion in the fused images are lower than with other fusion methods. The experimental results also demonstrate that the EWT-based fusion method outperforms the established methods.

Ⅴ. ACKNOWLEDGMENT

The Department of Electronics and Communication Engineering and QIS management are acknowledged, and the authors offer their gratitude.

Ⅵ. REFERENCES

[1] Cyril Prasanna Raj. P, Venkateshappa, Sudha B S.: High-speed low power DWT architecture for image fusion. International Conference on Electrical, Electronics, and Communications-ICEEC. pp. 156-158 (2014).

[2] Heba Khudhair Abass.: A Study of digital image fusion techniques based on contrast and correlation measures, Ph.D. Thesis, Al-Mustansiriyah University (2013).

[3] Rani, K., Sharma, R.: Study of Different Image fusion Algorithm. International Journal of Emerging Technology and Advanced Engineering, 3(5) pp. 288-291 (2013).

[4] Strait, M., Rahmani, S., Merkurev, D.: Evaluation of pan-sharpening methods. UCLA Department of Mathematics (2008).

[5] Rajvi Patel, Manali Rajput, Pramit Parekh.: Comparative Study on Multi-focus Image Fusion Techniques in Dynamic Scene. International Journal of Computer Applications (2015).

[6] Rahmani, S., Strait, M., Merkurjev, D., Moeller, M., Wittman, T.: An adaptive IHS pan-sharpening method. IEEE Geoscience and Remote Sensing Letters, 7(4) pp. 746-750 (2010).

[7] Huang, S. G.: Wavelet for image fusion. Graduate Institute of Communication Engineering and Department of Electrical Engineering, National Taiwan University (2010).

[8] Naidu, V. P. S.: Image fusion technique using multi-resolution singular value decomposition. Defence Science Journal, 61(5) pp. 479-484 (2011).

[10] Gilles, J., Tran, G., Osher, S.: 2D empirical transforms: wavelets, ridgelets, and curvelets revisited. SIAM Journal on Imaging Sciences, 7(1) pp. 157-186 (2014).

[11] Sahu, D. K., Parsai, M. P.: Different image fusion techniques: a critical review. International Journal of Modern Engineering Research (IJMER), 2(5) pp. 4298-4301 (2012).

[12] Vadher Jagruti.: Implementation of Discrete Wavelet Transform Based Image Fusion. IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), 9(2) pp. 107-109.

[13] Ramprakash, P., Sakthivadivel, M., Krishnaraj, N., Ramprasath, J.: Host-based Intrusion Detection System using Sequence of System Calls. International Journal of Engineering and Management Research, Vandana Publications, 4(2) pp. 241-247 (2014).

[14] Krishnaraj, N., Smys, S.: A multihoming ACO-MDV routing for maximum power efficiency in an IoT environment. Wireless Personal Communications, 109(1) pp. 243-256 (2019).

[15] Krishnaraj, N., Bhuvanesh Kumar, R., Rajeshwar, D., Sanjay Kumar, T.: Implementation of energy aware modified distance vector routing protocol for energy efficiency in wireless sensor networks. 2020 International Conference on Inventive Computation Technologies (ICICT), pp. 201-204 (2020).

[16] Ibrahim, S. Jafar Ali, Thangamani, M.: Enhanced singular value decomposition for prediction of drugs and diseases with hepatocellular carcinoma based on multi-source bat algorithm based random walk. Measurement, 141 pp. 176-183 (2019). https://doi.org/10.1016/j.measurement.2019.02.056

[17] Ibrahim, S. Jafar Ali, Rajasekar, S., Varsha, Karunakaran, M., Kasirajan, K., Chakravarthy, Kalyan N. S., Kumar, V., Kaur, K. J.: Recent advances in performance and effect of Zr doping with ZnO thin film sensor in ammonia vapour sensing. Global NEST Journal, 23(4) pp. 526-531 (2021). https://doi.org/10.30955/gnj.004020, https://journal.gnest.org/publication/gnest_04020

[18] Kalyan Chakravarthy, N. S., Karthikeyan, B., Alhaf Malik, K., Bujji Babbu, D., Nithya, K., Jafar Ali Ibrahim, S.: Survey of Cooperative Routing Algorithms in Wireless Sensor Networks. Journal of Annals of the Romanian Society for Cell Biology, pp. 5316-5320 (2021).

[19] Rajmohan, G., Chinnappan, C. V., John William, A. D., Chandrakrishan Balakrishnan, S., Anand Muthu, B., Manogaran, G.: Revamping land coverage analysis using aerial satellite image mapping. Transactions on Emerging Telecommunications Technologies, 32:e3927 (2021). https://doi.org/10.1002/ett.3927

[20] Vignesh, C. C., Sivaparthipan, C. B., Daniel, J. A., et al.: Adjacent Node based Energetic Association Factor Routing Protocol in Wireless Sensor Networks. Wireless Personal Communications, 119 pp. 3255-3270 (2021). https://doi.org/10.1007/s11277-021-08397-0

[21] Chandru Vignesh, C., Karthik, S.: Predicting the position of adjacent nodes with QoS in mobile ad hoc networks. Journal of Multimedia Tools and Applications, Springer US, 79 pp. 8445-8457 (2020).
