Proceedings of the IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI 2012) Hong Kong and Shenzhen, China, 2-7 Jan 2012

Automatic Segmentation of Brain Tumors in Magnetic Resonance Images


Neda Behzadfar1,2, Hamid Soltanian-Zadeh1,3

Abstract-Segmentation of tumors in magnetic resonance images (MRI) is an important task but is quite time consuming when performed manually by experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue in different patients, and in many cases, similarity between tumor and normal tissues. This paper presents an automatic method for segmentation of brain tumors in MRI.

We use images of patients with glioblastoma multiforme tumors.

After pre-processing and removal of the regions that do not have useful information (e.g., eyes and scalp), we create a projection image for determining the primary location of the tumor. This image provides an overall view of the tumor. Then, we grow the primary region to segment the entire tumor. This method is automatic and independent of the operator. It segments low contrast tumors without requiring their exact tissue boundaries. The segmentation results obtained by the proposed approach are compared with those of an expert radiologist, showing excellent correlation between them (R² = 0.97).

I. INTRODUCTION

Segmentation of brain tumors in magnetic resonance images (MRI) is important in medical diagnosis because it provides information associated with anatomical structures as well as potential abnormal tissues necessary for treatment planning and follow-up. Automatic brain tumor segmentation from MRI is a difficult task that involves pathology, MRI physics, the radiologist's perception, and image analysis based on intensity and shape. Brain tumors may be of any size, may have a variety of shapes, may appear at any location, and may appear with different image intensities.

Some tumors also deform other structures and appear with edema that changes intensity properties of the nearby region.

For human experts, manual segmentation is a difficult and time consuming task. This makes an automated brain tumor segmentation method desirable. There are many applications of an automated method. For example, it can be used for surgical planning, treatment planning, and treatment evaluation [1].

The challenges associated with automatic brain tumor segmentation have given rise to different approaches.

Existing methods leave significant room for increased automation, applicability, and accuracy. The aim of this paper is to contribute to this domain by proposing an original method, which is general enough to segment a large class of tumor types.

1 Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 14395-515, Iran.

2 Pooyandegan Rah Saadat company, Tehran 16756-965, Iran.

3 Image Analysis Laboratory, Radiology Department, Henry Ford Health System, Detroit, MI 48202, USA.



Clark et al. [2] and Fletcher-Heath et al. [3] proposed automated segmentation methods based on artificial intelligence techniques. These methods do not rely on intensity enhancements provided by contrast agents. A particular limitation of these methods is that they require a training phase prior to the segmentation of a set of images.

Other methods are based on statistical pattern recognition techniques, like the method proposed by Kaus et al. [4]. This method combines information from a registered atlas template and user input to supervise training of a classifier, demonstrating the strength of combining voxel intensity with geometric brain atlas information. It has been validated for meningiomas and low-grade gliomas. Cuadra et al. [5] presented high-dimensional warping to study deformation of brain tissue due to tumor growth. This technique relies on prior definition of the tumor boundary, whereas the method we propose in this paper focuses on automatically finding tumor regions.

We present a new approach for automatic segmentation of tumors from multi-channel MRI. Most methods so far have been applicable only to enhancing, homogeneous tumors.

Furthermore, they require user-guidance to train a supervised classifier or to obtain a rough outline of the region of interest. Our method segments inhomogeneous tumors and does not need user-guidance.

The rest of the paper is organized as follows. In Section 2, the method and materials are explained. In Section 3, the results of the experiments and the analysis are presented. Section 4 discusses the results and presents conclusions and future work.

II. MATERIALS AND METHODS

12 patients (9 male and 3 female) with Gd-enhanced areas in their brain tumors were chosen as the data for the study.

The ages ranged from 36 to 66 years with an average of 53 (Henry Ford Health System, Detroit, MI, USA). These images were acquired using a 3 Tesla GE system and included multi-parametric images with an image matrix of 512 × 512: T1-weighted (TR = 3000 ms, TE = 6 ms, TI = 1238 ms), post-contrast T1-weighted (TR = 3000 ms, TE = 6 ms, TI = 1238 ms), T2-weighted (TR = 3000 ms, TE = 103 ms), and FLAIR (TR = 10000 ms, TE = 120 ms, TI = 2250 ms). All of the images are spatially registered by FSL to a standard T1-weighted image.


The overall image intensity varies from scan to scan due to variations in the physical state of the scanner hardware, interactions between the detector and the patient's body, and the pulse sequence parameters. In addition, for each MR image, the analog signal attenuation and digital gain settings are automatically adjusted to optimally fill the dynamic range of the analog-to-digital convertor. Therefore, the MR image intensity is globally scaled as a function of these settings as well as the state of the hardware components.

Consequently, it is necessary to compensate for variations in the global intensity of the images. We consider one of the patients as the reference and select the white matter (WM) region in one slice of the post-contrast T1-weighted image.

Then, we calculate the average intensity of the pixels in this region. Similarly, we calculate the average intensity of the pixels in the white matter region of the other patients' images. Next, we obtain the gain for each patient by dividing that patient's average by that of the reference image. Finally, to obtain standardized images, the intensities of the acquired images are divided by the corresponding gains.
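As a rough sketch of this normalization step (assuming each image is a NumPy array and that a white-matter mask has been delineated on the reference and on each patient's post-contrast T1-weighted slice; the function and variable names are ours, not the paper's):

```python
import numpy as np

def wm_mean(image, wm_mask):
    """Mean intensity inside the white-matter mask of one slice."""
    return image[wm_mask > 0].mean()

def normalize_to_reference(image, wm_mask, reference_wm_mean):
    """Divide the image by its WM gain relative to the reference patient."""
    gain = wm_mean(image, wm_mask) / reference_wm_mean
    return image / gain

# Hypothetical usage (arrays are placeholders):
# ref_mean = wm_mean(ref_post_t1_slice, ref_wm_mask)
# post_t1_std = normalize_to_reference(patient_post_t1, patient_wm_mask, ref_mean)
```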

In the next step, we remove the skull from the T2-weighted images and extract a fine brain mask. Numerous brain extraction algorithms are publicly available, e.g., BET, SPM, and BSE. BET, used in several studies in the past [6], was developed by Smith [7]. It is the most popular and is based on a brain surface model. Initially, a rough brain mask is created using two thresholds estimated from the image histogram.

Then, a tessellated sphere centered at the approximate center of gravity of the brain is expanded towards the brain edges.

Relative to the other methods, BET tends to produce smooth edges and often includes additional non-brain tissues [8-12] (Figure 1).

Figure 1. Removal of skull from the image using BET. Note that some non-brain tissues are still present in the result.

Therefore, we present a brain extraction algorithm that overcomes the above problem and works efficiently for T2-weighted images. First, we process the image using a low-pass filter (LPF). The LPF is applied to subdue or remove small details that appear in the background while enhancing large features. The original T2-weighted MRI, f(x,y), is subjected to an LPF of size 3×3 pixels. The LPF produces a blurred or smoothed image. Then, the image is further processed to generate a binary image using Ridler's method [13]. In this method, the initialization is done by considering the pixels at the corners of the image as the background pixels and the remainder as the object pixels. The threshold is obtained as:


$$\mu_b^t = \frac{1}{N_b} \sum_{(i,j)\,\in\,\text{background}} f(i,j) \qquad (1)$$

$$\mu_o^t = \frac{1}{N_o} \sum_{(i,j)\,\in\,\text{objects}} f(i,j) \qquad (2)$$

$$T^{t+1} = \frac{\mu_b^t + \mu_o^t}{2} \qquad (3)$$

Here, $\mu_b^t$ and $\mu_o^t$ are the means of the background and the object pixels at stage $t$, respectively, and $N_b$ and $N_o$ are the numbers of background and object pixels. A binary image is obtained by applying the threshold condition given in Ridler's method; the background removed from the image in this way is shown in Figure 2.
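A minimal sketch of the smoothing and iterative thresholding described above; the 3×3 mean filter for the LPF, the stopping tolerance, and the iteration cap are our assumptions, since the paper does not specify them:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ridler_binarize(image, tol=0.5, max_iter=100):
    """3x3 low-pass filtering followed by Ridler's iterative (ISODATA) thresholding."""
    smoothed = uniform_filter(image.astype(float), size=3)   # 3x3 mean filter as the LPF
    # Initialization: corner pixels are background, the remainder are object pixels.
    corner_mask = np.zeros(smoothed.shape, dtype=bool)
    corner_mask[[0, 0, -1, -1], [0, -1, 0, -1]] = True
    mu_b, mu_o = smoothed[corner_mask].mean(), smoothed[~corner_mask].mean()
    threshold = (mu_b + mu_o) / 2.0                          # Eq. (3)
    for _ in range(max_iter):
        background, objects = smoothed <= threshold, smoothed > threshold
        mu_b, mu_o = smoothed[background].mean(), smoothed[objects].mean()  # Eqs. (1)-(2)
        new_threshold = (mu_b + mu_o) / 2.0                  # Eq. (3)
        if abs(new_threshold - threshold) < tol:
            break
        threshold = new_threshold
    return smoothed > threshold                              # binary (object) image
```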

Figure 2. Removal of background from the image using Ridler's method. a) Original image. b) Background image.

In this stage, the morphological operations (erosion, connected component analysis, and dilation) are performed on the binary image to segment the fine brain portion. For the morphological operations, we define a structuring element (STEL) as shown in Figure 3 [14]. The STEL is a square element of d × d pixels. We need curved corners in the STEL to treat the curved boundaries of the brain; therefore, three pixel positions in each corner of the STEL are disabled by setting them to '0'.
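The structuring element of Figure 3 and the clean-up it drives could be sketched as follows; the 9×9 size follows the figure, and keeping only the largest connected component is our reading of the "connected component analysis" step:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation, label

def make_stel(d=9):
    """d x d square structuring element with three pixels disabled at each corner (Figure 3)."""
    stel = np.ones((d, d), dtype=bool)
    for r in (0, d - 1):
        for c in (0, d - 1):
            rr = 1 if r == 0 else d - 2
            cc = 1 if c == 0 else d - 2
            stel[r, c] = stel[r, cc] = stel[rr, c] = False  # corner pixel and its two neighbours
    return stel

def clean_mask(binary_image, stel):
    """Erosion, keep the largest connected component, then dilation with the same STEL."""
    eroded = binary_erosion(binary_image, structure=stel)
    labels, n = label(eroded)
    if n == 0:
        return eroded
    sizes = np.bincount(labels.ravel())[1:]        # component sizes, skipping the background label
    largest = labels == (np.argmax(sizes) + 1)
    return binary_dilation(largest, structure=stel)
```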

Eyes have high intensities in the inferior slices and cause problems in tumor detection. To remove them, we first extract histogram features (mean and standard deviation) [15] from the FLAIR, T1, and post-contrast T1 images. Then, we obtain an appropriate threshold from these features and remove regions with intensities larger than the threshold to create a binary image.
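The paper does not give the exact form of the threshold derived from the histogram features; a plausible sketch assumes mean + k·std with a tunable constant k (our assumption):

```python
import numpy as np

def histogram_threshold(image, brain_mask, k=2.0):
    """Threshold from histogram features (mean + k * std) computed inside the brain mask."""
    values = image[brain_mask > 0]
    return values.mean() + k * values.std()

# Hypothetical use for the eye-removal step: mark voxels brighter than the
# threshold in FLAIR, T1, and post-contrast T1, and drop them from the mask.
# bright = ((flair > histogram_threshold(flair, mask)) &
#           (t1 > histogram_threshold(t1, mask)) &
#           (post_t1 > histogram_threshold(post_t1, mask)))
# mask = mask & ~bright
```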

CSF and ventricles may also generate problems in tumor detection in some slices. They have larger intensities in the T2-weighted images and lower intensities in the FLAIR images than the other regions. To remove them, we omit the overlap of the regions that have larger intensities than a threshold in the T2-weighted images and the regions that have lower intensities than another threshold in the FLAIR images. In this stage, morphological operations (erosion, connected component analysis, and dilation) are applied on the binary images to create the final masks.
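Similarly, a hedged sketch of the CSF/ventricle step, removing the overlap of T2-bright and FLAIR-dark regions (the k constants are illustrative, not from the paper):

```python
import numpy as np

def remove_csf_and_ventricles(mask, t2, flair, k_t2=1.5, k_flair=0.5):
    """Drop voxels that are bright in T2 AND dark in FLAIR (CSF/ventricle-like)."""
    t2_vals, flair_vals = t2[mask > 0], flair[mask > 0]
    t2_high = t2 > t2_vals.mean() + k_t2 * t2_vals.std()                 # bright in T2
    flair_low = flair < flair_vals.mean() - k_flair * flair_vals.std()   # dark in FLAIR
    return mask & ~(t2_high & flair_low)
```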


0 0 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 0
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
0 1 1 1 1 1 1 1 0
0 0 1 1 1 1 1 0 0

Figure 3. Structuring element (STEL) used for morphological operations.

Using the post-contrast T1 images, we find the slices that contain Gd-enhanced areas of tumors for volume analysis. To this end, we subtract the post-contrast T1 image from the corresponding pre-contrast T1 image pixel by pixel. Then, we obtain a binary image using histogram features extracted from the Gd-enhanced image. In this step, we sum the Gd-enhanced image slices pixel by pixel and create a projection image. An initial tumor region is obtained by removing undesired regions through thresholding and morphological operations. We determine a central slice (i.e., the slice with the minimum effect of partial volume averaging [16]) by multiplying this region by the Gd-enhanced image and then selecting the slice with the maximum tumor area.
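A sketch of the projection-image and central-slice steps, assuming the slices are stacked as a (slices × rows × cols) NumPy volume; we take the post-minus-pre difference clipped at zero so the enhancing tumor appears bright, which is our reading of the subtraction step, and the helper names are ours:

```python
import numpy as np

def gd_enhanced(post_t1, pre_t1):
    """Pixel-by-pixel difference of the post- and pre-contrast T1 images (clipped at zero)."""
    return np.clip(post_t1.astype(float) - pre_t1.astype(float), 0, None)

def projection_image(gd_volume):
    """Sum the Gd-enhanced slices pixel by pixel into a single projection image."""
    return gd_volume.sum(axis=0)

def central_slice(gd_volume, projection_tumor_region):
    """Index of the slice with the largest tumor area after masking with the projection region."""
    areas = [((s * projection_tumor_region) > 0).sum() for s in gd_volume]
    return int(np.argmax(areas))
```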

The tumor region defined in the projection image points to the approximate location of the tumor in the different slices. To reduce the tumor search neighborhood, we divide the projection image and the Gd-enhanced images into four sections and remove the sections in which the projection image shows no tumor regions. After selecting the sections that contain tumor, we obtain an appropriate threshold using the histogram features (mean and standard deviation) and select the regions that have intensities higher than the threshold.

Then, by morphological operations (erosion, connected component analysis and dilation), primary regions of the tumor in different slices are formed.
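The quadrant restriction can be sketched as a mask that keeps only the image quarters in which the projection image contains tumor (function name ours):

```python
import numpy as np

def quadrant_mask(projection_tumor_region):
    """Keep only the image quadrants in which the projection image shows tumor."""
    rows, cols = projection_tumor_region.shape
    keep = np.zeros_like(projection_tumor_region, dtype=bool)
    for r0, r1 in [(0, rows // 2), (rows // 2, rows)]:
        for c0, c1 in [(0, cols // 2), (cols // 2, cols)]:
            if projection_tumor_region[r0:r1, c0:c1].any():
                keep[r0:r1, c0:c1] = True
    return keep
```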

Using the primary regions, the entire tumor is found by region growing. To this end, we analyze pixels that are close to the tumor regions in each stage. If the following two conditions hold, the test pixel is added to the tumor region.

i) $f(x,y) - \mu_2 < 2 \times \mathrm{std}_2$  (4)

ii) $f(x,y) - \mu_1 > 2 \times \mathrm{std}_1$  (5)

Here, $f(x,y)$ is the test pixel intensity, $\mu_2$ and $\mathrm{std}_2$ are the mean and the standard deviation of the tumor region, respectively, and $\mu_1$ and $\mathrm{std}_1$ are the mean and standard deviation of the non-tumor region. We repeat the region growing until no new pixels are added.
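A sketch of the region growing driven by conditions (4) and (5); using binary dilation to propose neighboring candidate pixels is our assumption, and the conditions are applied literally as written (absolute differences may be intended):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def grow_tumor(image, seed_mask, brain_mask, max_iter=200):
    """Grow the primary tumor region using conditions (4) and (5)."""
    tumor = seed_mask.copy()
    for _ in range(max_iter):
        # Candidate pixels: neighbors of the current tumor region, still inside the brain.
        candidates = binary_dilation(tumor) & brain_mask & ~tumor
        mu_2, std_2 = image[tumor].mean(), image[tumor].std()          # tumor statistics
        non_tumor = brain_mask & ~tumor
        mu_1, std_1 = image[non_tumor].mean(), image[non_tumor].std()  # non-tumor statistics
        accept = (candidates &
                  ((image - mu_2) < 2 * std_2) &   # condition (4), as written
                  ((image - mu_1) > 2 * std_1))    # condition (5), as written
        if not accept.any():
            break
        tumor = tumor | accept
    return tumor
```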


III. RESULTS

The proposed pre-processing approach for the removal of the skull and other non-brain tissues is illustrated in Figure 4. Note that the skull and non-brain tissues are correctly removed. Also, note that the tumor region is bright in the post-contrast T1-weighted image and is dark in the pre-contrast T1-weighted image. Thus, clarity of the tumor is enhanced by subtracting the post-contrast image from the pre-contrast image.

Figure 4. MR images after preprocessing and removal of the skull and non-brain tissues. a) Post-contrast T1-weighted image. b) Pre-contrast T1-weighted image. c) FLAIR image. d) T2-weighted image.


Figure 5. a) Projection image. b) Tumor extracted from the projection image. c) Central slice.

The projection image created to determine the primary location of the tumor is shown in Figure 5, using the MRI sequences shown in Figure 4. This image provides an overall view of the tumor. The central slice is extracted by multiplying the primary region defined in the projection image by the Gd-enhanced image and selecting the slice that has the maximum tumor area (see Figure 5.c). Next, the pixels that are close to the tumor region are searched to grow the primary tumor region and segment the entire tumor (Figure 6).

IV. CONCLUSION

This paper presents a new approach for automatic segmentation of tumors from MR images. The method can process inhomogeneous tumors and does not need user interaction at any stage. Furthermore, it does not require initial parameters, and the tumor pixels are extracted without prior knowledge of the tumor location.

The segmentation results obtained by the proposed approach are compared with those of an expert radiologist.

The correlation of the results is shown in Figure 7. The high correlation between the two results and the near-unity slope of the regression line suggest that the proposed method is capable of properly segmenting the entire tumor.
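For reference, the reported slope and R² could be computed from paired volume measurements with an ordinary least-squares fit along these lines (a sketch; the volume arrays are placeholders, not the study data):

```python
import numpy as np

def regression_stats(manual_volumes, automatic_volumes):
    """Slope and R^2 of a least-squares line relating automatic to manual volumes."""
    x = np.asarray(manual_volumes, dtype=float)
    y = np.asarray(automatic_volumes, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    y_fit = slope * x + intercept
    ss_res = ((y - y_fit) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return slope, 1.0 - ss_res / ss_tot
```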


Figure 6. a) Gd-enhanced image. b) Primary region of the tumor selected using the projection image. c) Overall tumor obtained by region growing.

Figure 7. Correlation between the tumor volumes estimated from the proposed automatic segmentation method and those estimated from the manual segmentation by an expert (automatic versus manual results; R² = 0.968).

REFERENCES

[1] B. H. Menze, K. Van Leemput, D. Lashkari, M.-A. Weber, N. Ayache, P. Golland, "A generative model for brain tumor segmentation in multi-modal images," Med Image Comput Comput Assist Interv (MICCAI), pp. 151-159, 2010.
[2] M. C. Clark, L. O. Hall, D. B. Goldgof, R. Velthuizen, F. R. Murtagh, M. S. Silbiger, "Automatic tumor segmentation using knowledge-based techniques," IEEE Transactions on Medical Imaging, vol. 17, pp. 187-201, 1998.
[3] L. M. Fletcher-Heath, L. O. Hall, D. B. Goldgof, F. R. Murtagh, "Automatic segmentation of non-enhancing brain tumors in magnetic resonance images," Artificial Intelligence in Medicine, vol. 21, pp. 43-63, 2001.
[4] M. R. Kaus, S. K. Warfield, A. Nabavi, E. Chatzidakis, P. M. Black, F. A. Jolesz, R. Kikinis, "Segmentation of meningiomas and low grade gliomas in MRI," Lecture Notes in Computer Science (MICCAI), vol. 1679, Springer, pp. 1-10, 1999.
[5] M. B. Cuadra, J. Gomez, P. Hagmann, C. Pollo, J.-G. Villemure, B. M. Dawant, J.-Ph. Thiran, "Atlas-based segmentation of pathological brains using a model of tumor growth," Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, pp. 380-387, 2002.
[6] C. Fennema-Notestine, I. B. Ozyurt, C. P. Clark, S. Morris, A. Bischoff-Grethe, M. W. Bondi, T. L. Jernigan, B. Fischl, F. Segonne, D. W. Shattuck, R. M. Leahy, D. E. Rex, A. W. Toga, K. H. Zou, G. G. Brown, "Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location," Human Brain Mapping, vol. 27, pp. 99-113, 2006.
[7] S. M. Smith, "Fast robust automated brain extraction," Human Brain Mapping, vol. 17, pp. 143-155, 2002.
[8] D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, R. M. Leahy, "Magnetic resonance image tissue classification using a partial volume model," NeuroImage, vol. 13, pp. 856-876, 2001.
[9] A. H. Zhuang, D. J. Valentino, A. W. Toga, "Skull-stripping magnetic resonance brain images using a model-based level set," NeuroImage, vol. 32, pp. 79-92, 2006.
[10] C. Fennema-Notestine, I. B. Ozyurt, C. P. Clark, S. Morris, A. Bischoff-Grethe, M. W. Bondi, T. L. Jernigan, B. Fischl, F. Segonne, D. W. Shattuck, R. M. Leahy, D. E. Rex, A. W. Toga, K. H. Zou, G. G. Brown, "Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: effects of diagnosis, bias correction, and slice location," Human Brain Mapping, vol. 27, pp. 99-113, 2006.
[11] S. W. Hartley, A. I. Scher, E. S. C. Korf, L. R. White, L. J. Launer, "Analysis and validation of automated skull stripping tools: a validation study based on 296 MR images from the Honolulu Asia aging study," NeuroImage, vol. 30, pp. 1179-1186, 2006.
[12] J. M. Lee, J. H. Kim, J. S. Kwon, S. I. Kim, "Evaluation of automated and semi-automated skull-stripping algorithms using similarity index and segmentation error," Computers in Biology and Medicine, vol. 33, pp. 495-507, 2003.
[13] M. Sonka, V. Hlavac, R. Boyle, Image Processing, Analysis, and Machine Vision, 2nd ed., Brooks/Cole Publishing Company, 1999.
[14] K. Somasundaram, T. Kalaiselvi, "Fully automatic brain extraction algorithm for axial T2-weighted magnetic resonance images," Computers in Biology and Medicine, vol. 40, pp. 811-822, 2010.
[15] O. Holub, S. T. Ferreira, "Quantitative histogram analysis of images," Computer Physics Communications, vol. 175, no. 9, pp. 620-623, Nov. 2006.
[16] H. Soltanian-Zadeh, J. P. Windham, A. E. Yagle, "Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging," IEEE Transactions on Nuclear Science, vol. 40, no. 4, pp. 1204-1212, Aug. 1993.
