journal homepage: www.intl.elsevierhealth.com/journals/cmpb

Lung cancer classification using neural networks for CT images

Jinsa Kuruvilla∗, K. Gunavathi

ECE Department, PSG College of Technology, Coimbatore 641004, India

Article info

Article history:
Received 14 March 2013
Received in revised form 12 September 2013
Accepted 8 October 2013

Keywords:
Computed tomography
Skewness
Kurtosis
Neural network

Abstract

Early detection of cancer is the most promising way to enhance a patient's chance for survival. This paper presents a computer aided classification method for computed tomography (CT) images of lungs developed using artificial neural networks. The entire lung is segmented from the CT images and the parameters are calculated from the segmented image. The statistical parameters mean, standard deviation, skewness, kurtosis, fifth central moment and sixth central moment are used for classification. The classification process is done by feed forward and feed forward back propagation neural networks. Compared to feed forward networks, the feed forward back propagation network gives better classification, and the parameter skewness gives the maximum classification accuracy. Among the thirteen already available training functions of the back propagation neural network, the Traingdx function gives the maximum classification accuracy of 91.1%. Two new training functions are proposed in this paper. The results show that the proposed training function 1 gives an accuracy of 93.3%, a specificity of 100%, a sensitivity of 91.4% and a mean square error of 0.998. The proposed training function 2 gives a classification accuracy of 93.3% and a minimum mean square error of 0.0942.

© 2013 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

Lung cancer is the leading cause of cancer deaths in both women and men. It is estimated that 1.2 million people are diagnosed with this disease every year (12.3% of the total number of cancers diagnosed), and about 1.1 million people die of this disease yearly (17.8% of total cancer deaths) [1]. The survival rate is higher if the cancer is detected at early stages, but early detection of lung cancer is not an easy task: about 80% of patients are diagnosed correctly only at the middle or advanced stage of the cancer [2]. A computer-aided diagnosis system is very helpful for radiologists in detecting and diagnosing abnormalities earlier and faster [3], and serves as a second opinion for radiologists before suggesting a biopsy test [4]. In the recent research literature, it is observed that principles of neural networks have been widely used for the detection of lung cancer in medical images [5].

∗ Corresponding author. Tel.: +91 9946104098.
E-mail addresses: jinsak@yahoo.com (J. Kuruvilla), kgunavathi2000@yahoo.com (K. Gunavathi).

For the classification of lung cancer, a few methods based on neural networks have been reported in the literature. Abdulla et al. [6] proposed a computer aided diagnosis based on artificial neural networks for the classification of lung cancer; the features used for classification are area, perimeter and shape, and the maximum classification accuracy obtained is 90%. Camarlinghi et al. [7] proposed a computer-aided detection algorithm for automatic lung nodule identification; the sensitivity obtained is 80% with 3 FP/scan. Al-Kadi et al. [8] proposed a classification method based on fractal texture features; the classification accuracy obtained is 83.3%. van Ginneken et al. [9] compared and combined six computer aided detection algorithms for pulmonary nodules. The combination of the six algorithms is able to detect 80% of all nodules at the expense


http://dx.doi.org/10.1016/j.cmpb.2013.10.011


Fig. 1 – CT image of lungs with cancer.

of only two false positive detections per scan, and 65% of all nodules with only 0.5 false positives. Cascio et al. [10] proposed a computer-aided detection (CAD) system for the selection of lung nodules in computed tomography (CT) images. The detection rate of the system is 88.5% with 6.6 FPs/CT on 15 CT scans; a reduction to 2.47 FPs/CT is achieved at 80% efficiency.

2. Segmentation

The images are collected from the database of the Lung Image Database Consortium (LIDC) and also from a reputed hospital. CT images of 155 patients are collected, including both men and women. The average age of the patients considered is 64.2 years (the youngest patient is 18 years old and the oldest 85 years). The low dose CT scan images are obtained at a kilovoltage peak of 120–140 kVp with a tube current-time product varying from 25 to 40 mAs depending upon the age of the patient. The reconstruction diameter varies from 260 to 400 mm with a slice thickness of 0.75–1.25 mm. A total of 110 nodules of size >3 mm are considered in this study, and the nodules are reviewed by two radiologists; the final response was the consensus decision of the two radiologists. The two radiologists reviewed the LIDC CT scans without considering the annotations available in the LIDC database. Both primary and secondary stage cancer nodules (classified by the two radiologists depending on the size of the nodules) covering four different kinds of nodules, namely well-circumscribed nodules, vascularized nodules, juxta-pleural nodules and pleural-tail nodules, are considered in the work [11]. Fig. 1 shows the CT image of lungs with a cancerous region.

The lung is segmented from the CT images using morphological operations. The grayscale image is first converted to a binary image: all pixels in the input image with an intensity greater than a threshold level are replaced with the value '1', and all pixels with an intensity less than the threshold level are replaced with the value '0'. The threshold level is calculated by the Otsu method [12], which chooses the threshold level to minimize the intraclass variance of the black and white pixels. Fig. 2 gives the grayscale to binary converted image.

Fig. 2 – Binary image.

Fig. 3 – Morphological opening output.

The morphological opening operation is performed on the binary image with a structuring element. A structuring element is a shape used to probe or interact with a given image, with the purpose of drawing conclusions on how this shape fits or misses the shapes in the image. The structuring element used is a 'periodic line': a flat structuring element containing 2 × (P + 1) members, where the value of 'P' specifies the size of the structuring element. The P value is selected as 2, and one structuring element member is located at the origin. Fig. 3 shows the output after the morphological operation. The image is then inverted and a clear border operation is performed. The clear border operation suppresses structures that are lighter than their surroundings and that are connected to the border of the image. The segmentation method uses only morphological operations, and an average of 98% of images is segmented correctly. The segmented images are independently reviewed by two radiologists. The main advantage of morphological operations is their speed and simplicity of implementation. Fig. 4 shows the final segmented output.
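For illustration, a minimal Python sketch of this segmentation pipeline is given below, assuming scikit-image is available. The periodic-line structuring element has no direct scikit-image equivalent, so a short line-shaped footprint of the stated size is used as a stand-in, and the image path in the usage note is hypothetical.

```python
import numpy as np
from skimage import filters, morphology, segmentation

def segment_lungs(ct_slice):
    """Segment the lung fields from a grayscale CT slice using
    Otsu thresholding and morphological operations (Section 2)."""
    # Otsu threshold: chosen to minimize the intraclass variance
    # of the black and white pixels.
    level = filters.threshold_otsu(ct_slice)
    binary = ct_slice > level

    # Morphological opening with a flat structuring element.
    # Assumption: a 1x6 line footprint stands in for MATLAB's
    # periodic-line strel with P = 2 (2 x (P + 1) members).
    footprint = np.ones((1, 6), dtype=bool)
    opened = morphology.opening(binary, footprint)

    # Invert so the darker lung fields become foreground, then
    # suppress structures connected to the image border.
    return segmentation.clear_border(~opened)

# Usage with a hypothetical slice file:
# from skimage import io
# mask = segment_lungs(io.imread("ct_slice.png", as_gray=True))
```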

3. Statistical parameters

Fig. 4 – Segmented output.

The statistical parameters are extracted from the region of interest, which is the segmented single slice containing the two lungs. The parameters considered are mean, standard deviation, skewness and kurtosis [13]. In this paper the higher order moments, the fifth central moment and the sixth central moment, are also considered.

3.1. Mean

The mean, μ, of the pixel values in the defined window estimates the value in the image around which central clustering occurs:

$$\mu = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} p(i,j)$$

where p(i, j) is the intensity value of the pixel at the point (i, j) and M × N is the size of the image.

3.2. Standard deviation

The standard deviation, σ, is the estimate of the mean squared deviation of the gray pixel value p(i, j) from its mean value μ:

$$\sigma = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(p(i,j)-\mu\bigr)^{2}}$$

3.3. Skewness

Skewness, S, characterizes the degree of asymmetry of the pixel distribution in the specified window around its mean. Skewness is a pure number that characterizes only the shape of the distribution:

$$S = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[\frac{p(i,j)-\mu}{\sigma}\right]^{3}$$

3.4. Kurtosis

Kurtosis, K, measures the peakedness or flatness of a distribution relative to a normal distribution:

$$K = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[\frac{p(i,j)-\mu}{\sigma}\right]^{4}$$

The higher order moments follow the same pattern:

$$\text{Fifth central moment} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[\frac{p(i,j)-\mu}{\sigma}\right]^{5}$$

$$\text{Sixth central moment} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[\frac{p(i,j)-\mu}{\sigma}\right]^{6}$$
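A compact numpy sketch of these six features follows; it assumes the standardized form of the moments written above (each deviation divided by σ), applied to the pixels of the segmented region of interest.

```python
import numpy as np

def statistical_features(roi):
    """Six statistical parameters of Section 3 for a segmented slice."""
    p = np.asarray(roi, dtype=np.float64).ravel()
    mu = p.mean()                    # mean (3.1)
    sigma = p.std()                  # standard deviation (3.2)
    z = (p - mu) / sigma             # standardized pixel deviations
    return {
        "mean": mu,
        "std": sigma,
        "skewness": np.mean(z**3),   # (3.3)
        "kurtosis": np.mean(z**4),   # (3.4)
        "moment5": np.mean(z**5),    # fifth central moment
        "moment6": np.mean(z**6),    # sixth central moment
    }
```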

4. Artificial neural network

Neural nets can be trained to perform pattern classification [14]. The simplest neural network used for pattern classification consists of a layer of input units and a single output unit. Both feed forward and feed forward back propagation neural networks are used for classification.

4.1. Feed forward neural network

In feed forward neural networks information always moves in one direction only; there is no feedback. The information moves forward from the input layer through the hidden layer to the output layer. The networks used are the Hebb, Perceptron, Adaline and Madaline networks. In the Hebb network, learning is done by modification of the weights of the neurons. The weight is the information used by a neural network to solve a problem. If two interconnected neurons are both 'on' at the same time, then the weight between those neurons is increased [15]. The Perceptron network is a supervised classifier for classifying an input into one of two possible outputs. It is a type of linear classifier: the classification algorithm makes its predictions based on a linear predictor function combining a set of weights with the feature vector describing a given input. Both a bias and a threshold are needed in this network. The Adaline (Adaptive Linear Neuron) network uses bipolar activations for its input signals and target output. The weights on the connections from the input units to the Adaline network are adjustable. The network has a bias, which acts like an adjustable weight on a connection from a unit whose activation is always 1. A Madaline network consists of Adalines arranged in a multilayer net; it is a two layer neural network with a set of Adalines in parallel as its input layer and a single processing element in its output layer.

4.2. Feed forward back propagation neural network

Back propagation is a systematic method of training multilayer neural networks in a supervised manner. The back propagation method, also known as the error back propagation algorithm, is based on the error-correction learning rule. The back propagation network consists of at least three layers of units: an input layer, at least one intermediate hidden layer and one output layer. The units are connected in feed forward fashion, with the input units connected to the hidden layer units and the hidden layer units connected to the output layer units. An input pattern is forwarded to the output through the input-to-hidden and hidden-to-output weights. The output of the network is the classification decision.

4.3. Types of training functions used for classification

The back propagation neural networks are trained with thirteen training algorithms or functions [16]. The training functions used are Gradient descent back propagation (traingd), Gradient descent with variable learning rate (traingda), Gradient descent with momentum (traingdm), Gradient descent with variable learning rate and momentum (traingdx), Resilient back propagation (trainrp), Conjugate Gradient Algorithms (traincgf, traincgp, traincgb, trainscg), Quasi Newton BFGS (trainbfg), One Step Secant Algorithm (trainoss), Levenberg–Marquardt (trainlm) and Automated Regularization (trainbr).

4.4. Gradient descent back propagation (traingd)

Traingd is a training function which updates weight and bias values according to gradient descent. A network can be trained when its weight, net input, and transfer functions have derivative functions. The weights and biases are updated in the direction of the negative gradient of the performance function [17]. The learning rate 'lr' is multiplied with the negative of the gradient to determine the changes to the weights and biases. Back propagation is used to calculate the derivatives of performance (dperf) with respect to the weight and bias variables X. Each variable is adjusted according to gradient descent:

$$dX = lr \times \frac{dperf}{dX}$$
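As an illustration, a one-line Python sketch of this update; here `grad` stands for dperf/dX as obtained by back propagation, and the descent direction is applied explicitly:

```python
def traingd_step(X, grad, lr=0.01):
    """One gradient-descent update of the weight/bias vector X."""
    # Move along the negative gradient of the performance function.
    return X - lr * grad
```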

4.5. Gradient descent with variable learning rate (traingda)

Traingda is a training function that updates weight and bias values according to gradient descent with an adaptive learning rate. The performance of the back propagation neural network is sensitive to the learning rate, and it is not practical to determine the optimal setting for the learning rate before training: the optimal learning rate changes during the training process as the algorithm moves across the performance surface. A network can be trained when its weight, net input, and transfer functions have derivative functions. The back propagation algorithm is used to calculate the derivatives of performance (dperf) with respect to the weight and bias variables X. If performance decreases toward the goal, the learning rate is increased by a constant factor; if performance increases by more than the fixed maximum, the learning rate is decreased by a constant factor.
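A sketch of that schedule follows; the growth/shrink factors and the allowed error-increase ratio are assumptions, since the text gives no numeric values:

```python
def adapt_learning_rate(lr, err, prev_err,
                        lr_inc=1.05, lr_dec=0.7, max_perf_inc=1.04):
    """Grow lr while the error falls; shrink it when the error
    rises by more than the allowed ratio (all factors assumed)."""
    if err < prev_err:
        return lr * lr_inc      # performance improved toward the goal
    if err > prev_err * max_perf_inc:
        return lr * lr_dec      # error grew by more than the maximum
    return lr
```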

4.6. Gradient descent with momentum (traingdm)

Traingdm is a network training function that updates weight and bias values according to gradient descent with momentum. Each variable is adjusted according to gradient descent with momentum:

$$dX = mc \times dX_{prev} + lr \times (1 - mc) \times \frac{dperf}{dX}$$

where‘dXprev’isthepreviouschangetotheweightorbias,

mc’ismomentumconstantand‘lr’islearningrate.

4.7. Gradient descent with variable learning rate and momentum (traingdx)

Traingdx is a network training function that updates weight and bias values according to gradient descent with momentum and an adaptive learning rate [18]. The performance can be improved if the learning rate is allowed to change during the training process. The initial network output and error are calculated first. At each epoch, new weights and biases are calculated using the current learning rate, and new outputs and errors are then calculated. Each variable is adjusted according to gradient descent with momentum:

$$dX = mc \times dX_{prev} + lr \times mc \times \frac{dperf}{dX}$$

When momentum is considered, if the new error exceeds the old error by more than a predefined ratio, the new weights and biases are discarded.
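The epoch logic can be sketched as follows; the `forward`/`backward` helpers, the descent sign convention and the numeric factors are assumptions (`forward` returns the mean square error, `backward` the gradient dperf/dX):

```python
def traingdx_epoch(X, dX_prev, lr, prev_err, forward, backward,
                   mc=0.3, max_perf_inc=1.04):
    """One traingdx-style epoch: momentum step, adaptive learning
    rate, and the discard rule quoted above."""
    dX = mc * dX_prev + lr * mc * backward(X)   # Section 4.7 update
    X_new = X - dX
    err = forward(X_new)
    if err > prev_err * max_perf_inc:
        # new error exceeds the old error by more than the ratio:
        # discard the new weights and shrink the learning rate
        return X, dX_prev, lr * 0.7, prev_err
    return X_new, dX, lr * 1.05, err
```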

4.8. Resilient back propagation (trainrp)

Trainrp is a network training function that updates weight and bias values according to the resilient back propagation algorithm [19]. The purpose of the resilient back propagation (Rprop) training algorithm is to eliminate the effects of the magnitudes of the partial derivatives: the sign of the derivative determines the direction of the weight update, while the magnitude of the derivative has no effect on it. If the derivative is zero, the update value remains the same. Whenever the weights are oscillating, the weight change is reduced; if the weight continues to change in the same direction for several iterations, the magnitude of the weight change is increased.
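A sketch of this sign-based rule; the step growth/shrink factors and bounds are assumptions, as the text gives none:

```python
import numpy as np

def rprop_step(X, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop update: only the sign of the gradient is used."""
    same = grad * prev_grad
    # same sign for consecutive iterations -> grow the step
    step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
    # sign flipped (weight oscillating) -> shrink the step
    step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
    # zero derivative -> step unchanged
    return X - np.sign(grad) * step, step
```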

4.9. Conjugate gradient algorithms

In the conjugate gradient algorithms a search is performed along conjugate directions, and the step size is adjusted at each iteration [20]. A search is made along the conjugate gradient direction to determine the step size that minimizes the performance function along that line.

(a) Fletcher–Reeves Update (traincgf): The first iteration starts by searching in the steepest descent direction (the negative of the gradient):

$$p_0 = -g_0$$

where p0 is the initial search direction and g0 is the initial gradient. The optimal distance to move along the current search direction is obtained by a line search:

$$X_{k+1} = X_k + \alpha_k p_k$$

The next search direction is determined so that it is conjugate to the previous search directions: the new steepest descent direction is combined with the previous search direction,

$$p_k = -g_k + \beta_k p_{k-1}$$

where p_{k-1} is the previous search direction and β_k is a constant.

For the Fletcher–Reeves update, the constant β_k is obtained by

$$\beta_k = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}}$$

This is the ratio of the square of the current gradient to the square of the previous gradient. Each variable is adjusted according to

$$X = X + a \times dX$$

where dX is the search direction and the parameter 'a' is selected to minimize the performance along the search direction.

(b) Polak–Ribière Update (traincgp): In the Polak–Ribière update, the constant β_k is obtained by

$$\beta_k = \frac{\Delta g_{k-1}^T g_k}{g_{k-1}^T g_{k-1}}$$

This is the inner product of the previous change in the gradient with the current gradient divided by the square of the previous gradient.

(c) Powell and Beale Restarts (traincgb): For all conjugate gradient algorithms, the search direction is periodically reset to the negative of the gradient [21]. The standard reset point occurs when the number of iterations equals the number of network parameters. Powell and Beale proposed a new reset method to increase the efficiency of the training process: the search is restarted if there is very little orthogonality left between the current gradient and the previous gradient, which is tested with the inequality

$$|g_{k-1}^T g_k| \ge 0.2\,\|g_k\|^2$$

If the condition is satisfied, the search direction is reset to the negative of the gradient.

(d) Scaled Conjugate Gradient Algorithm (trainscg): Each of the conjugate gradient algorithms above requires a line search at each iteration. This line search is computationally expensive, since it requires the network response to all training inputs to be computed several times for each search. The scaled conjugate gradient (SCG) algorithm was designed to avoid this time-consuming line search.
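The direction updates of variants (a)-(c) can be sketched together; numpy vectors are assumed, and `variant` selects the β formula:

```python
import numpy as np

def next_direction(g, g_prev, p_prev, variant="fr"):
    """Conjugate gradient search direction with the Fletcher-Reeves
    or Polak-Ribiere beta and the Powell-Beale restart test."""
    # Powell-Beale restart: too little orthogonality left between
    # successive gradients -> reset to steepest descent.
    if abs(g_prev @ g) >= 0.2 * (g @ g):
        return -g
    if variant == "fr":                       # Fletcher-Reeves
        beta = (g @ g) / (g_prev @ g_prev)
    else:                                     # Polak-Ribiere
        beta = ((g - g_prev) @ g) / (g_prev @ g_prev)
    return -g + beta * p_prev
```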

4.10. Quasi Newton BFGS (trainbfg)

Trainbfg is a network training function that updates weight and bias values according to the BFGS quasi-Newton method. The update of Newton's method is given by

$$W_{k+1} = W_k - A_k^{-1} g_k$$

where A_k is the Hessian matrix of the performance index at the current values of the weights and biases. As the size of A_k increases, the complexity and time consumed in computing W_{k+1} increase; the Hessian matrix is complex and expensive to compute. The quasi-Newton method is based on Newton's method but does not require the calculation of the Hessian matrix: an approximation of the Hessian matrix, computed as a function of the gradient, is updated at each iteration of the algorithm.

4.11. One step secant algorithm (trainoss)

Trainoss is a network training function that updates weight and bias values according to the one step secant method [22]. The BFGS algorithm requires more storage and computation in each iteration than the conjugate gradient algorithms. The one step secant (OSS) method is an attempt to bridge the gap between the conjugate gradient algorithms and the quasi-Newton algorithms. This algorithm does not store the complete Hessian matrix; it assumes that at each iteration the previous Hessian was the identity matrix. The additional advantage is that the new search direction can be calculated without computing a matrix inverse.

4.12. Levenberg–Marquardt (trainlm)

Trainlm is a network training function that updates weight and bias values according to Levenberg–Marquardt optimization [23]. Trainlm is the fastest back propagation algorithm and is highly recommended as a first choice supervised algorithm, although it does require more memory than other algorithms. When the performance function has the form of a sum of squares, the Hessian matrix can be approximated as

$$H = J^T J$$

and the gradient can be computed as

$$g = J^T e$$

where J is the Jacobian matrix, which contains the first derivatives of the network errors with respect to the weights and biases, and 'e' is a vector of network errors. The Jacobian matrix can be computed using the back propagation algorithm, which is much less complex than computing the Hessian matrix. The Levenberg–Marquardt algorithm uses this approximation to the Hessian matrix in the update

$$X_{k+1} = X_k - (J^T J + \mu I)^{-1} J^T e$$

where I is the identity matrix and μ is a scalar damping factor.
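A direct numpy sketch of one such step; J, e and the damping factor mu are assumed to be supplied by the surrounding training loop:

```python
import numpy as np

def lm_step(X, J, e, mu):
    """One Levenberg-Marquardt update using H ~= J^T J."""
    H = J.T @ J                     # Hessian approximation
    g = J.T @ e                     # gradient of the sum of squares
    dX = np.linalg.solve(H + mu * np.eye(H.shape[0]), g)
    return X - dX
```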


4.13. Automated regularization (trainbr)

Trainbr is a network training function that updates the weight and bias values according to Levenberg–Marquardt optimization. It minimizes a combination of squared errors and weights, and then determines the correct combination so as to produce a network which generalizes well. The process is called Bayesian regularization [24]. It is desirable to determine the optimal regularization parameters in an automated fashion. The weights and biases of the network are assumed to be random variables with specified distributions, and the regularization parameters are related to the unknown variances associated with these distributions. The Bayesian regularization takes place within the Levenberg–Marquardt algorithm.

Back propagation is used to calculate the Jacobian jX of the performance PERF with respect to the weight and bias variables X. Each variable is adjusted according to Levenberg–Marquardt:

$$jj = jX \times jX^T$$

$$je = jX \times E$$

$$dX = -(jj + I \times mu)^{-1} je$$

where E is all errors and I is the identity matrix.

Two new training functions are proposed in this paper.

4.14. Proposed training functions

4.14.1. Training function 1

The momentum value and the learning rate influence the classification accuracy and the mean square error of the neural network. A new training function which includes the momentum factor and the learning rate is proposed. In training function 1, each variable is adjusted according to gradient descent with momentum, given by

$$dX = mc \times dX_{prev} + lr \times (1 - mc) \times mc \times \frac{dperf}{dX}$$

where‘dXprev’isthepreviouschangetotheweightorbias,

mc’ismomentumconstant,‘lr’islearningrateand‘dperf’is thederivativeofperformancewithrespecttotheweightand biasvariablesX.Theclassificationaccuracyisincreasedbythe trainingfunction1.

4.14.2. Training function 2

Training function 1 is modified to reduce the mean square error. In training function 2, each variable is adjusted according to gradient descent with momentum, given by

$$dX = 3.7 \times mc \times dX_{prev} + lr \times (1 - mc) \times mc \times \frac{dperf}{dX}$$

where‘dXprev’isthepreviouschangetotheweightorbias,‘mc

ismomentumconstant,‘lr’islearningrateand‘dperf’isthe derivativeofperformancewithrespecttotheweightandbias variablesX.Themeansquareerrorisminimumfortraining function2.
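Side by side, the two proposed rules differ from the traingdm update of Section 4.6 only in the extra factor mc on the gradient term and, for function 2, the factor 3.7 on the momentum term:

```python
def proposed_update_1(dX_prev, grad, mc, lr):
    """Proposed training function 1 (Section 4.14.1)."""
    return mc * dX_prev + lr * (1 - mc) * mc * grad

def proposed_update_2(dX_prev, grad, mc, lr):
    """Proposed training function 2 (Section 4.14.2)."""
    return 3.7 * mc * dX_prev + lr * (1 - mc) * mc * grad
```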

5. Experimental results

The statistical parameters are calculated for the segmented single slices containing the two lungs and given as input to the neural network. The training set for the neural network consists of 70% of the total images and the testing set of the remaining 30%; images for training and testing refer to the segmented single slices containing the two lungs. The sensitivity, specificity and accuracy are calculated for each network.

5.1. Sensitivity

Sensitivity measures the proportion of actual positives which are correctly identified, that is, the percentage of segmented slices containing a cancerous nodule that are correctly classified as cancerous:

$$\text{Sensitivity} = \frac{TP}{TP + FN}$$

True Positive (TP): a segmented slice containing a cancer nodule is classified as cancerous.

False Positive (FP): a segmented slice without a cancer nodule is classified as cancerous.

True Negative (TN): a segmented slice without a cancer nodule is classified as non-cancerous.

False Negative (FN): a segmented slice containing a cancer nodule is classified as non-cancerous.

5.2. Specificity

Specificity measures the proportion of negatives which are correctly identified, that is, the percentage of segmented slices without a cancerous nodule that are correctly identified as non-cancerous:

$$\text{Specificity} = \frac{TN}{TN + FP}$$

5.3. Accuracy

Accuracy is a statistical measure of how well a classifier correctly identifies or excludes a condition: the proportion of true results (both true positives and true negatives) in the population.

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
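The three metrics are straightforward to compute from the test-set counts. The counts in the usage line below are hypothetical, chosen only to be consistent with the 91.4% sensitivity, 100% specificity and 93.3% accuracy reported for proposed training function 1:

```python
def evaluation_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy of Section 5."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 45-slice test set:
# evaluation_metrics(32, 0, 10, 3) -> approximately (0.914, 1.0, 0.933)
```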

The results show that the parameter skewness gives the maximum classification accuracy. The higher order moments were also considered, but no significant improvement in the classification accuracy is observed. Compared to the feed forward networks, the classification accuracy of the back propagation neural network is higher; the classification accuracies of the feed forward networks and the back propagation network are shown in Fig. 5. All thirteen training functions of the back propagation network are trained with different momentum factors and learning rates: the momentum and learning rate are varied from 0.1 to 0.9 and, keeping one of these parameters constant and varying the other, the performance of the network is studied. The best classification accuracy and least mean square error of the different training functions for different momentum and learning rate values are shown in Table 1.

Table 1 – Best classification accuracy and least mean square error of the training functions.

Training function | Momentum | Learning rate | Classification accuracy (%) | Mean square error
Traingd   | 0.4 | 0.6 | 86.7      | 0.1242
Traingda  | 0.3 | 0.5 | 88.89     | 0.118
Traingdm  | 0.4 | 0.7 | 88.89     | 0.115
Traingdx  | 0.3 | 0.7 | 91.11 (a) | 0.112 (a)
Traincgf  | 0.3 | 0.5 | 75.5      | 0.164
Traincgb  | 0.6 | 0.3 | 73.3      | 0.198
Traincgp  | 0.4 | 0.8 | 75.56     | 0.162
Trainlm   | NA  | 0.8 | 82.2      | 0.144
Trainoss  | 0.6 | 0.6 | 73.3      | 0.196
Trainrp   | 0.4 | 0.7 | 80        | 0.154
Trainscg  | 0.6 | 0.3 | 73.3      | 0.197
Trainbfg  | NA  | 0.5 | 80        | 0.156
Trainbr   | 0.4 | 0.6 | 88.89     | 0.115

(a) The maximum classification rate obtained among the 13 training functions.

Table 1 shows that the training function Traingdx gives the maximum classification accuracy of 91.11%. The specificity, sensitivity and accuracy of Traingdx and the proposed training function 1 with momentum 0.3 and learning rate 0.7 are shown in Fig. 6.

Fig. 5 – Classification accuracy of feed forward networks and back propagation network.

Fig. 6 – Traingdx versus proposed training function 1 (specificity, sensitivity and accuracy).

Fig. 7 – Classification accuracy and mean square error of Traingdx, training function 1 and training function 2.

Fig. 8 – FROC curve of the proposed method (sensitivity versus FPs/scan).

The classification accuracy and mean square error of Traingdx, training function 1 and training function 2 with momentum 0.3 and learning rate 0.7 are shown in Fig. 7. The performance of the system is analyzed by the FROC (Free-response Receiver Operating Characteristic) curve shown in Fig. 8. The sensitivity is measured as the percentage of segmented slices containing a cancerous nodule that are correctly classified as cancerous. The FROC curve shows that the sensitivity is 82% at 2 FPs/scan, increasing to 90% at 15 FPs/scan and to 91.4% at 30 FPs/scan.

6. Conclusion

A computer aided segmentation and classification method is proposed. Morphological operations are used for segmentation, and classification is done by different neural networks. The region of interest is the segmented single slice containing the two lungs, and the statistical parameters are used as features for classification. Compared to the statistical parameters mean, standard deviation, kurtosis, fifth central moment and sixth central moment, skewness gives the maximum classification accuracy, yielding a 5–8% increase in classification accuracy. Among the thirteen already available training functions, the traingdx training function gives the maximum classification accuracy of 91.11%. The proposed training function 1 gives a classification accuracy of 93.3% with a specificity of 100% and a sensitivity of 91.4%. The proposed training function 2 gives a classification accuracy of 93.3% and a mean square error of 0.0942. The performance of the proposed CAD system is good, with a sensitivity of 82% at 2 FPs/scan, increasing to 91.4% at 30 FPs/scan; the sensitivity is measured as the percentage of segmented slices containing a cancerous nodule that are correctly classified as cancerous. The misclassifications occurred in images where the cancer nodule is located near the pleural side of the lungs.

Conflict of interest

The authors do not have any conflicts of interest and have no financial or personal relationships with other people or organizations that could inappropriately influence (bias) the presented work.

References

[1] D.M. Parkin, Global cancer statistics in the year 2000, Lancet Oncology 2 (2001) 533–543.

[2] A. Motohiro, H. Ueda, H. Komatsu, N. Yanai, T. Mori, Prognosis of non-surgically treated, clinical stage I lung cancer patients in Japan, Lung Cancer 36 (2002) 65–69.

[3] R.N. Strickland, Tumor detection in nonstationary backgrounds, IEEE Transactions on Medical Imaging 13 (June) (1994) 491–499.

[4] S.B. Lo, S.L. Lou, J.S. Lin, M.T. Freedman, S.K. Mun, Artificial convolution neural network techniques and applications for lung nodule detection, IEEE Transactions on Medical Imaging 14 (August) (1995) 711–718.

[5] G. Coppini, S. Diciotti, M. Falchini, N. Villari, G. Valli, Neural networks for computer aided diagnosis: detection of lung nodules in chest radiograms, IEEE Transactions on Information Technology in Biomedicine 4 (2003) 344–357.

[6] A.A. Abdulla, S.M. Shaharum, Lung cancer cell classification method using artificial neural network, Information Engineering Letters 2 (March) (2012) 50–58.

[7] N. Camarlinghi, et al., Combination of computer-aided detection algorithms for automatic lung nodule identification, International Journal of Computer Assisted Radiology and Surgery 7 (2012) 455–464.

[8] O.S. Al-Kadi, D. Watson, Texture analysis of aggressive and nonaggressive lung tumor CE CT images, IEEE Transactions on Biomedical Engineering 55 (2008) 1822–1830.

[9] B. van Ginneken, et al., Comparing and combining algorithms for computer-aided detection of pulmonary nodules in computed tomography scans: the ANODE09 study, Medical Image Analysis 14 (2010) 707–722.

[10] R. Bellotti, D. Cascio, et al., A CAD system for nodule detection in low-dose lung CTs based on region growing and a new active contour model, International Journal of Medical Physics Research and Practice 34 (2007) 4901–4911.

[11] W.J. Kostis, A.P. Reeves, D.F. Yankelevitz, C.I. Henschke, Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical CT images, IEEE Transactions on Medical Imaging 22 (2003) 1259–1274.

[12] R.C. Gonzalez, R.E. Woods, Digital Image Processing, 3rd ed., Pearson Prentice Hall, 2008.

[13] S.A. Patil, V.R. Udupi, C.D. Kane, A.I. Wasif, J.V. Desai, A.N. Jadhav, Geometrical and texture feature estimation of lung cancer and TB image using chest X-ray database, in: IEEE, 2009.

[14] M.G. Penedo, M.J. Carreira, A. Mosquera, Computer-aided diagnosis: a neural-network-based approach to lung nodule detection, IEEE Transactions on Medical Imaging 17 (1998) 872–880.

[15] S.N. Sivanandam, S. Sumathi, S.N. Deepa, Introduction to Neural Networks Using Matlab, Tata McGraw Hill Publishing Company Limited, 2006.

[16] F. Paulin, A. Santhakumaran, Back propagation neural network by comparing hidden neurons: case study on breast cancer diagnosis, International Journal of Computer Applications 2 (June) (2010) 40–44.

[17] J. Ramesh, K. Gunavathi, et al., Fault classification in phase-locked loops using back propagation neural networks, ETRI Journal 30 (August (4)) (2008) 546–554.

[18] M.T. Hagan, H.B. Demuth, M.H. Beale, Neural Network Design, PWS Publishing, Boston, MA, 1996.

[19] M. Riedmiller, H. Braun, A direct adaptive method for faster backpropagation learning: the RPROP algorithm, in: Proceedings of the IEEE International Conference on Neural Networks, 1993.

[20] C. Charalambous, Conjugate gradient algorithm for efficient training of artificial neural networks, IEE Proceedings 139 (3) (1992) 301–310.

[21] M.F. Moller, A scaled conjugate gradient algorithm for fast supervised learning, Neural Networks 6 (1993) 525–533.

[22] R. Battiti, First- and second-order methods for learning: between steepest descent and Newton's method, Neural Computation 4 (2) (1992) 141–166.

[23] M.T. Hagan, M. Menhaj, Training feedforward networks with the Marquardt algorithm, IEEE Transactions on Neural Networks 5 (6) (1994) 989–993.

[24] O. De Jesús, M.T. Hagan, Backpropagation algorithms for a broad class of dynamic networks, IEEE Transactions on Neural Networks 18 (January (1)) (2007) 14–27.
