

3.3 Pre-trained Deep CNN Model for Feature Extraction



Pre-trained deep CNN models are networks previously trained on the large-scale ImageNet database [18] to recognize several objects. Examples of such models that have performed excellently include AlexNet [17], VggNet [25], GoogLeNet [28], and ResNet [29], among other prominent ones. In the present work, we treated the Vgg16 variant of VggNet as a feature extractor rather than as an end-to-end network, as in the original model.

We truncated the three fully connected layers of the original Vgg16, which makes the last max-pooling layer the final output layer, as presented in Fig. 4. The chest radiograph images are resized to 224 × 224 × 3 to conform to the input dimension of Vgg16 and then propagated through the network up to the last max-pooling layer, which now serves as the output layer with a 7 × 7 × 512 shape; this feature map is flattened into a vector and fed as input to the classifier. The output could, of course, be extracted at any layer, but we chose to output at the pooling layer, in line with the analysis presented in [37].

Fig. 4. Truncated Vgg16 where the max-pooling layer is employed as the output layer.
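For concreteness, the sketch below illustrates the truncation described above, assuming TensorFlow/Keras and its bundled ImageNet weights for VGG16; the paper does not name a framework, so the API used here is an illustrative choice rather than the authors' exact implementation.

```python
# A minimal sketch of the feature-extraction step, assuming TensorFlow/Keras.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# include_top=False drops the three fully connected layers, so the last
# max-pooling layer (7 x 7 x 512 for 224 x 224 x 3 inputs) becomes the output.
extractor = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: batch of chest radiographs already resized to 224 x 224 x 3."""
    x = preprocess_input(images.astype("float32"))
    features = extractor.predict(x)              # shape: (n, 7, 7, 512)
    return features.reshape(len(features), -1)   # flatten to (n, 25088)
```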

After extracting features from the chest radiographs, these features are fed as input to the Logistic Regression (LR) classifier for training to obtain the classification results, as illustrated in Fig. 5. The LR classifier is powerful and efficient for binary classification. Its relation to the exponential family of probability distributions makes it perform well in the medical domain, because it models the dependence of the class labels on the features and allows sensitivity analysis to be carried out.

As is customary, we split the dataset into training and validation sets prior to training the classifier, using an index configured as [:q] and [q:], where all images before ":q" are included in the training set and the images after "q:" are reserved for validation. The classifier's 'C' parameter ranges between 0.0001 and 1000, and the optimum value is selected by employing a grid search. Once the optimum parameter is established, the model is evaluated against the testing data to determine the performance of the classifier in detecting whether an image is infected with TB or not.
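A minimal sketch of this split and model-selection step follows, assuming scikit-learn. The placeholder arrays, variable names, and the exact grid of C values (beyond the stated 0.0001–1000 range) are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of the "[:q]"/"[q:]" split and the grid search over C,
# assuming scikit-learn. The placeholder arrays stand in for the flattened
# Vgg16 features produced by the extractor sketched above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

features = np.random.rand(200, 7 * 7 * 512).astype("float32")  # placeholder features
labels = np.random.randint(0, 2, size=200)                      # 0 = healthy, 1 = TB

q = int(0.7 * len(features))                  # "[:q]" trains, "[q:]" validates
X_train, y_train = features[:q], labels[:q]
X_val, y_val = features[q:], labels[q:]

# Grid search over the regularisation parameter C in the stated range.
param_grid = {"C": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000]}
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=3)
search.fit(X_train, y_train)

best_clf = search.best_estimator_
print("best C:", search.best_params_["C"])
print("validation accuracy:", best_clf.score(X_val, y_val))
```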

4 Results and Discussion

This section presents the experimental results obtained on the Montgomery and Shenzhen benchmark datasets. Each dataset was split such that 70% was used for training the logistic regression classifier on the features extracted by the modified pre-trained Vgg16 model, while the remaining 30% was used to evaluate the performance of the classifier on each dataset.

Fig. 5. Procedural flow diagram.

In the first experiment, carried out on the Montgomery set, the system obtained an accuracy of 89% and a precision of 90.1%, while on the Shenzhen set the system performed at 95.8% accuracy with a precision of 96%.

We also took a step further to determine the Rank-1 accuracy of the system, which is the number of times the predicted output conforms to the corresponding ground truth. The Rank-1 accuracy obtained for the two datasets is 92.40% and 99.25%, respectively, as shown in Table 1.
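As an illustration, the sketch below shows how accuracy, precision, and Rank-1 figures of the kind reported in Table 1 can be computed, assuming scikit-learn and reusing the best_clf, X_val, and y_val objects from the previous sketch.

```python
# A minimal sketch of the evaluation metrics, assuming scikit-learn;
# best_clf, X_val and y_val are carried over from the previous sketch.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score

y_pred = best_clf.predict(X_val)
print("accuracy :", accuracy_score(y_val, y_pred))
print("precision:", precision_score(y_val, y_pred))   # positive class = TB

# Rank-1: fraction of images whose top-ranked prediction matches the ground truth.
probabilities = best_clf.predict_proba(X_val)          # shape (n, 2)
rank1 = np.mean(probabilities.argmax(axis=1) == y_val)
print("rank-1   :", rank1)
```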

Table 1. Results of the experiments

Database     Accuracy (%)   Precision (%)   Rank-1 (%)
Montgomery   89             90.1            92.40
Shenzhen     95.8           96              99.25

It can be observed from Table 1 that the result obtained on the Montgomery set is lower than that on the Shenzhen set, as also reported in related work. This outcome is usually attributed to a smaller dataset and class imbalance; however, since we performed augmentation on the dataset, the class imbalance, with a ratio of healthy to unhealthy samples of about 60:40 in the Montgomery set, is the factor responsible for it (Table 2).


Table 2. Comparison of the proposed model with related work. The performance metrics are measured in %.

Ref       Model                                 Accuracy   Precision
[24]      VggNet                                81.25      80.0
[16]      GoogLeNet, VggNet, ResNet             84.7       -
[27]      AlexNet                               90.03      -
[30]      AlexNet, VggNet, ResNet               92.00      -
[31]      VggNet, Inception, ResNet, DenseNet   81.60      -
Proposed  VggNet                                95.80      96.00

5 Conclusion

In this work, we explored pre-trained CNNs and implemented Vgg16 as a feature extractor. A Logistic Regression classifier was then trained on the extracted features to detect tuberculosis. This study and other related work have shown that eliminating training complexity, expense, high computing power, and large dataset requirements makes pre-trained CNN models powerful and relevant to medical and other fields where datasets for classification are limited. Although the approach applied in this work has obtained excellent results, a better computer-aided detection system can still be developed if a large amount of annotated chest radiographs can be created. These annotated datasets can then be utilized to build deeper models from scratch to identify various pulmonary diseases.

References

1. World Health Organization: Global status report on alcohol and health 2018. World Health Organization (2019)
2. Cohn, D.L., O'Brien, R.J., Geiter, L.J., Gordin, F., Hershfield, E., Horsburgh, C.: Targeted tuberculin testing and treatment of latent tuberculosis infection. MMWR Morb. Mortal. Wkly. Rep. 49(6), 1–54 (2000)
3. Desikan, P.: Sputum smear microscopy in tuberculosis: is it still relevant? Indian J. Med. Res. 137(3), 442 (2013)
4. Zwerling, A., van den Hof, S., Scholten, J., Cobelens, F., Menzies, D., Pai, M.: Interferon-gamma release assays for tuberculosis screening of healthcare workers: a systematic review. Thorax 67(1), 62–70 (2012)
5. Leung, C.C.: Reexamining the role of radiography in tuberculosis case finding. Int. J. Tuberc. Lung Dis. 15(10), 1279 (2011)
6. Jaeger, S., et al.: Automatic screening for tuberculosis in chest radiographs: a survey. Quant. Imaging Med. Surg. 3(2), 89 (2013)
7. Naing, W.Y.N., Htike, Z.Z.: Advances in automatic tuberculosis detection in chest X-ray images. Signal Image Process. 5(6), 41 (2014)
8. World Health Organization: Tuberculosis prevalence surveys: a handbook. World Health Organization (2011)
9. World Health Organization: Chest radiography in tuberculosis detection: summary of current WHO recommendations and guidance on programmatic approaches (No. WHO/HTM/TB/2016.20). World Health Organization (2016)
10. Noor, N.M., Rijal, O.M., Yunus, A., Mahayiddin, A.A., Peng, G.C., Abu-Bakar, S.A.R.: A statistical interpretation of the chest radiograph for the detection of pulmonary tuberculosis. In: 2010 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES), pp. 47–51 (2010)
11. Pedrazzoli, D., Lalli, M., Boccia, D., Houben, R., Kranzer, K.: Can tuberculosis patients in resource-constrained settings afford chest radiography? Eur. Respir. J. 49(3), 1601877 (2017)
12. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Networks 61, 85–117 (2015)
13. Livieris, I.E., Kanavos, A., Tampakas, V., Pintelas, P.: A weighted voting ensemble self-labeled algorithm for the detection of lung abnormalities from X-rays. Algorithms 12(3), 64 (2019)
14. Al Hadhrami, E., Al Mufti, M., Taha, B., Werghi, N.: Transfer learning with convolutional neural networks for moving target classification with micro-Doppler radar spectrograms. In: IEEE International Conference on Artificial Intelligence and Big Data (ICAIBD), pp. 148–154 (2018)
15. Nogueira, K., Penatti, O.A., Dos Santos, J.A.: Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recogn. 61, 539–556 (2017)
16. Lopes, U.K., Valiati, J.F.: Pre-trained convolutional neural networks as feature extractors for tuberculosis detection. Comput. Biol. Med. 89, 135–143 (2017)
17. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
18. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009)
19. Dean, J., et al.: Large scale distributed deep networks. In: Advances in Neural Information Processing Systems, pp. 1223–1231 (2012)
20. Xie, M., Jean, N., Burke, M., Lobell, D., Ermon, S.: Transfer learning from deep features for remote sensing and poverty mapping. In: Thirtieth AAAI Conference on Artificial Intelligence (2016)
21. Bousetouane, F., Morris, B.: Off-the-shelf CNN features for fine-grained classification of vessels in a maritime environment. In: Bebis, G., et al. (eds.) ISVC 2015. LNCS, vol. 9475, pp. 379–388. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-27863-6_35
22. Liu, T., Xie, S., Yu, J., Niu, L., Sun, W.: Classification of thyroid nodules in ultrasound images using deep model based transfer learning and hybrid features. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 919–923 (2017)
23. Ribeiro, E., Uhl, A., Wimmer, G., Häfner, M.: Exploring deep learning and transfer learning for colonic polyp classification. Comput. Math. Methods Med. (2016)
24. Ahsan, M., Gomes, R., Denton, A.: Application of a convolutional neural network using transfer learning for tuberculosis detection. In: IEEE International Conference on Electro Information Technology (EIT), pp. 427–433 (2019)
25. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
26. Cicero, M., et al.: Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs. Invest. Radiol. 52(5), 281–287 (2017)
27. Hwang, S., Kim, H.E., Jeong, J., Kim, H.J.: A novel approach for tuberculosis screening based on deep convolutional neural networks. In: Medical Imaging 2016: Computer-Aided Diagnosis, vol. 9785, p. 97852W. International Society for Optics and Photonics (2016)
28. Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
29. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
30. Islam, M.T., Aowal, M.A., Minhaz, A.T., Ashraf, K.: Abnormality detection and localization in chest X-rays using deep convolutional neural networks. arXiv preprint arXiv:1705.09850 (2017)
31. Rohilla, A., Hooda, R., Mittal, A.: TB detection in chest radiograph using deep learning architecture. In: ICETETSM-17, pp. 136–147 (2017)
32. Rokach, L.: Ensemble-based classifiers. Artif. Intell. Rev. 33(1–2), 1–39 (2010)
33. Heo, S.J., et al.: Deep learning algorithms with demographic information help to detect tuberculosis in chest radiographs in annual workers' health examination data. Int. J. Environ. Res. Public Health 16(2), 250 (2019)
34. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
35. Jaeger, S., et al.: Automatic tuberculosis screening using chest radiographs. IEEE Trans. Med. Imaging 33(2), 233–245 (2013)
36. Candemir, S., et al.: Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans. Med. Imaging 33(2), 577–590 (2013)
37. Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813 (2014)
38. Bloice, M.D., Stocker, C., Holzinger, A.: Augmentor: an image augmentation library for machine learning. arXiv preprint arXiv:1708.04680 (2017)
39. Kurt, B., Nabiyev, V.V., Turhan, K.: Medical images enhancement by using anisotropic filter and CLAHE. In: IEEE International Symposium on Innovations in Intelligent Systems and Applications, pp. 1–4 (2012)
40. Tajbakhsh, N., et al.: Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans. Med. Imaging 35(5), 1299–1312 (2016)
41. Hosmer Jr., D.W., Lemeshow, S., Sturdivant, R.X.: Applied Logistic Regression, vol. 398. John Wiley & Sons, New York (2013)

3.3.2 Conclusion

In this work, we truncated a pre-trained Vgg16 and implemented it to extract TB features from the CXR images; the extracted features were then fed as input to the classifier algorithm to obtain the final binary classification output. The classifier was configured such that the 'C' parameter ranged between 0.0001 and 1000; the optimum parameter was then selected by employing a grid search. Once the optimum parameter was established, the model was evaluated against the testing data samples to determine the performance of the classifier in detecting whether an image was infected with TB or not. The technique achieved 95.80% accuracy, with a precision of 96%. These results demonstrate the benefit of transfer learning for training new models to improve accuracy, most importantly where large datasets are unavailable, as in the medical field.

3.4 Ensemble of Convolution Neural Networks for