CHAPTER V CONCLUSION
5.2 Suggestions
The author offers the following suggestions for the development of future research:
1. Add more leaf disease classes to the study so that the resulting program can identify a wider range of diseases.
2. Provide a dataset that includes varied backgrounds so that the system can detect reliably under all conditions.
3. In future development, refine the implementation of the identification program so that it performs even better.
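Suggestion 2 could be prototyped by compositing segmented leaf images onto varied backgrounds before training, so the detector does not overfit to a uniform backdrop. The sketch below is a minimal NumPy illustration under the assumption that segmentation masks for the leaves are already available; the function names and array layout are illustrative, not part of this research:

```python
import numpy as np

def composite_on_background(leaf: np.ndarray, mask: np.ndarray,
                            background: np.ndarray) -> np.ndarray:
    """Paste a segmented leaf onto a new background.

    leaf, background: HxWx3 uint8 images of the same size.
    mask: HxW boolean array, True where the leaf is.
    """
    out = background.copy()
    out[mask] = leaf[mask]  # keep leaf pixels, replace everything else
    return out

def augment_backgrounds(leaf: np.ndarray, mask: np.ndarray,
                        n: int, rng=None) -> list:
    """Generate n training images of the same leaf over random backgrounds."""
    rng = np.random.default_rng(rng)
    images = []
    for _ in range(n):
        # Random-noise backgrounds stand in for real field photos here.
        bg = rng.integers(0, 256, size=leaf.shape, dtype=np.uint8)
        images.append(composite_on_background(leaf, mask, bg))
    return images
```

In practice the random-noise backgrounds would be replaced by real photographs of paddy fields, soil, or sky, so that the augmented set reflects the conditions the detector will face in deployment.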
Program Studi Ilmu Komputer (S2) Universitas Nusa Mandiri