Identification of Maize Leaf Diseases Using Support Vector Machine and Convolutional Neural Networks AlexNet and ResNet50
Maurice Micheni1,2*, Rael Birithia1, Cyrus Mugambi1, Boaz Too2, Margaret Kinyua1
1Karatina University, P.O box 1957-10101, Karatina, Kenya
2Department of Computing and I.T., University of Embu.
3Food Crops Research Institute, Kenya Agricultural and Livestock Research Organization.
Article Information: Submitted: 12 Jul 2023; Reviewed: 21 Jul 2023; Accepted: 17 Aug 2023

Abstract
Maize crop protection is crucial for global food security, requiring accurate disease identification. In Kenya, farmers rely on subjective visual analysis of symptomatic leaves, which is time-consuming and prone to errors. Computer vision technologies, such as deep learning and machine learning, offer promising solutions for disease identification. This study applies Convolutional Neural Networks (CNNs), specifically AlexNet and ResNet-50, to automatically learn image features and enhance the speed and accuracy of maize leaf disease identification. A dataset of 3200 digital maize leaf disease images from Embu County is used for training and testing. AlexNet achieved the highest average accuracy of 98.3%, followed by ResNet-50 at 96.6%. The machine learning model, support vector machine (SVM), exhibited the lowest average accuracy of 88.5%. These findings highlight the significance of utilizing AlexNet and ResNet-50 in maize leaf disease identification and classification.
Keywords: Maize, Convolutional Neural Networks, AlexNet, ResNet50, Support Vector Machine
A. Introduction
Maize (Zea mays) is a globally cultivated crop with an annual production of approximately 1137 million metric tons (1). In Kenya, maize holds great significance in food security, serving as a vital source of income and employment for small-scale and large-scale farmers. However, despite an increase in the area of maize production in Kenya over the past decade, the total output remains low at 11.7 t ha-1 compared to its potential of 40 t ha-1 (2). Maize leaf diseases, caused by both viral and fungal pathogens, pose major constraints to production. Notably, the most devastating viral diseases limiting maize production in Kenya are Maize lethal necrosis (MLN) disease and maize streak disease (3), along with the fungal Gray leaf spot (GLS) disease caused by Cercospora species (4). The three diseases are responsible for yield losses ranging from 40-100% in Kenya (5, 6).
The identification of maize diseases can be achieved using various parts of the crop, but the simplest and most common method is visual analysis of symptomatic leaves (7). Early detection of crop disease plays a crucial role in enabling farmers to implement necessary control measures, including selecting appropriate pesticides, thereby increasing crop yield and improving overall quality (8, 9). Farmers commonly rely on visual analysis of disease symptoms on leaves to identify and classify crop diseases (10). However, this technique is subjective, prone to errors, time-consuming, and costly for poor rural farmers who may need to consult a trained plant pathologist to confirm diseases, especially in multiple infections (11). Moreover, misidentifying diseases can lead to improper pesticide use, reduced maize production, environmental pollution from pesticide disposal, and potential harm to humans and non-target organisms (12).
One of the most promising approaches to overcome the limitations mentioned above is using computer vision-based automatic systems, which have shown the potential to reduce losses and increase productivity (13). In recent years, machine learning and deep learning techniques have gained popularity for identifying plant diseases using digital images (14). Traditional machine-learning methods extract features from diseased plant images, such as leaf colour, texture, and shape, to train classifiers that differentiate healthy and diseased plants (15).
These approaches have been widely employed in identifying diseases like leaf blotch, powdery mildew, and rust (16). Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbors (K-NN), and Random Forests (RF) are commonly used machine learning techniques for crop disease classification (17). However, machine learning techniques have limitations in detecting disease images during early infection stages and are influenced by factors like lighting conditions and occlusion, which diminish their effectiveness in disease recognition (18, 19). On the other hand, deep learning methods excel at extracting image features from datasets, enhancing recognition efficiency (18). Additionally, deep learning models can process complex and large-scale images, making them suitable for high-resolution datasets (20).
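To make the classical pipeline concrete, the following is a minimal, illustrative sketch in Python using scikit-learn, assuming simple per-channel colour-histogram features; the feature choice, image size, and kernel settings are hypothetical and do not reproduce any specific study's procedure.

```python
# Illustrative classical pipeline: hand-crafted colour-histogram features + SVM.
# Image size, bin count, and the RBF/C settings are assumptions for this sketch.
import numpy as np
from PIL import Image
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def colour_histogram(path, bins=32):
    """Concatenated per-channel colour histogram used as the feature vector."""
    img = np.asarray(Image.open(path).convert("RGB").resize((256, 256)))
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

def train_svm(image_paths, labels):
    """Train an RBF-kernel SVM on the histogram features and report test accuracy."""
    X = np.stack([colour_histogram(p) for p in image_paths])
    y = np.asarray(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    print("Test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```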
Significant efforts have been dedicated to developing automatic systems for crop disease recognition using computer vision techniques. (15) conducted a comparative analysis, achieving 99.75% accuracy, using different pre-trained CNN models to classify 38 categories of plant diseases. (21) enhanced two CNN
architectures and employed data augmentation techniques to classify eight specific maize leaf diseases. (22) utilized machine learning support vector machines, training the model with leaf colour and coarseness features to categorize maize leaf diseases in Ethiopian maize farms. However, limited research has been conducted on applying CNN models for maize leaf disease classification specifically in Kenya. In contrast to previous studies, our work focuses on evaluating transfer learning performance using two CNN architectures, AlexNet and ResNet50, to identify maize leaf diseases.
B. Materials and Methods
This study used pre-trained AlexNet and ResNet-50 models with transfer learning to identify and classify three distinct maize leaf diseases. The performance of these CNN architectures (AlexNet and ResNet-50) was compared to that of the traditional Support Vector Machine (SVM) classifier. The experiments were conducted on a workstation running the Windows Server 2018 R2 operating system, with an Intel Core i7 2.8 GHz processor, a 4 GB graphics card, and 32 GB of RAM.
Data Sets
Maize leaf images of size 256x256 pixels were captured using a digital camera around farms within Embu County. The total number of images was 3200, with each category having 800 images. A plant pathologist classified the diseased leaves to limit the number of images with multiple diseases. The study identified and classified three maize leaf diseases: Maize lethal necrosis, Maize streak virus and Gray leaf spot. Healthy leaves were used as a control in training our model (Table 1).
The dataset was split into three parts for deep neural network classification. The first part was the training set, i.e., the collection of images used by the network to learn its hidden parameters (weights and biases) automatically. The second part was the validation set, used to tune the hyperparameters that cannot be learned automatically during training; the trained model was evaluated on the validation set at the end of each epoch, allowing us to monitor the training process and detect overfitting. The third part of the data was used after model training to test the model's accuracy.
Table 1. Distribution of data sources and division of training, validation and testing sets

Maize leaf disease type    Total data set    Training set    Validation set    Testing set
Maize lethal necrosis      800               640             240               160
Maize streak disease       800               640             240               160
Gray leaf spot             800               640             240               160
Healthy leaves             800               640             240               160
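As a rough illustration of this three-way split, the sketch below uses PyTorch/torchvision with a hypothetical folder-per-class layout; the proportions loosely follow Table 1 (640 training and 160 testing images per class) but do not reproduce the authors' exact split procedure.

```python
# Sketch of the training / validation / testing split described above, assuming a
# hypothetical folder-per-class layout such as data/MLN, data/MSD, data/GLS, data/Healthy.
import torch
from torchvision import datasets, transforms

dataset = datasets.ImageFolder("data", transform=transforms.ToTensor())

n_total = len(dataset)
n_test = int(0.2 * n_total)                  # e.g. 160 of 800 images per class
n_train = n_total - n_test                   # e.g. 640 of 800 images per class
generator = torch.Generator().manual_seed(42)
train_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, n_test], generator=generator)

# Hold part of the training portion out as a validation set for per-epoch monitoring.
n_val = int(0.25 * n_train)
train_set, val_set = torch.utils.data.random_split(
    train_set, [n_train - n_val, n_val], generator=generator)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=32)
```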
Some maize leaves showing the various diseases identified and classified in this study are shown in Figure 1.
Figure 1. A sample of maize leaf images representing the crop diseases used in this study: (a) Gray leaf spot, (b) Maize lethal necrosis disease, (c) Maize streak disease, (d) Healthy maize leaf
Pre-processing phase
The captured images, initially sized at 256x256 pixels, were resized to match the default input size of each CNN architecture: 224x224x3 pixels for the ResNet-50 network and 227x227x3 pixels for AlexNet.
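A minimal sketch of this resizing step is given below using torchvision transforms; the normalisation statistics are the standard ImageNet values, which are an assumption rather than a detail stated in the paper.

```python
# Resize 256x256 source images to each network's expected input size:
# 224x224x3 for ResNet-50 and 227x227x3 for AlexNet.
from torchvision import transforms

imagenet_norm = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])  # assumed ImageNet stats

resnet50_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    imagenet_norm,
])

alexnet_preprocess = transforms.Compose([
    transforms.Resize((227, 227)),
    transforms.ToTensor(),
    imagenet_norm,
])
```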
Training Phase
We employed pre-trained ResNet-50 and AlexNet networks in the training phase, using transfer learning to adapt the output layers to our specific classification task. For ResNet-50, the final three layers of the original network were replaced and retrained: a new fully connected layer, a softmax layer, and a classification output layer. Transfer learning was applied to AlexNet in a comparable way. The training parameters for AlexNet were configured as follows: the number of iterations was set to 162, with 27 iterations per epoch; the base learning rate was set to 0.0001; and training was conducted for 6 epochs.
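The layer-replacement workflow described above can be expressed in PyTorch as the sketch below: load the pre-trained networks, swap the final classification layers for a new four-class head, and train with the stated base learning rate for 6 epochs. The optimiser, momentum, and batching are assumptions not stated in the paper.

```python
# Hedged transfer-learning sketch: new 4-class head on pre-trained AlexNet / ResNet-50.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # MLN, MSD, GLS, healthy

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)        # new fully connected layer

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier[6] = nn.Linear(alexnet.classifier[6].in_features, NUM_CLASSES)

def train(model, train_loader, epochs=6, lr=1e-4, device="cuda"):
    """Minimal training loop; cross-entropy combines the softmax and classification loss."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimiser = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # assumed optimiser
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()
    return model
```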
Description of CNN Architectures: AlexNet and ResNet50
Convolutional Neural Networks (CNNs) consist of two main components:
feature extraction and classification. The feature extraction segment comprises an input layer, convolutional layer with stride and padding, rectified linear unit (ReLU), pooling layer, and batch normalization layer. The classification part includes a fully connected layer, softmax activation, and output layer (Ramanjaneyulu et al., 2020). In this study, two CNN architectures, namely AlexNet and ResNet50, were utilized (Fig 2).
AlexNet consists of 25 layers, including an input layer, five convolutional layers with various filter sizes (e.g., 11x11, 5x5, and 3x3), seven ReLU layers, two normalization layers, three max-pooling layers, three fully connected layers, two dropout layers with a rate of 0.5, softmax activation, and an output layer (23).
On the other hand, ResNet50 addresses the challenges posed by the increasing number of layers in CNNs, which can make learning more difficult and decrease accuracy. Residual Network (ResNet) tackles this issue by incorporating skip connections between layers. ResNet-50 is a CNN architecture with 50 layers (24).
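As an illustration of the skip connections mentioned above, a minimal residual block might look like the sketch below: the block outputs F(x) + x, so additional depth can fall back to an identity mapping. ResNet-50 itself uses deeper "bottleneck" blocks, so this is a simplified illustration rather than the actual ResNet-50 block.

```python
# Simplified residual block showing the skip connection used by ResNet architectures.
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                             # skip connection carries the input forward
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)         # F(x) + x
```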
Figure 2. Proposed model using AlexNet/ResNet50
Evaluation of the Performance of AlexNet, ResNet-50 and SVM in Classifying Maize Leaf Diseases
Various metrics were used to assess the performance of the three classification models (AlexNet, ResNet-50, and SVM), including accuracy, precision, recall, and F1 score. These metrics provide insights into the models' effectiveness in correctly classifying maize leaf diseases. The calculations for these metrics are outlined below:
Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
In addition to these metrics, the models' performance was assessed by examining the model loss and accuracy. The model loss quantifies the discrepancy between a given input image's predicted and actual output. During training, the model parameters are updated to minimize this loss. Model accuracy represents the proportion of correctly classified instances and is commonly used to evaluate the performance of classification tasks (25).
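For concreteness, these per-class metrics can be computed directly from confusion-matrix counts; the short sketch below does so, and the example uses the AlexNet Gray leaf spot counts reported later in Table 3.

```python
# Per-class metrics computed from confusion-matrix counts, matching the formulas above.
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Example with the AlexNet Gray leaf spot counts from Table 3:
print(classification_metrics(tp=247, tn=243, fp=3, fn=7))
# accuracy ~0.98, precision ~0.988, recall ~0.972, f1 ~0.980
```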
C. Results
Identification of Maize leaf disease using AlexNet, ResNet-50 and SVM
The results of the maize leaf disease identification using AlexNet, ResNet-50, and SVM models are presented in Table 2. All models achieved relatively high accuracies in identifying the three maize leaf diseases. The deep learning convolutional neural network models, AlexNet and ResNet-50, exhibited higher accuracy than the machine learning SVM model.
AlexNet demonstrated the highest accuracy across the diseases, achieving 99.9% accuracy for Maize Streak Disease (MSD) and a lower accuracy of 95.9% for Maize Lethal Necrosis (MLN). ResNet-50, on the other hand, achieved its highest accuracy of 99% for MSD recognition and 94% for MLN. In contrast, the SVM model exhibited lower accuracy in disease detection, with accuracies of 89% for MSD and 87% for Gray Leaf Spot (GLS). For the classification of healthy maize images, AlexNet achieved a perfect accuracy of 100%, while SVM showed the lowest accuracy of 90% (Table 2).
On average, both CNN architectures demonstrated higher accuracy in maize leaf disease detection than the machine learning SVM model (Figure 3). AlexNet exhibited the highest average accuracy of 98.3%, followed by ResNet-50 at 96.6%, and lastly the machine learning SVM model with an average accuracy of 88.5% (Figure 3). Based on these findings, we conclude that the CNN algorithms, specifically AlexNet and ResNet-50, outperformed the machine learning SVM model in maize leaf disease recognition.
Table 2. Accuracy in maize leaf disease detection by the AlexNet, ResNet-50 and SVM models

Disease                  Model        Accuracy in disease identification (%)
Maize lethal necrosis    AlexNet      95.9
                         ResNet-50    94.0
                         SVM          88.0
Maize streak virus       AlexNet      99.9
                         ResNet-50    99.0
                         SVM          89.0
Gray leaf spot           AlexNet      97.4
                         ResNet-50    96.0
                         SVM          87.0
Healthy                  AlexNet      100.0
                         ResNet-50    97.9
                         SVM          90.0
Figure 3. Average accuracy in identification of the three maize leaf diseases by AlexNet, ResNet50 and SVM
Performance of AlexNet, ResNet50 and SVM in the Classification of Maize Leaf Diseases
In the Maize Lethal Necrosis (MLN) disease classification, the AlexNet architecture demonstrated the best performance, achieving a precision, recall, and accuracy of 93.7%, 96.4%, and 95.0%, respectively. ResNet-50 followed with a precision, recall, and accuracy of 94.0%, 89.1%, and 92.9%, while the machine learning SVM model exhibited the lowest values of 88.0%, 83.8%, and 88.4%, respectively (Table 3).
Similarly, for the classification of Maize Streak Disease (MSD), AlexNet outperformed all other models and achieved the highest recall of 100%, with ResNet-50 following closely behind (Table 3). In contrast, the SVM model recorded the lowest precision (89.0%), recall (85.6%), and accuracy (89.6%) for MSD classification. For the classification of the Gray Leaf Spot (GLS) fungal disease, AlexNet demonstrated the highest precision (98.8%), recall (97.2%), F1-score (98.0%), and accuracy (98.0%). ResNet-50 followed with slightly lower performance (Table 3). The SVM model exhibited the lowest recall value of 81.7% for GLS classification.
Table 3. Confusion matrix and performance of the CNN architectures (AlexNet, ResNet-50) and SVM in recognition of maize leaf diseases

Disease    TP    TN    FP    FN    Precision (%)    Recall (%)    F1-score (%)    Accuracy (%)
AlexNet
GLS 247 243 3 7 98.8 97.2 98.0 98.0
MLN 239 236 16 9 93.7 96.4 95.0 95.0
MSV 249 250 1 0 99.6 100.0 99.8 99.8
Healthy 250 250 0 0 100.0 100.0 100.0 100.0
Average score 97.4 97.9 97.6 97.6
ResNet-50
GLS 192 283 8 17 96.0 91.9 93.9 95.0
MLN 188 273 12 23 94.0 89.1 91.5 92.9
MSV 197 299 3 1 98.5 99.5 99.0 99.2
Healthy 195 296 5 4 97.5 98.0 97.7 98.2
Average score 96.2 93.5 94.8 95.7
SVM
GLS 174 261 26 39 87.0 81.7 84.3 87.0
MLN 176 266 24 34 88.0 83.8 85.9 88.4
MSV 178 270 22 30 89.0 85.6 87.3 89.6
Healthy 180 275 20 25 90.0 87.8 88.9 91.0
Average score 88.0 83.7 85.8 88.3
The accuracy of a model provides insights into its overall performance by indicating the correct classification rate, while the loss of a model reflects the quality of its predictions. Ideally, a model should exhibit high accuracy and low loss. During the validation of AlexNet architecture, high model accuracy (Fig. 4a) and low model loss (Fig. 4b) were observed. The model's accuracy increased with each epoch, while the loss function decreased (Figure 4a, b). Similarly, ResNet-50 demonstrated high model accuracy (Fig. 4c) and low model loss (Fig. 4d). These findings highlight the exceptional performance, accuracy, and specificity of both AlexNet and ResNet50 in identifying and classifying maize leaf diseases.
Additionally, the low model loss indicates that the predictions generated by these two CNN architectures were of high quality.
Figure 4. Model accuracy and model loss curves for the proposed models: (a) and (b) for AlexNet; (c) and (d) for ResNet50.
D. Discussion
The difference in disease identification accuracy between AlexNet and ResNet50 in this study can be attributed to their distinct designs. AlexNet, with its fewer layers, exhibits faster training speed and lower susceptibility to overfitting. This aligns with findings from previous research by (26), who utilized a "dropout"
regularization technique to mitigate overfitting and expedite model training. In contrast, ResNet-50, being a deeper and more intricate model compared to AlexNet, can capture more complex features within the data. However, this complexity also makes ResNet50 more prone to overfitting, particularly when dealing with small or noisy datasets. This observation is in line with the results reported by (27), who compared ResNet50 and VGG16 for detecting leaf rust in wheat and found that ResNet50 achieved a validation accuracy of 60% with
datasets containing fewer than 300 samples, while VGG16 achieved an accuracy of 93.33%.
Furthermore, the lower accuracy of the support vector machine (SVM) in maize leaf disease identification compared to the CNN models (AlexNet and ResNet-50) can be explained by the nature of CNNs specifically designed for image data, which is typically high-dimensional and complex. Without manual feature extraction, CNNs can automatically learn relevant features from images, such as textures, shapes, and patterns. In contrast, SVMs require explicit feature definition, which can be time-consuming and less effective at capturing relevant features.
CNNs also have the advantage of learning hierarchical representations, allowing them to capture low-level and high-level features important for plant disease classification. SVMs, on the other hand, are typically limited to linear or kernel-based models, which may not capture complex and non-linear relationships in the data as effectively.
The high performance of AlexNet and ResNet50 in terms of accuracy, precision, and recall can be attributed to the adoption of transfer learning in this study. This finding is consistent with earlier research by (28), who reported increased accuracy in identifying rice lesion images in complex scenarios through transfer learning, achieving an average accuracy of 94%, surpassing standard training accuracy.
E. Conclusion
The results of this study demonstrate the effectiveness of AlexNet and ResNet50 architectures in the identification of maize leaf diseases, achieving average accuracies of 98.3% and 96.6%, respectively. The machine learning SVM model yielded the lowest accuracy of 88.5%. These findings highlight the superior performance of the convolutional neural networks (CNNs), particularly AlexNet and ResNet50, in feature extraction and accurate classification of maize leaf diseases. Evaluation parameters presented in Table 3 further support the superiority of AlexNet and ResNet50 over SVM. AlexNet consistently outperformed the other models, followed by ResNet50, while SVM obtained the lowest scores.
These findings reinforce the significance of employing AlexNet and ResNet-50 for classifying maize leaf diseases. By leveraging these advanced CNN models, farmers and stakeholders involved in maize crop protection can benefit from enhanced accuracy and specificity in disease detection, surpassing the limitations of traditional visual analysis methods commonly used in the field. Adopting the two CNN models offers the advantage of early disease identification, enabling prompt intervention measures to prevent the spread of diseases before they cause significant economic damage. This proactive approach can reduce crop yield losses and maintain overall crop quality. By integrating state-of-the-art technology like AlexNet and ResNet50 into maize disease management practices, farmers can make informed decisions regarding disease control strategies and ultimately improve their crop production and profitability.
F. Acknowledgement
The authors acknowledge the valuable contributions of the following individuals and institutions to this study. First and foremost, we express our sincere gratitude for the collaborative efforts of all the researchers and co-authors involved in this project. We appreciate the support of the University of Embu, Department of ICT, in providing the computing systems used for data analysis and model training. Their expertise and resources were instrumental in conducting the computational aspects of this research. We would also like to acknowledge the Kenya Agricultural and Livestock Research Organization (KALRO) Kabete for collaborating and providing diseased plant samples for testing the CNN models.
Their contribution in supplying the necessary plant samples facilitated the validation and evaluation of our models. Furthermore, we thank the entire research team for their dedication and hard work throughout the project. Their commitment and collaborative efforts have been crucial in the successful execution of this study.
G. References
[1] O. Erenstein, M. Jaleta, K. Sonder, K. Mottaleb, and B. M. Prasanna, ‘Global maize production, consumption and trade: trends and R&D implications’, Food Sec., vol. 14, no. 5, pp. 1295–1319, Oct. 2022, doi: 10.1007/s12571-022-01288-7.
[2] World Food and Agriculture – Statistical Yearbook 2021. FAO, 2021. doi:
10.4060/cb4477en.
[3] P. Bernardo et al., ‘Detection of Diverse Maize Chlorotic Mottle Virus Isolates in Maize Seed’, Plant Disease, vol. 105, no. 6, pp. 1596–1601, Jun.
2021, doi: 10.1094/PDIS-07-20-1446-SR.
[4] M. Kibe et al., ‘Genetic Dissection of Resistance to Gray Leaf Spot by Combining Genome-Wide Association, Linkage Mapping, and Genomic Prediction in Tropical Maize Germplasm’, Front. Plant Sci., vol. 11, p.
572027, Nov. 2020, doi: 10.3389/fpls.2020.57202.
[5] W. D. Batchelor et al., ‘Simulation of Maize Lethal Necrosis (MLN) Damage Using the CERES-Maize Model’, Agronomy, vol. 10, no. 5, p. 710, May 2020, doi: 10.3390/agronomy10050710.
[6] A. K. Charles, W. M. Muiru, D. W. Miano, and J. W. Kimenju, ‘Distribution of Common Maize Diseases and Molecular Characterization of Maize Streak Virus in Kenya’, JAS, vol. 11, no. 4, p. 47, Mar. 2019, doi:
10.5539/jas.v11n4p47.
[7] B. Natesan, A. Singaravelan, J.-L. Hsu, Y.-H. Lin, B. Lei, and C.-M. Liu,
‘Channel–Spatial Segmentation Network for Classifying Leaf Diseases’, Agriculture, vol. 12, no. 11, p. 1886, Nov. 2022, doi:
10.3390/agriculture12111886.
[8] A. Dubois, F. Teytaud, and S. Verel, ‘Short term soil moisture forecasts for potato crop farming: A machine learning approach’, Computers and Electronics in Agriculture, vol. 180, p. 105902, Jan. 2021, doi:
10.1016/j.compag.2020.105902.
[9] E. L. Da Rocha, L. Rodrigues, and J. F. Mari, ‘Maize leaf disease classification using convolutional neural networks and hyperparameter optimization’, in Anais do XVI Workshop de Visão Computacional (WVC 2020), Brasil:
Sociedade Brasileira de Computação - SBC, Oct. 2020, pp. 104–110. doi:
[10] F. Matinuu Sigit, R. Syaifudin, and D. A. Suryaningrum, ‘Disease Detection System in Tobacco Leaves based on Edge Detection with Decision Tree Classification Method’, int, vol. 1, pp. 224–232, Oct. 2022, doi:
10.29407/int.v1i1.2636.
[11] A. Waheed, M. Goyal, D. Gupta, A. Khanna, A. E. Hassanien, and H. M. Pandey,
‘An optimized dense convolutional neural network model for disease recognition and classification in corn leaf’, Computers and Electronics in Agriculture, vol. 175, p. 105456, Aug. 2020, doi:
10.1016/j.compag.2020.105456.
[12] T. Gunstone, T. Cornelisse, K. Klein, A. Dubey, and N. Donley, ‘Pesticides and Soil Invertebrates: A Hazard Assessment’, Front. Environ. Sci., vol. 9, p.
643847, May 2021, doi: 10.3389/fenvs.2021.643847.
[13] A. Owino, ‘Challenges of Computer Vision Adoption in the Kenyan Agricultural Sector and How to Solve Them: A General Perspective’, Advances in Agriculture, vol. 2023, pp. 1–9, Mar. 2023, doi:
10.1155/2023/1530629.
[14] R. Kumar, A. Chug, A. P. Singh, and D. Singh, ‘A Systematic Analysis of Machine Learning and Deep Learning Based Approaches for Plant Leaf Disease Classification: A Review’, Journal of Sensors, vol. 2022, pp. 1–13, Jul.
2022, doi: 10.1155/2022/3287561.
[15] E. C. Too, L. Yujian, S. Njuki, and L. Yingchun, ‘A comparative study of fine-tuning deep learning models for plant disease identification’, Computers and Electronics in Agriculture, vol. 161, pp. 272–279, Jun. 2019, doi:
10.1016/j.compag.2018.03.032.
[16] M. A. Genaev, E. S. Skolotneva, E. I. Gultyaeva, E. A. Orlova, N. P. Bechtold, and D. A. Afonnikov, ‘Image-Based Wheat Fungi Diseases Identification by Deep Learning’, Plants, vol. 10, no. 8, p. 1500, Jul. 2021, doi:
10.3390/plants10081500.
[17] S. Jasrotia, J. Yadav, N. Rajpal, M. Arora, and J. Chaudhary, ‘Convolutional Neural Network Based Maize Plant Disease Identification’, Procedia Computer Science, vol. 218, pp. 1712–1721, 2023, doi:
10.1016/j.procs.2023.01.149.
[18] R. U. Khan, K. Khan, W. Albattah, and A. M. Qamar, ‘Image-Based Detection of Plant Diseases: From Classical Machine Learning to Deep Learning Journey’, Wireless Communications and Mobile Computing, vol. 2021, pp. 1–
13, Jun. 2021, doi: 10.1155/2021/5541859.
[19] Z. Yu, J. Ye, C. Li, H. Zhou, and X. Li, ‘TasselLFANet: a novel lightweight multi-branch feature aggregation neural network for high-throughput image-based maize tassels detection and counting’, Front. Plant Sci., vol. 14, p.
1158940, Apr. 2023, doi: 10.3389/fpls.2023.1158940.
[20] L. Bousset, M. Palerme, M. Leclerc, and N. Parisey, ‘Automated image processing framework for analysis of the density of fruiting bodies of Leptosphaeria maculans on oilseed rape stems’, Plant Pathol, vol. 68, no. 9, pp. 1749–1760, Dec. 2019, doi: 10.1111/ppa.13085.
[21] Y. Zhang, H. Wang, C. Li, and F. Meriaudeau, ‘Data augmentation aided complex-valued network for channel estimation in underwater acoustic orthogonal frequency division multiplexing system’, The Journal of the
Acoustical Society of America, vol. 151, no. 6, pp. 4150–4164, Jun. 2022, doi:
10.1121/10.0011674.
[22] E. Alehegn, ‘Ethiopian maize diseases recognition and classification using support vector machine’, IJCVR, vol. 9, no. 1, p. 90, 2019, doi:
10.1504/IJCVR.2019.098012.
[23] P. Purwono, A. Ma’arif, W. Rahmaniar, H. I. K. Fathurrahman, A. Z. K. Frisky, and Q. M. U. Haq, ‘Understanding of Convolutional Neural Network (CNN): A Review’, IJRCS, vol. 2, no. 4, pp. 739–748, Jan. 2023, doi:
10.31763/ijrcs.v2i4.888.
[24] G. Ganesan and J. Chinnappan, ‘Hybridization of ResNet with YOLO classifier for automated paddy leaf disease recognition: An optimized model’, Journal of Field Robotics, vol. 39, no. 7, pp. 1085–1109, Oct. 2022, doi:
10.1002/rob.22089.
[25] F. J. P. Montalbo and A. A. Hernandez, ‘Classifying Barako coffee leaf diseases using deep convolutional models’, Int. J. Adv. Intell. Informatics, vol. 6, no. 2, p. 197, Jul. 2020, doi: 10.26555/ijain.v6i2.495.
[26] L. S. Chow, G. S. Tang, M. I. Solihin, N. M. Gowdh, N. Ramli, and K. Rahmat,
‘Quantitative and Qualitative Analysis of 18 Deep Convolutional Neural Network (CNN) Models with Transfer Learning to Diagnose COVID-19 on Chest X-Ray (CXR) Images’, SN COMPUT. SCI., vol. 4, no. 2, p. 141, Jan. 2023, doi: 10.1007/s42979-022-01545-8.
[27] S. Sood and H. Singh, ‘An implementation and analysis of deep learning models for the detection of wheat rust disease’, in 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India:
IEEE, Dec. 2020, pp. 341–347. doi: 10.1109/ICISS49785.2020.9316123.
[28] J. Chen, J. Chen, D. Zhang, Y. Sun, and Y. A. Nanehkaran, ‘Using deep transfer learning for image-based plant disease identification’, Computers and Electronics in Agriculture, vol. 173, p. 105393, Jun. 2020, doi:
10.1016/j.compag.2020.105393.