Fig. 5 Adaboost classifier
3.4 Adaboost
AdaBoost (short for "Adaptive Boosting") is a machine-learning algorithm formulated by Yoav Freund and Robert Schapire; its structure is shown in Fig. 5. It can be combined with other learning algorithms to boost their performance [7]. This is achieved by iteratively re-weighting weak learners. Boosting refers to a general and provably effective method of producing a very accurate classifier by combining rough and moderately inaccurate rules of thumb.
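As a rough illustration only (not the exact pipeline used in this work), the following Python sketch, assuming scikit-learn and a synthetic stand-in for the extracted EMG feature matrix, shows how an AdaBoost classifier built from decision-stump weak learners can be trained and evaluated:

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic placeholder data standing in for the extracted EMG features
# (the real TMJ data set is not reproduced here).
X, y = make_classification(n_samples=84, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# AdaBoost combines many weak learners (by default, depth-1 decision stumps),
# re-weighting the training samples so that each new stump focuses on the
# examples the previous ones misclassified.
ada = AdaBoostClassifier(n_estimators=50, random_state=0)
ada.fit(X_train, y_train)
print("AdaBoost test accuracy:", ada.score(X_test, y_test))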
3.5 Naïve Bayes Algorithm
It is a classification technique based on Bayes' theorem with an assumption of independence between predictors. In simple terms, a Naïve Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. Bayes' theorem provides a way to determine the posterior probability P(c|x) from P(c), P(x) and P(x|c) [7].
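Written out explicitly (the standard statement of the theorem, with c a class and x = (x_1, …, x_n) the feature vector):

P(c \mid x) = \frac{P(x \mid c)\, P(c)}{P(x)}, \qquad P(x \mid c) = \prod_{i=1}^{n} P(x_i \mid c)

The product form of P(x|c) is precisely the "naïve" conditional-independence assumption that gives the classifier its name.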
The performance of the classification model is evaluated on the test dataset. It is important to choose appropriate metrics to evaluate the model performance, such as the confusion matrix, accuracy, specificity, sensitivity, etc. The following formulas are used to find the performance metrics [13]:
Accuracy—Accuracy is the most intuitive performance measure and is simply the ratio of correctly predicted observations to the total observations.
Precision—Precision is the ratio of correctly predicted positive observations to the total predicted positive observations. High precision corresponds to a low false positive rate.
Recall (Sensitivity)—Recall is the ratio of correctly predicted positive observations to all observations in the actual positive class.
F1 score—The F1 score is the harmonic mean of Precision and Recall, so it takes both false positives and false negatives into account. It is not as intuitive as accuracy, but F1 is usually more informative than accuracy, especially when the class distribution is uneven.
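Written in terms of the confusion-matrix counts, true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), these are the standard definitions, consistent with [13]:

\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Precision} = \frac{TP}{TP + FP},

\text{Recall} = \frac{TP}{TP + FN}, \qquad F1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}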
In the biomedical field, classification has acquired a lot of importance in machine learning. Classification techniques learn a model from a set of training data and assign test data to one of the classes. The analysis of a few existing classification algorithms [12, 15] related to this study, together with their comparative efficiency parameters, will help in studying new algorithms.
The key to achieving better classification accuracy in the analysis of EMG signal features is the set of selected features. This article's main purpose is to determine the consistency of the classification [13, 14]. It discusses a few supervised Machine Learning (ML) classification strategies [12], compares them, and determines the most successful classification algorithm based on the data collection, the number of instances and the number of variables (features). Five separate machine learning algorithms were considered: k-Nearest Neighbor (KNN), Decision tree, Naïve Bayes, Logistic regression and Support Vector Machine. The TMJ disorder data set, with 84 instances, ten features as independent variables and one dependent variable, was used for the classification study (a setup sketch is given below).
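A minimal sketch of such a comparison, assuming scikit-learn and using a synthetic stand-in for the TMJ data set (84 instances, ten features; the real EMG recordings are not reproduced here), could look as follows:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Synthetic stand-in for the TMJ disorder data set: 84 instances, 10 features.
X, y = make_classification(n_samples=84, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# The five supervised classifiers compared in this study.
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "hold-out accuracy:", round(model.score(X_test, y_test), 3))

The simple hold-out split here is only illustrative; the evaluation reported in this study relies on cross-validation, described next.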
Cross-validation is a method for testing machine learning models by training several models on subsets of the available input data and evaluating them on the complementary subsets. Cross-validation is used to detect overfitting, i.e. failure to generalize a pattern.
4.2 K-Fold Cross-Validation
Cross-validation, shown in Fig. 6, is a resampling technique used to evaluate machine learning models on a limited data set. A common way to perform it is k-fold cross-validation, in which the input data are split into k subsets (also known as folds).
Fig. 6 k-fold cross-validation model
The ML model is trained on all but one (k−1) of the folds and then tested on the fold that was not used for training [14]. This procedure is repeated k times, each time with a different fold held out for evaluation (and omitted from training), as sketched below.
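A corresponding sketch with scikit-learn, again on a synthetic stand-in data set, illustrates tenfold cross-validation for one of the classifiers (SVM is used here purely as an example):

from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in data (the TMJ EMG features are not reproduced here).
X, y = make_classification(n_samples=84, n_features=10, random_state=0)

# k-fold cross-validation: the data are split into k folds; in each round the
# model is trained on k-1 folds and tested on the held-out fold.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print("Fold accuracies:", scores)
print("Mean accuracy over 10 folds:", scores.mean())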
4.3 Results
The final results are shown in Fig. 7.
Fig. 7 Overall results of models with tenfold cross validation
5 Conclusion
Temporomandibular disorder is a condition that can leave the individual in severe pain and agony, and unable to chew food properly. At present, a proper diagnosis and a structured treatment plan are not widely available because of the cost of advanced equipment, the technique sensitivity of handling that equipment, and inadequate understanding of the multidisciplinary approach required. The literature survey made clear that several machine learning algorithms can be applied to the extracted and selected features. Among the five supervised classification algorithms, the results indicate that SVM was not found to be the most reliable and effective algorithm. Comparative analysis of the classifiers shows that the Decision tree, Logistic regression and Random Forest classification algorithms proved to be reliable under tenfold cross-validation.
References
1. Uddin F, Baten RBA, Rita SN, Sadat SA, Chowdhury NM (2017) Management of temporomandibular joint dysfunction syndrome: an overview. J Bangladesh Coll Physicians Surg 35(3):133–141
2. Klasser GD, Okeson JP (2006) Electromyography in the diagnosis and treatment of temporomandibular disorders. Oral Health 137:763–771
3. Mapelli BC, Zanandréa Machado LD, Giglio CS, De Felício CM (2016) Reorganization of muscle activity in patients with chronic temporomandibular disorders. Arch Oral Biol 72:164–171
4. Bianchi J et al (2020) Osteoarthritis of the temporomandibular joint can be diagnosed earlier using biomarkers and machine learning. Sci Rep 10(1):1–14
5. Spiewak C (2018) A comprehensive study on EMG feature extraction and classifiers. Open Access J Biomed Eng Biosci 1(1):1–10
6. Khezri M, Jahed M (2007) Real-time intelligent pattern recognition algorithm for surface EMG signals. Biomed Eng Online 6:1–12
7. Sweeney EM et al (2014) A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI. PLoS ONE 9(4)
8. Chowdhury RH, Reaz MBI, Bin Mohd Ali MA, Bakar AAA, Chellappan K, Chang TG (2013) Surface electromyography signal processing and classification techniques. Sensors (Switzerland) 13(9):12431–12466
9. Latif R, Sanei S, Shave C, Carter E (2008) Classification of temporomandibular disorder from electromyography signals via directed transfer function. In: 30th annual international conference of the IEEE engineering in medicine and biology society, vol 2, no 3, pp 2904–2907
10. Pinho JC, Caldas FM, Mora MJ, Santana-Penín U (2008) Electromyographic activity in patients with temporomandibular disorders. J Oral Rehabil 27(11):985–990
11. Suvinen TI, Kemppainen P (2007) Review of clinical EMG studies related to muscle and occlusal factors in healthy and TMD subjects
12. Khammas BM, Monemi A, Bassi JS, Ismail I, Nor SM, Marsono MN (2015) Feature selection and machine learning classification for malware detection. J Teknol 77(1):243–250
13. Khan MMR, Arif RB, Siddique AB, Oishe MR (2019) Study and observation of the variation of accuracies of KNN, SVM, LMNN, ENN algorithms on eleven different datasets from UCI machine learning repository. In: 4th international conference on electrical engineering information and communication technology (ICEEICT) 2018, pp 124–129
14. Kucuk H, Tepe C, Eminoglu I (2013) Classification of EMG signals by k-nearest neighbor algorithm and support vector machine methods, pp 1–4
15. Al-Faiz MZ, Ali AA, Miry AH (2010) A k-nearest neighbor based algorithm for human arm movements recognition using EMG signals. In: 1st conference on energy, power and control (EPC-IQ 2010), vol 6, no 2, pp 159–167