6.5 Evaluating academic performance using weights (absolute values)
6.5.4 Comparing the two systems in terms of the objectives indicated in section 5.4
This section compares the results of the conventional weighting system with the results of the newly developed fuzzy-based system in terms of the objectives indicated in section 5.4. Before the comparisons can be made, the scores of the 4 evaluators (for the manual weighting system) must first be consolidated by computing the average for each criterion with respect to each alternative. The performance score for 𝐴1 in terms of 𝐶1 (Administration), for example, is calculated as follows: (6 + 8 + 7 + 6) / 4 = 6.75. The remaining calculations follow by analogy and are indicated in Table 6-13.
𝑪𝟏 𝑪𝟐 𝑪𝟑 𝑪𝟒 𝑪𝟓 𝑪𝟔
𝑨𝟏 6.75 15.75 16.75 17.00 5.40 10.50
𝑨𝟐 5.75 14.13 21.00 15.63 3.00 8.13
𝑨𝟑 7.25 11.88 20.21 12.50 6.40 11.25
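The consolidation step is a per-cell arithmetic mean over the evaluators' scores. A minimal sketch in Python, using the evaluator scores for 𝐴1 on 𝐶1 from the worked example above (the function name is illustrative):

```python
def consolidate(scores):
    """Average the scores given by the evaluators for one
    alternative/criterion pair."""
    return sum(scores) / len(scores)

# Worked example from the text: A1 evaluated on C1 (Administration)
# by the four evaluators.
a1_c1 = consolidate([6, 8, 7, 6])
print(a1_c1)  # 6.75
```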
In section 5.6, the fuzzy performance matrix was calculated and the equivalent BNP values are indicated in Table 6-14.
𝑪𝟏 𝑪𝟐 𝑪𝟑 𝑪𝟒 𝑪𝟓 𝑪𝟔
𝑨𝟏 0.32 0.38 0.44 0.35 0.46 0.32
𝑨𝟐 0.31 0.38 0.45 0.36 0.36 0.30
𝑨𝟑 0.32 0.38 0.46 0.31 0.47 0.32
Table 6-13: Results of the manual weighting system
Table 6-14: BNP values for the Fuzzy Performance matrix
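If, as is common in fuzzy AHP studies, the fuzzy performance scores are triangular fuzzy numbers (l, m, u) defuzzified by the centre-of-area method to obtain the BNP (Best Non-fuzzy Performance) value, the conversion can be sketched as follows (the triangular number used here is illustrative, not taken from section 5.6):

```python
def bnp(l, m, u):
    """Centre-of-area defuzzification of a triangular fuzzy number
    (l, m, u): BNP = l + ((u - l) + (m - l)) / 3, which simplifies
    to (l + m + u) / 3."""
    return l + ((u - l) + (m - l)) / 3

# Illustrative triangular fuzzy score (not from section 5.6).
print(round(bnp(0.25, 0.32, 0.40), 2))  # 0.32
```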
Table 6-13 and Table 6-14 indicate the evaluation scores of the manual weighting system and of the fuzzy-based system with respect to the six criteria. The results in these two tables will be used to show the differences between the two evaluation methods, focusing on the extent to which each method is able to address the objectives mentioned in section 5.4. The comparisons between the two evaluation methods will show disparities, some of which overlap (or are similar) across objectives. The researcher therefore chose the 5 most important objectives for discussion.
Objective 1: Determine the overall performance of an academic
This is achieved by examining the row values for each academic in both tables. When the manual weighting system (Table 6-13) was used, the results indicated that academic 𝐴1 generally performed well in 5 of the performance areas but had the lowest score for Research and Innovation (𝐶3). When the fuzzy AHP method was used, academic 𝐴1 likewise performed well in 5 areas, with the lowest BNP value among the academics for Research and Innovation (𝐶3). The two evaluation methods therefore concur regarding the general performance of 𝐴1. The results of the manual weighting system indicated that academic 𝐴2 is an average performer (compared to the other 2 academics), since he had the lowest performance values in 3 areas: Administration (𝐶1), Consultancy (𝐶5), and Services rendered and External Engagement (𝐶6). The fuzzy AHP results (Table 6-14) also gave 𝐴2 the lowest scores for the same three criteria, so the two evaluation methods again concur for academic 𝐴2.
Both evaluation methods indicate that academic 𝐴3 is generally a good performer, since he has the highest rating in 5 performance areas compared to the other 2 academics. However, the disparity lies in the criterion in which 𝐴3 performed poorly: the results of the manual weighting system indicated that he requires improvement in Teaching and Supervision (𝐶2), while the results of the fuzzy AHP method indicated that he needs to improve in Writing and Publication (𝐶4).
One can therefore conclude that there is not much disparity between the evaluation results of the two methods; the discrepancy lies in which performance area requires improvement for academic 𝐴3. This disparity can be attributed to the inconsistent scoring patterns for 𝐴3 under the manual weighting system and the fuzzy AHP method: when the scoring patterns for 𝐴3 with respect to Teaching and Supervision (𝐶2) and Writing and Publication (𝐶4) are analysed, they do not resemble each other across the two methods. Hence the disparity regarding which performance area requires improvement for 𝐴3. The reasons why the scoring patterns do not resemble each other are discussed in section 6.5.5 (the conclusion section).
Objective 2: To determine the strongest and weakest performance areas of an academic
In order to determine this, the values in each column of both tables are examined. Table 6-15 indicates the strongest and weakest performance areas for all 3 academics under both evaluation methods.
Table 6-15: Strongest and weakest performance areas

      Strongest performance     Strongest performance       Weakest performance     Weakest performance
      area (weighting system)   area (fuzzy AHP)            area (weighting system) area (fuzzy AHP)
𝑨𝟏    𝐶2 and 𝐶4                 𝐶1, 𝐶2 and 𝐶6               𝐶3                      𝐶3
𝑨𝟐    𝐶3                        𝐶2 and 𝐶4                   𝐶1, 𝐶5 and 𝐶6           𝐶1, 𝐶5 and 𝐶6
𝑨𝟑    𝐶1, 𝐶5 and 𝐶6             𝐶1, 𝐶2, 𝐶3, 𝐶5 and 𝐶6       𝐶2 and 𝐶4               𝐶4

From Table 6-15, it is evident that the two evaluation methods generally concur for most of the criteria and alternatives (academics). Both methods identify criterion 𝐶2 as a strength and criterion 𝐶3 as a weakness for academic 𝐴1, and both identify 𝐶1, 𝐶5 and 𝐶6 as weaknesses for 𝐴2. Both methods also identify academic 𝐴3 as having strengths in most performance areas compared to the other two academics.
However, a disparity arises in the results for academic 𝐴2, where there is little concurrence between the two evaluation methods on his strengths. This is because the three criteria 𝐶2, 𝐶3 and 𝐶4 are allocated the largest share of percentage points, namely 65% (20% for 𝐶2, 25% for 𝐶3 and 20% for 𝐶4), compared to the other three criteria. This high allocation of scores for each of these criteria means that the scoring patterns of evaluators will be more divergent under the manual weighting system; in other words, evaluators have a wider range of absolute values from which to choose a score. This deduction was confirmed by the results of the survey of evaluators in section 6.5.3.
One might, however, ask why 𝐴1 and 𝐴3 showed more convergence (between the two evaluation methods) than 𝐴2, even though the same criteria are used for evaluating all academics. The reason is that the scoring pattern for 𝐴2 under the manual weighting system did not closely resemble the scoring pattern under fuzzy AHP; the results of the two evaluation methods for 𝐴2 therefore differ when the strengths are taken into consideration.
One should, however, note that where the scoring by evaluators has the potential to be divergent, linguistic values (that is, a fuzzy-based approach) should be used, as discussed in section 6.5.3 above. Further, 𝐶2, 𝐶3 and 𝐶4 have the most sub-criteria of the six criteria, which further reduces the reliability of the evaluation process. This was confirmed by a study conducted by Tseun-Ho et al. (2012), which indicated that a criterion with more sub-criteria produced less reliable results than a criterion with fewer sub-criteria when the manual weighting system is used. It is for these reasons that there is a disparity in the strengths and weaknesses for 𝐴2, as indicated in Table 6-15.
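The strongest and weakest areas in Table 6-15 can be reproduced by taking, for each criterion, the best- and worst-scoring academic. A sketch using the manual-weighting scores from Table 6-13:

```python
# Scores from Table 6-13 (manual weighting system):
# rows = academics, columns = criteria C1..C6.
scores = {
    "A1": [6.75, 15.75, 16.75, 17.00, 5.40, 10.50],
    "A2": [5.75, 14.13, 21.00, 15.63, 3.00, 8.13],
    "A3": [7.25, 11.88, 20.21, 12.50, 6.40, 11.25],
}
criteria = ["C1", "C2", "C3", "C4", "C5", "C6"]

strongest = {a: [] for a in scores}
weakest = {a: [] for a in scores}
for j, c in enumerate(criteria):
    # For each criterion, find the highest- and lowest-scoring academic.
    column = {a: row[j] for a, row in scores.items()}
    strongest[max(column, key=column.get)].append(c)
    weakest[min(column, key=column.get)].append(c)

print(strongest)  # {'A1': ['C2', 'C4'], 'A2': ['C3'], 'A3': ['C1', 'C5', 'C6']}
print(weakest)    # {'A1': ['C3'], 'A2': ['C1', 'C5', 'C6'], 'A3': ['C2', 'C4']}
```

This matches the weighting-system columns of Table 6-15.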
Objective 3: To delegate duties to academics according to their strengths
As discussed under objective 2 above, 𝐶2, 𝐶3 and 𝐶4 showed more divergence between the two evaluation methods because these criteria were allocated the most percentage points. It is, however, noticeable that 𝐶1 and 𝐶6 showed more convergence because they were allocated fewer percentage points, that is, 10% for 𝐶1 and 15% for 𝐶6 (as indicated in Table 6-6).
Therefore, if the Head of Department wishes to appoint an academic to head the Administration (𝐶1) section of the department, then 𝐴3 should be chosen, because this academic scored the highest under both evaluation methods. In other words, both methods showed convergence for 𝐶1, indicating that this choice (𝐴3) is fairly reliable under either method.
However, if an academic must be chosen to head the Research (𝐶3) section, then 𝐴2 is chosen when the manual weighting system is used, while 𝐴3 is chosen when fuzzy AHP is used.
This discrepancy is due to the reasons discussed under objective 2. Choosing 𝐴3 on the basis of fuzzy AHP is therefore more reliable than choosing 𝐴2 on the basis of the manual weighting system. A similar argument can be made when selecting academics for the other criteria.
Objective 4: Show the overall performance of all academics in all key areas
Such information may be required by the Dean when compiling the annual report, and it is more quantitative (tangible) in nature. The Head of Department may request the following information from the computer system, which stores the Table 5-1 data for each academic in a department: the number of publications, the number of conferences attended, the projects completed, and the number of Masters and PhD students who have graduated in a department. This is one of the instances where there are no disparities in the results of the two evaluation methods, since only quantitative data is being manipulated; the output is therefore the same for both.
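The departmental summary described here amounts to simple aggregation over the stored records. A hypothetical sketch (the record values and field names are illustrative stand-ins for the Table 5-1 data, not actual figures from the study):

```python
# Hypothetical per-academic records; field names are illustrative
# stand-ins for the Table 5-1 data stored by the system.
records = [
    {"academic": "A1", "publications": 4, "conferences": 2, "graduated_postgrads": 1},
    {"academic": "A2", "publications": 6, "conferences": 1, "graduated_postgrads": 3},
    {"academic": "A3", "publications": 5, "conferences": 3, "graduated_postgrads": 2},
]

# Departmental totals for the annual report: identical under either
# evaluation method, since only raw quantitative data is summed.
totals = {
    field: sum(r[field] for r in records)
    for field in ("publications", "conferences", "graduated_postgrads")
}
print(totals)  # {'publications': 15, 'conferences': 6, 'graduated_postgrads': 6}
```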
Objective 5: Rank academics in terms of all six key performance areas
In order to rank academics in all six key performance areas using the manual weighting system, the total average score over the 4 evaluators is calculated. For academic 𝐴1, the average score is calculated as follows: 𝐴1 = (67.5 + 76 + 74.5 + 70) / 4 = 72. The average scores for the other two academics are calculated in an identical manner: 𝐴2 = 67.5 and 𝐴3 = 69.5.
Therefore, the rankings are as follows: Number 1 = 𝐴1, Number 2 = 𝐴3 and Number 3 = 𝐴2. For fuzzy AHP, the ranking was done in section 5.6 using the Fuzzy TOPSIS method, and the results were Number 1 = 𝐴2, Number 2 = 𝐴3 and Number 3 = 𝐴1. When the rankings are compared, both evaluation methods rank 𝐴3 second. The disparity lies in the ranking of 𝐴1 and 𝐴2, although the differences between the three average scores under the manual weighting system are small. These small differences are nevertheless significant, especially for promotion and awards. The reason for the disparity in the scores for 𝐴1 and 𝐴2 under the two methods is explained in the conclusion of this section.
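The manual-weighting ranking can be reproduced from the figures above; only 𝐴1's four evaluator totals appear in the text, so the averages for 𝐴2 and 𝐴3 are taken directly as stated:

```python
# Evaluator totals for A1 from the worked example; the averages for
# A2 and A3 are taken directly from the text.
a1_avg = sum([67.5, 76, 74.5, 70]) / 4
averages = {"A1": a1_avg, "A2": 67.5, "A3": 69.5}

# Rank academics from highest to lowest average score.
ranking = sorted(averages, key=averages.get, reverse=True)
print(a1_avg)   # 72.0
print(ranking)  # ['A1', 'A3', 'A2']
```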