
7.7 Statistical Analysis: Statement of findings, interpretation and discussion of the data

7.7.6 Factor and statistical analysis for questions 8 and 9


were evaluated based on a theatre production, while some respondents from the Health Sciences Faculty were evaluated based on innovative and ground-breaking research.

This section has described the results of the survey for questions 1 to 7 of the research questionnaire.

In order to address the objectives of the remaining questions, inferential techniques are used. These include correlations and chi-square tests, which are interpreted using their p-values.
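As a minimal sketch of the inferential step described above, the following example shows how a chi-square test of independence can be interpreted via its p-value. The counts are illustrative only and are not data from this survey.

```python
# Hypothetical example: testing whether responses to a Likert item are
# independent of faculty, using a chi-square test and its p-value.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: two faculties; columns: Disagree / Neutral / Agree counts (illustrative).
observed = np.array([[12, 30, 58],
                     [25, 28, 47]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")

# A p-value below 0.05 would suggest the scoring pattern differs by faculty.
if p < 0.05:
    print("Reject independence at the 5% level.")
else:
    print("No evidence against independence at the 5% level.")
```

For a 2 x 3 table the degrees of freedom are (2 - 1)(3 - 1) = 2; SciPy applies the Yates continuity correction only to 2 x 2 tables.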


Table 7-4 presents the results of the KMO and Bartlett tests for question 8.

Kaiser-Meyer-Olkin Measure of Sampling Adequacy: .505
Bartlett's Test of Sphericity: Approx. Chi-Square 1546.228; df 78; Sig. .000

Table 7-4: KMO and Bartlett's Test for question 8

The KMO test yields a value of 0.505, which is greater than 0.50, and the Bartlett test yields a significance of 0.000, which is less than 0.05. Both requirements were met, and the factor analysis procedure was therefore implemented. The results of the factor analysis procedure are indicated in Table 7-5.
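The two adequacy checks above can be computed directly from a data matrix. The sketch below (not the author's SPSS procedure) implements Bartlett's test of sphericity and the KMO measure from their standard definitions; the random data is illustrative only.

```python
# Minimal sketch of Bartlett's sphericity test and the KMO measure.
import numpy as np
from scipy.stats import chi2 as chi2_dist

def bartlett_sphericity(X):
    """Bartlett's test: H0 is that the correlation matrix is an identity."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    p_value = chi2_dist.sf(statistic, df)
    return statistic, df, p_value

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0.5+ is acceptable)."""
    R = np.corrcoef(X, rowvar=False)
    S = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    P = -S / d                      # partial correlations (off-diagonal)
    mask = ~np.eye(R.shape[0], dtype=bool)
    r2 = (R[mask] ** 2).sum()       # squared simple correlations
    p2 = (P[mask] ** 2).sum()       # squared partial correlations
    return r2 / (r2 + p2)

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 13))      # 150 respondents, 13 items (illustrative)
stat, df, p_value = bartlett_sphericity(X)
print(f"Bartlett chi-square = {stat:.3f}, df = {df:.0f}, p = {p_value:.4f}")
print(f"KMO = {kmo(X):.3f}")
```

With 13 items the Bartlett test has 13 x 12 / 2 = 78 degrees of freedom, matching Table 7-4.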

Statement (loadings on components 1, 2 and 3):

Creating a computerised portfolio of an academic makes evaluation and productivity estimation easier.   .339   .064   .640
Rating a university in terms of its research output in all its departments collectively can be easily done using a computerised production estimation system.   .206   -.041   .821
Current evaluation methods are effective in meeting SAQA (South African Quality Assurance) requirements.   .590   .486   .302
Current evaluation methods are able to meet the principles (such as standards, quality and excellence) of the National Quality Framework.   .532   .353   .343
Present evaluation methods at DUT are capable of benchmarking academic productivity.   .518   .698   .063
Current evaluation methods are successful in measuring an academic's productivity (that is, an academic's efficiency and effectiveness).   .671   .525   .197
Current evaluation methods are able to determine whether minimum standards in terms of departmental requirements can be met.   .774   .168   .387
Present methods of evaluation are successful in measuring the productivity of an academic department as a whole.   .924   .013   .159
Present methods of evaluation are able to fairly select candidates who are due for promotion.   .669   .244   -.157
Current evaluation methods can be used to determine whether an academic is due for a merit award.   .194   .886   .000
Current evaluation methods are effective in monitoring and processing performance in terms of the core strategic goals such as teaching and learning, research and external engagement.   .165   .735   .264
Present evaluation methods have been successful in identifying the strengths and weaknesses of academic staff in terms of these core strategic goals.   .112   .943   .058

Table 7-5: Results of factor analysis for question 8

Table 7-5 indicates that the variables that constituted question 8 loaded along three components. This implies that respondents identified certain aspects of the sub-themes as belonging to other sub-sections. The theme for the first column can be summarised as whether current evaluation methods are able to estimate the productivity of academic staff so that minimum standards pertaining to a department and external bodies can be met. The theme for the second column is whether current evaluation methods can monitor and process an academic's performance in terms of the core strategic goals, as well as for promotion or awards. The theme for the third column is whether a computerised system can make evaluation easier in terms of creating a portfolio of academic staff and rating a university in terms of its research outputs.
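The grouping of statements into themes follows from which component each statement loads on most strongly, considering loadings of at least 0.5. A short illustrative sketch of that step, using the first four rows of Table 7-5:

```python
# Illustrative sketch: assign each statement to the component on which it
# loads highest, considering only loadings of at least 0.5 in magnitude.
import numpy as np

# Loadings for the first four statements of Table 7-5 (components 1-3).
loadings = np.array([
    [.339, .064, .640],   # computerised portfolio
    [.206, -.041, .821],  # rating a university
    [.590, .486, .302],   # SAQA requirements
    [.532, .353, .343],   # National Quality Framework
])

for i, row in enumerate(loadings):
    best = int(np.argmax(np.abs(row)))
    if abs(row[best]) >= 0.5:
        print(f"Statement {i + 1}: component {best + 1} (loading {row[best]:.3f})")
    else:
        print(f"Statement {i + 1}: no loading reaches 0.5")
```

Here the first two statements fall on component 3 and the latter two on component 1, matching the themes described above.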

Figure 7-7 presents the results of the scoring patterns of the respondents for question 8. The categories “Strongly Agree” and “Agree” have been collapsed into a single category called “Agree”. The categories “Strongly Disagree” and “Disagree” have been collapsed into a single category called “Disagree”. The category “Neutral” remains unchanged. This is permissible due to the acceptable levels of reliability. The results are first presented using summarised percentages for the variables that constitute each section, and then described.
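The category-collapsing step described above can be sketched as a simple mapping from the five Likert responses onto three categories before percentages are computed. The response list below is illustrative, not survey data.

```python
# Sketch of collapsing a 5-point Likert scale into 3 categories.
from collections import Counter

collapse = {
    "Strongly Agree": "Agree",
    "Agree": "Agree",
    "Neutral": "Neutral",
    "Disagree": "Disagree",
    "Strongly Disagree": "Disagree",
}

# Illustrative responses for one questionnaire item.
responses = ["Strongly Agree", "Agree", "Neutral",
             "Disagree", "Strongly Disagree", "Agree"]
collapsed = [collapse[r] for r in responses]

counts = Counter(collapsed)
percentages = {k: 100 * v / len(collapsed) for k, v in counts.items()}
print(percentages)
```

The summarised percentages per item are then what Figure 7-7 plots.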


[Horizontal bar chart: for each of the thirteen statements of question 8 — including a statement on preferring a human-intensive evaluation system (such as interviews) over a data-intensive one — the percentage of respondents who disagreed, were neutral, or agreed.]

Figure 7-7: Results of scoring patterns for question 8


Evaluation and productivity estimation play an integral part in measuring the performance of academic staff at the Durban University of Technology. This is clearly demonstrated by the results depicted in Figures 7-3 and 7-4, which indicate that at least 70% of respondents have undergone some kind of evaluation. However, respondents are unhappy with the evaluation methods currently implemented at DUT. The results in Figure 7-7 indicate that more than half of the respondents feel that current evaluation methods cannot identify the strengths and weaknesses of academic staff. They also feel that current evaluation methods cannot efficiently determine whether an academic is due for a merit award, and that current procedures are unable to fairly select candidates who are due for promotion.

Estimating the productivity of academic departments is difficult due to the qualitative nature of the attributes to be measured (Lee, 2010). Presently, quantitative techniques are used to measure qualitative attributes, so the outputs are inefficient and unreliable. It is for these reasons that 83% of respondents agreed that current estimation methods are unreliable and inefficient. The development of a new system is therefore seen as necessary: 61% of respondents indicated that creating a computerised portfolio of an academic makes evaluation and productivity estimation easier and more efficient. Academic staff members are constantly involved in research and publications.

Presently, no system exists at DUT that can collectively rate an academic department, or the university as a whole, in terms of its research and publications. This is confirmed by the results in Figure 7-7, which indicate that 70% of respondents agree that a computerised system can effectively be used to rate a university in terms of its research outputs. The results for the first four questions in Figure 7-7 indicate that more than 50% of respondents remained neutral on these questions. This could be attributed to the fact that respondents have been subjected to only one method of evaluation (that is, the manual weighting system) and therefore cannot make a comparison with any other evaluation technique; their best option was therefore to remain neutral. It is necessary to comment on why a small percentage of respondents prefer the status quo, that is, the current evaluation methods. For example, 15% agree that present evaluation methods are able to fairly select candidates who are due for promotion, and 12% agree that current methods are effective in selecting academics for a merit award. This is attributed to the fact that this small group of respondents succeeded in acquiring a merit award or a promotion after being evaluated with the current methods. This is confirmed by the results in Figure 7-5, which show that 29% of respondents were evaluated because they applied for a promotion. These respondents are therefore reluctant to be subjected to a new system of evaluation. An unexpected result of the survey is that only 33% of respondents prefer a system that is data intensive, even though a data-intensive system would normally involve a computerised system. This contradiction may be attributed to respondents being unable to differentiate between a data-intensive and a human-intensive system. The overall results of the survey make it clear that respondents are unhappy with current evaluation methods, and the overall response indicates that an effective computerised productivity estimation system should be implemented. Such a system has now been developed (Chapter 3) and demonstrated (Chapter 5).

b) Analysis of question 9

Table 7-6 presents the results (for question 9) of the KMO and Bartlett tests for the data.

Kaiser-Meyer-Olkin Measure of Sampling Adequacy: .540
Bartlett's Test of Sphericity: Approx. Chi-Square 173.286; df 6; Sig. .000

Table 7-6: KMO and Bartlett's Test for question 9

The KMO test yields a value of 0.540, which is greater than 0.50, and the Bartlett test yields a significance of 0.000, which is less than 0.05. Both requirements were met, and the factor analysis procedure was therefore implemented. The results of the factor analysis procedure are indicated in Table 7-7.


Statement (loadings on components 1 and 2):

An effective productivity estimation model should be able to correctly rank personnel for promotion.   .577   .730
The model should be able to monitor and process an academic staff member's performance in terms of the core strategic goals such as teaching and learning, research and external engagement.   .945   .051
The model should be able to identify the strengths and weaknesses in terms of the core strategic goals.   .907   -.181
The model should be able to create a portfolio of an academic so that evaluation and productivity estimation is made easier.   -.313   .886

Table 7-7: Results of factor analysis for question 9 (rotated component matrix)

Principal component analysis was used as the extraction method, and the rotation method was Varimax with Kaiser Normalisation (refer to Annexure H for an explanation of how the rotated component matrix is determined). This is an orthogonal rotation method that minimises the number of variables that have high loadings on each factor, which simplifies the interpretation of the factors. The rotation converged in three iterations. The factor analysis showed inter-correlations between variables: items that loaded similarly imply measurement along a similar factor. The content of items loading at or above 0.5 (using the higher or highest loading where items cross-loaded above this value) was examined to characterise each component. Question 9 loaded along two sub-components. The theme for the first component (column 1) relates to the importance of a new computerised model in terms of promotion and monitoring academic performance, as well as identifying the strengths and weaknesses of academics. The theme for the second component (column 2) relates to the importance of developing a model that creates portfolios of academic staff in order to make productivity estimation easier.
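The varimax criterion described above can be sketched with the standard SVD-based algorithm. This is a minimal illustration of raw varimax, without the Kaiser row-normalisation step used in the reported analysis (which divides each row by its communality before rotating and rescales afterwards); the loadings matrix below is illustrative only.

```python
# Minimal sketch of (raw) varimax rotation of a factor loadings matrix.
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate loadings L to maximise the varimax criterion."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0)))
        )
        R = u @ vt                  # orthogonal rotation matrix
        d_new = s.sum()
        if d != 0 and d_new / d < 1 + tol:
            break
        d = d_new
    return L @ R

# Illustrative unrotated loadings for 4 items on 2 components.
L = np.array([[0.70, 0.30],
              [0.65, 0.40],
              [0.45, -0.55],
              [0.35, -0.70]])
rotated = varimax(L)
print(np.round(rotated, 3))
```

Because the rotation is orthogonal, each item's communality (row sum of squared loadings) is unchanged; only the distribution of loading across components is simplified.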


Figure 7-8 presents the results of the scoring patterns of the respondents for question 9.

The overall results in Figure 7-8 indicate that the third statement has a lower level of importance than the other statements. This means that, according to the respondents, correctly identifying personnel for promotion, monitoring performance in terms of the core strategic goals, and creating a portfolio for each academic are the most important attributes to consider when creating a productivity estimation model. However, all four attributes were taken into consideration when developing the model. The overall ranking of the four statements is indicated in Table 7-8.

[Horizontal bar chart: for each of the four statements of question 9, the percentage of respondents rating it Most Important, Important, Somewhat Important, or Least Important.]

Figure 7-8: Results of scoring patterns for question 9