

2.4 Purposes of Evaluation


Another critique of the method is that there is insufficient empirical evidence that it satisfies its intended purpose. It has been contended that the means for assessing the success of an empowerment evaluation remain underdeveloped and that, as a result, there is little evidence that the approach is in fact empowering.

Other critiques include its over-reliance on self-study, which may compromise the evaluation's objectivity, and the inconsistent rigour across evaluations, which can result in spurious evaluations.

According to Trochim (2006), debate continues over how to choose an evaluation strategy, with proponents of each strategy claiming the superiority of their position. Most good evaluators are familiar with all four categories and borrow from each as the need arises; there is no inherent incompatibility between these broad strategies, as each offers a unique advantage. Attention has recently shifted to how the results from different evaluation strategies can be integrated, and the academic literature offers no simple answer. Differences of opinion about the appropriate evaluation strategy may stem from divergent notions of the purpose of evaluation.


2.4.1 Programme Improvement

Formative evaluation serves to improve products, programmes, and learning activities by providing information during planning and development.

Trochim (2006) asserts that formative evaluation includes several evaluation types: needs assessment, which determines who needs the programme, how great the need is, and what might work to meet the need; evaluability assessment, which determines whether an evaluation is feasible and how stakeholders can help shape its usefulness; implementation evaluation, which monitors the fidelity of programme or technology delivery; and process evaluation, which investigates the process of delivering the programme or technology, including alternative delivery procedures. A literature review indicates that evaluation for programme improvement characteristically emphasizes findings that are timely, concrete and immediately useful.

2.4.2 Accountability

A further basis for evaluating programmes is to enhance the accountability of programme providers. Alkin and Christie (2004:383) write that accountability refers to the process of “giving an account”, or being answerable or capable of being accounted for. Chelimsky and Shadish (1997) state that the purpose of accountability is to measure results or value for funds expended, to determine costs and to assess efficiency. Managers are thus expected to use resources effectively and efficiently and to produce the intended results. An evaluation conducted to determine whether expectations have been met is called summative evaluation (Scriven, 1991). Lockee et al. (2002) cite summative evaluation as determining whether the products, programmes, and learning activities, usually in the aggregate, worked in terms of the need addressed or system goal. Its purpose is to provide a summary judgement of the programme’s performance, and its findings are usually intended for decision makers with major roles in programme oversight. Trochim (2006) suggests that summative evaluation can also be subdivided into the following types: outcome evaluation, which investigates whether the programme or technology caused demonstrable effects on specifically defined target outcomes; impact evaluation, which is broader and assesses the overall or net effects, intended or unintended, of the programme or technology as a whole; cost-effectiveness and cost-benefit analysis, which address questions of efficiency by standardizing outcomes in terms of their costs and values; secondary analysis, which re-examines existing data to address new questions or to apply methods not previously employed; and meta-analysis, which integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question.

2.4.3 Knowledge Generation

It has also been argued that programmes should be evaluated in terms of the knowledge they generate. Patton (1996) notes that an increasingly important evaluation purpose that goes beyond formative and summative evaluation is knowledge generation. Both judgement-oriented (summative) and improvement-oriented (formative) evaluations involve the instrumental use of results (Leviton & Hughes, 1981); instrumental use occurs when a decision or action follows, at least in part, from the evaluation. Rossi et al. (2004) argue that some evaluations are commissioned to describe the nature and effects of an intervention as a contribution to knowledge. Evaluations of this nature are intended to contribute to the social science knowledge base or to serve as a basis for significant programme innovation, and they use the most rigorous methods feasible. The users of the findings include the sponsors of the research as well as interested scholars and policymakers, and the findings are disseminated through scholarly journals, conference papers and other professional outlets.

Weiss (1990:176) used this term to describe the effects of evaluation findings being disseminated to the larger policy community “where they have a chance to affect the terms of debate, the language in which it is conducted, and the ideas that are considered relevant in its resolution.” While Weiss has emphasized the informal manner in which evaluation findings provide a knowledge base for policy over time, Chen has focused on a more formal knowledge-oriented approach in what he called “theory-driven evaluation” (Chen, 1989; Chen and Rossi, 1987). Though theory-driven evaluations can provide programme models for summative judgement or ongoing improvement, the connection to social science theory also offers the potential for increasing knowledge about how effective programmes work in general.

2.4.4 Hidden Agendas

According to Rossi et al. (2004), the true purpose of an evaluation sometimes has little to do with acquiring information about the programme’s performance. An evaluation may be launched because it is believed to be good public relations and might impress funders or political decision makers. Sometimes an evaluation is commissioned to provide a rationale for a decision that has already been made behind the scenes to terminate a programme or dismiss an administrator, or it may be undertaken as a delaying tactic to appease critics and defer difficult decisions (Rossi et al., 2004). The research literature suggests that all evaluations involve some political manoeuvring and public relations, which presents the evaluator with a difficult dilemma: Rossi et al. (2004) note that an evaluation must either be guided by political or public relations purposes or focus on programme performance issues.

According to Neave (1998), innovative evaluations have developed since the late 1980s due mainly to the great social changes associated with mass higher education.

Hostmark-Tarrou (1999:270) points out that politicians and researchers have explained this shift of focus in the evaluation of universities as a result of major changes in society. Innovative evaluations mostly concern the functioning of institutions, disciplines, and the national education and research system. This study attempts to address the evaluation of an academic department as a service provider at a university.

The next section reviews the literature on the innovative evaluation procedures adopted by universities.
