

3.4 Programme evaluation

3.4.1 Purposes of evaluation

The two approaches discussed above in fact converge with respect to the purposes of evaluation.

Their first point of convergence is to distinguish two categories of evaluation – summative and formative. They describe summative evaluation as an exercise that judges the overall effectiveness of a programme, a judgment that is particularly important in making decisions about continuing or terminating an experimental programme or demonstration project. They posit that summative evaluation contrasts with formative evaluation, which focuses on ways of improving and enhancing programmes rather than rendering definitive judgments about effectiveness. Summative evaluation, by contrast, provides data to support a judgment about a programme’s worth so that a decision can be made about the merit of continuing it. The evaluation of the impact of AVP training in this study will be both summative and formative. On the one hand, it will be summative in that it will take stock of the anticipated attitudinal changes towards violence among trained youths, and such results will be used to assess whether churches in Zimbabwe should embrace AVP as an important programme for inculcating values of non-violence and building more peaceful communities. On the other, it will be formative in that the exact manifestations of conflict are likely to differ between communities, just as the participants and trainers will be different. As such, there could be a need to adapt training methodologies to make them suitable for specific community contexts. This is in line with Patton’s postulation that formative evaluation means collecting data for a specific period of time, usually during the start-up or pilot phase of a project, to improve implementation, solve unanticipated problems, and make sure that participants are progressing toward desired outcomes.


Be that as it may, there are three main purposes of evaluation that Patton (1997, pp. 65-70) identifies:

i. Judgment-oriented evaluation

This is evaluation that is aimed at determining the overall merit, worth or value of a project or programme. Merit refers to the intrinsic value of a programme, for example, how effective it is in meeting the needs of those it is intended to help. Worth refers to extrinsic value to those outside the programme, for example, to the larger community or society. Judgment-oriented evaluation approaches include summative evaluations aimed at deciding whether a programme is sufficiently effective to be continued or replicated. Questions asked include: Did the programme work? Did it attain its goals? Should the programme be continued or ended? Were desired client outcomes achieved?

ii. Improvement-oriented evaluation

Improvement-oriented forms of evaluation include formative evaluation, quality enhancement and responsive evaluation, among others. These approaches gather varieties of data about strengths and weaknesses with the expectation that both will be found and each can be used to inform an on-going cycle of reflection and innovation. According to Patton (1997, p. 68), improvement-oriented evaluations ask: What are the programme’s strengths and weaknesses? To what extent are participants progressing towards the desired outcomes? Which types of participants are making good progress and which types aren’t doing so well? What kinds of implementation problems have emerged and how are they being addressed? What is happening that was not expected? How is the programme’s external environment affecting internal operations? What new ideas are emerging that can be tried out and tested?

iii. Knowledge-oriented evaluation

The evaluation findings contribute to knowledge and may involve clarifying a programme’s model, testing theory, distinguishing types of interventions, figuring out how to measure outcomes, generating lessons learnt, and/or elaborating policy options. This is sometimes described as enlightenment, where the evaluation findings broaden the knowledge base.

3.5 Types of evaluation

Rossi et al. (2004, p. 53) distinguish five types of evaluation. Firstly, there is needs assessment, which asks questions about the social conditions a programme is intended to ameliorate and the need for the programme. It assesses the nature, magnitude and distribution of a social problem; the extent to which there is a need for intervention; and the implications of those circumstances for the design of the intervention. As indicated in Chapters One and Two of this research, Zimbabwe has been subjected to politically motivated violence coupled with economic hardships. Violence perpetrated by the youths destroyed normal community life, particularly during periods of electoral contestation that peaked during the 2008 harmonised elections.

Secondly, there is assessment of programme theory, which asks questions about programme conceptualisation and design. Rossi et al. (2004, p. 54) posit that the conceptualisation of the programme must reflect valid assumptions about the nature of the problem and represent a feasible approach to resolving it. As for this research, the problem of politically organised youth violence was growing unabated. Churches had remained the only neutral platform at which youths from various political persuasions could still congregate. However, churches in Zimbabwe did not have programmes aimed at developing inter-personal skills of non-violence that would promote peaceful resolution of conflict. To a large extent, they restricted their social teachings to influencing behaviour by focusing on the good and the bad, and on the consequences of doing evil things as enunciated in the Bible.

Thirdly, there is assessment of programme process (or process evaluation). This considers questions about programme operations, implementation and service delivery. Given a plausible theory about how to intervene in an accurately diagnosed social problem, a programme must still be well implemented in order to have a reasonable chance of actually improving the situation. Rossi et al. (2004, p. 56) warn that it is not unusual to find that programmes are not implemented and executed according to their intended design. A programme may be poorly managed, compromised by political interference, or designed in ways that are impossible to carry out. Sometimes appropriate personnel are not available, facilities are inadequate, or programme staff lack motivation, expertise or training. Possibly, they continue, the intended programme participants do not exist in the numbers required, cannot be identified precisely, or are not cooperative. The information about programme outcomes that impact evaluations provide is incomplete and ambiguous without knowledge of the programme activities and services that produced those outcomes. On the one hand, when the desired impact is lacking, process evaluation may have diagnostic value by indicating whether this was because of implementation failure (for example, the intended services were not provided) or because, even when implemented as intended, the programme failed to produce the expected effects. On the other hand, when positive programme effects are found, process evaluation helps confirm that they resulted from programme activities rather than from spurious sources, and helps identify the aspects of the service most instrumental in producing the effects.

Fourthly, Rossi et al. (2004, p. 58) posit that there is impact assessment (impact evaluation or outcome evaluation), which examines programme outcomes and longer-term impact. It gauges the extent to which a programme has produced the intended improvements in social conditions. Impact assessment also asks whether the desired outcomes were attained and whether the changes included unintended side effects. To conduct an impact assessment, the evaluator must establish the status of programme recipients on relevant outcome measures and also estimate what their status would have been had they not received the intervention.

Fifthly, Rossi et al. (2004, p. 60) identify efficiency assessment, which asks questions about costs and cost-effectiveness. An efficiency assessment takes account of the relationship between a programme’s costs and its effectiveness. Typical issues include whether a programme produces sufficient benefits in relation to its costs or whether other interventions or delivery systems can produce the benefits at a lower cost.
