
The Political and Organizational Context of Educational Evaluation

4.3 Gearing Evaluation Approach to Contextual Conditions; the Case of Educational Reform Programs

4.3.2 Articulation of the decision-making context

Monitoring and evaluation belong to the category of rational techniques of policy analysis. In all applications, even those that reflect only a partial aspect of the rationality paradigm (see previous section), there is the assumption that some kind of designated decision-making structure exists and that the monitoring and evaluation results are actually used for decision-making.

In the paper by Scheerens, Tan and Shaw cited earlier, the following tentative overview is given of audiences for monitoring and evaluation within the context of World Bank funded education projects.

Figure 4.2 gives a schematic overview.

“Two final observations with respect to the audiences of M&E activities are in place. Firstly, there is a distinction between control vs learning “modes” in the use of evaluative information. Generally control-functions are served on the basis of the various monitoring activities, categorized according to the phase of project preparation and implementation. Learning functions are served by evaluation and “effectiveness” studies as summarized in the lower part of Figure 4.2. Because input/process/outcome relationships are central in these latter types of evaluation, they will generally provide more specific diagnoses as well as handles for improvement. Secondly, the recognition of “outside” audiences points at the possibility to institutionalize evaluation procedures after project termination. This could be seen as a very important spin-off of World Bank education M&E activities, because it could lead to a sustained strengthening of the educational evaluation function in recipient countries” (cited from Scheerens, Tan & Shaw, 1999).

Type of Monitoring | Bank staff | Recipient country
phase 0 (risk indicators) | regional officers, country directors, task managers | Ministry of Education (MOE)
phase 1 (accounting for funding) | procurement officers (??) | MOE
phase 2 (implementation/output indicators) | country directors, task managers | MOE
phase 3 (outcome indicators) | regional officers, country directors | MOE
phase 4 (impact indicators) | senior management, regional directors, country directors | MOE and other Ministries

Type of Evaluation | Bank staff | Recipient country
effectiveness of procurement processes | procurement officers | MOE
effectiveness of schooling and project management processes | task managers, staff departments, research departments | MOE, regional officers, school leaders & teachers, parents
effectiveness of school-labor market transmission processes | country directors | MOE, other Ministries

Figure 4.2 Audiences for World Bank education M&E activities.

Leaving aside the specific context of World Bank projects, a more general treatment of educational monitoring and evaluation and the types of decisions involved can be made. By listing decisional contexts that follow the levels of education systems from high (central) to low (classroom and student level), a stylized overview of types of decisions with corresponding actors and other stakeholders can be given. This is attempted in Table 4.1, below.

As announced, the table provides a stylized overview. The implied assumption that evaluative information is used by decision-makers in a straightforward and linear way is challenged by the results of empirical studies that have investigated actual use, mostly in the context of program evaluation. As stated in the first section of this chapter, according to the rational ideal, evaluation would have a natural place in a clear-cut decision-making setting, where scientific methods and scientific knowledge are used to guide political decision-making. As authors in the realm of studies about the use of evaluation research have shown (e.g. Caplan, 1982; Weiss, 1982; Weiss & Bucuvalas, 1980), the assumptions of evaluation within a rational decision-making context, namely that the decision-makers are clearly identified and that evaluation results are used in a direct, linear way, may not be fulfilled.

Table 4.1 Major Decision Areas, Decision-makers and Other Stakeholders in Use of Information From Educational M&E.

Decisions | Decision-makers | Other stakeholders
(In case of evaluation of the success of an educational reform program) adaptation of program implementation, determining continuation or termination, conditions for sustainability of the program | donor agencies | elected officials and top-level education officers (MOE) in the borrowing country
Reconsideration of national educational policy agendas | elected officials and top-level education officers deciding on financial inputs and revision of the national policy agenda, using system-level indicators on inputs and outcomes, possibly using international benchmarking information as well | taxpayers; social partners (industry, unions); educational organizations
Reform of national curricula | same as above | as above; subject-matter specialists; assessment specialists; educational publishers
Restructuring of the system in terms of functional decentralization | same as above, also including information on educational structures in other countries | administrators at all levels of the system; social partners; educational organizations
Reconsideration of regional and local educational policy agendas | municipal or school-district-level educational authorities deciding on levels of school finance, resources, and substantive educational priorities (again depending on their discretion in these domains) of the schools in the community, using information from school-level context, input, process and output indicators | local community; local/regional pressure groups; local industry; teacher unions; school representatives; educational organizations
School development planning and school improvement activities | school managers and teachers using school-based information on inputs, processes and outputs, compared to regional or national averages, to monitor or adapt overall school policy, the school curriculum, school organizational conditions and teaching strategies | parent association; student representatives; local community
Choice of teaching strategies and individualized learning routes for students | teachers using detailed information from student-level monitoring systems to monitor or adapt their teaching and pedagogy with respect to groups and to individual students | parents; students
School choice | parents, students | schools; local authorities

In actual practice several or all of these assumptions may be violated. Since goals may be vague, or contested among stakeholders, the assumption of “one” authoritative decision-maker, either a person or a well-described body, is also doubtful, even in the case where the decision-makers are governmental planning officers.

As far as the use of evaluation results is concerned, empirical research has shown that linear, direct use is more the exception than the rule; “research impacts in ripples, not in waves”, says Patton (1980) in this respect. Authors like Caplan (1982) and Weiss proposed an alternative model of evaluation use, which they consider more realistic.

According to this view, evaluation outcomes gradually shape the perspectives and conceptual schemata of decision-makers, and evaluation has more of an “illuminative” or “enlightenment” function than an authoritative one.

The decision-making context is likely to be less rational (see the first section of this chapter). Instead it may be diffuse, while political aspects may have an impact on the use of evaluations and on the very conditions under which evaluation can be applied as the impartial, objective device it was meant to be.

Although the above considerations apply most directly to program evaluations, they are also likely to play a role when evaluation takes the form of regular, periodic assessment of a complete educational sub-sector, as with national assessment programs, educational indicator systems, or periodic evaluations carried out by inspectorates. For example, when assessment programs are conducted in a setting where the stakes are high, e.g. by making school finance contingent on the performance levels laid bare by the assessments, strategic behaviour is not unlikely to occur. Examples are: training students in doing test items, adapting class-repetition policies so that a more select group of students actually goes in for testing, and leaving out the results of students who score less well.

As Huberman (1987) has shown, the degree to which evaluation can play its rational role is dependent on various structural arrangements:

• the institutionalisation of the evaluation function, e.g. whether there is an inspectorate with a distinct evaluation function, whether there are specialised research institutes;

• the scientific training and enculturation of the users of evaluation results;

• the degree to which evaluators are “utilisation focused” and actively try to act on the political realities in the evaluative setting.

The research literature in question has also yielded a set of conditions that matter for the use of evaluative information. The following conditions, all of which are amenable to improvement, have been mentioned:

• perceived research technical quality;

• commitment of audiences and users to the aims and approach of the evaluation;

• perceived credibility of evaluators;

• communicative aspects (like clarity on aims and scope; brevity of reports);

• political conditions (e.g. evaluations that bring “bad news” on politically sensitive issues are in danger of being ignored or distorted).

Conditions that impinge on the actual use of evaluative information relate to an area of enormous importance for the question to what extent the increased range of technical options in educational assessment, monitoring and evaluation will actually fulfil its potential of improving the overall performance of systems. This is the domain of the preconditions that should be met for an optimal implementation and use of M&E. It is dealt with in more detail in a subsequent section.

4.3.3 Monitoring and evaluation in functionally decentralized