Evaluation: The precondition for learning

Monitoring and evaluation (M&E) has become a basic function in public administration and in many parts of the private sector, and is well described elsewhere (see Patton, 2002; Wholey et al, 2004; the journals Evaluation Practice and American Journal of Evaluation). The focus in this book on policy learning and purposeful institutional change, and on the peculiar aspects of emergencies and disasters as a policy domain, suggests that the broader, rather than the narrowly practical, issues of M&E are the most critical – that is, the connection between ‘hands-on’ evaluation and the policy and institutional system. From the framework identified in Figure 3.2 in Chapter 3, we can recall the broad attributes of adaptive policy processes and institutional settings:

purposefulness consistent with widely understood problem statements and goals;

persistence, with sufficient longevity in efforts to learn and adapt;

information richness and sensitivity, involving not only the necessity for quality information, but also the wide accessibility of this information;

inclusiveness, allowing for participation by stakeholders and the reconciliation of multiple values and perspectives;

flexibility, so that purposefulness and persistence do not atrophy into rigidity and an inability to learn and adapt.

These institutional attributes will be further addressed in Chapter 8; but here we can consider how approaches to M&E can be more, rather than less, supportive of them in two ways: the characteristics of monitoring-related elements of the policy process and clarity over the purpose of evaluation. On the first of these, monitoring and subsequent evaluation will not be optimally effective without the following characteristics:

Explicit recognition of uncertainty and, thus, of the necessarily contingent and experimental nature of policy and institutional responses. Without such explicit recognition of uncertainty, it is unlikely that the policy community will be prompted to establish procedures for critical reflection or procure the necessary information to enable this. While mounting policy interventions in the hypothesis-testing manner of adaptive management (Holling, 1978) may be difficult in a strict sense, the open recognition that an experiment is, indeed, being undertaken – as with a public awareness campaign, or the provision of incentives for household or business preparedness – demands greater clarity of what is known and unknown, and of predicted and speculated cause–effect links between policy problems, goals and responses.

Measurable policy goals, if not in a quantified sense, then at least in a qualitative manner that is amenable to eventual evaluation of relative success and/or failure. Such goals may define an intended process or a desired outcome (or both), and, in general, will entail overarching goals and a hierarchy of component goals for different aspects of the policy problem and programme. Measures might include reductions in the value of assets or activities exposed to various hazards and in the vulnerability of those assets; measures of warning system performance; favourable cost–benefit ratios; or improved household awareness and preparedness.

Basic routine data capture, designed within the policy programme and linked to policy goals and key variables affecting their attainment, with clearly defined responsibilities for gathering and maintaining information streams. Basic data include descriptions of baseline conditions at the time of implementation, without which relative change following the policy intervention will be difficult to assess. Disaster datasets are often of a good quality when they cover physical phenomena such as flood depths, earthquake frequency, cyclone/hurricane/typhoon strength, and major transport accidents. But in relation to our interest in vulnerability, damage and resilience, there are few consistent and reliable datasets. Most contain inconsistent material of unknown quality – which reinforces the importance of metadata: information about the data in terms of quality, accuracy, fitness for purpose, etc. The exceptions, such as quality datasets held by some insurers and some national agencies, may be severely limited in scope and accessibility. (A minimal sketch of such a baseline record and its metadata follows this list.)

Coordination of roles and activities across agencies and non-governmental groups, given that disasters will typically involve actions across governmental and social sectors. For effective M&E, this will often involve the participation of agencies and players for whom the provision of critical information is their only substantial engagement in disasters policy (e.g. a statistical agency for demographic and settlement data or a manufacturing sector for sales of fire-retardant building materials). It will also entail the empowerment and coordination of informal community organizations and those with few financial or administrative resources.

A clear mandate for M&E activities. Multiple inputs will usually be required and must be coordinated; coordination, in turn, requires some degree of centralized responsibility and authority for ongoing information-gathering, dissemination and formal review processes. Finance departments or associated agencies often fulfil a general policy monitoring role, including disaster policy and programmes (e.g. the US General Accounting Office). In some jurisdictions, there are agencies tasked with developing performance indicators for all government activities, such as the Australian Productivity Commission. More usually, monitoring is undertaken by agencies or groups with mandates for particular hazards or risks, while evaluation is more ad hoc and is often undertaken by consultants.

Information made widely available to all stakeholders relevant to the policy problem and response. This is necessary to ensure understanding and engagement, and to maintain trust between policy actors, as well as for the more traditional reasons of public accountability.2 Information availability is a continuous issue for emergency managers. It is needed during and after a disaster for response and recovery, and is required in advance for all types of planning and awareness. The internet is rapidly becoming the tool for universal access to disaster information (e.g. the UK Environment Agency’s website provides flood zones marked on street maps of the whole country).
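To make the data-capture point concrete, here is a minimal sketch, in Python, of the kind of baseline record and accompanying metadata that a routine data-capture design might specify. All names, fields and values are hypothetical illustrations, not drawn from any actual agency’s schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Metadata:
    """Information about the data itself: quality, accuracy, fitness for purpose."""
    source: str                # agency or group responsible for the stream
    collected_on: date
    quality: str               # e.g. "verified", "provisional", "unknown"
    accuracy_note: str         # known limits of the data
    fit_for: list[str] = field(default_factory=list)  # purposes the data can support

@dataclass
class BaselineRecord:
    """Conditions at the time of implementation, against which later
    observations are compared to assess relative change."""
    region: str
    households_prepared_pct: float  # share of households with an emergency plan
    exposed_asset_value: float      # value of assets in the hazard zone
    meta: Metadata

# A hypothetical baseline captured when a preparedness programme begins.
baseline = BaselineRecord(
    region="Coastal District A",
    households_prepared_pct=22.0,
    exposed_asset_value=410e6,
    meta=Metadata(
        source="National Statistical Agency",
        collected_on=date(2006, 1, 15),
        quality="provisional",
        accuracy_note="survey sample of 1,200 households",
        fit_for=["programme evaluation", "trend monitoring"],
    ),
)

def relative_change(before: float, after: float) -> float:
    """Percentage change since the baseline; impossible to compute
    if no baseline was recorded at implementation."""
    return 100.0 * (after - before) / before

# A hypothetical follow-up survey two years later.
follow_up = 31.0
print(f"Change in preparedness: "
      f"{relative_change(baseline.households_prepared_pct, follow_up):+.1f}%")
```

The point of the baseline is visible in the last lines: relative change cannot be computed at all unless conditions at the time of implementation were recorded.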

In terms of the purpose of evaluation (and, thus, the forms of evaluation processes and methods, and the information demands that they define), we can identify five purposes (adapted from Howlett and Ramesh, 2003):

1 Process evaluation examines specific projects and programmes with a view to generating insights into how to improve policy processes and organizational structures.

2 Efficiency evaluation focuses on the expenditure of resources and whether policy outcomes could have been achieved at lower cost (see the sketch below).

3 Effort evaluation also deals with questions of efficiency, but looks particularly at the quality and adequacy of inputs to policy implementation (finance, time, expertise, administrative resources, etc.).

The three approaches above all deal with key inputs, and form the basis of most contemporary ‘performance indicators’ for emergency management agencies. This may reflect the availability and ease of comparability of the data, rather than the data’s inherent importance for emergency management. This is not to understate the importance of budgets and value for money.

4 Performance evaluation investigates the outcomes of a policy intervention irrespective of achievement of policy goals.

5 Effectiveness evaluation assesses an intervention in terms of the stated policy goals.

These last two approaches are understandably quite common in disaster management, given the goals of protecting lives, livelihoods and property, and the high profile of failures. Agencies typically assess their warnings against technical criteria, but less commonly against broad goals such as community safety, although a programme logic approach is being used by the Australian Bushfire Cooperative Research Centre to evaluate community safety programmes. Commerce is also active in this arena, purveying products that claim to help fulfil emergency management objectives.
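As a rough illustration of how these purposes translate into different questions and information demands, the sketch below contrasts efficiency (purpose 2), performance (purpose 4) and effectiveness (purpose 5) for a hypothetical preparedness campaign. All figures are invented for illustration.

```python
# Hypothetical figures for a household preparedness campaign.
programme_cost = 2.0e6        # total expenditure on the intervention
avoided_losses = 5.4e6        # modelled estimate of losses averted

# Efficiency (purpose 2): was the outcome achieved at reasonable cost?
# Estimated benefits divided by costs; a ratio above 1 is favourable.
ratio = avoided_losses / programme_cost
print(f"Cost-benefit test: {ratio:.1f}")

# Performance (purpose 4): report the outcome itself,
# irrespective of any stated goal.
observed_prepared_pct = 31.0  # households with an emergency plan
print(f"Households prepared: {observed_prepared_pct}%")

# Effectiveness (purpose 5): assess the same outcome against
# the stated policy goal.
goal_prepared_pct = 40.0
print(f"Goal of {goal_prepared_pct}% attained: "
      f"{observed_prepared_pct >= goal_prepared_pct}")
```

Note that the same observed outcome can support a favourable efficiency reading and an unfavourable effectiveness reading at once, which is exactly why the purpose of an evaluation needs to be explicit.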

Defining policy goals and objectives in ways that are measurable often creates major challenges. ‘Improved community safety’ may be universally agreed on as a goal, but is subject to many interpretations and is insufficient as a target for evaluation. Less apparent agendas cannot be ignored either: rapid response and visibility may be more important for the organization’s profile and, thus, ongoing budget allocation than more substantive outputs.
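One common response to this measurability problem is to decompose the agreed but vague goal into a hierarchy of component goals, each with an indicator and target that an evaluation can actually test. The sketch below shows one hypothetical decomposition of ‘improved community safety’; the component goals, indicators and targets are invented for illustration.

```python
# 'Improved community safety' decomposed into component goals, each
# paired with an indicator and target that an evaluation can test.
# All component goals, indicators and targets are illustrative only.
community_safety = {
    "reduce exposure": {
        "indicator": "value of assets in the 1-in-100-year flood zone",
        "target": "10% reduction over five years",
    },
    "improve preparedness": {
        "indicator": "share of households with an emergency plan",
        "target": "40% by end of programme",
    },
    "improve warning performance": {
        "indicator": "median warning lead time for flood events",
        "target": "at least six hours",
    },
}

for goal, spec in community_safety.items():
    print(f"{goal}: measure {spec['indicator']!r} against {spec['target']!r}")
```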

All of these purposes are equally valid and may well be combined in an evaluation process; but, as with participation in Chapter 4, different actors will have different purposes and expectations. Some actors may only be concerned with the expenditure of public resources, others with the achievement of policy goals without any concern for expense, others with the inclusiveness of the policy process, and so on. These agendas need to be clearly defined in order to avoid confusion and so that appropriate information and methods are employed.

The particular attributes of the disasters domain suggest that the goal of preventing human, environmental and economic losses would always be paramount – but this is not necessarily the case. Questions of process, of financial efficiency and of administrative accountability are inevitable concerns, and are especially important for long-term processes and trust. Ideally, process and outcome-oriented evaluations can be designed in a coordinated fashion where different agendas are pursued constructively in the interests of policy learning.

Thus far, our focus has been on the monitoring and evaluation of specific policy and institutional responses. Underpinning the capacity to undertake such M&E, and even more so the ability to engage in policy learning, are the human skills and knowledge in the emergencies and disasters domain. This is very much influenced by
