
HISTORY, DEVELOPMENT, AND USE OF PUBLIC LIBRARY PERFORMANCE MEASUREMENT

Today's challenges in PM (e.g., quality service, cost accountability, increasing use of technology) generate discussion in the literature of the library profession. Most public libraries have felt pressure from their stakeholders to continually redefine and improve their abilities to report performance information. However, PM is not a new development in public libraries. PM in public libraries began in 1876, when Cutter used cost–benefit analysis in a study of cataloging effectiveness. The traditional techniques of PM developed during the early history of PM in public libraries, which include interviews, input/output analysis, cost analysis, and activity analysis, are still the most popular forms of PM today.

Lubans states that Shaw was one of the earliest practitioners and proponents of scientific management in the library profession, dating back to the 1930s. Shaw was an advocate of integration, using a "total system approach" and "reductions in workforce" to increase efficiency. Lubans also describes how Rider used unit cost studies to maximize efficiency as early as 1934. Rider was quoted as saying that if librarians did not use scientific management or cost–benefit analysis to justify performance, nonlibrarians would come in and do it for us. This statement still serves as an effective warning to today's library professionals.

Stakeholder and community pressure to change, and the discussion generated about PM within the profession, have led to the sporadic adoption of a few new performance methodologies or measures in some public LAUs.

Since the late 1980s, a small percentage of public libraries have reported incorporating these new types of PM into their arsenal of evaluation. Examples of the new PM methodologies reported as being utilized include benchmarking, outcomes assessment, TQM, strongly quantitative measurements such as the balanced scorecard (BSC), and a variety of customer satisfaction measurement systems. Determining the actual PM methodologies used in Florida public libraries is an integral component of this study.

One cause for the lack of change in using PM in public libraries is the lack of consensus among members of the profession as to when performance measures should be used and the reasons for their use. Lancaster stated that PM consists of micro-evaluation, which examines how a system operates and why, and macro-evaluation, which examines how well a system operates. McClure, Zweizig, Van House, and Lynch suggest that the primary utility of performance measures is for internal staff diagnosis. Van House et al. stated that "performance measurement serves several purposes in libraries: assessing current levels of performance; diagnosing possible problem areas; comparing current and desired levels of performance; and monitoring progress" (Van House et al., 1987, p. 2). Van House also cites the major benefit of PM to be "that it provides information for planning and decision-making" (Van House et al., 1987, p. 3). Additional benefits according to Van House include the following:

• The ability to communicate library data to external agencies and individuals
• Enhancing the library's ability to plan, evaluate, and determine effectiveness within the library
• Funding justification or other resource decisions to stakeholders
• Service enhancements or improvements
• Increasing staff ability to collect and use performance data
• The creation of a PM process that has meaning to the organization and staff


Determining whether these purported benefits of using PM exist in Florida public libraries is an integral component of the study.

According to Van House, there are three types of performance measures used by libraries: inputs, outputs, and productivity. Input measures are described as the resources (human, financial, and materials) and costs of producing and delivering library services and products. Output measures (OM) fall within three categories: quantitative, qualitative, and timeliness. OM are geared to provide data on the overall quantities of service and products provided, the financial cost of the services and products provided, and the speed of the service provided. Kraft and Boyce (1991) determined that there are four types of library performance measures that HLA should employ to address stakeholder inquiries (a brief data-model sketch follows the list):

• The amount of resources libraries have at their disposal.
• The efficiency in using these resources to generate services.
• The effectiveness in how alternatives are used to meet goals.
• The benefit to society and the environment.
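
The two taxonomies above are easier to compare when written out as a data model. The following minimal Python sketch is offered only as an illustration of that structure; it does not come from Van House or Kraft and Boyce, and every class name, field, and example value in it is hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class MeasureType(Enum):
        # Van House's three types of library performance measures
        INPUT = "input"                # resources and costs consumed
        OUTPUT = "output"              # quantity, cost, and timeliness of services
        PRODUCTIVITY = "productivity"  # outputs relative to inputs

    class StakeholderQuestion(Enum):
        # Kraft and Boyce's four stakeholder inquiries
        RESOURCES = "amount of resources at the library's disposal"
        EFFICIENCY = "efficiency in using resources to generate services"
        EFFECTIVENESS = "effectiveness of alternatives in meeting goals"
        SOCIETAL_BENEFIT = "benefit to society and the environment"

    @dataclass
    class PerformanceMeasure:
        name: str
        measure_type: MeasureType
        addresses: StakeholderQuestion
        value: float
        unit: str

    # Hypothetical example: circulation per staff hour, a productivity
    # measure that speaks to the efficiency question.
    circulation_per_staff_hour = PerformanceMeasure(
        name="circulation per staff hour",
        measure_type=MeasureType.PRODUCTIVITY,
        addresses=StakeholderQuestion.EFFICIENCY,
        value=12.4,
        unit="items per staff hour",
    )

A single measure tagged this way answers both Van House's question (what kind of measure is this?) and Kraft and Boyce's question (which stakeholder inquiry does it address?) at the same time.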

Risher and Fay (1995) suggest that PM should measure anything that is important to at least one group of stakeholders. They continue by saying that PM should illustrate the variation of service the organization is capable of providing, customer satisfaction, and timeliness. Administrators can accomplish this by ensuring that multiple performance measures originate from customer needs and provide feedback to the organization. Hernon and Altman state that performance "measures characterize the extent, effectiveness, and efficiency of library programs and services" (Hernon & Altman, 1996, p. 27).

The Public Library Association (PLA) and its parent organization, the American Library Association (ALA), have played the lead role in the use and development of standards of service and PM for public libraries throughout most of the 20th century. In 1933, the ALA published its first set of performance standards for public libraries. Standards of service were the first attempt to provide public libraries with the ability to demonstrate their level of achievement. The first standards of service simply recommended minimum levels of resources to provide library service; examples included $1 per capita as the minimum level of expenditure to support library services.

These standards of service were gradually increased until 1956, when the approach was revised, partially because of the "uneven effects of library size and mission on resource requirements and partially in recognition of the imperfect relationship between expenditure level and service quality" (Withers, 1974, pp. 322–323).

ALA introduced the first qualitative standards of service in 1933. These qualitative standards of service possessed a broad scope to allow for wide acceptance by the profession. However, qualitative standards of service were of limited benefit as a standard against which to measure quantitative performance. Initially, according to Ammons, these qualitative standards of service prescribed "appropriate levels of financial support and staff credentials" (Ammons, 2000, p. 211). The standards contained phrasing that provided little specific information to facilitate measuring performance for professionals or the community at large. Ammons states that the qualitative standards "offered little leverage for prying resources from the city (county) treasury. In short, they (standards) failed to arm library directors with a persuasive means of demonstrating to budget makers local shortcomings in facilities, services, and funding" (Ammons, 1996, p. 212). From these qualitative standards of service came the "formulation of qualitative library goals and, eventually, quantitative standards of service pertaining to library collection, facilities, services, and performance" (Ammons, 1996, p. 211).

These quantitative standards were appreciated by professionals who needed a set of quantitative measurements that improved their ability to report performance. However, the use of quantitative standards of service alone has proven controversial within the library profession since their inception in the 1950s–1960s. Some of the more common negative statements made regarding the uses of quantitative standards of service are:

• They were created without considering the different service needs of individual communities.
• The quantitative standards do not support the top echelon of libraries in their efforts to demonstrate performance. Top echelon libraries felt penalized when their performance results exceeded the standards, providing local officials with "justification" to divert library funding to other agencies.
• The quantitative standards did not systematically approach PM from the perspective of library customers or potential customers.

This was unsettling for many who were accustomed to having customer input as a part of the PM process. Customer input in the PM process in public libraries was initiated in 1939, when Wilson became the first library professional to incorporate the use of customer surveys to determine organizational performance accomplishment. Hatry et al. stated: "Most library systems lack information on: the level of citizen satisfaction with library operations, including the comfort of facilities, hours of operations, speed of service, and helpfulness of the staff; the availability of materials sought by the users; and the percentage of households using the system, with estimates of reasons for nonuse by those not using the system" (Hatry et al., 1979, p. 67).

Several sets of performance measures have been issued to supplement the older standards of service for public libraries since the 1960s, e.g., Deprospo, Altman, and Beasley (1973); Altman et al. (1977); Van House, Lynch, McClure, Zweizig, and Rodger (1987); Himmel and Wilson (1998); and electronic performance measures by Bertot, McClure, and Ryan (2001). Each set of performance measures took years of work to develop and varied in content and measurement techniques to allow for a wide level of application. However, the profession received many of these measures with limited enthusiasm due to the lingering lack of consensus in the use and application of PM by HLA.

The International Federation of Library Associations (IFLA) produced a consolidated collection of recommended minimal quantitative standards and measurements in 1986. Because these minimal quantitative standards were written to have the greatest application, IFLA cautioned readers that its standards were "not likely to be universally relevant" (Guidelines for Public Libraries, p. 61). The IFLA quantitative standards were in contradiction to the direction being set by the PLA at the time.

By the late 1980s, PLA had ceased using what it saw as prescriptive quantitative standards as a national measurement tool and had been working to develop and utilize new PMs. One of the more noted new PM methodologies being used in public LAUs today is outcomes assessment. The Institute of Museum and Library Services (IMLS) states that it "believes that outcomes-based assessment, a systematic measurement of impact, holds great promise for libraries" (IMLS, 2000, p. 2). "In April 1995, UWA established an internal team charged to help United Way organizations document and improve their impact on community problems by developing and supporting approaches to measuring the outcomes of United Ways' investment in health and human services" (United Way of America's Outcome Measurement Resource Network, 1995–2002, http://www.unitedway.org/outcomes/).

The United Way outcomes assessment model for PM seeks to provide information to stakeholders about the impact, i.e., changes in knowledge, behavior, status, or condition, that an organization (in this case, public libraries) has had on clients or customers. However, in outcomes assessment, customer satisfaction is not considered an appropriate outcome, because it does not represent a change in knowledge, behavior, status, or condition. IMLS (2000) further states, "outcome based assessment is not a form of research, nor is it simple data collection. It joins both of those essential processes, however, as a powerful tool in reporting the kinds of differences libraries make among their users" (IMLS, 2000, p. 2).
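
To make the distinction concrete, the following minimal Python sketch models an outcome the way the passage above describes it: as a measured change in one of the four domains. This is a hypothetical illustration, not part of the United Way or IMLS materials; all names and figures are invented.

    from dataclasses import dataclass
    from enum import Enum

    class ChangeDomain(Enum):
        # The four kinds of change that count as an outcome
        KNOWLEDGE = "knowledge"
        BEHAVIOR = "behavior"
        STATUS = "status"
        CONDITION = "condition"

    @dataclass
    class Outcome:
        indicator: str        # what is observed, e.g. a post-program test
        domain: ChangeDomain  # which kind of change it evidences
        baseline: float       # level before the program or service
        observed: float       # level after the program or service

        @property
        def change(self) -> float:
            return self.observed - self.baseline

    # A measured gain in knowledge qualifies as an outcome.
    literacy = Outcome(
        indicator="adult literacy post-test score",
        domain=ChangeDomain.KNOWLEDGE,
        baseline=62.0,
        observed=71.0,
    )
    # literacy.change == 9.0. A satisfaction rating such as "4.5 of 5"
    # fits none of the four domains and records no change, so under
    # this model it is not an outcome.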

The Bureau of Library Development (BLD) of the State Library of Florida is presently encouraging public libraries in Florida to implement outcomes assessment to measure library services in order to improve public libraries' abilities to report customer and community impact. The BLD has provided publications and numerous training opportunities to HLA in order to increase the awareness and implementation rates for outcomes assessment. The BLD has even made an outcomes assessment handbook available on its Web site to facilitate ease of access for HLA.

Beginning in FY 1999–2000, the BLD required that all Library Services and Technology Act (LSTA) grant applications contain an outcomes assessment model in order to qualify for an LSTA grant award. As of 2002, approximately 35 public LAUs in Florida were using outcomes assessment because of this requirement. However, most libraries still have not developed outcomes assessment measures to evaluate their comprehensive library services, in spite of the BLD's efforts to encourage local libraries to develop outcome measures.

As outcomes assessment is a relatively new PM methodology in public libraries, there is little evidence as to what impact it is having on its practitioners. Even the UWA states that "This question merits serious research that explores not only whether there are benefits, but also under what circumstances, for what types of programs, and other issues of applicability and whether or not results can be generalized" (UWA Outcome Measurement Resource Network, 2001, http://www.unitedway.org/outcomes/).

The change in emphasis to qualitative PM by some LAUs and the PLA has caused concern among those who still feel the need for quantitative PM standards. To respond to this perceived concern among library practitioners, many state library agencies have attempted to mandate the use of quantitative PM. While none of these attempts has proven successful to date, at least 23 state-level associations, including the Florida Library Association in 1993–1994, have set their own prescriptive quantitative standards for PM in public libraries (Ammons, 1996, p. 224).

Kaplan states that administrators and stakeholders of nonprofit organizations are becoming increasingly aware that financial performance measures alone are not sufficient to evaluate organizational performance. In response to accountability concerns from stakeholders, libraries are looking to introduce more complex quantitative PM methodologies that consider more aspects of organizational performance than traditional library output measurements. The BSC is one such strongly quantitative PM methodology being used in public libraries.


The BSC was originally designed for use in the private sector. The BSC generates feedback on current critical performance processes and targets future performance from both within and outside an organization. Performance measures come from four general areas: financial performance, customers, internal business processes, and learning and growth. The resulting measures from these four areas align individual, unit, and organizational goals with the objectives needed to meet or exceed customer and stakeholder objectives.
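
As a rough illustration of that four-part structure, the Python sketch below files measures under the four BSC perspectives and compares each against a target. It is a hypothetical sketch of the general idea only; the class, the measures, and the figures are invented rather than drawn from any actual library scorecard.

    from dataclasses import dataclass

    @dataclass
    class ScorecardEntry:
        perspective: str   # financial, customer, internal process, or learning and growth
        objective: str     # what the library wants to achieve
        measure: str       # how progress is quantified
        target: float      # desired level
        actual: float      # current level
        higher_is_better: bool = True

        def on_target(self) -> bool:
            # Cost-type measures improve downward; most others upward.
            if self.higher_is_better:
                return self.actual >= self.target
            return self.actual <= self.target

    scorecard = [
        ScorecardEntry("financial", "contain unit costs",
                       "cost per circulation ($)", 2.50, 2.35,
                       higher_is_better=False),
        ScorecardEntry("customer", "fill holds promptly",
                       "holds filled within 7 days (%)", 90.0, 86.0),
        ScorecardEntry("internal process", "reshelve quickly",
                       "items reshelved within 24 hours (%)", 95.0, 97.0),
        ScorecardEntry("learning and growth", "develop staff",
                       "staff completing training this year (%)", 80.0, 75.0),
    ]

    # Entries that miss their targets show where unit and individual
    # goals are out of line with customer and stakeholder objectives.
    gaps = [entry.measure for entry in scorecard if not entry.on_target()]

In this invented example the customer and learning-and-growth entries fall short of target, which is the kind of feedback on current performance processes the paragraph above describes.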

In summary, the literature of librarianship suggests that there is little consensus regarding the purposes and definitions of PM in public libraries.

Further, the literature suggests there is little consensus as to what types of PM methodologies should be used in public libraries or what benefits are derived from using these PM methodologies. Additionally, there has been no study to date of the use and perceptions of PM in public libraries (including Florida public libraries). However, the literature does demonstrate a consensus that public libraries, in particular Florida public libraries, have been using PM to report some form of efficiency and accountability information to stakeholders for many years without an assessment of the PM process.

Based on the literature findings, this study has been designed to determine what types of PM are being used in Florida public libraries, who receives the resulting information, and what impacts Florida public libraries are experiencing from using PM.