Performance measurements should be scrutinized, just like other functions and processes.
Financial indicators, and the financial control and accounting systems they are typically part of, receive an annual audit by an external (third-party) firm. Non-financial strategic performance indicators do not consistently receive the same scrutiny. So how do managers know that these non-financial indicators are providing them with valid, accurate, and reliable information? Valid information here refers to face or content validity: does the indicator measure what it purports to measure? Reliable information means consistency in producing the same measurement output (i.e., indicator value) when identical performance conditions are repeatedly measured. Accuracy refers to how close the measurement output values are to the true performance values. Even assuming that the indicators are providing valid, accurate, and reliable information, what assurance do managers have that their measurement systems are clearly understood, useful, and adding value to the organization? A certain amount of financial measurement is a necessary part of doing business, whether for quarterly and annual SEC filings, for reports to shareholders, or because it is mandated by legislation as a condition of continued government funding. The non-financial components of the measurement system are not typically mandated by legislation, with the exception of compliance statistics such as those reported to worker safety and environmental protection agencies. Organizations compelled to participate in supplier certification programs or to achieve quality or environmental management systems certification may feel coerced into developing a rudimentary non-financial measurement system. However, they should realize that the return from developing a strategic performance measurement system is not compliance but the provision of useful information that adds value to the organization through better decision-making and support for implementation. After investing the time and resources to develop a strategic performance measurement system, organizations should periodically audit that system for validity, reliability, and accuracy and assess the system for continued relevance and value added.
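The distinction among validity, reliability, and accuracy can be made concrete with a short numeric sketch. The Python fragment below is illustrative only; the indicator, the assumed true performance value, and the repeated readings are hypothetical. It simply treats reliability as the spread of repeated measurements taken under identical conditions and accuracy as the closeness of those measurements to the true value.

```python
import statistics

# Hypothetical example: a non-financial indicator is measured ten times
# under identical performance conditions.  The "true" performance value
# is assumed known here only for the sake of illustration.
true_value = 100.0
readings = [98.2, 97.9, 98.4, 98.1, 98.3, 97.8, 98.0, 98.2, 98.1, 98.3]

mean_reading = statistics.mean(readings)

# Reliability: consistency of repeated measurements (small spread = reliable).
reliability_spread = statistics.stdev(readings)

# Accuracy: how close the measurement outputs are to the true value.
accuracy_bias = mean_reading - true_value

print(f"mean reading     = {mean_reading:.2f}")
print(f"spread (std dev) = {reliability_spread:.2f}  -> reliable if small")
print(f"bias vs. true    = {accuracy_bias:+.2f}  -> accurate if near zero")
```

In this hypothetical case the indicator is reliable (the readings are tightly clustered) but not accurate (they sit consistently below the assumed true value), which is precisely the kind of defect that periodic auditing of non-financial indicators is intended to surface.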
It is beyond the scope of this chapter to describe the audit and assessment process in detail. The interested reader should refer to Coleman and Clark (2001). Figure 5.2 provides an overview of where the techniques suggested by Coleman and Clark can be applied to audit and assess the measurement process. “Approach” in the figure includes deciding on the extent of the audit and assessment, balancing previous efforts with current needs, and choosing among the variety of techniques available. The techniques in the figure are shown at the phases of the measurement and evaluation process where they are most applicable. Table 5.2 provides brief descriptions of these techniques and sources for additional information.
[Figure 5.2 maps the audit and assessment techniques onto the phases of the measurement and evaluation process: an overall “Approach” element, followed by event/occurrence/phenomenon; observe/sense; capture/record, organize; process/analyze, aggregate; portray, annotate, report; and perceive/interpret/evaluate. The techniques shown are critical thinking, strategic alignment, method selection, simulation, graphical analysis, balance review, sample size, sensitivity analysis, timeliness, validity check, formula review, treatment of variation, argument analysis, and verbal reasoning, each placed at the phase(s) where it is most applicable.]

Figure 5.2 Auditing and assessing the measurement and evaluation process. (Adapted from Coleman, G.D. and Clark, L.A., A framework for auditing and assessing non-financial performance measurement systems, in Proceedings of the Industrial Engineering Research Conference, Dallas, CD-ROM, 2001.)
Table 5.2 Techniques Available for Auditing and Assessing Strategic Performance Measurement Systems
1. Strategic alignment—audit against the organization’s priorities, implicit and explicit.
2. Balance review—assessment against the elements of one or more “balance” frameworks (e.g., Kaplan and Norton’s Balanced Scorecard, Barrett’s Balanced Needs Scorecard, Sink’s Seven Criteria).
3. Critical thinking—scrutinizing for “faulty assumptions, questionable logic, weaknesses in methodology, inappropriate statistical analysis, and unwarranted conclusions” (Leedy and Ormrod, 2001, p. 36). Includes assessing the logic of the hierarchy of measures and the aggregation
schemes. Assess value and usefulness by using Brown’s (1996) or Sink’s guidelines for the number of indicators used at one level in the organization.
4. Sample design—assessing sample design and the appropriateness of the generalizations made from these samples (i.e., external validity). This is more than an issue of sample size.
“The procedure of stratification, the choice of sampling unit, the formulas prescribed for the estimations, are more important than size in the determination of precision” (Deming, 1960, p. 28).
5. Validity check—auditing for evidence of validity. What types of validity have been established for these measures: face, content, construct, or criterion validity?
6. Method selection—assessment of the appropriateness of the method(s) chosen for the data being used. Includes choice of quantitative and qualitative methods. Might include assessment of the reliability of the methods. Internal validity might be addressed here.
7. Simulation—observing or entering data of known properties (often repeatedly), then comparing the output (distribution) of the measurement process against expectations.
8. Sensitivity analysis—varying input variables over predetermined ranges (typically plus and minus a fixed percent from a mean or median value) and evaluating the response (output) in terms of percentage change from the mean or median output value. A brief numeric sketch of this technique follows the table.
9. Formula review—comparison of the mathematical formulae to the operational and
conceptual definitions of the measure. Also includes auditing of replications of the formulae to ensure consistent application.
10. Graphical analysis—at its simplest, plotting results and intermediate outputs to identify underlying patterns. In more advanced forms, may include application of statistical techniques such as individual and moving range charts (Wheeler, 1993). Assess any underlying patterns for possible impact on validity.
11. Timeliness—an assessment of the value of the information provided on the basis of how quickly the measured results reach someone who can directly use the results to control and improve performance. One simple technique is to track the lag time between occurrence and reporting of performance, then apply a user-based judgment of the acceptability of this lag.
12. Treatment of variation—graphical analysis is one technique for addressing variation. More importantly, how do the users of the measurement information perceive or react to variation in results? Assess available evidence of statistical thinking and the likelihood of interpreting noise as a signal or failing to detect a signal when present.
13. Argument analysis—“discriminating between reasons that do and do not support a particular conclusion” (Leedy and Ormrod, 2001, p. 36). Can be used to assess clarity with the Sink et al. (1995) technique described in Coleman and Clark (2001).
14. Verbal reasoning—“understanding and evaluating the persuasive techniques found in oral and written language” (Leedy and Ormrod, 2001, p. 36). Includes assessing the biases found in portrayal of performance information.
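As a concrete illustration of one technique from Table 5.2, the sketch below applies the sensitivity analysis of item 8 to a hypothetical productivity indicator (units produced per labor hour). The indicator formula, the baseline values, and the plus and minus 10% swing are assumptions made for the example; they are not drawn from Coleman and Clark (2001).

```python
# Illustrative sensitivity analysis (Table 5.2, item 8) on a hypothetical
# indicator: units produced per labor hour.  All values are assumed.

def productivity(units: float, labor_hours: float) -> float:
    """Hypothetical indicator: units produced per labor hour."""
    return units / labor_hours

baseline = {"units": 5000.0, "labor_hours": 400.0}
baseline_output = productivity(**baseline)

swing = 0.10  # vary each input plus and minus 10% of its baseline value

for name, value in baseline.items():
    for direction in (-swing, +swing):
        perturbed = dict(baseline)
        perturbed[name] = value * (1 + direction)
        output = productivity(**perturbed)
        pct_change = 100 * (output - baseline_output) / baseline_output
        print(f"{name} {direction:+.0%}: indicator changes {pct_change:+.1f}%")
```

A large percentage change in the indicator for a small percentage change in a single input flags an input whose definition, data source, and collection procedure deserve the closest audit attention.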
Organizations concerned with the resource requirements to develop, operate, and maintain a measurement system may balk at the additional task of conducting a comprehensive audit and assessment. Such organizations should, at a minimum, subject their measurement system to a critical review, perhaps using a technique as simple as “start, stop, or continue.” During or immediately following a periodic review of performance (where the current levels of performance on each key indicator are reviewed and evaluated), the manager or management team using the measurement system should ask the following questions:
What should we start measuring that we are not measuring now? What information needs are currently unmet?
Which indicators that we are currently measuring should we stop measuring? Which are no longer providing value, are no longer relevant, or never met our expectations for providing useful information?
Which indicators should we continue to measure, track, and evaluate? If we were designing our measurement system from scratch, which of our current indicators would appear again?
Another less resource-intensive approach is to address the auditing and assessing of the measurement system as part of a periodic organizational assessment.