Once an organization is clear about why to measure performance and which dimensions of performance to measure, the question becomes how to implement a functioning measurement system.
The measurement system includes not only the specific indicators but also the plan and procedures for data gathering, data entry, data storage, data analysis, and information portrayal, reporting, and reviewing. A key recommendation is that those whose performance is being measured should have some involvement in developing the measurement system.
The approaches that can be used to develop the measurement system include the following: (1) have internal or external experts develop it in consultation with those who will use the system; (2) have management develop it themselves and delegate implementation; (3) have the units being measured develop their own measurement systems and seek management’s approval; or (4) use a collaborative approach involving the managers, the unit being measured, and subject matter expert assistance. This last approach can be accomplished by forming a small team, the measurement system design team.
A “design team” is a team whose task is to design and perhaps develop the measurement system; however, day-to-day operation of the measurement system should be assigned to a function or individual whose regular duties include measurement and
reporting (i.e., it should be an obvious fit with their job and be seen as job enrichment rather than an add-on duty unrelated to their regular work). When ongoing performance measurement is assigned as an extra duty, it tends to lose focus and energy over time and falls into a state of neglect. Depending on how work responsibility is broken down in an organization, it may make sense to assign responsibility for measurement system operation to industrial engineering, accounting and finance, the chief information officer, quality management/assurance, human resources, or a combination of these. The design team should include the manager who “owns” the measurement system, a measurement expert (e.g., the industrial engineer), two or more employees representing the unit whose performance is being measured, and representatives from supporting functions such as accounting and information systems.
Each of the four development approaches can benefit from adopting a systems view of the organization using an input/output analysis.
5.6.1 Input/output analysis with SIPOC model
A tool for helping users identify information needs at the organizational level is the input/output analysis or the SIPOC (suppliers, inputs, processes, outputs, and customers) model.
The intent is to get the users to describe their organization as an open system, recognizing that in reality there are many feedback loops within this system that make it at least a partially closed-loop system. The SIPOC model is particularly useful for the design team approach to developing a measurement system. The model helps the team members gain a common understanding of the organization and provides a framework for discussing the role and appropriateness of candidate indicators.
The first step to complete the SIPOC model is to identify the organization’s primary customers, where a customer is anyone who receives a product or service (including information) from the organization. Next identify the outputs, or specific products and services, provided to these customers. For an organization with a limited number of products and services, these outputs can be identified on a customer-by-customer basis; for an organization with many products and services, it is more efficient to identify the products and services as a single comprehensive list and then audit this list customer by customer to make sure all relevant products and services are included.
The next step is not typically seen in the SIPOC model, but it is a critical part of any input/output analysis. It starts with the identification of the customers’ desired outcomes, that is, the results they want as a consequence of receiving the organization’s products and services. A customer who purchases a car may want years of reliable transportation, a high resale value, and styling that endures changes in vogue. A customer who purchases support services may want low-cost operations, seamless interfaces with its end users, and a positive impact on its local community. While the organization may not have full control in helping its customers achieve these desired outcomes, it should consider (i.e., measure) how its performance contributes to or influences the achievement of these outcomes.
The identification of desired outcomes also includes identifying the desired outcomes of the organization, such as financial performance (e.g., target return on investment, market share), employee retention and growth, repeat customers, and social responsibility.
Measuring and comparing the customer’s desired outcomes to the organization’s desired outcomes often highlights key management challenges, such as balancing the customer’s desire for low prices with the organization’s financial return targets. Measuring outcomes helps the organization understand customer needs beyond simply ensuring that outputs meet explicit specifications.
At the heart of the SIPOC model is the identification of processes, particularly the processes that produce the products and services. A separate list of support processes, those that provide internal services necessary to the functioning of the organization but are not directly involved in producing products or services for external consumption, should also be identified. Processes lend themselves to further analysis through common industrial engineering tools such as process flow charts and value stream maps. Process flow charts are useful for identifying key measurement points in the flow of information and materials and thus the source of many operational performance indicators. Strategic performance measurement may include a few key process indicators, particularly those that predict the successful delivery of products and services.

Once processes are identified, the inputs required for those processes are identified. As with outputs, it may be more efficient to identify inputs as a single list and then compare them to the processes to make sure all key inputs have been identified. The five generic categories of inputs that may be used to organize the list are labor, materials, capital, energy, and information. To be useful for identifying performance indicators, the inputs must be more specific than the five categories. For example, labor might include direct hourly labor, engineering labor, contracted labor, management, and indirect labor. These can be classified further if there is a need to measure and manage labor at a finer level, although this seems more operational than strategic. Examples of relevant labor indicators include burdened cost, hours, percent of total cost, and absenteeism.

The last component of the SIPOC model is the identification of suppliers. While this component has always been important, the advent of improvement approaches such as supply chain management and the increased reliance on outsourcing have made the selection and management of suppliers a key success factor for many organizations. Suppliers can also be viewed as a set of upstream processes that can be flow charted and measured like the organization’s own processes. The design team may wish to work with key suppliers to identify indicators of supplier performance that predict the success of (i.e., assure) the products and services being provided as inputs in meeting the needs of the organization’s processes and subsequent products and services.
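To make the analysis concrete, the design team may find it helpful to record the SIPOC elements in a simple working structure as they are identified. The following is a minimal sketch in Python; the class SIPOCModel, its field names, and the example entries are hypothetical illustrations, not part of any standard SIPOC tool.

from dataclasses import dataclass, field

# Illustrative working record for a SIPOC-plus-outcomes analysis.
# All names and example entries are hypothetical placeholders.
@dataclass
class SIPOCModel:
    customers: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)                       # products and services
    customer_outcomes: dict[str, list[str]] = field(default_factory=dict)  # customer -> desired outcomes
    org_outcomes: list[str] = field(default_factory=list)                  # the organization's own desired outcomes
    core_processes: list[str] = field(default_factory=list)                # processes that produce the outputs
    support_processes: list[str] = field(default_factory=list)             # internal services
    inputs: dict[str, list[str]] = field(default_factory=dict)             # input category -> specific inputs
    suppliers: list[str] = field(default_factory=list)

    def unaddressed_input_categories(self) -> list[str]:
        """Flag any of the five generic input categories with no specific entries yet."""
        categories = ["labor", "materials", "capital", "energy", "information"]
        return [c for c in categories if not self.inputs.get(c)]

model = SIPOCModel(
    customers=["Fleet operators"],
    outputs=["Light trucks", "Maintenance contracts"],
    customer_outcomes={"Fleet operators": ["Years of reliable transportation", "High resale value"]},
    org_outcomes=["Target return on investment", "Repeat customers"],
    core_processes=["Assembly", "Order fulfillment"],
    support_processes=["Information technology", "Facilities"],
    inputs={"labor": ["Direct hourly labor", "Engineering labor"], "materials": ["Steel", "Electronics"]},
    suppliers=["Steel mill", "Electronics vendor"],
)
print("Input categories still to be detailed:", model.unaddressed_input_categories())

Auditing the record in this way mirrors the customer-by-customer and category-by-category checks described above, without implying any particular software support.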
Informed by the insight gained from working through an input/output analysis, and regardless of whether a design team is used, the process of designing, developing, and implementing a measurement system based on the body of knowledge described thus far is conceptually simple but practically quite complex. An outline of the sequential steps in this process is provided as a guide in the following section.
5.6.2 Macro strategic measurement method
There are essentially seven steps in the process of building and using a strategic measurement system. Each of these seven macro steps may be decomposed into dozens of smaller activities depending on the nature and characteristics of the organization. In practice, the steps and substeps are often taken out of sequence and may be recursive.
1. Bound the target system for which performance measures will be developed. This seemingly obvious step is included as a declaration of the importance of operationally and transparently defining the system of interest. Is the target system a single division or the entire firm? Are customers and suppliers included in the organizational system or not? Are upline policy makers who influence the environment inside the system or outside of it? Any particular answer may be the “right” one;
the important point is shared clarity and agreement. Frequently people who want better measurement systems define the target system too small, in the false belief
that it is inappropriate to measure things that may be out of the target system’s control. This false belief is often present at the functional and product level, and at the organizational level as supply chains become more complex. Indicators that reflect performance the organization can only partially control or influence are often those most important to customers and end users. When the organization has only partial control of a performance indicator of importance to customers, the organization needs to understand its contribution to that performance and how it interacts with factors beyond its control. The aversion to measuring what is outside one’s control is driven by an inability to separate measurement from evaluation. To separate the two, first, measure what’s important; second, evaluate performance and the degree of influence or control users have over changing the measured result.
2. Understand organizational context and strategy. This step involves documenting, verifying, or refining the target system’s mission, vision, values, current state, challenges, and long- and short-term aims—all of the activities associated with strategic planning and business modeling. Recall the earlier discussion of how to do measurement in the context of planning, as well as the input/output analysis presented above.
3. Identify the audience(s) and purpose(s) for measuring. A helpful maxim to guide development of strategic planning and strategic measurement PDSA systems is audience + purpose = design. Who are the intended audiences and users of the measurement system, and what are their needs and preferences? What are the purposes of the measurement system being developed? Effective measurement system designs are derived from those answers. There are many ways to discover and articulate who (which individuals and groups) will be using the measurement system, why they want to use it, and how they want to use it. Conceptually, the fundamental engineering design process is applicable here, as are the principles of quality function deployment for converting user needs and wishes into measurement system specifications and characteristics.
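As a minimal illustration of the audience + purpose = design maxim, the sketch below pairs hypothetical audiences and purposes with the design implications they might drive; the entries are invented examples, not recommendations.

# Illustrative pairing of audiences and purposes with measurement system
# design implications; all entries are hypothetical examples.
audience_purpose_design = [
    ("Executive team",   "strategy review",         "monthly one-page scoreboard with trends versus targets"),
    ("Plant managers",   "control and improvement",  "weekly control charts with drill-down detail"),
    ("Front-line teams", "feedback and motivation",  "large-format visual boards updated daily"),
]

for audience, purpose, design in audience_purpose_design:
    print(f"{audience} measuring for {purpose} -> design implication: {design}")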
4. Select KPAs. This step involves structured, participative, generative dialogue among a group of people who collectively possess at least a minimally spanning set of knowledge about the entire target system. The output of the step is a list of perhaps seven plus or minus two answers to the following question: “In what categories of results must the target system perform well, in order to be successful in achieving its aims?”
5. For each KPA, select key performance indicators (KPIs). This step answers the question for each KPA, “What specific quantitative or qualitative indicators should be tracked over time to inform users how well the target system is performing on this KPA?” Typically a candidate set of indicators is identified for each KPA. Then a group works to clarify the operational definition and purpose of each candidate KPI; evaluate proposed KPIs for final wording, importance, data availability, data quality, and overall feasibility; consider which KPIs will give a complete picture while still being a manageable number to track (the final “family of measures” will include at least one KPI for each KPA); select final KPIs that will be tracked; and identify the KPI “owner,” sources of data, methods and frequency of reporting, and reporting format for selected KPIs. An inventory of existing performance indicators should be completed in this step.
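The deliverables of this step (operational definitions, owners, data sources, reporting frequency and format) can be recorded one KPI at a time. The sketch below is one possible Python representation; the field names, the example KPI, and the KPA list are hypothetical, not a prescribed schema.

from dataclasses import dataclass

# Illustrative record for one selected KPI; field names are hypothetical.
@dataclass
class KPIDefinition:
    kpa: str                      # the key performance area this KPI informs
    name: str                     # final agreed wording of the indicator
    operational_definition: str   # exactly what is counted or measured, and how
    owner: str                    # person accountable for data and reporting
    data_source: str              # where the raw data come from
    frequency: str                # how often the KPI is reported
    report_format: str            # e.g., run chart, control chart, table

family_of_measures = [
    KPIDefinition(
        kpa="Customer satisfaction",
        name="On-time delivery rate",
        operational_definition="Orders delivered by the promised date divided by total orders shipped, per month",
        owner="Order fulfillment manager",
        data_source="Order management system",
        frequency="Monthly",
        report_format="Run chart with 12-month trend and target line",
    ),
]

# A quick completeness check: every KPA should have at least one KPI.
kpas = {"Customer satisfaction", "Financial performance", "Employee growth"}
covered = {kpi.kpa for kpi in family_of_measures}
print("KPAs with no KPI yet:", kpas - covered)

A simple check like the one at the end helps confirm that the final family of measures includes at least one KPI for each KPA.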
A note on steps 4 and 5: The order of these steps as described implies a top–down approach. However, reversing the order into a bottom–up approach can also be successful. A bottom–up approach would identify candidate indicators, perhaps using a
group technique such as brainstorming or the nominal group technique (Delbecq et al., 1975). Once there is a relatively comprehensive list of candidate indicators, the list can be consolidated using a technique such as affinity diagrams (Kubiak and Benbow, 2009) or prioritized with the nominal group technique or analytical hierarchy process.
The aim here is to shorten the candidate list to a more manageable size by clustering the indicators into categories that form the foundation for the dimensions of the organization’s scoreboard (i.e., KPAs) or a prioritized list from which the “vital few”
indicators can be extracted and then categorized by one or more of the performance dimensions frameworks to identify gaps. In either case (top–down or bottom–up), the next step is to try the indicators out with users and obtain fitness-for-use feedback.
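As an illustration of the bottom–up prioritization arithmetic, the sketch below scores candidate indicators in the style of the nominal group technique, where each participant ranks a few top choices and the rank points are summed. The participants, candidate indicators, and 5-3-1 point scheme are hypothetical.

from collections import defaultdict

# Minimal sketch of nominal-group-technique style prioritization of candidate
# indicators. Participants, indicators, and point values are illustrative.
votes = {
    # participant -> ranked top picks, best first
    "Participant A": ["On-time delivery", "Scrap rate", "Absenteeism"],
    "Participant B": ["On-time delivery", "Customer complaints", "Scrap rate"],
    "Participant C": ["Customer complaints", "On-time delivery", "Energy cost"],
}

POINTS = [5, 3, 1]  # points awarded to a participant's 1st, 2nd, and 3rd choice

scores: dict[str, int] = defaultdict(int)
for picks in votes.values():
    for rank, indicator in enumerate(picks):
        scores[indicator] += POINTS[rank]

# The "vital few" emerge at the top of the consolidated list.
for indicator, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{indicator}: {score}")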
6. Track the KPIs on an ongoing basis. Include level, trend, and comparison data, along with time-phased targets to evaluate performance and stimulate improvement.
Compare and contrast seemingly related KPIs over time to derive a more integrated picture of system performance. An important part of piloting and later institutionalizing the vital few indicators is to develop appropriate portrayal formats for each indicator. What is appropriate depends on the users’ preferences, the indicator’s purpose, and how results on the indicator will be evaluated. User preferences may include charts versus tables, use of color (some users are partially or fully color-blind), and the ability to drill down and easily obtain additional detail. An indicator intended for control purposes must be easily transmissible in a report format and should not be dependent on color (the chart maker often loses control of the chart once it is submitted, and color charts are often reproduced on black-and-white copiers), nor should it be dependent on verbal explanation. Such an indicator should also support the application of statistical thinking so that common causes of variation are not treated as assignable causes, with the accompanying request for action. An indicator intended for feedback and improvement of the entire organization or a large group will need to be easily understood by a diverse audience, large enough to be seen from a distance, and easily dispersed widely and quickly. Rules of thumb for portraying performance information are provided in Table 5.1. Not all of the considerations in Table 5.1 can be applied to every chart. A detailed discussion of portrayal is beyond the scope of this chapter. Design teams should support themselves with materials such as Wheeler’s Understanding Variation (1993) and Edward Tufte’s booklet, Visual and Statistical Thinking: Displays of Evidence for Making Decisions (1997a), a quick and entertaining read on the implications of proper portrayal.
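The statistical thinking mentioned above can be illustrated with an individuals (XmR) chart of the kind Wheeler describes, which places natural process limits around a tracked KPI so that common-cause variation is not mistaken for an assignable-cause signal. The following is a minimal sketch; the monthly values are fabricated for illustration, and 2.66 is the conventional XmR scaling constant.

# Minimal sketch of Wheeler-style statistical thinking for a tracked KPI:
# an individuals (XmR) chart with natural process limits.
values = [94.0, 95.5, 93.8, 96.1, 94.7, 95.0, 93.2, 96.4, 95.1, 86.5, 94.8, 95.3]

mean = sum(values) / len(values)
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper_limit = mean + 2.66 * avg_mr
lower_limit = mean - 2.66 * avg_mr

print(f"center line = {mean:.2f}, limits = [{lower_limit:.2f}, {upper_limit:.2f}]")
for month, x in enumerate(values, start=1):
    signal = "assignable-cause signal" if (x > upper_limit or x < lower_limit) else "common-cause variation"
    print(f"month {month:2d}: {x:5.1f}  ->  {signal}")

In this fabricated series, the month-10 value falls just below the lower natural process limit and would be flagged for investigation; the other points reflect routine variation and call for no special action.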
7. Conduct review sessions. A powerful approach to obtain feedback from users on the indicators, and to evaluate organizational performance based on the indicators, is to conduct regular periodic face-to-face review sessions. Review sessions are typically conducted with all the leaders of the target system participating as a group.
Notionally, the review sessions address four fundamental questions: (1) Is the organization producing the results called for in the strategy? (2) If yes, what’s next; and if no, why not? (3) Are people completing the initiatives agreed to when deploying the strategy? (4) If yes, what’s next; if no, why not? The review session is where critical thinking and group learning can occur regarding the organizational hypothesis tests inherent in strategy. If predicted desired results are actually achieved, is it because leaders chose a sound strategy and executed it well? To what degree was luck or chance involved? If predicted results were not achieved, is it because the strategy was sound yet poorly implemented? Or well implemented but results are delayed by an unforeseen lag factor? Or, in spite of best intentions, did leaders select the “wrong”
strategy? Group discussion of these strategy and measurement questions will also generate further suggestions for enhancing the set of indicators and how they are portrayed. See Farris et al. (2011) for more on review sessions.