Cautions in Making the Business Case for Work-Life Programs
While the results of the studies just presented may seem compelling, keep in mind three important considerations:
1. Recognize that no one set of facts and figures will make the case for all
firms. It depends on the strategic priorities of the organization in question. Figure 2–6 provides a diagnostic logic for conversations about this.
One might start by discussing whether the organization’s likely payoff will be primarily through talent management, human-capital outcomes (improved employee satisfaction, commitment, and engagement), business operations, or the costs of alternative programs. Start by finding out what your organization and its employees care about right now, what the workforce is going to look like in three to five years, and therefore, what they are going to need to care about in the future. 44
2. Don’t rely on isolated facts to make the business case. Considered by itself,
any single study or fact is only one piece of the total picture. It is important to develop a dynamic understanding of the relationship between work and personal life. Doing so requires a focus on an organization’s overall culture and values, not just on programs or statistics. Often a combination of quantitative information and employees’ experiences, in their own words (qualitative information that brings statistics to life), is most effective.
3. Don’t place work-life initiatives under an unreasonable burden of proof.
Decision makers may well be skeptical even after all the facts and costs have been presented to them. That suggests that more deeply rooted attitudes and beliefs may underlie the skepticism—such as a belief that addressing personal concerns may erode service to clients or customers, that people will take unfair advantage of the benefits, or that work-life issues are just women’s issues. Constructing a credible business case means addressing attitudes and values as well as assembling research. 45
anecdotal information, regardless of the overall impact of such programs (low, average, or high). 48 Such information may be derived in two types of situations:
one in which only indirect measures of dollar outcomes are available and one in which direct measures of dollar outcomes are available.
Indirect Measures of Training Outcomes
Indirect measures of training outcomes are more common than direct measures. That is, many studies of training outcomes report improvements in job performance or decreases in errors, scrap, and waste. Relatively few studies report training outcomes directly in terms of dollars gained or saved. Indirect measures can often be converted into estimates of the dollar impact of training, however, by using a method known as utility analysis. Although the technical details of the method are beyond the scope of this chapter, following is a summary of one such study. 49
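To make the idea concrete, the sketch below applies the widely used Brogden-Cronbach-Gleser utility formula, in which the estimated dollar gain equals the number of people trained, times the duration of the effect, times the effect size in standard-deviation units, times the dollar value of one standard deviation of job performance, minus the program’s cost. The figures are hypothetical and are not drawn from the study described next.

```python
# Illustrative sketch of the Brogden-Cronbach-Gleser utility model commonly
# used to translate a training effect into a dollar estimate. All values
# below are hypothetical and are not taken from the study that follows.

def training_utility(n_trained, years, effect_size_sd, sd_dollar_value, total_cost):
    """Estimated net dollar gain: (N x T x d x SDy) - total program cost."""
    return n_trained * years * effect_size_sd * sd_dollar_value - total_cost

# Hypothetical example: 100 trainees, effect assumed to last one year,
# a 0.5 SD performance improvement, SDy of $10,000 per person-year,
# and a fully loaded program cost of $150,000.
gain = training_utility(n_trained=100, years=1.0, effect_size_sd=0.5,
                        sd_dollar_value=10_000, total_cost=150_000)
print(f"Estimated net utility: ${gain:,.0f}")  # Estimated net utility: $350,000
```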
A large, U.S.-based, multinational firm conducted a four-year investigation of the effect and utility of its corporate managerial and sales/technical training functions. The study is noteworthy because it adopted a strategic focus by comparing the payoffs from different types of training in order to assist decision makers in allocating training budgets and specifying the types of employees to be trained. 50
Project History
The CEO, a former research scientist, requested a report on the dollar value of training. He indicated that training should be evaluated experimentally, strategically aligned with the business goals of the organization, and thus demonstrated to be a worthwhile investment for the company. The impetus for this large-scale study thus came from the top of the organization.
Methodological Issues
In a project of this scope and complexity, it is necessary to address several important methodological issues. The first concerns the outcomes or criteria to use in judging each program’s effectiveness. Training courses that attempt to influence the supervisory style of managers may affect a large percentage of the tasks that make up that job. Conversely, a course designed to affect sales of a specific product may influence only a few of the tasks in the sales representative’s job. The researchers corrected for this issue to ensure that the estimate of economic payoff for each training program represented only the value of performance on specific job elements.
A second issue that must be considered when assessing the effectiveness of alternative training programs is the transfer of trained skills from the training to the job. For the training to have value, skills must be generalized to the job (i.e., exhibited on the job), and such transfer must be maintained for some period of time. To address the issue of transfer, the measure of performance on all training programs was behavioral performance on the job. Performance was assessed by means of a survey completed by each trainee’s supervisor (for most courses), peers (i.e., hazardous energy control), or subordinates (i.e., team building) before and after training.
Because it was not possible to assess the length of training’s effects in this study, decision makers assumed that training’s effect (and economic utility) was maintained without decay or growth for precisely one year. In addition, the researchers calculated break-even values, which indicate the length of time
the observed effect would need to be maintained in order to recover the cost of the training program. 51
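As a purely illustrative sketch (with made-up figures, not the study’s), such a break-even value can be computed directly from a program’s cost and its estimated annual benefit:

```python
# Hypothetical sketch of a break-even calculation: the length of time the
# training effect must persist before cumulative benefits cover the program's
# cost. Figures are illustrative only.

def break_even_years(annual_benefit, total_cost):
    """Years the effect must be maintained for benefits to equal costs."""
    return total_cost / annual_benefit

# Example: a program yielding $500,000 of benefit per year at a cost of
# $150,000 breaks even in 0.3 years (roughly 16 weeks).
print(break_even_years(annual_benefit=500_000, total_cost=150_000))  # 0.3
```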
Training Programs Evaluated
A sample of 18 high-use or high-cost courses was selected based on the recommendation of the training departments throughout the organization. Managerial training courses were defined as courses developed for individuals with managerial or supervisory duties. Sales training courses were defined as programs designed to enhance the performance of sales representatives by affecting sales performance or support of their own sales. Technical training courses (e.g., hazardous energy control, in-house time management) were defined as courses not specifically designed for sales or supervisory personnel.
Of the 18 programs, 8 evaluation studies used a control-group design in which training was provided to one group and not provided to a second group similar to the trained group in terms of relevant characteristics. The remaining 10 training program evaluations relied on a pretest-posttest-only design, in which a control group was not used and the performance of the trained group alone was evaluated before and after the training program.
Results. Over all 18 programs, assuming a normal distribution of performance on the job, the average improvement was about 17 percent (0.54 of a standard deviation, or SD). However, for technical/sales training it was higher (0.64 SD), and for managerial training it was lower (0.31 SD). Thus, training in general was effective.
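The improvements above are expressed in standard-deviation (SD) units. Purely to illustrate what such a number means (this is not the study’s actual computation), a standardized effect size for a control-group design can be estimated as the difference between the trained and untrained groups’ mean post-training scores divided by their pooled standard deviation:

```python
# Hypothetical sketch of a standardized effect size (in SD units) for a
# control-group design: the difference in mean post-training performance
# divided by the pooled standard deviation. The ratings below are invented.
import math
import statistics

trained = [78, 82, 75, 84, 80, 77, 79, 81]   # post-training performance ratings
control = [76, 80, 73, 82, 78, 75, 77, 79]   # untrained comparison group

mean_diff = statistics.mean(trained) - statistics.mean(control)
pooled_var = (((len(trained) - 1) * statistics.variance(trained)
               + (len(control) - 1) * statistics.variance(control))
              / (len(trained) + len(control) - 2))
effect_size_sd = mean_diff / math.sqrt(pooled_var)
print(round(effect_size_sd, 2))  # about 0.69 SD
```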
The mean ROI was 45 percent for the managerial training programs, and 418 percent for the sales/technical training programs. However, one inexpensive time-management program developed in-house had an ROI of nearly 2,000 percent. When the economic utility of that program is removed, the overall average ROI of the remaining training programs was 84 percent and the ROI of sales/technical training was 156 percent.
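For readers who want to see how figures of this kind are produced, the sketch below shows the conventional ROI computation (net benefit divided by cost) and why a single extreme program can inflate a mean ROI; all numbers in it are illustrative rather than the study’s data.

```python
# Hypothetical sketch of how an ROI percentage of this kind is computed:
# net benefit (benefits minus costs) divided by costs, expressed as a percent.
# The figures below are illustrative, not the study's data.

def roi_percent(total_benefit, total_cost):
    return 100 * (total_benefit - total_cost) / total_cost

# Example: a course returning $250,000 in benefits on a $100,000 cost.
print(roi_percent(total_benefit=250_000, total_cost=100_000))  # 150.0

# A single extreme program can dominate a mean ROI, which is why the averages
# above are reported with and without the outlier (values here are made up).
rois = [45, 60, 120, 156, 1_950]
print(sum(rois) / len(rois))            # about 466 with the outlier
print(sum(rois[:-1]) / len(rois[:-1]))  # about 95 without it
```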
Time to Break-Even Values. There was considerable variability in these values.
Break-even periods ranged from a few weeks (e.g., time management, written communications) to several years (supervisory skills, leadership skills). Several programs were found to have little positive or even slightly negative effects, and thus would never yield a financial gain.
Conclusions
This study compared the effectiveness and economic utility of different types of training across sales, technical, and managerial jobs. The estimated cost of the four-year project, including the fully loaded cost of rater time, as well as the cost of consulting, travel, and materials, was approximately $730,000 (in 2007 dollars).
This number may seem large. However, over the same time period, the organization spent more than $350 million on training. The cost of the training program evaluation was thus approximately 0.2 percent of the training budget. Given budgets of this magnitude, some sort of accountability is prudent.
Despite the overall positive effects and utility of the training, there were some exceptions. The important lesson to be learned is that it is necessary to evaluate the effect and utility of each training program before drawing overall conclusions
about the impact of training. It would be simplistic to claim that “training is a good investment” or that “training is a waste of time and money.” 52
Direct Measures of Training Outcomes
When direct measures of training outcomes are available, standard valuation methods are appropriate. The following study valued the results of a behavior-modeling training program (described more fully below) for sales representatives in relation to the program’s effects on sales performance. 53
Study Design
A large retailer conducted a behavior-modeling program in two departments, large appliances and radio/TV, within 14 of its stores in one large metropolitan area. The 14 stores were matched into seven pairs in terms of size, location, and market characteristics. Stores with unusual characteristics that could affect their overall performance, such as declining sales or recent changes in management, were not included in the study.
The training program was then introduced in seven stores, one in each of the matched pairs, and not in the other seven stores. Other kinds of ongoing sales training programs were provided in the control-group stores, but the behavior-modeling approach was used only in the seven experimental-group stores. In the experimental-group stores, 58 sales associates received the training, and their job performance was compared with that of 64 sales associates in the same departments in the control-group stores.
As in most sales organizations, detailed sales records for each individual were kept on a continuous basis. These records included total sales as well as hours worked on the sales floor. Because all individuals received commissions on their sales and because the value of the various products sold varied greatly, it was possible to compute a job performance measure for each individual in terms of average commissions per hour worked.
There was considerable variation in the month-to-month sales performance of each individual, but sales performance over six-month periods was more stable. In fact, the average correlation between consecutive sales periods of six months each was about 0.80 (where 1.00 equals perfect agreement). Hence, the researchers decided to compare the sales records of participants for six months before the training program was introduced with the results achieved during the same six months the following year, after the training was concluded. All sales promotions and other programs in the stores were identical, since these were administered on an areawide basis.
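As a purely illustrative sketch, the per-hour commission measure and the period-to-period stability check described above might be computed from individual sales records along the following lines; the records and field layout are invented for the example.

```python
# Hypothetical sketch: computing each associate's commissions per hour worked
# and checking the stability (correlation) of that measure across two
# consecutive six-month periods. The records and field layout are invented.
import statistics

# (associate_id, commissions_1, hours_1, commissions_2, hours_2)
records = [
    ("A01",  9_400,   980,  9_700, 1_010),
    ("A02",  8_100,   950,  8_300,   940),
    ("A03", 11_200, 1_020, 10_900, 1_000),
    ("A04",  7_600,   900,  7_900,   930),
]

per_hour_1 = [c1 / h1 for _, c1, h1, _, _ in records]
per_hour_2 = [c2 / h2 for _, _, _, c2, h2 in records]

# Pearson correlation between the two six-month periods; a value near 0.80,
# as in the study, suggests the six-month measure is reasonably stable.
r = statistics.correlation(per_hour_1, per_hour_2)
print([round(x, 2) for x in per_hour_1], round(r, 2))
```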
The Training Program Itself
The program focused on specific aspects of sales situations, such as “approaching the customer,” “explaining features, advantages, and benefits,” and “closing the sale.” The training itself proceeded as follows.
First the trainers presented guidelines (or “learning points”) for handling each aspect of a sales interaction. Then the trainees viewed a videotaped situation in which a “model” sales associate followed the guidelines in carrying out that aspect of the sales interaction with a customer. The trainees then practiced the same situation in role-playing rehearsals. Their performance was reinforced and shaped by their supervisors, who had been trained as their instructors.
Study Results
Of the original 58 trainees in the experimental group, 50 were still working as sales associates one year later. Of the remaining 8 associates, 4 had been promoted during the interim, and 4 others had left the company. In the control-group stores, only 49 of the original 64 were still working as sales associates one year later. Only 1 had been promoted, and 14 others had left the company. Thus, the behavior-modeling program may have had a substantial positive effect on turnover, since only about 7 percent of the trained group left during the ensuing year, in comparison with 22 percent of those in the control group. (This result had not been predicted.)
Figure 2–7 presents the changes in average per-hour commissions for participants in both the trained and untrained groups from the six-month period before the training was conducted to the six-month period following the training. Note in Figure 2–7 that the trained and untrained groups did not have equal per-hour commissions at the start of the study. While the stores that members of the two groups worked in were matched at the start of the study, sales commissions were not. Sales associates in the trained group started at a lower point than did sales associates in the untrained group. Average per-hour commissions for the trained group increased over the year from $9.27 to $9.95 ($21.78 to $23.39 in year 2007 dollars); average per-hour commissions for the untrained group declined over the year from $9.71 to $9.43 ($22.81 to $22.14 in year 2007 dollars). In other words, the trained sales associates increased their average earnings by about 7 percent, whereas those who did not receive the behavior-modeling training experienced a 3 percent decline in average earnings. This difference was statistically significant. Other training outcomes (e.g., trainee attitudes, supervisory behaviors) were also assessed, but, for our purposes, the most important lesson was that the study provided objective evidence to indicate the dollar impact of the training on increased sales.
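The percentages reported in this paragraph and the turnover figures in the preceding one can be reproduced directly from the numbers given in the text, as the short check below shows.

```python
# Quick check of the percentages reported above, using the figures given in
# the text (first-year dollars and the reported headcounts).

def pct_change(before, after):
    return 100 * (after - before) / before

print(round(pct_change(9.27, 9.95), 1))   # trained group:   +7.3 percent
print(round(pct_change(9.71, 9.43), 1))   # untrained group: -2.9 percent

# Turnover over the following year, from the counts reported earlier:
print(round(100 * 4 / 58, 1))    # trained group:  about 6.9 percent left
print(round(100 * 14 / 64, 1))   # control group:  about 21.9 percent left
```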
The program also had an important secondary effect on turnover. Because all sales associates are given considerable training (which represents an extensive investment of time and money), it appears that the behavior modeling contributed to cost savings in addition to increased sales. As noted in the previous discussion of turnover costs, an accurate estimate of these cost savings requires that the turnovers be separated into controllable and uncontrollable, because training can affect only controllable turnover.
Figure 2–7 Changes in average per-hour commissions before and after the behavior-modeling training program (Year 1 before training vs. Year 2 after training: trained group, $9.27 to $9.95; untrained group, $9.71 to $9.43).

Finally, the use of objective data as criterion measures in a study of this kind does entail some problems. As pointed out earlier, the researchers found that a six-month period was required to balance out the month-to-month variations in sales performance resulting from changing work schedules, sales promotions, and similar influences that affected individual results. It also took some vigilance to ensure that the records needed for the study were kept in a consistent and conscientious manner in each store. According to the researchers, however, these problems were not great in relation to the usefulness of the study results. “The evidence that the training program had a measurable effect on sales was certainly more convincing in demonstrating the value of the program than would be merely the opinions of participants that the training was worthwhile.” 54