HUMAN FACTORS ENGINEERING AND SYSTEMS DESIGN
2.4 Human Reliability
not attainable in the foreseeable future. The lower right also represents an effective use of full automation, and the upper left represents the most effective deployment of humans—working on undefined and unpredictable problems. As discussed by Sheridan (2002), few real situations occur at these extremes; most human–automated systems represent some trade-off of these options, which gradually progress toward the upper right—ideally, intelligent automation. Clearly, specification of the human–machine relationship is an important design decision. The relationship must be such that the abilities of both the human and machine components are maximized, as is cooperation among these components. Too often, technology is viewed as a panacea and implemented without sufficient attention to human and organizational issues.
The impact of the changing nature of person–machine systems on the system design process and current approaches to system design is discussed in a later section. However, before this topic is addressed, concepts of system and human reliability are introduced because these concepts are important to a discussion of system design and evaluation.
changes in the system, such as modifications in management, work procedures, work planning, or resources.
As noted, analyses of many major accident events indicate that the root cause of these events can be traced to latent failures and organizational errors. In other words, human errors and their resulting consequences usually result from inadequacies in system design. An example is the crash at Dryden Airport in Ontario.
The analysis of this accident revealed that it was linked to organizational failings such as poor training, lack of management commitment to safety, and inadequate maintenance and regulatory procedures (Reason, 1995). These findings indicate that when analyzing human error it is important to look at the entire system and the organizational context in which the error occurred.
Several researchers have developed taxonomies for classifying human errors into categories. These taxonomies are useful, as they help identify the source of human error and strategies that might be effective in coping with error. Different taxonomies emphasize different aspects of human performance. For example, some taxonomies emphasize human actions, whereas others emphasize information-processing aspects of behavior. Rasmussen and colleagues (Rasmussen, 1982;
Rasmussen et al., 1994) developed a taxonomy of human errors from analyses of human involvement in failures in complex processes. This schema is based on a decomposition of mental processes and states involved in erroneous behavior. For the analysis, the events of the causal chain are followed backward from the observed accidental event through mechanisms involved at each stage. The taxonomy is based on an analysis of the work system and considers the context in which the error occurred (e.g., workload, work procedures, shift requirements). This taxonomy has been applied to the analysis of work systems and has proven to be useful for understanding the nature of human involvement in accident events.
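To make the backward-tracing idea concrete, the following sketch shows one way a causal chain might be represented and walked backward from the observed accident event; the event descriptions, error categories, and context labels are hypothetical illustrations and do not reproduce Rasmussen's published coding scheme.

    # Hypothetical sketch: each event in a causal chain carries an error
    # category and the work-system context in which it occurred, and the
    # chain is followed backward from the observed accident event.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Event:
        description: str                                 # what was observed
        category: str                                    # illustrative error category
        context: dict = field(default_factory=dict)      # workload, procedures, shift, ...
        antecedent: Optional["Event"] = None              # preceding event in the chain

    def trace_back(accident: Event) -> list:
        """Follow the causal chain backward from the observed accident event."""
        chain, node = [], accident
        while node is not None:
            chain.append(node)
            node = node.antecedent
        return chain   # observed event first, earliest contributing event last

    # Example chain: an alarm mis-set under high workload precedes a missed warning.
    mis_set = Event("alarm threshold set too high", "action slip",
                    {"workload": "high", "shift": "night"})
    missed = Event("low-level warning not detected", "detection failure",
                   antecedent=mis_set)
    for event in trace_back(missed):
        print(event.category, "-", event.description, event.context)

The value of such a representation lies less in the code than in requiring the analyst to record, for every link in the chain, the work-system context that shaped the behavior.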
Reason (1990, 1995) has developed a similar scheme for examining the etiology of human error for the design and analysis of complex work systems. The model is based on a systems approach and describes a pathway for identifying the organizational causes of human error. The model includes two interrelated causal sequences for error events: (1) an active failure pathway where the failure originates in top management decisions and proceeds through error-producing conditions in various workplaces to unsafe acts committed by workers at the immediate human–machine interface and (2) a latent failure pathway that runs directly from the organizational processes to deficiencies in the system’s defenses. The model can be used to assess organizational safety health in order to develop proactive measures for remediating system difficulties and as an investigation technique for identifying the root causes of system breakdowns.
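A compact sketch of the two causal sequences is given below; the stage names follow the description above, but the audit data and function name are assumptions made for illustration and are not taken from Reason's published instruments.

    # Illustrative representation of the two interrelated causal pathways.
    ACTIVE_PATHWAY = ("organizational processes",
                      "error-producing conditions in the workplace",
                      "unsafe acts at the human-machine interface")
    LATENT_PATHWAY = ("organizational processes",
                      "deficiencies in the system's defenses")

    def flagged_stages(findings):
        """Return the pathway stages that an investigation or audit has flagged."""
        all_stages = set(ACTIVE_PATHWAY) | set(LATENT_PATHWAY)
        return sorted(stage for stage, problem in findings.items()
                      if problem and stage in all_stages)

    # Example: a proactive audit of organizational "safety health".
    audit = {"error-producing conditions in the workplace": True,
             "deficiencies in the system's defenses": True,
             "unsafe acts at the human-machine interface": False}
    print(flagged_stages(audit))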
The implications of error analysis for system design depend on the nature of the error as well as the nature of the system. Errors and accidents have multiple causes, and different types of errors require different remedial measures. For example, if an error involves deviations
from normal procedures in a well-structured technical system, it is possible to derive a corrective action for a particular aspect of an interface or task element.
This might involve redesign of equipment or of some work procedure to minimize the potential for the error to occur. However, in complex dynamic work systems it is often difficult or undesirable to eliminate the incidence of human error completely. In these types of systems, there are many possible strategies for achieving system goals; thus, it is not possible to specify precise procedures for performing tasks. Instead, operators must be creative and flexible and engage in exploratory behavior in order to respond to the changing demands of the system. Further, designers are not able to anticipate the entire set of possible events; thus, it is difficult to build in mechanisms to cope with these events. This makes a certain amount of error inevitable.
Several researchers (Rouse and Morris, 1987; Rasmussen et al., 1994) advocate the design of error-tolerant systems, where the system tolerates the occurrence of errors but avoids the consequences; there is a means to control the impact of error on system performance. Design of these interfaces requires an understanding of the work domain and the acceptable boundaries of behavior and modeling the cognitive activity of operators dealing with incidents in a dynamic environment. A simple example of this type of design would be a computer system which holds a record of a file so that it is not lost permanently if an operator mistakenly deletes the file. A more sophisticated example would be an intelligent monitoring system which is capable of varying levels of intervention.
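A minimal sketch of the recoverable-deletion example follows; the class and method names are invented for illustration and are not drawn from the cited work.

    # Error-tolerant design in miniature: the system tolerates the erroneous
    # delete command but avoids its consequence by keeping the file recoverable.
    class RecoverableStore:
        def __init__(self):
            self.files = {}   # name -> contents
            self.trash = {}   # deleted files held for recovery

        def delete(self, name):
            """Move the file to a holding area instead of destroying it."""
            if name in self.files:
                self.trash[name] = self.files.pop(name)

        def restore(self, name):
            """Let the operator recover from a mistaken deletion."""
            if name in self.trash:
                self.files[name] = self.trash.pop(name)
                return True
            return False

    store = RecoverableStore()
    store.files["report.txt"] = "quarterly figures"
    store.delete("report.txt")       # operator slip
    store.restore("report.txt")      # consequence avoided; the file is back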
Rouse and Morris (1987) describe an error-tolerant system that provides three levels of support. Two levels involve feedback (current state and future state) and rely on an operator’s ability to perceive his or her own errors and act appropriately. The third level involves intelligent monitoring, that is, online identification and error control. They propose an architecture for the development of this type of system that is based on an operator-centered design philosophy and involves incremental support and automation. Rasmussen and Vicente (1989) have developed a framework for an interface that supports recovery from human errors. The framework, called ecological interface design, is based on an analysis of the work system. This approach is described in more detail in a later section.
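The three levels of support can be illustrated with a small sketch; the variables, thresholds, and messages are hypothetical and are not taken from Rouse and Morris (1987).

    # Level 1: feedback on the current state.
    # Level 2: feedback on the projected future state.
    # Level 3: intelligent monitoring - online identification and error control.
    def support_message(level, current, predicted, limit):
        if level == 1:
            return f"Current value {current:.1f} (limit {limit:.1f})"
        if level == 2:
            if predicted > limit:
                return f"Projected value {predicted:.1f} will exceed the limit"
            return f"Projected value {predicted:.1f} remains within the limit"
        # Level 3: the system itself identifies and controls the error.
        if predicted > limit:
            return "Setpoint reduced automatically; operator notified of intervention"
        return "No intervention required"

    for level in (1, 2, 3):
        print(support_message(level, current=92.0, predicted=108.0, limit=100.0))

Levels 1 and 2 leave detection and correction to the operator, consistent with an operator-centered philosophy; level 3 intervenes only when the projected state crosses an acceptable boundary.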
3 SYSTEM DESIGN PROCESS
3.1 Approaches to System Design
System design is usually depicted as a highly structured and formalized process characterized by stages in which various activities occur. These activities vary as a function of system requirements, but they generally involve planning, designing, testing, and evaluating. More details regarding these activities are given in a subsequent section. Generally, system design is characterized as a top-down process that proceeds, in an interactive fashion, from broad molar functions to progressively more molecular tasks and subtasks. It is also a time-driven
process and is constrained by cost, resources, and organizational and environmental requirements. The overall goal of system design is to develop an entity that is capable of transforming inputs into outputs to accomplish specified goals and objectives.
In recent years, within the realm of system design, a great deal of attention has been given to the design philosophy and the resulting design architecture as it has become apparent that new design approaches are required to design modern complex systems. The design and analysis of such systems cannot be based on design models developed for systems characterized by a stable environment and stable task procedures. Instead, the design approach is concerned with supplying resources to people who operate in a dynamic work space, engage in collaborative relationships, use a variety of technologies, and often need to adapt their behavioral patterns to changing environmental conditions. In other words, a structural perspective whereby we describe the behavior of the system in terms of cause-and-effect patterns and arrange system elements in cause-and-effect chains is no longer adequate.
3.1.1 Models of System Design
The traditional view of the system design process is that it is a linear sequence of activities where the output of each stage serves as input to the next stage. The stages generally proceed from the conceptual level to physical design through implementation and evaluation. Human factors inputs are generally considered in the design and evaluation stages (Eason, 1991). The general characteristics of this approach are that it represents a reductionist approach where various components are designed in isolation and made to fit together; it is dominated by technological considerations where humans are considered secondary components. The focus is on fitting the person to the system, and different components of the system are developed on the basis of narrow functional perspectives (Kidd, 1992; Liker and Majchrzak, 1994).
Generally, this approach has dominated the design of overall work systems, such as manufacturing systems, as well as the design of the human–machine interface. For example, the emphasis in the design of human–computer systems has largely been on the individual level of the human–computer interaction without much attention to task and environmental factors that may affect performance. To date, too much attention has been on the microergonomic aspects of design without sufficient attention to social and organizational issues (Hendrick and Kleiner, 2001; Kleiner, 2008). The implementation of computers and automation into most work systems, coupled with the enhanced capabilities of technological systems, has created a need for new approaches to system design. As discussed, there are many instances where technology has failed to achieve its potential, resulting in failures in system performance with adverse and often disastrous consequences. These events have demonstrated that the traditional design approach is no longer adequate. A brief overview of these and other design approaches will be presented to provide examples of alternative approaches to system design and to demonstrate
methodologies and concepts that can be applied to the design of current human–machine systems. This will be followed by a discussion of the specific application of human factors engineering to design activities.
3.1.2 Alternative Approaches to System Design
Sociotechnical Systems Approach The sociotechnical systems approach, which evolved from work conducted at the Tavistock Institute, represents a complete design process for the analysis, design, and implementation of systems. The approach is based on open systems theory and emphasizes the fit between social and technical systems and the environment. This approach includes methods for analyzing the environment, the social system, and the technical system. The overall design objective is the joint optimization of the social and technical systems (Pasmore, 1988). Some drawbacks associated with sociotechnical design are that the design principles are often vague and there is often an overemphasis on the social system without sufficient emphasis on the design of the technical system.
Clegg (2000) recently presented a set of sociotechnical principles to guide system design. The principles are intended for the design of new systems that involve new technologies and modern management practices.
The principles are organized into three interrelated categories: metaprinciples, content principles, and process principles. Metaprinciples are intended to demonstrate a world view of design, content principles focus on more specific aspects of the content of the new designs, and process principles are concerned with the design process. The principles can also be used for evaluative purposes. They are based on a macroergonomic perspective.
The central focus of macroergonomics is on interfacing organizational design with the technology employed in the system to optimize human–system functioning. Macroergonomics considers the human–organization–environment–machine interface, as opposed to microergonomics, which focuses on the human–machine interface. Macroergonomics is considered to be the driving force for microergonomics.
Macroergonomics concepts have been applied successfully to manufacturing, service, and health care organizations as well as to the design of computer-based information systems (Hendrick and Kleiner, 2001;
Kleiner, 2008).
Participatory Ergonomics Participatory ergonomics is the application of ergonomic principles and concepts to the design process by people who are part of the work group and users of the system. These people are typically assisted by ergonomic experts who serve as trainers and resource centers. The overall goal of participatory design is to capitalize on the knowledge of users and to incorporate their needs and concerns into the design process. Methods, such as focus groups, quality circles, and inventories, have been developed to maximize the value of user participation. Participatory ergonomics has been applied to the design of jobs and workplaces and to the design of products. For example, the quality circle approach was adopted by a refrigerator
manufacturing company that needed a systemwide
method for assessing the issues of aging workers. The assembly line for medium-sized refrigerators was chosen as an area for job redesign. The project redesign team involved workers from the line as well as other staff members. The team was instructed with respect to the principles of ergonomics and design for older workers.
The solution proposed by the team for improving the assembly line resulted in improved performance and also allowed older workers to continue to perform the task (Imada et al., 1986). The design of current personal computer systems also typically involves user participation. Representative users participate in usability studies. In general, participatory ergonomics does not represent a design process because it does not consider broader system design issues but rather focuses on individual components. However, the benefits of user participation should not be overlooked, and user participation should be a fundamental aspect of system design.
User-Centered Design The user-centered design approach places human factors at the center of the design process. It is based on an open-systems model and considers the human and technical subsystems within the context of the broader environment. User-centered approaches propose general specifications for system design, such as maximizing user involvement at the task level, supporting cooperative work, and allowing users to maintain control over operations (Liker and Majchrzak, 1994).
Essentially, this design approach incorporates user requirements, user goals, and user tasks as early as possible into the design of a system, when the design is still relatively flexible and changes can be made at the least cost.
Eason (1989) has developed a detailed process for user-centered design in which a system is developed in an evolutionary, incremental fashion and development of the social system complements development of the technical system. Eason maintains that the technical system should follow the design of jobs and the design of the technical system must involve user participation and consider criteria for four factors: functionality, usability, user acceptance, and organizational acceptance. Once these criteria are identified, alternative design solutions are developed and evaluated. There are different philosophies with respect to the nature of user involvement. Eason emphasizes user involvement throughout the design process, whereas with other models the users are considered sources of data and the emphasis is on translating knowledge about users into practice. Advocates of the user participation approach argue that users should participate in the choice between alternatives because they have to live with the results.
Advocates of the knowledge approach express concern about the ability of users to make informed judgments.
Eason (1991) maintains that designers and users can form a partnership where both can play an effective role.
A number of methods are used in user-centered design, including checklists and guidelines, observations, interviews, focus groups, and task analysis.
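Returning to the four criteria Eason identifies (functionality, usability, user acceptance, and organizational acceptance), the sketch below shows one simple way alternative design solutions might be compared; the alternatives, weights, and scores are invented for the example and are not drawn from Eason's work.

    # Weighted scoring of alternative design solutions against four criteria.
    CRITERIA = ("functionality", "usability",
                "user acceptance", "organizational acceptance")

    def rank_alternatives(alternatives, weights):
        scored = [(name, sum(weights[c] * scores[c] for c in CRITERIA))
                  for name, scores in alternatives.items()]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    alternatives = {
        "redesigned workstation": {"functionality": 4, "usability": 3,
                                   "user acceptance": 4, "organizational acceptance": 2},
        "incremental upgrade":    {"functionality": 3, "usability": 4,
                                   "user acceptance": 4, "organizational acceptance": 4},
    }
    weights = {"functionality": 0.3, "usability": 0.3,
               "user acceptance": 0.2, "organizational acceptance": 0.2}
    print(rank_alternatives(alternatives, weights))

Such a scoring exercise is no substitute for user participation, but it makes the trade-offs among the criteria explicit when alternatives are evaluated.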
Computer-Supported Design The design of complex technical systems involves the interpretation and
integration of vast amounts of technical information.
Further, design activities are typically constrained by time and resources and involve the contributions of many persons with varying backgrounds and levels of technical expertise. In this regard, computer-based design support tools have emerged to aid designers and support the design of effective systems. These systems are capable of offering a variety of supports, including information retrieval, information management, and information transformation. The type of support warranted depends on the needs and expertise of the designer (Rouse, 1987). A common example of this type of support is a computer-aided design/computer-aided manufacturing (CAD/CAM) system.
There are many issues surrounding the development and deployment of computer-based design support tools, including specification of the appropriate level of support, determination of optimal ways to characterize the design problem and the type of knowledge most useful to designers, and the identification of factors that influence the acceptance of these tools. A discussion of these issues is beyond the scope of this chapter. Refer to Rouse and Boff (1987a,b) for an excellent review of this topic.
Ecological Interface Design Ecological interface design (EID) is a theoretical framework for designing human–computer interfaces for complex sociotechnical systems (Rasmussen et al., 1994; Vicente, 2002). The primary aim of EID is to support knowledge workers who are required to engage in adaptive problem solving in order to respond to novelty and change in system demands. EID is based on a cognitive systems engineering approach and involves an analysis of the work domain and the cognitive characteristics and behavior tendencies of the individual. Analysis of the work domain is based on an abstraction hierarchy (means–end analysis) (Rasmussen, 1986) and relates to the specification of information content. The skills–rules–knowledge taxonomy (Rasmussen, 1983) is used to derive inferences for how information should be presented. The aims of EID are to support the entire range of activities that confront operators, including familiar, unfamiliar, and unanticipated events, without contributing to the difficulty of the task.
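As a sketch of the work-domain analysis that underlies EID, the abstraction (means–end) hierarchy below describes a simple process-control example; the five levels follow Rasmussen's framework, but the domain content is invented for illustration.

    # A five-level abstraction hierarchy for a hypothetical cooling loop.
    ABSTRACTION_HIERARCHY = {
        "functional purpose":   ["maintain safe coolant temperature"],
        "abstract function":    ["energy balance", "mass balance of coolant"],
        "generalized function": ["heat exchange", "circulation"],
        "physical function":    ["heat exchanger HX-2", "pump P-101"],
        "physical form":        ["pump located on the east wall of the basement"],
    }

    def one_level_down(level):
        """Return the items one level lower, i.e., the means available at that level."""
        levels = list(ABSTRACTION_HIERARCHY)
        index = levels.index(level)
        if index + 1 < len(levels):
            return ABSTRACTION_HIERARCHY[levels[index + 1]]
        return []

    print(one_level_down("functional purpose"))   # the abstract functions serving the purpose

In an EID display, the information content would be drawn from such a hierarchy, while the skills–rules–knowledge taxonomy would guide how that content is presented.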
EID has been applied to a variety of domains, such as process control, aviation, software engineering, and medicine, and has been shown to improve performance over that achieved by more traditional design approaches. However, there are still some challenges confronting the widespread use of EID in industry.
These challenges include the time and effort required to analyze the work domain, choice of the interface form, and the difficulty of integrating EID with the design of other components of a system (Vicente, 2002).
3.2 Incorporating Human Factors