


To engineer is to err

7.1 Humans degrade basically safe systems. Or do they?

The most basic assumption that engineers often bring to their work is that human error exists. Human error is ‘out there’: it can be measured, counted, and it can be designed or proceduralised against – at least to some extent. Among engineers, the traditional idea has often been that human error degrades basically safe systems.

Engineers do their best to build safe systems: to build in redundancies, double-checks and safety margins. All would go well (i.e. all can be predicted or calculated to go well) until the human element is introduced into that system. Humans are the most unreliable components in an engineered constellation. They are the least predictable: their performance degrades in surprising and hard-to-anticipate ways. It is difficult even to say when errors will occur, or how often. ‘Errors’ arise as a seemingly random by-product of having people touch a basically safe, well-engineered system.

The common engineering reflex (which is assumed to create greater safety) is to keep the human away from the engineered system as much as possible (by automation), and to limit the bandwidth of allowable human action where it is still necessary (through procedures). Neither of these ‘solutions’ works unequivocally well.

Human factors, as we know it today, got its inspiration from these basic ideas about human error. It then showed something different: an alternative way of looking at human error. As a result, there are basically two ways of looking at human error today. We can see human error as a cause of failure, or we can see human error as a symptom of failure. These two views have recently been contrasted as the old view of human error versus the new view – fundamentally irreconcilable perspectives on the human contribution to system success and failure. In the old view of human error:

• Human error is the cause of accidents.

• The system in which people work is basically safe; success is intrinsic. The chief threat to safety comes from the inherent unreliability of people.

• Progress on safety can be made by protecting the system from unreliable humans through selection, proceduralisation, automation, training and discipline.

In the new view of human error:

• Human error is a symptom of trouble deeper inside the system.

• Safety is not inherent in systems. The systems themselves are contradictions between multiple goals that people must pursue simultaneously. People have to create safety.

• Human error is systematically connected to features of people’s tools, tasks and operating environment. Progress on safety comes from understanding and influencing these connections.

The groundwork for the new view of human error was laid at the beginning of human factors. Fitts and Jones described back in 1947 how features of World War II airplane cockpits systematically influenced the way in which pilots made errors [1].

For example, pilots confused the flap and gear handles because these typically looked and felt the same and were co-located. Or they mixed up the locations of throttle, mixture and propeller controls because these kept changing across different cockpits.

Human error was the starting point for Fitts’ and Jones’ studies – not the conclusion.

The label ‘pilot error’ was deemed unsatisfactory, and was used instead as a pointer to hunt for deeper, more systemic conditions that led to consistent trouble. The idea these studies convey to us is that mistakes actually make sense once we understand the features of the engineered world that surrounds people. Human errors are systematically connected to features of people’s tools and tasks. The insight, then as now, was profound: the world is not unchangeable; systems are not static, not simply given. We can re-tool, re-build, re-design, and thus influence the way in which people perform. This, indeed, is the historical imperative of human factors – understanding why people do what they do so that we can change the world in which they work and shape their assessments and actions accordingly. Human factors is about helping engineers build systems that are error-resistant (i.e. do not invite errors) and error-tolerant (i.e. allow for recovery when errors do occur).

But what is an error, really? There are serious problems with assuming that ‘human error’ exists as such, that it is a uniquely identifiable category of sub-standard human performance, that it can be seen, counted, shared and designed against. What, then, do engineers refer to when they say ‘error’? In safety and engineering debates there are at least three ways of using the label ‘error’:

• Error as the cause of failure. For example: this accident was due to operator error.

• Error as the failure itself. For example: the operator’s selection of that mode was an error.

• Error as a process or, more specifically, as a departure from some kind of standard. For example: the operators failed to follow procedures.

Depending on what you use as the standard, you will come to different conclusions about what counts as an error.

This lexical confusion, this inability to sort out what is cause and what is consequence, is actually an old and well-documented problem in human factors and specifically in error classifications [2]. Research over the past decade has tried to be more specific in its use of the label ‘human error’. Reason, for example, contends that human error is inextricably linked with human intention. He asserts that the term error can only be meaningfully applied to planned actions that fail to achieve their desired consequences, where that failure cannot be attributed to some unforeseeable intervention. Reason identified the basic types of human error as either slips and lapses, or mistakes. Specifically, slips and lapses are errors that result from some failure in the execution or storage stage of an action sequence, regardless of whether or not the plan that guided the action was adequate to achieve its objective. In this context, slips are considered potentially observable behaviour, whereas lapses are regarded as unobservable errors.

In contrast, Reason defines mistakes as the result of judgmental or inferential processes involved in the selection of an objective or in the specification of the means to achieve it. This differentiation between slips, lapses and mistakes was a significant contribution to the understanding of human error.

Reason’s error type definitions have limitations when it comes to their practical application. When analysing erroneous behaviour, it is possible that a slip and a mistake lead to the same action even though they are the results of different cognitive processes. This can have different implications for the design and assessment of human–computer interfaces. To understand why a human error occurred, the cognitive processes that produced the error must also be understood, and for that the situation, or context, in which the error occurred has to be understood as well.

Broadly speaking, slips or lapses can be regarded as action errors, whereas mistakes are related more to situation assessment, and to people’s planning based on such assessment (see Table 7.1).

Much of Reason’s insight is based on, and inspired by, Jens Rasmussen’s proposal [3], which makes a distinction between skill-based, rule-based and knowledge-based performance. This is known as the SRK framework of human performance, shown in Table 7.2.

In Rasmussen’s proposal, the three levels of performance in the SRK framework correspond to decreasing levels of familiarity with the task or task context, and to increasing levels of cognition.

Table 7.1 Behavioural errors and cognitive processes (from [2], p. 13)

Behavioural error type    Erroneous cognitive process
Mistakes                  Planning/situation assessment
Lapses                    Memory storage
Slips                     Execution

Table 7.2 Skill–rule–knowledge framework (from [3])

Performance level    Cognitive characteristics
Skill-based          Automatic, unconscious, parallel activities
Rule-based           Recognising situations and following associated procedures
Knowledge-based      Conscious problem solving

Table 7.3 Human performance and behavioural errors (from [2])

Performance level    Behavioural error type
Skill-based          Slips and lapses
Rule-based           Rule-based mistakes
Knowledge-based      Knowledge-based mistakes

Based on the SRK performance levels, Reason argues that a key distinction between the error types is whether an operator is engaged in problem solving at the time an error occurs. This distinction allowed Reason to identify the three distinct error types shown in Table 7.3.
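The mapping in Table 7.3 can be read as a small decision rule: first ask whether the operator was engaged in problem solving when the error occurred, and if so, whether a familiar rule or procedure was being applied. The sketch below is purely illustrative and not part of the original text; it encodes the table as a hypothetical classification helper, with all names (e.g. classify_error) chosen for this example only.

```python
# Illustrative sketch only: encodes the Table 7.3 mapping between Rasmussen's
# performance levels and Reason's behavioural error types. The names and the
# classify_error() helper are hypothetical, not taken from the source text.
from enum import Enum


class PerformanceLevel(Enum):
    SKILL_BASED = "skill-based"          # automatic, unconscious, parallel activities
    RULE_BASED = "rule-based"            # recognising situations, following procedures
    KNOWLEDGE_BASED = "knowledge-based"  # conscious problem solving


# Table 7.3: performance level -> behavioural error type
ERROR_TYPE_BY_LEVEL = {
    PerformanceLevel.SKILL_BASED: "slips and lapses",
    PerformanceLevel.RULE_BASED: "rule-based mistakes",
    PerformanceLevel.KNOWLEDGE_BASED: "knowledge-based mistakes",
}


def classify_error(problem_solving: bool, following_known_rule: bool) -> str:
    """Apply Reason's key distinction: was the operator problem solving when
    the error occurred, and if so, was a familiar rule being applied?"""
    if not problem_solving:
        level = PerformanceLevel.SKILL_BASED
    elif following_known_rule:
        level = PerformanceLevel.RULE_BASED
    else:
        level = PerformanceLevel.KNOWLEDGE_BASED
    return ERROR_TYPE_BY_LEVEL[level]


if __name__ == "__main__":
    # Routine action that goes wrong with no problem solving -> slips and lapses
    print(classify_error(problem_solving=False, following_known_rule=False))
    # Problem solving with a familiar procedure misapplied -> rule-based mistakes
    print(classify_error(problem_solving=True, following_known_rule=True))
```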

Jens Rasmussen’s SRK framework, though influential, may not be as canonical as some literature suggests. In particular, Rasmussen’s so-called ‘skilful’ behaviour is often thought to occur across all three levels, if indeed there are different levels: think, for example, of decision-making skills or pattern-recognition skills.
