
Automation and human work

6.2 Humans and automation

It is a sobering thought that the motivation for introducing automation generally is technical rather than social or psychological. That is, automation is introduced to answer the needs of the process, rather than to answer the needs of people working with the process. The process needs can be to reduce the number of disturbances or accidents, or simply to improve efficiency. Automation is in both cases allowed to take over from people because it is assumed to do better in some respects. While this may be correct in a narrow sense, an inevitable consequence of introducing automation is that the working conditions of the human operators are affected. Since this will have effects on the functioning of the overall system both short-term and long-term, it is important to consider the automation strategies explicitly rather than implicitly.

Human control of a process can be described on several levels – tracking, regulating, monitoring, and targeting:

• Tracking is closed-loop control exercised over short periods of time and limited parts of the system, typically as part of short-term control of the process. The purpose of tracking is to maintain performance within narrowly specified limits.

• Regulating extends over longer periods of time and/or larger parts of the system and mixes open- and closed-loop control. It may, for instance, deal with the transition between system states or the carrying out of a specific set of operations, such as pump rotation. Regulating includes the recognition of the process state and the scheduling of appropriate activities. The carrying out of these activities involves tracking, which thus can be seen as a component of regulating.

• The purpose of monitoring is to keep track of how the system behaves, and to generate or select the plans necessary to keep the system within the overall performance envelope. Monitoring, like regulating, combines open- and closed-loop control. The plans may, for instance, be available as procedures of various types. Monitoring comprises interpretation of the state of the system and the selection or specification of proper plans for action. Monitoring thus encompasses both tracking and regulating.

• Monitoring in turn is guided by targeting, which sets the overall goals for system performance – in close accordance with, e.g., management policies and regulations. The goals are normally prescribed for a process in advance, but for specific situations such as disturbances, new goals may be defined to match the current conditions. Targeting is decidedly open-loop control. (A minimal sketch contrasting closed- and open-loop control follows this list.)
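The contrast between closed-loop tracking and open-loop targeting can be made concrete with a small sketch. The process model, controller gain, and goal values below are invented for illustration only and are not taken from the chapter; they merely show that tracking corrects deviations from a goal that targeting supplies without feedback.

```python
# Illustrative sketch only: a toy process in which open-loop "targeting"
# supplies a goal value and closed-loop "tracking" keeps the process
# variable close to it by correcting deviations.

def target_setpoint(phase: str) -> float:
    """Open-loop targeting: goals prescribed in advance per operating phase."""
    goals = {"start-up": 40.0, "normal": 75.0, "shutdown": 10.0}  # hypothetical
    return goals[phase]

def track(value: float, setpoint: float, gain: float = 0.3) -> float:
    """Closed-loop tracking: compare output with the goal and correct the error."""
    error = setpoint - value
    return value + gain * error  # simple proportional correction

value = 20.0
setpoint = target_setpoint("normal")   # targeting sets the goal ...
for _ in range(30):                    # ... tracking keeps performance near it
    value = track(value, setpoint)
print(f"after 30 steps: {value:.1f} (setpoint {setpoint})")
```

Regulating and monitoring would sit above such a loop, deciding when the setpoint should change and which plan or procedure the loop is currently serving.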

As this chapter has shown, automation began by taking over the simpler functions on the level of tracking (think of the flying-ball governor), and has slowly expanded to include regulation and monitoring activities. In some cases automation has even covered the aspects of targeting or decision-making, leaving the operators with very little to do. The consequences of increasing automation can therefore be described as a gradual narrowing or eroding of human involvement with the process, and therefore also a narrowing of human responsibilities. (From the technological point of view, the development is obviously one of increasing involvement and increasing responsibility.) The net effect has been the gradual removal of humans from the process, with the unwanted side effect that they are less well prepared to intervene when the need arises. The progression is, of course, not always smooth and automation may exist in various degrees on different levels (e.g., [11], p. 62). The illustration nevertheless serves to point out that humans today have fewer responsibilities and that the remaining tasks are mainly the higher-order functions of monitoring and targeting, and hence depend on cognitive rather than manual functions.

In a historical perspective, there have been several distinct approaches to automation design, sometimes also called automation philosophies. The three main ones are described in the following sections.

6.2.1 The ‘left-over’ principle

Discussions of automation have been part of human factors engineering ever since the late 1940s, often specialised under topics such as function allocation or automation strategies. In retrospect, the design of automation can be seen as referring to one of several different principles, which have developed over the years. Each principle is associated with a view of the nature of human action, although this may be implicit rather than explicit. They are therefore sometimes referred to as automation philosophies or automation strategies, although they have not all been recognised as such at the time. The simplest automation philosophy is that the technological parts of a system are designed to do as much as feasible (usually from an efficiency point of view) while the rest is left for the operators to do. This approach is often called the ‘left-over’ principle or the residual functions principle. The rationale for it has been expressed as follows:

The nature of systems engineering and the economics of modern life are such that the engineer tries to mechanize or automate every function that can be …. This method of attack, counter to what has been written by many human factors specialists, does have a considerable logic behind it … machines can be made to do a great many things faster, more reliably, and with fewer errors than people can …. These considerations and the high cost of human labor make it reasonable to mechanize everything that can be mechanized.

[12]

Although this line of reasoning initially may seem unpalatable to the human factors community, it does on reflection make good sense. The proviso of the argument is, however, that we should only mechanise or automate functions that can be completely automated, i.e., where it can be guaranteed that automation will always work correctly and not suddenly require the intervention or support of humans. This is a very strong requirement, since such guarantees can only be given for systems where it is possible to anticipate every possible condition and contingency. Such systems are few and far between, and the requirement is therefore often violated [6]. Indeed, if automation were confined to those cases, far fewer problems would be encountered.

Without the proviso, the left-over principle takes a rather cavalier view of humans since it fails to include any explicit assumptions about their capabilities or limitations – other than that the humans in the system hopefully are capable of doing what must be done. Implicitly this means that humans are treated as extremely flexible and powerful machines, which far surpass what technological artefacts can do.

6.2.2 The compensatory principle

A second automation philosophy is the eponymous Fitts’ list (named after Paul Fitts, [13]), sometimes also referred to as the compensatory principle. This principle proposes that the capabilities (and limitations) of people and machines be compared on a number of relevant dimensions, and that function allocation is made so that the respective capabilities are used optimally. (The approach is therefore sometimes referred to as the ‘Men-Are-Better-At, Machines-Are-Better-At’ or ‘MABA-MABA’ strategy.) In order for this to work it must be assumed that the situation characteristics can be described adequately a priori, and that the capabilities of humans (and technology) are more or less constant, so that the variability will be minimal. Humans are furthermore seen as mainly responding to what happens around them, and their actions are the result of processing input information using whatever knowledge they may have – normally described as their mental models.

Table 6.1 The principles of the Fitts list

Attribute | Machine | Operator/human
Speed | Much superior | Comparatively slow, measured in seconds
Power output | Much superior in level and consistency | Comparatively weak, about 1500 W peak, less than 150 W during a working day
Consistency | Ideal for consistent, repetitive actions | Unreliable, subject to learning (habituation) and fatigue
Information capacity | Multi-channel. Information transmission in megabits/sec | Mainly single channel, low rate, <10 bit/sec
Memory | Ideal for literal reproduction, access restricted and formal | Better for principles and strategies, access versatile and innovative
Reasoning, computation | Deductive, tedious to program. Fast, accurate. Poor error correction | Inductive, easy to ‘programme’. Slow, inaccurate. Good error correction
Sensing | Specialised, narrow range. Good at quantitative assessment. Poor at pattern recognition | Wide energy ranges, some multi-function capability
Perceiving | Copes poorly with variations in written/spoken material. Susceptible to noise | Copes well with variation in written/spoken material. Susceptible to noise

Table 6.1 shows the dimensions or attributes proposed by Paul Fitts as a basis for comparing humans and machines. Even though this list reflects the human factors’ perspective of more than 50 years ago, most of the dimensions and characterisations are still highly relevant – the main exception being the attribute of power output. Some of the characterisations perhaps need to be adjusted to represent the capabilities of current technology, but despite five decades of development within artificial intelligence, machines still have a long way to go before they can match humans in terms of perception, reasoning, and memory.
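To make the compensatory principle concrete, the following is a minimal, purely illustrative sketch of attribute-by-attribute allocation. The numeric ratings and the demand profiles are invented for illustration and are not part of the Fitts list itself; they only show the kind of comparison the ‘MABA-MABA’ strategy implies. The discussion that follows explains why real allocation is rarely this simple.

```python
# Illustrative sketch only: a naive 'MABA-MABA' allocator that scores a
# function against hypothetical human/machine attribute ratings (3 = strong,
# 1 = weak) and assigns it to whichever agent scores higher.

HUMAN = {"speed": 1, "consistency": 1, "memory_for_strategies": 3,
         "inductive_reasoning": 3, "pattern_recognition": 3}
MACHINE = {"speed": 3, "consistency": 3, "memory_for_strategies": 1,
           "inductive_reasoning": 1, "pattern_recognition": 1}

def allocate(demands: dict) -> str:
    """Assign a function by comparing weighted attribute scores."""
    human_score = sum(HUMAN[a] * w for a, w in demands.items())
    machine_score = sum(MACHINE[a] * w for a, w in demands.items())
    return "human" if human_score >= machine_score else "machine"

# Hypothetical demand profiles (weights sum to 1):
print(allocate({"speed": 0.6, "consistency": 0.4}))                          # -> machine
print(allocate({"pattern_recognition": 0.7, "memory_for_strategies": 0.3}))  # -> human
```

The sketch treats each function in isolation, which is exactly the simplification criticised below: it ignores the situation, the co-ordination between functions, and the way capabilities vary over time.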

The determination of which functions should be assigned to humans and which to machines is, however, not as simple as implied by the categories of the Fitts list. Experience has taught us that it is necessary to consider the nature of the situation, the complexity, and the demands and not just compare the attributes one by one:

It is commonplace to note the proper apportionment of capabilities in designing the human/computer interface. What humans do best is to integrate disparate data and construct and recognize general patterns. What they do least well is to keep track of large amounts of individual data, particularly when the data are similar in nature and difficult to differentiate. Computers are unsurpassed at responding quickly and predictably under highly structured and constrained rules. But because humans can draw on a vast body of other experience and seemingly unrelated (analogous or ‘common-sense’) reasoning, they are irreplaceable for making decisions in unstructured and uncertain environments.

([14], p. 4.)

Furthermore, the function substitution that is part of the compensatory principle disregards the fact that to achieve their goals most systems have a higher-order need for co-ordination of functions. Function allocation by substitution is based on a very narrow understanding of the nature of human work and capabilities, and effectively forces a machine-like description to be applied to humans. A substitution implies a rather minimal view of the function in question and blatantly disregards the context and other facets of the situation that are known to be important. Since functions usually depend on each other in ways that are more complex than a mechanical decomposition can account for, a specific assignment of functions will invariably have consequences for the whole system. In this way even apparently small changes will affect the whole. The issue is thus one of overall system design and co-operation between humans and machines, rather than the allocation of functions as if they were independent entities.

6.2.3 The complementarity principle

A third automation philosophy, which was developed during the 1990s, is called the complementarity principle (e.g., [15, 16]). This approach aims to sustain and strengthen human ability to perform efficiently by focusing on the work system in the long term, including how routines and practices may change as a consequence of learning and familiarisation. The main concern is the ability of the overall system to sustain acceptable performance under a variety of conditions rather than a transitory peak in efficiency (or safety). The complementarity principle is concerned with human–machine co-operation rather than human–machine interaction, and acknowledges the significance of the conditions provided by the overall socio-technical system. It is consistent with the view of Cognitive Systems Engineering (CSE), which emphasises the functioning of the joint cognitive system [17]. CSE describes human actions as proactive as well as reactive, driven as much by goals and intentions as by ‘input’ events. Information is furthermore not only passively received but actively sought, hence significantly influenced by what people assume and expect to happen. A joint cognitive system is characterised by its ability to maintain control of a situation, in spite of disrupting influences from the process itself or from the environment.

In relation to automation, CSE has developed an approach called function congruence or function matching, which takes into account the dynamics of the situation, specifically the fact that capabilities and needs may vary over time and depend on the situation [18]. One way of making up for this variability is to ensure an overlap between functions assigned to the various parts of the system, corresponding to having a redundancy in the system. This provides the ability to redistribute functions according to needs, hence in a sense dynamically to choose from a set of possible function allocations.

The main features of the three different automation philosophies are summarised in Table 6.2. This describes the function allocation principle, the purpose of function allocation, i.e., the criteria for successful automation, and the school of thinking (in behavioural science terms) that the philosophy represents. Table 6.2 also indicates the view of humans that is implied by each automation philosophy and thereby also their view of what work is. In the ‘left-over’ principle few, if any, assumptions are made about humans, except that they are able to handle whatever is left to them.

Table 6.2 Comparison of the three automation philosophies

Automation principle | Residual function (‘left-over’) | Fitts list | Complementarity/congruence
Function allocation principle | Leave to humans what cannot be achieved by technology | Avoid excessive demands to humans (juxtaposition) | Sustain and strengthen human ability to perform efficiently
Purpose of function allocation | Ensure efficiency of process by automating whatever is feasible | Ensure efficiency of process by ensuring efficiency of human–machine interaction | Enable the joint system to remain in control of process across a variety of conditions
School of thought | Classical human factors | Human–machine interaction, human information processing | Cognitive systems engineering
View of humans (operators) | None (human as a machine) | Limited capacity information processing system (stable capabilities) | Dynamic, adaptive (cognitive) system, able to learn and reflect

Human work is accordingly not an activity that is considered by itself, but simply a way of accomplishing a function by biological rather than technological means. The strongest expression of this view is found in the tenets of Scientific Management [19]. The compensatory principle does consider the human and makes an attempt to describe the relevant attributes. Although the Fitts list was developed before the onslaught of information processing in psychological theory, the tenor of it corresponds closely to the thinking of humans as information processing systems that has dominated human factors from the 1970s onwards. This also means that work is described in terms of the interaction between humans and machines or, more commonly, between humans and computers, hence as composed of identifiable segments or tasks. Finally, in the complementarity/congruence principle humans are viewed as actively taking part in the system, and as adaptive, resourceful and learning partners without whom the system cannot function. The view of work accordingly changes from one of interaction to one of co-operation, and the analysis starts from the level of the combined or joint system, rather than from the level of human (or machine) functions per se.

6.2.4 From function allocation to human–machine co-operation

A common way to introduce automation is to consider the operator’s activities in detail and evaluate them individually with regard to whether they can be performed better (which usually means faster and/or more cheaply) by a machine. Although some kind of decomposition is inevitable, the disadvantage of only considering functions one by one – in terms of how they are accomplished rather than in terms of what they achieve – is that it invariably loses the view of the human–machine system as a whole. The level of decomposition is furthermore arbitrary, since it is defined either by the granularity of the chosen model of human information processing, or by a technology-based classification, as epitomised by the Fitts list. Such a decomposition fails to recognise important human capabilities, such as the ability to filter irrelevant information, schedule and reallocate activities to meet current constraints, anticipate events, make generalisations and inferences, learn from past experience, and establish and use collaboration. Many of these human qualities are of a heuristic rather than an algorithmic nature, which means that they are difficult to formalise and implement in a machine. Admittedly, the use of heuristics may also sometimes lead to unwanted and unexpected results, but probably no more often than algorithmic procedures do. The ability to develop and use heuristics is nevertheless the reason why humans are so efficient and why most systems work.

In a human-centred automation strategy, the analysis and description of system performance should refer to the goals or functional objectives of the system as a whole. One essential characteristic of a well-functioning human–machine system is that it can maintain, or re-establish, an equilibrium despite disturbances from the environment. In the context of process control the human–machine system is seen as a joint cognitive system, which is able to maintain control of the process under a variety of conditions. Since this involves a delicate balance between feedback and feedforward, it is essential that the system be represented in a way that supports this. Although a joint cognitive system obviously can be described as being composed of several subsystems, it is more important to consider how the various parts of the joint system must correspond and co-operate in order for overall control to be maintained. The automation strategy should therefore be one of function congruence or function matching, which takes into account the dynamics of the situation, specifically the fact that capabilities and needs may vary over time and depend on the situation. Function congruence must include a set of rules that can achieve the needed re-distribution, keeping in mind the constraints stemming from limited resources and inter-functional dependencies.
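As a rough illustration of what such re-distribution rules might look like, the sketch below shifts functions that both agents are able to perform (the overlap that provides redundancy) between human and automation as estimated operator workload changes. The function names, workload thresholds, and overlap are invented for illustration and are not taken from the cited sources.

```python
# Illustrative sketch only: a toy re-distribution rule in the spirit of
# function congruence. Functions in the human/automation overlap are moved
# to the automation when operator workload is high and handed back when it
# drops, so the joint system can keep control across varying conditions.

OVERLAP = {"alarm_filtering", "trend_monitoring"}    # hypothetical shared functions
HUMAN_ONLY = {"goal_setting", "exception_handling"}  # hypothetical

def redistribute(allocation: dict, workload: float,
                 high: float = 0.8, low: float = 0.4) -> dict:
    """Return an updated allocation ('human' or 'automation') per function."""
    updated = dict(allocation)
    for function in OVERLAP:
        if workload > high:
            updated[function] = "automation"   # off-load the operator
        elif workload < low:
            updated[function] = "human"        # keep the operator involved
    return updated

allocation = {f: "human" for f in OVERLAP | HUMAN_ONLY}
print(redistribute(allocation, workload=0.9))  # overlap moves to the automation
print(redistribute(allocation, workload=0.2))  # overlap stays with the human
```

Real rules would of course also have to respect inter-functional dependencies and limited resources, which is why the chapter treats this as a design problem for the joint system rather than a simple switching scheme.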

This type of automation strategy reflects some of the more advanced ways of delegating functions between humans and machines. Both Billings [2] and Sheridan [20] have proposed a classification, of which the following categories are relevant for the current discussion:

• In management by delegation, specific tasks or actions may be performed automatically when so ordered by the operators.

• In management by consent, the machine, rather than the operator, monitors the process, identifies an appropriate action, and executes it. The execution must, however, have the prior consent of the operators.

• In management by exception, the machine identifies problems in the process and follows this with automatic execution, without the need for action or approval by the operators.

• In autonomous operation, the machine carries out the necessary tasks without informing the operator. (A sketch contrasting the four modes follows this list.)
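A compact way to see how the four categories differ is in terms of who initiates an action and whose approval, if any, is needed before it is executed. The sketch below is a purely illustrative rendering of that distinction; the enum and function names are invented and are not taken from Billings or Sheridan.

```python
# Illustrative sketch only: the four modes differ in who initiates an action
# and whether the operator must order, approve, or even be told about it.
from enum import Enum

class Mode(Enum):
    MANAGEMENT_BY_DELEGATION = "delegation"  # operator orders, machine executes
    MANAGEMENT_BY_CONSENT = "consent"        # machine proposes, operator approves
    MANAGEMENT_BY_EXCEPTION = "exception"    # machine acts, operator not asked
    AUTONOMOUS_OPERATION = "autonomous"      # machine acts, operator not informed

def may_execute(mode: Mode, operator_ordered: bool, operator_consented: bool) -> bool:
    """May the automation execute an action under the given mode?"""
    if mode is Mode.MANAGEMENT_BY_DELEGATION:
        return operator_ordered
    if mode is Mode.MANAGEMENT_BY_CONSENT:
        return operator_consented
    # Management by exception and autonomous operation need no operator input,
    # which is why the text argues they leave operators out of the loop.
    return True

print(may_execute(Mode.MANAGEMENT_BY_CONSENT,
                  operator_ordered=False, operator_consented=True))    # True
print(may_execute(Mode.AUTONOMOUS_OPERATION,
                  operator_ordered=False, operator_consented=False))   # True
```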

Neither management by exception nor autonomous operation is a desirable strategy from the systemic point of view. In both cases operators are left in the dark, in the sense that they do not know or do not understand what is going on in the system. This means that they are ill-prepared either to work together with the automation or to take over from it if and when it fails. Management by delegation and management by consent are better choices, because they keep the operator in the loop. Since human action is proactive as well as reactive, it is essential to maintain the ability to anticipate. This requires knowledge about what is going on. If the automation takes that away, operators will be forced into a reactive mode, which will be all the more demanding because they have no basis on which to evaluate ongoing events.

6.2.5 Unwanted consequences of automation

The reasons for introducing more technology and/or improving the already existing technology are always noble. A host of empirical studies nevertheless reveal that new systems often have surprising unwanted consequences or that they even fail outright (e.g., [21]). The reason is usually that technological possibilities are used clumsily [10], so that well-meant developments, intended to serve the users, instead lead to increasing complexity and declining working conditions. For example, if users do not understand how new autonomous technologies function, they are bound to wonder ‘what is it doing now?’, ‘what will happen next?’, and ‘why did it do this?’ [22].

Other unwanted, but common, consequences of automation are:

• Workload is not reduced by automation, but only changed or shifted.

• Erroneous human actions are not eliminated, but their nature changes. Furthermore, the elimination of small erroneous actions usually creates opportunities for larger and more critical ones.

• There are wide differences of opinion about the usefulness of automation (e.g., in terms of benefit versus risk). This leads to wide differences in the patterns of utilisation, which in turn affect the actual outcomes of automation.

System changes, such as automation, are never only simple improvements, but invariably have consequences for what knowledge is required, how it should be brought to bear on different situations, what the roles of people are within the overall system, which strategies they employ, and how people collaborate to accomplish goals. The experience since the beginning of the 1970s has identified some problems that often follow inappropriate employment of new technology in human–machine systems [23], such as:

• Bewilderment. Even well-intended system changes may be hard to learn and use. Users may find it difficult to remember how to do infrequent tasks or may only be partially aware of the system’s capabilities. This may lead to frustrating breakdowns where it is unclear how to proceed.
