
Automation and human work

6.1 Introduction

The word automation, which comes from Greek, is a combination of auto, 'self', and matos, 'willing', and means something that acts by itself or of itself. This is almost identical to the modern usage, where automatic means something that is self-regulating, or something that acts or operates in a manner determined by external influences or conditions but which is essentially independent of external control, such as an automatic light switch. The following quotes illustrate the various modern meanings of the term:

Automation is defined as the technology concerned with the application of complex mechanical, electronic and computer based systems in the operation and control of production.

([1], p. 631)

‘Automation’ as used in the ATA Human Factors Task Force report in 1989 refers to

‘a system or method in which many of the processes of production are automatically performed or controlled by self-operating machines, electronic devices, etc.’

([2], p. 7)

We define automation as the execution by a machine agent (usually a computer) of a function that was previously carried out by a human.

([3], p. 231)

Automation is often spoken of as if it were just a technology, and therefore mainly an engineering concern. Automation is, however, also an approach to work, and therefore represents a socio-technical intervention that needs careful consideration. In order to understand how automation relates to human work, it is useful to consider two simple examples, both taken from the history of technology rather than the present day.

6.1.1 Precision and stability

The first example is a self-regulating flow valve for a water clock (clepsydra), which was invented about 270 B.C. by Ktesibios (or Ctesibius) of Alexandria. The water clock itself was invented about 1500 B.C. by the Egyptian court official Amenemhet ([4], p. 124). The principle of a water clock is that water flows into (or out of) a vessel at a constant rate, whereby the level of water can be used as a measure of the passage of time. In its simplest form, a water clock is a container in which water runs out through a small hole in the bottom. The water clock is filled at the beginning of a time period and the level of water indicates how much time has passed. In more sophisticated versions the water runs into a container instead of out of it. The water clock that Ktesibios improved was even more advanced: in this 'machine' water fell from a height and hit a small scoop-wheel, thereby providing the force to drive a shaft.

Here the precision of the clock depended critically on having a constant trickle or flow of water, which in turn depended on the pressure or the level of water in the reservoir. Ktesibios solved this problem by designing a self-regulating flow valve (Figure 6.1), which produced the desired flow of water.
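
In control terms, the float valve is a negative feedback loop: the float senses the level, and the inlet opening is adjusted in the direction opposite to any deviation. The following is a minimal discrete-time sketch of that behaviour in Python; the set point, gain and flow rates are illustrative assumptions, not a reconstruction of Ktesibios' actual device:

```python
# Minimal sketch of a float-regulated reservoir (illustrative constants).
# The float throttles the inlet in proportion to how far the level sits
# below a set point, so the head of water (and hence the outflow that
# drives the clock) stays nearly constant.

SET_POINT = 10.0   # desired water level in the reservoir (arbitrary units)
OUTFLOW = 0.5      # constant draw-off to the clock mechanism per time step
MAX_INFLOW = 1.0   # inlet rate when the valve is fully open
GAIN = 0.8         # how strongly the float throttles the inlet

level = 5.0        # the reservoir starts half full
for t in range(50):
    # Float position sets the inlet opening: fully open when the level is
    # far below the set point, fully closed at or above it.
    opening = max(0.0, min(1.0, GAIN * (SET_POINT - level)))
    level += MAX_INFLOW * opening - OUTFLOW
    if t % 10 == 0:
        print(f"t={t:2d}  level={level:5.2f}  inlet opening={opening:4.2f}")
```

Starting from any initial level, the loop settles where inflow equals outflow, so the head of water, and with it the trickle that measures time, remains nearly constant.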

The second example is almost 2000 years younger and far better known, even though a version of the self-regulating valve is used by billions of people every day. The example is the flying-ball governor that we know from James Watt's 'classical' steam engine. Watt did not invent the steam engine as such, Thomas Savery had done that already in 1698, but he significantly improved the existing models in several ways, one by adding a separate condenser (1765) and the other by introducing the conical pendulum governor (1788) (see Figure 6.2).

The purpose of the governor was to maintain the constant speed of the engine by controlling the inflow of steam. This was done by having a pair of masses (balls) rotating about a spindle driven by the steam engine. With an increase in speed, the masses moved outward and their movement was transmitted to the steam valve via a member that glided up the spindle. This reduced the steam admitted to the engine, thus reducing the speed. Conversely, if the speed was reduced, the masses moved inward and the member would glide down the spindle, thereby increasing steam flow to the engine. In this way the flying-ball governor automated the task of regulating the steam valve, which otherwise had to be carried out by a person.

Figure 6.1 The self-regulating valve of Ktesibios (a float in the water reservoir throttles the inlet so that the outlet delivers a constant flow of water)

Figure 6.2 Watt's pendulum governor (a spindle driven by the engine carries the rotating balls; a sliding member and arm connect them to the steam valve)
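
Seen abstractly, the governor is a proportional controller: the valve opening varies in proportion to the speed error. A minimal sketch of that behaviour follows; the target speed, gain and torque constants are illustrative assumptions, not a physical model of Watt's engine:

```python
# Minimal sketch of the flying-ball governor as proportional feedback
# (illustrative constants, not a physical model of Watt's engine).

TARGET_SPEED = 100.0   # speed at which the balls sit at their neutral radius
GAIN = 0.05            # valve opening change per unit of speed error
LOAD = 2.0             # speed lost to the driven load each time step

speed = 60.0           # the engine starts well below the target speed
for t in range(30):
    # Higher speed flings the balls outward, sliding the member up the
    # spindle and closing the valve; lower speed does the opposite.
    valve = max(0.0, min(1.0, 0.5 + GAIN * (TARGET_SPEED - speed)))
    speed += 10.0 * valve - LOAD    # steam torque accelerates, load brakes
    if t % 5 == 0:
        print(f"t={t:2d}  speed={speed:6.1f}  valve opening={valve:4.2f}")
```

Note the small steady-state offset in the output: a purely proportional governor settles slightly away from its nominal speed ('droop'), which is also how real governors of this kind behaved.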

Both examples illustrate two fundamental relations between automation and human work. One is that automation ensures a more precise performance of a given function, whether by the self-regulating flow valve in the water clock or by the flying-ball governor in the steam engine. The performance, or output, is more precise because a technological artefact can on the whole respond to smaller variations of the input than a human can. A second, and somewhat related, feature is that automation improves the stability of performance, because mechanical devices function at a stable level of effort without any of the short-term fluctuations that are seen in human work. People, on the other hand, are ill-suited to repetitive and monotonous tasks, and notably perform them badly. In both examples a person could in principle carry out the same function – and indeed did so initially – but the performance would not be as accurate and smooth as that of the automation, even if full and undivided attention were paid to the task.
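
The precision and stability claims can be made concrete by perturbing the same feedback loop as above. The sketch below compares an 'automatic' controller with a 'human-like' one that has a response deadband and moment-to-moment fluctuation; the deadband and noise levels are illustrative assumptions, not calibrated human performance data:

```python
import random

# Compare the spread of a controlled level for an automatic controller
# versus a human-like one with a response deadband and random jitter.

def run(deadband: float, jitter: float, steps: int = 2000) -> float:
    random.seed(42)
    level, errors = 5.0, []
    for _ in range(steps):
        error = 10.0 - level
        # Small errors go unnoticed inside the deadband, and the response
        # itself fluctuates from moment to moment.
        response = 0.0 if abs(error) < deadband else 0.8 * error
        response += random.gauss(0.0, jitter)
        opening = max(0.0, min(1.0, response))
        level += opening - 0.5
        errors.append(abs(error))
    return sum(errors) / len(errors)

print(f"automatic : mean |error| = {run(deadband=0.0, jitter=0.0):.3f}")
print(f"human-like: mean |error| = {run(deadband=0.5, jitter=0.2):.3f}")
```

The noisy, coarse-grained controller lets the level wander over a visibly wider band, which is the claim about precision and stability in miniature.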

6.1.2 Automation as compensation

As these examples show, automation was from the very beginning used to overcome certain shortcomings of human work, notably that people tended to get distracted or tired, and hence were unable to maintain the required level of precision and stability. The need for automation has thus clearly existed and been recognised for a very long time, even though the practical demands were at first limited.

In the 18th century the industrial revolution provided the possibility to amplify many aspects of human work, first and foremost force and speed, and also helped overcome problems of constancy and endurance. Yet it also created a need for automation that dramatically changed the nature of human work. In addition to the demands for precision and stability, the industrial revolution and the acceleration of technology quickly added a demand for speed. As long as people mainly had to interact with other people, work settled at a natural pace. But when people had to interact with machines, there was no natural upper limit. Indeed, a major advantage of machines was that they could do things faster, and this advantage was not to be sacrificed to the human inability to keep pace.

As new technology quickly started to set the pace of work, humans were left struggling to respond. In order for machines to work efficiently, a high degree of regularity of input was required – both in terms of energy/material and in terms of control. Since humans were unable to provide that, artefacts were soon invented to take over. That in turn increased the pace of work, which led to new demands, hence new automation (see the discussion of the self-reinforcing loop in Section 6.1.3).

A significant side effect was that work also became more monotonous. That is one of the unfortunate consequences of automation, and one of the reasons why it is a socio-technical rather than an engineering problem.

Taken together, the three features of precision, stability and speed meant that humans soon became a bottleneck for system performance. This was actually one of the reasons for the emergence of human factors engineering as a scientific discipline in the late 1940s. In modern terms this is expressed by saying that humans have a capacity limitation, or that their capacity (in certain respects) is clearly less than that of machines. One purpose of automation is therefore to overcome the capacity limitations that humans have when they act as control systems, thereby enabling processes to be carried out faster, more efficiently – and hopefully also more safely.

In addition to being seen as a bottleneck, humans were also frequently seen as a source of unwanted variability in the system. This variability might lead not only to production losses but, more seriously, to incidents and accidents that could jeopardise the system itself. Consistent with this line of reasoning, the solution was to eliminate humans as a source of variability – as far as possible – by replacing them with automation. While this view is no longer regarded as valid (cf. the discussion in Section 6.1.4), it did have a significant influence on the development and use of automation in the last quarter of the 20th century, and was, paradoxically, one of the reasons for the many problems that this field experienced.

6.1.3 The self-reinforcing loop

While the history of automation is as long as the history of technology and human invention itself, the issue became much more important in the 20th century, especially with the proliferation of digital information technology. One way of understanding that is to see the increasing complexity of technological systems as a self-reinforcing or positive feedback loop [5], cf. Figure 6.3.

Figure 6.3 The self-reinforcing complexity cycle (growing technological potential, system functionality, increasing performance demands, task complexity and compensating automation form a loop, while human capabilities remain stable (constant))

The growing potential of technology is an irresistible force which invariably is used to increase system functionality – for instance by reducing production costs, improving quality of products, enlarging capacity, and shortening cycle times for production and maintenance. To this are added the effects of a second driving force, namely the growing demands for performance, particularly in terms of speed and reliability. These demands are themselves due to an increasing dependency on technology in all fields of human endeavour. The combination of increased system functionality and growing performance demands leads to more complex tasks and therefore to increased system complexity in general. This may seem to be something of a paradox, especially because technological innovations often purportedly are introduced to make life easier for the users. The paradox is, however, not very deep, and is related to the well-known 'ironies of automation' [6], cf. Section 6.3.2.

The growing complexity of systems and tasks increases the demands for control, in accordance with the Law of Requisite Variety [7]. This leads to a reduction in the tolerance for variability of input, hence to an increase in the demands on the humans who operate or use the systems. Unfortunately, the increased demands on controller performance create a problem because such demands are out of proportion to human capabilities, which in many ways have remained constant since Neolithic times.
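
The Law of Requisite Variety can be stated compactly. In Ashby's information-theoretic formulation (a standard rendering of the law, with symbols as conventionally defined rather than taken from this chapter):

$$
H(O) \;\geq\; H(D) - H(R)
$$

where H denotes variety measured as entropy, D the disturbances acting on the system, R the repertoire of responses available to the controller, and O the resulting outcomes. Only by increasing H(R), the controller's own variety, can the outcome variety be driven down, which is why growing task complexity translates directly into growing demands on whoever, or whatever, does the controlling.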

From the technological point of view humans have therefore become a bottleneck for overall system throughput. The way this problem is overcome, as we have already seen, is to introduce automation either to amplify human capabilities or to replace them – known as the tool and prosthesis functions, respectively [8]. The ability to do so in turn depends on the availability of more sophisticated technology, leading to increased system functionality, thereby closing the loop.

When new technology is used to increase the capacity of a system, it is often optimistically done in the hope of creating some slack, or a capacity buffer, in the process, thereby making it less vulnerable to internal or external disturbances. Unfortunately, the result usually is that system performance increases to take up the new capacity, bringing the system to the limits of its capacity once more. Examples are easy to find, and one need go no further than the technological developments in cars and in traffic systems. If highway driving today took place at the speed of the 1940s, it would be both very safe and very comfortable. Unfortunately, the highways are filled with ever more drivers who want to go faster, which means that driving has become more complex and more risky for the individual. (More dire examples can be found in the power production industries, aviation, surgery, etc.)

6.1.4 Humans as a liability or as a resource

As mentioned above, one consequence of the inability of humans to meet technologically defined performance norms is that the resulting variability may lead to loss of performance (and efficiency) or even to failures – either as incidents or accidents.

Indeed, as the complexity of technological systems has grown, so has the number of failures [9]. For the last 50 years or so, one of the main motivations for automation has therefore been to reduce or eliminate the human failures that may cause production losses and system accidents. Automation has been used to constrain the role of the operator, or preferably to eliminate the need for human operators altogether. Although, from a purely engineering perspective, this may seem an appropriate arrangement, there are some sobering lessons to be learned from the study of the effects of automation (e.g. [10]).

An alternative to the view of humans mainly as a source of variability and failures is to view them as a resource that enables the system to achieve its objectives. This view acknowledges that it is impossible to consider every possible contingency during design, and therefore impossible for technological artefacts to take over every aspect of human functioning. Humans are in this way seen as a source of knowledge, innovation and adaptation, rather than as just a limiting factor. This leads to the conclusion that automation should be made more effective by improving the coupling, or co-operation, between humans and technology, i.e. a decidedly human-oriented view. When this conclusion is put into practice, two thorny questions arise: (1) what should humans do relative to what machines should do, and (2) are humans or machines in charge of the situation? The first question is identical to the issue of function allocation proper, and the second to the issue of responsibility. The two issues are obviously linked, since any allocation of functions implies a distribution of responsibility as well. The problem is least complicated when function allocation is fixed, independently of variations in the system state and working conditions, although it is not easy to solve even then. The responsibility issue quickly becomes complicated when function allocation changes over time, either because adaptation has been used as a deliberate feature of the system or – more often – because the system design is incomplete or ambiguous.

It is a sobering thought that the motivation for introducing automation is generally technical rather than social or psychological. That is, automation is introduced to answer the needs of the process, rather than the needs of the people working with the process. The process needs may be to reduce the number of disturbances or accidents, or simply to improve efficiency. In both cases automation is allowed to take over from people because it is assumed to do better in some respects. While this may be correct in a narrow sense, an inevitable consequence of introducing automation is that the working conditions of the human operators are affected. Since this has effects on the functioning of the overall system, both short-term and long-term, it is important to consider automation strategies explicitly rather than implicitly.
