6.3 Cognitive systems engineering
Cognitive systems engineering emphasises that humans and machines should not be treated as incompatible components that must somehow be reconciled, but rather as a whole – as a joint cognitive system. The functions and capabilities of the whole are therefore more important than the functions and capabilities of the parts. A cognitive system is formally defined as one that is able to modify its pattern of behaviour on the basis of past experience in order to achieve specific anti-entropic ends. Less formally, a joint cognitive system is characterised by its ability to maintain control of a situation. One way of representing the dynamics of system performance is by means of the basic cyclical model shown in Figure 6.5.
According to this model, control actions or system interventions are selected on the basis of the construct (or understanding) of the situation in order to produce a specific effect. If that effect is achieved, the correctness of the construct is confirmed and it becomes correspondingly easier to select the next action. If the expected effect does not occur, the controlling system must somehow reconcile the difference between the expected and actual outcomes.
[Figure: the construct determines the action, the action produces events/feedback (subject to disturbances), and the feedback modifies the construct; the controller/controlling system and the process/application/controlled system are connected through monitoring and detection on one side and control and mitigation on the other.]
Figure 6.5 The basic cyclical model of cognitive systems engineering
In the case of humans this means they must revise their understanding of the situation. This is also necessary if an unrelated disturbance interferes, since that will produce effects that were not expected. In both cases more time and effort are required to evaluate the situation and to select the next action. (For a more detailed description of this model, see [26].)
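To make the cycle concrete, the following minimal sketch shows a controller whose construct determines its actions, and which revises that construct whenever the expected effect fails to occur. The toy process, the numbers and all names are illustrative assumptions, not part of the model in [26]:

import random

def run_cycle(steps=10, disturbance=0.5):
    construct = 1.0    # the controller's believed process gain (its 'understanding')
    state, target = 0.0, 10.0
    true_gain = 0.8    # the actual process gain, unknown to the controller
    for _ in range(steps):
        action = (target - state) / construct     # the construct determines the action
        expected = state + construct * action     # the effect the construct predicts
        new_state = state + true_gain * action + random.uniform(-disturbance, disturbance)
        if abs(new_state - expected) > 1.0:       # the expected effect did not occur:
            construct = (new_state - state) / action   # revise the construct from the feedback
        state = new_state
    return state, construct

As long as the expected effect occurs, the construct is merely confirmed; a disturbance or a mistaken construct forces additional work before the next action can be selected.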
This model can easily be applied to the description of automation by making a distinction between monitoring and detection functions on the one hand, and control and mitigation functions on the other. Monitoring and detection denote the steps necessary to keep track of how an event develops (parameter values and state transitions), to establish a reference description (construct), and to detect possible significant changes.
Control and mitigation denote the actions taken either to regulate the development of the event, for instance to maintain a process within given performance limits, or to mitigate unwanted effects and recover from undesirable conditions. The focus on functions rather than structures or components is intentionally consistent with the function congruence principle described above, but differs from the earlier automation philosophies.
6.3.1 Balancing functions and responsibilities
For any joint system the two sets of functions can be carried out either by human users or by machines (technology). This provides a convenient way to consider the role of automation in relation to the overall functioning of the joint system (Figure 6.6). For monitoring and detection, some functions can easily be carried out by technology, and hence automated. A machine can readily check whether a specific parameter reaches a threshold (e.g., a level or pressure alarm), and it can do so for many parameters and for any length of time.
[Figure: a two-by-two layout combining monitoring and detection by human or by machine with control and mitigation by human or by machine.]
Figure 6.6 Balancing the use of automation
Other functions may better be left to humans, such as recognising patterns of occurrence or detecting dynamic trends and tendencies. Similarly, some functions of control and mitigation may be taken over by machines. For simple and well-defined conditions, mitigation can easily be completely automated, e.g., an automatic fire extinguisher or cruise control in a car. For more complex conditions, the contribution of humans is needed, for instance to find ways of recovering from a combination of failures. In general, if automation takes over the detection-correction functions, people lose information about what the system does, and hence lose control.
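As an illustration of the kind of monitoring and detection that is readily automated, consider the following sketch; the parameter names, limits and data are invented for the example:

THRESHOLDS = {'pressure': 5.0, 'level': 0.9}   # alarm limits; names and values are illustrative

def detect_alarms(samples):
    # Yield an alarm whenever a monitored parameter reaches its threshold.
    # A machine can perform this check tirelessly for any number of parameters;
    # recognising patterns across such alarms is better left to humans.
    for name, value in samples:
        limit = THRESHOLDS.get(name)
        if limit is not None and value >= limit:
            yield (name, value, limit)

alarms = list(detect_alarms([('pressure', 5.2), ('level', 0.4)]))   # trips only the pressure alarm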
If, for the sake of simplicity, we consider the extreme situations where the two sets of functions are assigned entirely either to humans or to automation, the result is a definition of four distinct conditions (Table 6.3). Each of these conditions corresponds to a well-known automation scenario, which can be described in terms of the main advantages and disadvantages. In practice it is, however, not reasonable to maintain such a simple distinction. As the small examples above have shown, humans and automation have quite different capabilities with regard to monitoring/detection and control/mitigation. Since these capabilities are dependent on situation characteristics, for instance time demands and work environment, it is ineffectual to allocate functions based on a simple comparison. It would be better to consider a set of distinct and representative scenarios for the joint system, and then for each of these consider the advantages and problems of a specific automation design in accordance with the principles of balancing or function congruence.
As Table 6.3 clearly shows, there is no single or optimal situation that represents human-oriented automation, and the choice of an automation strategy must always be a compromise between efficiency and flexibility. If human functions are replaced by technology and automation, efficiency will (hopefully) increase, but there is a cost in terms of loss of flexibility. Conversely, if flexibility is the more important concern, part of the efficiency may have to be sacrificed. There is no uncomplicated way of designing automation that is applicable across all domains and types of work.
Table 6.3 Balancing monitoring-control matrix

Human monitoring and detection + human control and mitigation
Characterisation: Conventional manual control.
Advantages: Operators are fully responsible, and in-the-loop.
Problems: Operators may become overloaded with work, and cause a slow-down of the system.

Human monitoring and detection + automated control and mitigation
Characterisation: Operation by delegation.
Advantages: Improved compensatory control. Operators are relieved of part of their work, but are still in-the-loop.
Problems: Operators may come to rely on the automation, hence be unable to handle unusual situations.

Automated monitoring and detection + human control and mitigation
Characterisation: Automation amplifies attention/recognition.
Advantages: Reduced monotony.
Problems: Operators become passive and have to rely on/trust automation.

Automated monitoring and detection + automated control and mitigation
Characterisation: Automation take-over.
Advantages: Effective for design-based conditions.
Problems: Loss of understanding and de-skilling. System becomes effective but brittle.
As noted in the beginning, automation is introduced to improve precision, improve stability and/or improve speed of production. For some processes, one or more of these criteria are of overriding concern. For other processes different criteria, such as flexibility or adaptability, may be more important. Each case therefore requires careful consideration of which type of automation is the most appropriate. Human-oriented automation should not in itself be a primary goal, since the joint system always exists for some other purpose. Human-oriented automation should rather be interpreted as the need to consider how the joint system can remain in control, and how this can be enhanced and facilitated via a proper design of the automation involved.
6.3.2 The ironies of automation
One of the problems with automation is that it is often difficult to understand how it works, as noted by Billings [22], Woods and Sarter [27], and many others. Before the advent of mechanised logic, automation was analogue and therefore relatively straightforward to follow, as illustrated by the self-regulating valve and the flying-ball governor. Today, automation is embedded in logic and is far more complex, hence difficult to understand. This is most obviously the case for the end user, as demonstrated by the phenomenon called 'automation surprises' [28]. But it is increasingly also a problem for automation designers, who may be unable to comprehend precisely how functions may be coupled, and hence how they depend on each other [9].
As a simple example, consider attempts to automate the use of room lighting in order to conserve energy. A simple set of rules could be:
IF <lights in room X are off> AND <movement in room X is detected>
THEN <turn lights in room X on>

IF <lights in room X are on> AND <no movement is detected in room X for n minutes>
THEN <turn lights in room X off>
(Even for this simple type of automation, there are problems relating to the sensitivity of detection, i.e., the criterion for whether a movement has taken place. It is not an uncommon experience that a group involved in intense discussions, but with little use of gestures, suddenly find themselves in darkness.)
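Expressed as executable code, the two rules might look as follows. This is a sketch only: the timeout, the sensitivity criterion and the state representation are assumptions rather than part of any particular product:

import time

N_MINUTES = 5        # the inactivity timeout 'n' from the second rule
SENSITIVITY = 0.2    # minimum sensor reading that counts as 'movement detected'

rooms = {}           # room -> {'lights_on': bool, 'last_movement': float}

def on_sensor_reading(room, motion_level, now=None):
    now = time.time() if now is None else now
    state = rooms.setdefault(room, {'lights_on': False, 'last_movement': 0.0})
    moved = motion_level >= SENSITIVITY              # the sensitivity criterion discussed above
    if moved:
        state['last_movement'] = now
    if not state['lights_on'] and moved:             # rule 1: off AND movement -> on
        state['lights_on'] = True
    elif state['lights_on'] and now - state['last_movement'] >= N_MINUTES * 60:
        state['lights_on'] = False                   # rule 2: on AND no movement for n minutes -> off

A group that talks intensely but gestures little never drives motion_level above SENSITIVITY, so the second rule eventually fires – exactly the experience described above.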
Not satisfied with this, designers (or the marketing department?) may propose to automate room lighting as a person walks through a house or building. In this case the light should be turned on just before a person enters a room, and be turned off soon after the person has left it. In order for this to work, the automation must be able to determine not just that a person is moving, but also in which direction, at least for rooms that have more than two points of access. This can become very complex, even for a small house and even if only one person is moving at a time. What happens, for instance, if in the middle of going from room B to a non-adjacent room E, the person realises that s/he has forgotten something in room B and therefore starts to walk back again? It is easy to see that the automation of such a relatively simple function may become very complex, and that it may be difficult for both designers and users to have a clear understanding of how it works. If this example seems too trivial, consider the automation required for an adaptive gearbox or intelligent cruise control, or for landing an airplane. Or what about the attempts to design automatic filtering of spam email or undesirable web contents?
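The source of the complexity can be made explicit with a small sketch; the room layout and the single-occupant assumption are, again, invented for the illustration:

ADJACENT = {'A': ['B'], 'B': ['A', 'C', 'D'], 'C': ['B'], 'D': ['B', 'E'], 'E': ['D']}

def predict_next_room(history):
    # Guess the next room from the last two detections, or None if ambiguous.
    # The heuristic assumes the occupant keeps moving away from the room they
    # came from - precisely the assumption broken by someone who turns back.
    if len(history) < 2:
        return None
    prev, cur = history[-2], history[-1]
    onward = [room for room in ADJACENT[cur] if room != prev]
    return onward[0] if len(onward) == 1 else None   # rooms with several accesses stay ambiguous

predict_next_room(['A', 'B'])   # -> None: from B the person may continue to C or D
predict_next_room(['B', 'D'])   # -> 'E'

Even this single-occupant version must already return an ambiguous answer for rooms with more than two points of access; with several occupants the logic grows quickly, for designers and users alike.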
Bainbridge [6] has described the difficulties that designers have in understanding how automation works as an irony of automation. The noble ambition of automation has generally been to replace human functions by technological artefacts. From that perspective it is something of a paradox that automated systems are still human–machine systems and that humans are needed as much as ever. Bainbridge argued that 'the increased interest in human factors among engineers reflects the irony that the more advanced a control system is, so the more crucial may be the contribution of the human operator' ([6], p. 775).
The first irony is that designer errors can be a major source of operating problems.
In other words, the mistakes that designers make can create serious problems for operators. It can be difficult to understand how automation works because it is so complex, but the situation is obviously made worse if there are logical flaws in the automation. This may happen because the design fails to take all possible conditions into account, because something is forgotten, because of coding mistakes and oversights, etc.
The second irony is that the designer, who tries to eliminate the operator, still leaves the operator to do the task that the designer cannot imagine how to automate.
This problem is more severe since it obviously is unreasonable to expect that an operator in an acute situation can handle a problem that a designer, working under more serene conditions at his desk, is unable to solve. Indeed, automation is usually
designed by a team of people with all possible resources at their disposal, whereas the operator often is alone and has limited access to support.
6.3.3 Automation as a socio-technical problem
The discussion of balancing/function congruence and the ‘ironies of automation’
should make it obvious that automation is a socio-technical rather than an engineering problem. This means that automation cannot be seen as an isolated problem of substituting one way of carrying out a function with another. The classical automation philosophies, the residual functions principle and the Fitts list, both imply that functional substitution is possible – something known in human factors as the substitution myth [28]. The basic tenets of the substitution myth are (1) that people can be replaced by technology without any side effects, and (2) that technology can be upgraded without any side effects. (Notice that the issue is not whether the side effects are adverse or beneficial, but whether they are taken into account.)
The first tenet means that there are no side effects of letting technology carry out a function that was hitherto done by humans. In other words, people can simply let go of part of their work without the rest being the least affected. It is obviously very desirable that people are relieved of tedious and monotonous work, or that potential human risks are reduced. But it is incorrect to believe that doing so does not affect the work situation as a whole. New technology usually means new roles for people in the system, changes to what is normal and exceptional, and changes in the manifestations of failures and in failure pathways. There are therefore significant consequences both in the short term, on the daily routines at the individual or team level, and in the long term, on the level of organisations and society, such as structural changes to employment patterns.
The second tenet means that a function already carried out by technology can be replaced by an updated version, which typically means increased capacity but also increased complexity, without any side effects. This tenet is maintained despite the fact that practically every instance of technological upgrading constitutes evidence to the contrary. To illustrate that, just consider what happens when a new model of photocopier, or printer, is installed.
An excellent example of how the substitution principle fails can be found in the domain of traffic safety. Simply put, the assumption is that traffic safety can be improved by providing cars with better brakes, in particular with ABS (anti-lock braking systems). The argument is presumably that if the braking capability of a vehicle is improved, then there will be fewer collisions, hence increased safety. The unspoken assumption is, of course, that drivers will continue to drive as they did before the change. The fact of the matter is that drivers do notice the change and that this affects their way of driving, specifically in the sense that they feel safer and therefore drive faster or with less separation from the car in front.
That automation is a socio-technical rather than an engineering problem means that automation design requires knowledge and expertise from engineering as well as from human factors. Automation is essentially the design of human work, and for this to succeed it is necessary that humans are not seen simply as sophisticated machines [29]. This is even more important because automation for most systems
is bound to remain incomplete, as pointed out by the second irony. As long as automation cannot be made completely autonomous, but requires that some kind of human–machine co-operation take place, the design must address the joint system as a whole rather than focus on specific and particular functions. Even if automation autonomy may be achieved for some system states, such as normal operation, human–machine co-operation may be required for others (start-up, shut-down, disturbances or emergencies), for repair and maintenance, or for upgrading.
The solution to these problems does not lie in any specific scientific discipline or method. This chapter has not even attempted to provide complete coverage, since that would have to include control theory, cybernetics, and parts of cognitive science – and possibly also risk analysis, total quality management, social psychology, organisational theory, and industrial economics. It is, of course, possible to design automation without being an expert in, or having access to expertise from, all of these fields. Indeed, few if any systems have included this broad a scope. Designing automation neither is, nor should be, an academic exercise with unlimited time and resources. It is rather a matter of developing an acceptable solution to a practical problem given a (usually) considerable number of practical constraints. But what this chapter has argued is that no solution is acceptable unless it acknowledges that automation design is also the design of human work, and that it must therefore look at the system as a whole to understand the short- and long-term dynamics that determine whether the final result is a success or a failure.