
Humans: skills, capabilities and limitations

3.8 Higher order processing

Higher order cognition represents the complex network of processes that enhance the understanding and production of information that has been assimilated initially through sensation, followed by perceptual categorisation of a situation, before being attended to via specific attentional mechanisms and then stored in memory.

In summary, higher order processing transforms the information that has been assimilated and stored into meaningful concepts that the individual can act upon.

Some examples of higher order processing concepts are illustrated in Figure 3.3, and are broadly examined under the general heading of thinking research.

Thinking research:
• Problem solving
• Deductive reasoning
• Decision-making
• Creativity and discovery

Figure 3.3 Higher order processing concepts

First, it is important to examine the general concept of thinking and thinking research. The term thinking may, at first glance, seem self-explanatory. It is an important component of our everyday lives and we use the term both freely and readily to explain our state of activity as humans. Yet how simple do you believe it would be to explain the concept of thinking without actually using the term ‘thinking’?

Psychologists and philosophers alike have battled with what thinking entails, but perhaps the simplest way to understand thinking is to illustrate its usefulness in our actions. For instance, thinking is important for our ability to make decisions, to choose our own personal goals and to form our beliefs [21]. Because thinking is involved in so many of our mental processes and actions as humans, it is logical to assume that thinking naturally becomes a prerequisite for, as well as a fundamental part of, other higher order cognitive processes such as problem solving and decision-making.

Humans are good at a number of different types of thinking, for example inductive reasoning and critical thinking, which involve thinking about beliefs, actions and goals, and drawing on experience. This is in contrast to computers, which are adept at deductive reasoning, where a purely logical approach is required. A framework which links thinking about beliefs, actions and personal goals is the search-inference framework [21]. This model essentially considers ‘thinking space’ to be made up of two related factors: search and inference (Figure 3.4).

In an example adapted from Baron [21], the following scenario is an illustration of how the search-inference model operates:

A software engineer has just been given the task of developing a new database system for a small motor-sport company. The engineer talks with existing software developers about the best programs to use and refers to the manuals for the material, which will provide the content for the database. However, her research reveals that the database she is required to design and build exceeds her expertise and she consults with senior managers about what course of action should be taken. Some suggest designing the database system to the level at which she is competent, while others suggest taking a course to develop her skills further.

After consideration, the software development engineer decides to take the course, improve and hone her database skills and develop the database system to its full potential.

Figure 3.4 An illustration of the search-inference framework: a ‘thinking space’ made up of the two related elements, search and inference

The software engineer is clearly involved in a number of search and inference thinking strategies. According to Baron [21] thinking involves searching for three kinds of objects: possibilities, evidence and goals. Therefore, in the above example, the software engineer sees that there are many possibilities for the task that she has been set and these possibilities also relate to the doubt that she is experiencing.

Possible solutions come not only from the engineer herself but also from other colleagues. In this sense, therefore, the goals become the benchmark by which the possibilities for action are evaluated. In the scenario above, the engineer’s goals consist of undertaking a task to design and build a database. Internally, the engineer has a desire to succeed and to complete the task to the best of her abilities. Finally, evidence about the goal of achieving the correct database will lead to a decision either to design the database to the current level of her expertise or to undertake a course to improve her skills in the area. The evidence for each possibility is laid down and the software engineer decides which course of action to take. This is a clear example of how thinking becomes a prerequisite for choosing the correct problem solving strategy, and thus it also relates closely to decision-making.

It is also a clear example of what is commonly referred to as ‘inductive thinking’, where evidence and past experience are used to form generalisations about what is most likely to be true.
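To make the search-inference framework more concrete, the sketch below renders the software-engineer scenario in code. It is a hypothetical illustration only: Baron’s framework prescribes no numerical scheme, and the possibilities, goals, evidence and weights used here are invented. The point is simply that search produces candidate possibilities, goals and evidence, and inference then weighs the evidence for each possibility against the goals.

```python
# Hypothetical sketch of the search-inference framework (all weights invented).
possibilities = ["build to current skill level", "take a course, then build"]

# Goals found during search, with an invented importance weight for each.
goals = {"deliver the required database": 1.0,
         "complete the task to the best of her ability": 0.8}

# evidence[possibility][goal]: how strongly the gathered evidence suggests
# that this possibility will satisfy that goal (0 = not at all, 1 = fully).
evidence = {
    "build to current skill level": {
        "deliver the required database": 0.6,
        "complete the task to the best of her ability": 0.3,
    },
    "take a course, then build": {
        "deliver the required database": 0.9,
        "complete the task to the best of her ability": 0.9,
    },
}

def strength(possibility):
    """Inference step: weigh the evidence for one possibility against the goals."""
    return sum(weight * evidence[possibility][goal]
               for goal, weight in goals.items())

best = max(possibilities, key=strength)
print(best)  # 'take a course, then build' with these illustrative numbers
```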

Problem solving provides a good illustration of how humans utilise higher order cognitive processes to resolve unfamiliar situations. Wickens and Hollands [22] stated that while planning and multi-tasking are also types of cognitive processing that can deal with situations that are not always identical, these methods are arguably different from problem solving approaches. The main reason for the difference is that planning and multi-tasking processes use existing mental models as their starting point. Problem solving has no such model as a starting point and therefore requires the human to develop new mental models for the situation.

Newell and Simon [23] proposed a problem-space theory, which is worth mentioning since it is the foundation from which many existing theories stem.

Using a maze analogy, Newell and Simon likened a problem to having multiple paths. The initial stage is standing outside the maze (i.e. being presented with a problem). Travelling through the maze, with its many junctions where the individual can choose whether to turn left or right or go forward or backward, is termed the intermediate stage. The goal state is the final stage, where the individual reaches the centre of the maze (i.e. a solution has been found). Further, Newell and Simon suggested that the human capability to solve problems requires the individual to pass through different knowledge states. This parallels the maze analogy: the individual begins at an initial knowledge state and then attempts to search through alternative knowledge states before finally reaching the goal state.
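The maze analogy can be read as a search through a space of knowledge states, and the sketch below illustrates that reading. It is not Newell and Simon’s own formulation: the toy maze is invented, and a simple breadth-first search merely stands in for whatever search strategy a problem solver actually applies between the initial state and the goal state.

```python
# Minimal sketch of search through a problem space, using the maze analogy.
from collections import deque

def solve(initial_state, goal_state, successors):
    """Breadth-first search from the initial state to the goal state.

    `successors(state)` lists the states reachable in one move; the result is
    the sequence of intermediate knowledge states leading to the goal.
    """
    frontier = deque([[initial_state]])
    visited = {initial_state}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal_state:
            return path                      # goal state: the centre of the maze
        for nxt in successors(state):        # junctions: turn left, right, etc.
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                              # no route through the maze

# An invented maze of junctions A-E, with the centre (goal) at E.
maze = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["E"], "E": []}
print(solve("A", "E", lambda s: maze[s]))    # ['A', 'C', 'E']
```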

Within the domain of human factors, two critical higher order cognitive processes are problem solving and decision-making. However, it can logically be suggested that all of the higher order cognitive processes are inter-related in some way. Earlier it was highlighted how decision-making is integrally related to problem solving and to thinking in general. However, decision-making can also be considered as a cognitive entity in its own right and therefore deserves further clarification. If we were to assess retrospectively our daily activities, both in the actions we perform and, more specifically, in the higher order cognitive activities that take place, we would see that decision-making most likely makes up the greatest part of our daily cognitive processing activities and actions.

Logic would suggest that decision-making activities involve the selection of one choice from a number of choices, that there is a certain amount of information associated with each choice, and that there is often a certain amount of uncertainty attached to the choice. Indeed, this is the paradigm that many researchers consider to define decision-making tasks [24].

Theories of decision-making, and therefore the subsequent development of decision-making models, fall into three broad categories:

1. Normative (1950s–)
2. Descriptive (1970s–)
3. Prescriptive (1990s–)

The order of these categories represents the chronological order of dominance of decision-making theories over the last 50 years or so. In other words, normative theories were most dominant or common in the 1950s; descriptive models were most common in the 1970s and so forth, although research today continues to develop notions based on any one of these three broad categories of decision-making.

Normative models (now known as ‘classical decision theory’) represent early emergent thinking on decision-making. Essentially, normative models specify the costs or benefits associated with different choices. Mathematical models could therefore be applied to these ideas in order to optimise outcomes. Utility Theory is an illustration of a normative model. It works on the premise that humans seek pleasure and avoid pain. In general decision-making, decisions are made to maximise pleasure (positive utility) and minimise pain (negative utility) [25]. One of the limitations of normative decision-making models is that they work on the assumption that humans make decisions according to a normative model, i.e. they are programmed to make decisions in an ideal setting, taking into account various well-defined parameters.

A possible scenario relating to normative models is illustrated below.

An engineer attempts to make the following decision: whether to stay late after work and complete a set of tasks ahead of schedule, in order to leave early the next work day, or to complete the tasks in the regular work period, neither staying late nor leaving early.

The engineer will weigh up both positive and negative utilities and may decide that the pleasure gained (positive utility) from leaving early the next day will far outweigh the inconvenience (negative utility) of staying late at work tonight. Therefore, the engineer decides to stay late at work and finish the assigned tasks.
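The calculation implied by this scenario can be sketched as below. The numerical utilities are invented purely for illustration; the normative model only requires that the option with the higher net utility (positive utilities minus negative utilities) is the one chosen.

```python
# Hypothetical utility figures for the two options described above.
options = {
    "stay late tonight": {
        "leave work early tomorrow": +8,   # positive utility (pleasure gained)
        "stay late at work tonight": -3,   # negative utility (inconvenience)
    },
    "work regular hours": {
        "no late night": +2,
        "no early finish": -1,
    },
}

def net_utility(consequences):
    """Sum the positive and negative utilities attached to an option."""
    return sum(consequences.values())

for name, consequences in options.items():
    print(f"{name}: net utility {net_utility(consequences):+d}")

choice = max(options, key=lambda name: net_utility(options[name]))
print("normative choice:", choice)   # 'stay late tonight' with these figures
```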

The reality is that humans often do not make decisions that conform to the normative model. They do not weigh up the situation in an organised and methodical way, taking into account positive and negative utilities. This lack of reference to the subjective element was one of the reasons leading to the demise of the normative models. They were superseded by the descriptive models, which attempted to ‘describe’ human decision-making as it actually occurs. In the example given above, a colleague might suggest the engineer join them for a drink after work, and a spontaneous decision might be made to abandon working late, despite arrangements having already been made to leave work early the next day.

Descriptive models are arguably self-explanatory, since these models describe how people make decisions; as a result, issues such as the irrationality and biases that are so characteristic of our human psychological make-up are taken into account [26].

As one example of a descriptive model, Prospect Theory [27] describes the extent to which the probability of various outcomes affects people’s decisions. People will therefore make a choice based on the highest prospect and not on the utility values. For example, an aircraft pilot may perceive the probability of running into detrimental weather conditions to be low (since the pilot may be familiar with the terrain and area), when in reality evidence such as weather reports suggests the probability of bad weather to be high. Like many descriptive theories, prospect theory is considered to be vague [28], but it nevertheless highlights the capability of humans to use probability estimation as just one of the many strategies in their decision-making process.
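For reference, the original formulation of Prospect Theory (not spelled out in this chapter, and stated here in simplified form) values a prospect whose outcomes x_i occur with probabilities p_i as

$$V = \sum_{i} \pi(p_i)\, v(x_i)$$

where v is a value function defined over gains and losses relative to a reference point (typically steeper for losses than for gains) and π is a decision-weighting function that distorts stated probabilities, over-weighting small ones and under-weighting moderate to large ones. In the pilot example, the weight effectively attached to the bad-weather outcome is far smaller than the objective weather reports would warrant.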

Prescriptive models are by far the most accessible models because they attempt to study decision-making in natural and operational settings [29], as opposed to developing descriptive theories in one context and trying to relate the decision-making process to another area. For instance, it is clear that a decision-making scenario involving deciding whether or not to work late at night or leave at the normal time would not be applicable to a safety-critical situation such as an emergency on the flight deck of an aircraft, where there are so many different factors, such as time pressure and stress, that will ultimately influence the decision-making process and are unique to the flight deck in that instance. Examining decision-making in this manner is called ‘naturalistic decision-making’ [30]. It takes into account how people use their experience to make decisions in operational and therefore natural settings [26].

For instance, the capability of humans to learn from mistakes makes experience an invaluable tool in safety-critical domains. It has been documented that human errors, while contributing to detrimental consequences on some occasions, provide a vital source of information for experienced decision-makers. Taking another example from the aircraft flight deck, a study by Rauterberg and Aeppli [31] found that experienced pilots can sometimes be considered ‘better’ pilots because their expertise results from using previous unsuccessful behaviour (e.g. errors in a flight deck simulator or in a real situation) to make current decisions. In other words, prior experience resulting from errors can be used successfully to make ‘safer’ decisions in safety-critical situations on the flight deck.

There are, however, certain human limits in decision-making, and one area in which this limitation becomes apparent is statistical estimation. Wickens [32] suggested using an analogy with the classical procedure of statistical inference. The analogy states that in the first instance a statistician, or other suitably qualified person such as a psychologist, computes a set of descriptive statistics (e.g. mean, median, standard error) of the data; then, using these estimated statistics, some inference about the underlying population is made. According to Wickens, humans have limitations in two fundamental areas:

1. the ability to accurately perceive and store probabilistic data; and
2. the ability to draw accurate inferences from the data presented.

An example of these limitations is seen when people are asked to perceive proportions. When we try to estimate proportions, for instance when an engineer tries to estimate the numerical value of a proportion from a sample of data, there is a tendency to be conservative in the estimation. In other words, humans tend to underestimate proportions rather than overestimate them. Wickens [32] suggested that the reason we tend to be cautious or conservative in our estimation of proportions is that any estimation using extreme values may run a greater risk of producing an estimate that is incorrect. Sokolov [33] offers a more interesting explanation, proposing that the salience and frequency of attending to something ultimately influence how conservative we will be in estimating a proportion and in making decisions on such estimations. In this context salient events are events that are rare and may therefore cause an increase in the estimation of their frequency. For example, an aircraft pilot may attend only to the salient characteristics of the flight deck environment instead of characteristics occurring on a more frequent basis. The decision-making process is therefore biased towards these salient cues, which means that it could be fundamentally flawed and not lead to the best outcome.
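As an illustration of the two-step procedure in Wickens’ analogy, and of the conservatism described above, the sketch below computes a sample proportion and an approximate confidence interval and then contrasts it with an invented ‘conservative’ human estimate. The sample, the shrinkage factor and the specific figures are illustrative assumptions, not data from the chapter.

```python
# Sketch of Wickens' analogy: (1) describe the sample, (2) draw an inference,
# then contrast with an illustrative 'conservative' human estimate.
import math

sample = [1] * 38 + [0] * 12        # e.g. 38 'defective' items out of 50 inspected

# Step 1: descriptive statistics of the sample.
n = len(sample)
p_hat = sum(sample) / n                        # sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)        # standard error of the proportion

# Step 2: an inference about the underlying proportion (approximate 95% interval).
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

# Invented model of conservatism: the extreme estimate is pulled back towards
# the middle, so a large true proportion tends to be under-estimated.
human_estimate = 0.5 + 0.7 * (p_hat - 0.5)

print(f"statistical estimate: {p_hat:.2f} (95% interval {low:.2f} to {high:.2f})")
print(f"conservative human estimate: {human_estimate:.2f}")
```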
