1 George Miller has been a staunch supporter of cognitive psychology. He is well known for his observation that the number of symbols one can conveniently hold on to is 7 ± 2.
2 This text was written in 1955. At that time, the convention of referring to ‘men and women’ by the generic ‘he’ was standard grammatical practice. The species was usually referred to as ‘Man’.
The Computationalists
The advent of cognitive psychology in the mid 20th century opened up the question of the best models with which to represent the cognitive processes that were presumed to underlie overt cognitive performances. It was all very well to have fashioned a representation of a body of knowledge, but how was it implemented in action? A model of something is a representation based on an analogy between the model and its subject. Scientific models are often used to represent mechanisms and processes that are at the time unobservable. Trying to extend our powers of observation to test the verisimilitude or truth-like qualities of our models is one of the most important and fruitful ways that the natural sciences grow. Scientific models are also used to abstract the salient features of something complicated or of something which is in some way indeterminate or ill-defined. For example, weather maps present simplified models of the complex flux of pressure and humidity in regions of the atmosphere. Here the subject and source of the model are the same, namely the processes occurring in the atmosphere. When models are used to represent processes which are currently beyond the reach of observation, necessarily the source and the subject of the model are different. Darwin’s theory of evolution by natural selection was the expression of a model or representation of the process that he imagined had occurred in nature over vast stretches of time. He based it on what he knew of the methods of selective breeding used by farmers and gardeners to transform existing strains of animals and plants. The natural sciences are rich in this kind of model. Here the subject (whatever happened in the remote past) and the source of the model (what we know about selective breeding) are different. The plausibility of the model depends in part on whether the source, in this case selective breeding on the farm, can be identified in some way with a possible process in nature. Both kinds of models appear in the work of the computationalists.
As long ago as the 16th century the project of building a machine that would simulate human and animal behaviour was taken seriously. Various automata were constructed that mimicked some relatively simple actions of people and animals.
These models, including models of full-sized orchestras, were mostly driven hydraulically. Some of these can be seen in a curious museum near the town of Eau Claire, Wisconsin, in the United States. A hydraulic model of the nervous system was popular at the time. Some philosophers conjectured that animals were just fleshly hydraulic machines.
The idea that an actual machine could simulate human cognitive powers was not pursued seriously until the 19th century. The title of La Mettrie’s book, L’Homme Machine, published in 1749, was more a metaphor for a materialist explanation of cognition than a serious attempt to propose a thinking machine to reproduce at least some of the cognitive powers of a person. In the 1840s, Charles Babbage (1791–1871) made a machine which, at least in principle, could perform arithmetical calculations mechanically. By now the source of mental models had moved on to mechanisms. In the 20th century electrical systems, including the telephone network, began to be used as a source for models of various aspects of human psychology.
It is one thing to construct a machine that simulates a human being performing a cognitive or practical task. It is quite another to play with the idea that a machine could think as a human being thinks, and thus would be conscious. This possibility was still far-fetched at the beginning of the 20th century. However, during the Second World War, the idea of making a serious comparison between a machine’s capacity to perform human tasks and a human being’s cognitive capacities took a huge step forward with Alan Turing’s (pp. 82–86) theoretical and practical inventions in computation. His ideas were very quickly taken up by many others, including the idea that machines should, in certain circumstances, be said to think. If machines could properly be said to think, at least in principle, perhaps people were, in their own way, thinking machines, working in a similar way. This startling conjecture was the foundation for a new turn in psychology, the computational model of mind. Computing machines and their internal processes became a fertile source for models to represent cognitive processes.
When asked what had been his aim in life Marvin Minsky (pp. 93–98), one of those whose work is to be discussed in this chapter, replied ‘To construct intelligent machines’. This answer prompts the question that has bedevilled the project of building a computationalist psychology: ‘Is an intelligent machine a device that can only do what people can do, or is it one that can also think like people think?’
Behaviourists would not make any distinction between these alternatives. For them only overt behaviour is relevant to a scientific psychology. However, for cognitive psychologists this is a real question. Perhaps people are machines, though not in the same way as locomotives or grenade launchers. For an admirably clear and well-written introduction to the contested relation between cognition and computation Minsky’s The Society of Mind (1987) has not been bettered.
George Miller (1920–) was one of the leading founder members of the Center for Cognitive Studies, set up at Harvard in 1960. With Jerome Bruner (pp. 54–62) he threw himself into a vigorous programme of research and writing to bring to fruition the insights that had led to the flight from positivistic behaviourism to a psychology that fulfilled the tenets of scientific realism. Miller was responsible for one of the most pervasive and influential models that shaped cognitive psychology, the <Test, Operate, Test, Exit> or ‘TOTE’ machine. This device continued to perform a task until a certain desired state had been reached, and then it stopped. Such a machine could be looked at in three ways. It could be a plan for an actual machine with the links between parts conduits for energy. It could be a flow chart for the processing
of information through a series of boxes. It could be a computer running a programme drawn from a memory store. Miller and others not only adopted models like the TOTE machine, but attempted to develop mathematical descriptions of the phenomena and of the schemata that were employed by people in creating them. The actual results of this kind of work were disappointing. However, Miller’s enthusiasm and his faith in formal modelling of cognitive processes provided some of the impetus to keep the computationalist programme running.
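Miller’s TOTE unit can be pictured as a simple control loop: test whether the desired state has been reached; if not, operate on the situation and test again; when the test is finally passed, exit. The following sketch is only illustrative and is ours, not Miller’s; the function names and the nail-hammering example are assumptions made purely for the purpose of illustration.

```python
def tote(test, operate, state, max_steps=100):
    """Run a Test-Operate-Test-Exit cycle until the test is satisfied."""
    for _ in range(max_steps):
        if test(state):            # Test: has the desired state been reached?
            return state           # Exit
        state = operate(state)     # Operate, then loop back to Test
    return state                   # safety net: give up after max_steps

# Illustrative use: keep hammering until the nail is flush with the surface.
nail_height = 5
result = tote(test=lambda h: h == 0, operate=lambda h: h - 1, state=nail_height)
print(result)  # 0
```

Whether the loop is read as an energy-driven mechanism, a flow chart or a stored programme is exactly the threefold ambiguity noted above.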
The 20th century saw a great many advances in the mathematical analysis of computation and the design of more and more powerful and successful computing machines. The use of this technology as the groundwork for a new science of mind was enthusiastically promoted and just as strongly contested. The era began and ended with the publication and intense discussion of two famous thought experiments. The Turing Test strongly suggested an identity between computation and thought. However John Searle’s (pp. 98–101) thought experiment of the
‘Chinese Room’ seemed to undercut the idea, even in principle. These imaginary experiments will be described and their significance explained as we go along. At this point, we need to understand the importance of thought experiments in the sciences. The history of the computational model of mind is very largely a history of theoretical debates and thought experiments.
The development of physics has depended on the judicious invention of imaginary experiments. Newton’s Laws describe situations in which there is neither a driving force, nor any resisting friction to interfere with the pure motion of material things. Einstein’s images of moving trains and falling elevators were not just illustrations of the special and general theories of relativity, but it seems they were integral to the thinking that produced them. Computational psychology is very much in the same mould as Newtonian and post-Newtonian physics and Darwinian biology, heavily dependent on ‘thought’ experiments.
Alan Mathison Turing (1912–54)
This remarkable man must be credited with three innovations that bore on the programme of research that linked computing machines to human cognition. The first was the discovery of a set of rules and procedures by the use of which any computable function could be evaluated in a finite number of steps. The second was the invention of an abstract machine that could perform these operations in a purely mechanical way (Turing, 1936). The third was his contribution to the design of a practical version of his abstract machine, the electronic computing machine. After the Second World War Turing was responsible for the basic plan of one of the first computing machines designed to incorporate its own programmes, the ACE machine, developed at the National Physical Laboratory.
Within a decade practical computing machines were being used for the performance of all sorts of routine jobs. They were not just sophisticated adding machines.
They were capable of simulations of the performance of many of the everyday
cognitive and manual tasks on which our way of life depends. Information technology (IT) as a branch of engineering has been a huge success.
The suggestion of a strong link between this branch of engineering and psychology was first clearly articulated in a paper Turing wrote in 1950. He proposed a test which would justify us in saying that any machine which could pass it should properly be said to be thinking. The obverse of this idea was the principle that opened up the possibility of computational psychology, that any being which can think must be a kind of computing machine.
Who was Alan Mathison Turing?
Alan Turing was born in London on 23 June 1912. His father was an administrator in India, while his mother came from an Anglo-Irish family who were also long involved in the subcontinent. He was brought up, as were many of the children of the British Raj, by various foster parents. He entered Sherborne School in 1926, going up to King’s College, Cambridge in 1931. His biographer, Andrew Hodges (1983), emphasizes Turing’s intellectual isolation during his school days. As an enthusiast for science and mathematics in a school which aimed to place its best pupils in Oxford and Cambridge to study classics, he was something of an outsider.
All this changed at Cambridge. His academic successes were crowned by a Fellowship at King’s in 1935. His personal life matured around his realization of his homosexuality, unstigmatized in the college of such men as J. M. Keynes. His mathematical studies turned increasingly to fundamental questions of the logic underlying mathematical enquiries. The German mathematician David Hilbert had posed the issue clearly – could there be a method by which the provability of any mathematical assertion whatever could be decided? To answer this question Turing began his famous analysis of what a purely mechanical process of reasoning would require. The result was his description of an abstract machine and the rules for its use that would represent all possible formal mathematical procedures.
The Turing machine became the theoretical foundation of all real computing machines. He was able to use the specification of the Universal Turing Machine, a machine that could itself carry out the computations of any specific Turing machine, to show that there were necessarily unsolvable problems in mathematics.
In 1936, Turing enrolled as a graduate student at Princeton, completing his work on Hilbert’s problem, to some extent in isolation. The leading mathematician at Princeton at that time was Alonzo Church, who published his solution to the Hilbert problem just before the publication of Turing’s solution, which had been mysteriously delayed. There is little doubt of Turing’s priority. He returned to Cambridge in 1938, somewhat disillusioned with academic life.
At the outbreak of war, he officially took up work as a cryptographer or code breaker at the newly established centre at Bletchley Park, though he had been secretly at work for the government already. The organization at Bletchley grew very rapidly, each group specializing in one of the cryptographic problems posed
by the German codes. Turing himself set to work on breaking the code generated by a German coding device, the Enigma machine. His success in discovering the working of the first version of the machine was short-lived. A new and more complex version of Enigma was developed. Eventually the workings of this machine too were mastered, using the same methodology that Turing had developed for the original ‘Enigma problem’. However, he had already become interested in using digital electronic devices for performing large numbers of calculations very quickly, whatever the problem to which they were relevant. He led the engineering group that produced a machine to decode the latest Enigma encryptions. Even before the war ended Turing was thinking of how to use these advances to build a real electronic version of his Universal Turing Machine, a mechanism that would perform all possible computations.
He realized that the key to the advancement of electronic computing was to input the programme into the machine, as well as the data on which it was to operate, rather than laboriously reprogramming it for every new type of computation. The necessary hardware was slow to materialize, until the Americans announced their plan to develop an advanced electronic computer, EDVAC. It was still primitive relative to Turing’s proposed machine. Eventually the British effort did get underway with ACE, the Automatic Computing Engine. The Manchester machine was designed to store its own programmes, giving it the capability to switch from one task to another, without any additional electronic components. Here we have the first hint of a parallel with the way human beings think.
We do not have to learn a new technique every time we encounter a slightly different problem. The ACE machine was never built. The implementation of Turing’s idea of the stored programme was pushed forward by others at Manchester, and in 1948 the first such machine was switched on. Deeply disillusioned by what he regarded as inexcusable ‘messing about’, Turing did not even attempt to publish any of his papers or lectures of this time. It has only recently been realized how far he had already moved beyond the then state of the art.
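The force of the stored-programme idea can be conveyed with a toy sketch, which is ours and not a description of the ACE or the Manchester machine: a single memory holds both the programme (as instruction pairs) and the data it operates on, so giving the machine a new task means writing a new programme into the same store rather than building new hardware. The instruction set here is invented purely for illustration.

```python
def run(memory, pc=0):
    """A toy fetch-execute loop: 'memory' is one store holding both the
    programme, as (operation, address) pairs, and the data it works on."""
    acc = 0
    while True:
        op, addr = memory[pc]        # fetch the next instruction from the store
        if op == 'LOAD':
            acc = memory[addr]       # copy a data cell into the accumulator
        elif op == 'ADD':
            acc += memory[addr]      # add a data cell to the accumulator
        elif op == 'HALT':
            return acc               # stop and return the result
        pc += 1                      # move on to the next instruction

# The programme occupies cells 0-2; the data it operates on occupies cells 3-4.
memory = [('LOAD', 3), ('ADD', 4), ('HALT', None), 20, 22]
print(run(memory))   # 42
```

A different task is simply a different programme placed in the same store, which is the capability the text attributes to the Manchester machine.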
From the late 1940s, Turing’s interests moved away from computation and programming towards mathematical chemistry, and the problem of the genesis of chemical structures. In 1952 he published a paper that was to prove seminal in this field too.
Throughout the period, he had been continuing to work for the government intelligence services on code breaking. The Cold War had begun and a new array of encryption systems made its appearance. To his dismay, his security clearance was revoked. Though always discreet, he had never concealed his sexual orientation. In Britain at that time homosexuality was not only a statutory crime, but in the atmosphere of suspicion, the threats of espionage and the possibility of blackmail it had come to be perceived as a threat to the security of the nation.
As a homosexual, Turing was vulnerable. However, worse was to come. In 1952, he was arrested in connection with an alleged relationship with a young man who had burgled his house. The details of this extraordinary story can be found in Hodges’s (1983) biography. Turing escaped prison by submitting to the humiliating
alternative of a course of hormone treatment. The episode, a shameful disgrace to Britain, ended with his suicide. He killed himself by eating an apple into which he had injected cyanide.
What sort of man was this, that he could be at the forefront of so many innovations and yet, in his lifetime, receive not only so little recognition, but also so little support when the exigencies of wartime had passed? Personal reticence and dislike of the limelight is one strand in the story. The isolation inevitable in those days of those whose sexuality did not fit the official paradigm is probably another.
What did he contribute?
A certain amount of detail will be necessary to appreciate the relation between the Turing Machine, actual computers and the project of computational psychology.
Turing’s imaginary machine consists of an endless tape which runs through a read and write head. The head can erase whatever symbol is already printed there and print a 0 or 1 instead. Turing proved that all possible computations could be performed by the machine, if its operations were controlled by following a finite set of rules in a finite number of steps. For example, a step in a calculation might require that the tape be moved three places through the head; if a 1 is found there it is erased and replaced by a 0; if a 0, no action is to be taken. Numbers are represented in the binary code as strings of 0s and 1s. Thus, we have the natural numbers written as the sequence 0, 1, 10, 11, 100, and so on instead of the familiar base 10 representation as 0, 1, 2, 3, 4 and so on. However, binary strings need not stand for numbers. They can be assigned other kinds of meanings. In the ASCII code each letter of the alphabet is assigned a binary number. If we can find computable functions to represent relations among whatever non-numerical objects the strings represent, for example the spelling conventions of English, we can use the power of the computer to simulate any cognitive process for which such a function can be found. We could construct a programme that would check the spelling of English words. Classifying is a cognitive procedure of profound psychological importance. If a way of classifying could be expressed as a set of computable functions we would have the basis for simulating the cognitive process of ordering things into kinds. We would need to define a relation ‘is a’ linking a description of an individual with a description of a kind or species to express ‘Tweetie is a bird’ formally. It might be based on the requirement that the attributes of the species be among the attributes of the individual. Thus classifying Tweetie as a bird could be accomplished if among Tweetie’s attributes were the defining properties of the kind ‘bird’. The relevant groups of attributes could be represented by strings of binary digits (‘bits’) and a computational rule for comparing them worked out.
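The closing suggestion can be made concrete with a minimal sketch; the attribute inventory and the comparison rule below are our own illustrative assumptions, not anything Turing specified. Each attribute is assigned one binary digit, a kind is defined by the attributes it requires, and an individual ‘is a’ member of the kind when every bit the kind requires is present in the individual’s string.

```python
# Illustrative attribute inventory: one bit per attribute (our own choice).
ATTRIBUTES = ['animate', 'has_feathers', 'has_wings', 'lays_eggs', 'can_sing']

def encode(attrs):
    """Represent a set of attributes as a string of binary digits (held as an integer)."""
    return sum(1 << i for i, a in enumerate(ATTRIBUTES) if a in attrs)

def is_a(individual, kind):
    """'individual is a kind' holds when the kind's defining bits
    are all among the individual's bits."""
    return individual & kind == kind

BIRD = encode(['animate', 'has_feathers', 'has_wings', 'lays_eggs'])
TWEETIE = encode(['animate', 'has_feathers', 'has_wings', 'lays_eggs', 'can_sing'])

print(is_a(TWEETIE, BIRD))   # True: 'Tweetie is a bird'
```

The detail of the encoding does not matter; the point is that classifying, a cognitive operation, has been reduced to a computable comparison of strings of bits.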
By 1950, Turing believed he had put in train everything needed to build a thinking machine, a kind of artificial brain. This encouraged him to try to bring out the link between psychology and computation in a thought experiment. Here was the famous Turing Test. A human being, let us call this person ‘the operator’, sits at a