acts on information or a further instruction that is incorrect, then the code will probably fail to instruct the computer in the way the programmer intended. However, comments by a programmer that do not form part of the instructions cannot necessarily be considered to be part of the code.
considering computer evidence given by an expert is to consider the input. That may be so, but it cannot be exclusive to expert computer evidence. Of course, it was said that the best evidence of input and output material is in the print-outs of such material.1
1 [1997] ScotCS 1, 898–900, sub nom Elf Enterprise Caledonia Ltd v London Bridge Engineering Limited [1997] ScotCS 1, 2.
5.11 Based on this categorization, Professor Ormerod noted that some types of computer-generated representations do not infringe the hearsay rule.1 If a computer carries out the instructions of the program that has been written by humans to create such data, it may be right to suggest that such data are probably accurate without the need to test whether they are correct. But if the time as noted by a clock on a camera linked to an ATM is to be offered into evidence to link the accused to the murder of the person whose card was used in the ATM, then the time, as data, will have to be adduced as to its truth, as in the case of Liser v Smith,2 and there will be a need to validate the clock, and to verify the time and date set by a human being.3
1 Although he accepts that s 129 of the Criminal Justice Act 2003 may need to be considered. For a commentary on s 129, see John R Spencer, Hearsay Evidence in Criminal Proceedings (Hart 2008) ch 3.
2 254 F.Supp.2d 89 (D.D.C. 2003).
3 As noted by Colin Tapper: ‘Reform on the law of evidence in relation to the output from computers’ (1995) 3 Intl J L & Info Tech 79, 85 fn 44.
5.12 To the same end, Professor Smith distinguished between the types of representations that the code in a device can make,1 and argued that where the computer is instructed to perform certain functions, many of which are performed in a mechanical way (such as the addition of the time and date on an email), in such circumstances the computer is producing real evidence, not hearsay. In illustrating the point he was making, Professor Smith gave a number of examples where evidence is not hearsay.2 One example was that of Six’s thermometer (commonly known as a maximum-minimum thermometer), which he referred to as an instrument and not a machine. This is correct. The thermometer provides three readings: the current temperature, and the highest and the lowest temperatures reached since it was last reset. A human being can give evidence of his observation of the precise location of the mercury against the scale at a given time and date. The witness might be challenged as to the truthfulness of his recollection without calling into question the accuracy of the instrument. Such evidence will not be hearsay. Alternatively, the precision of the scale on the thermometer might be open to scrutiny, in which case it will be necessary to have the instrument tested by an appropriately qualified expert.3
1 J C Smith, ‘The admissibility of statements by computer’ [1981] Crim LR 387.
2 Smith, ‘The admissibility of statements by computer’ 387, 390.
3 This was also discussed by Penelope A Pengilley, ‘Machine information: is it hearsay?’ (1982) 13 Melbourne University Law Review 617, 625.
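Professor Smith’s example of the mechanical addition of the time and date to an email may be sketched in code. The following is a minimal, hypothetical illustration only (it is not the code of any real mail client): the software stamps the header without any human input, yet the accuracy of the value recorded still depends on a system clock that a human being must have set or synchronized correctly.

```python
from datetime import datetime, timezone

def stamp_message(body: str) -> dict:
    """Attach a date/time header mechanically, with no human input.

    The value recorded is only as accurate as the system clock,
    which a human being must have set (or synchronized) correctly.
    """
    return {
        # Machine-generated: produced by the software from the clock.
        "Date": datetime.now(timezone.utc).isoformat(),
        # Human statement: the content typed by the sender.
        "Body": body,
    }

message = stamp_message("Meeting confirmed for Tuesday.")
```

On Smith’s analysis, the ‘Date’ value would be real evidence produced mechanically, while the ‘Body’ is a statement by a person and falls to be assessed under the hearsay rule.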
5.13 Further examples considered by Professor Smith included a camera that records an image, a tape recorder that records sound, and a radar speedmeter that records the speed of a vehicle. In 1981, each of these machines was mechanical in construction, with the exception of the radar speedmeter, which also incorporated components that were instruments. None of the examples involved devices controlled by software written by human beings. Although it is possible to alter the image from a camera or
the sound from a tape recording, or for a human being to lie about the reading from a radar speedmeter, nevertheless the evidence from such devices would not be hearsay.
5.14 In respect of software, Professor Smith indicated that a programmer may make mistakes (errors are common, for which see the chapter on ‘reliability’), but mistakes can also be made when deciding the scale on a thermometer. He went on to suggest that
‘[t]his consideration goes to weight rather than admissibility. In any event it certainly has nothing to do with the hearsay rule’.1
1 Smith, ‘The admissibility of statements by computer’ 387, 390. One answer to this issue has been proposed by Professor Pattenden – that s 129(1) of the Criminal Justice Act 2003 be replaced ‘with a single test of admissibility for all factual representations that are not in substance the statement of a person but “machinespeak”, that is, those whose content is the outcome of creating machine-processing’, for which see Rosemary Pattenden, ‘Machinespeak: section 129 of the Criminal Justice Act 2003’ [2010] Crim LR 623, 636–7; Professor Pattenden discusses the conflicting opinions relating to s 129(1) in detail.
5.15 Professor Seng proposed an analysis in 1997:
Computers which are used as data processing devices can be classified into the following categories: devices which accept human-supplied input and produce output, self-contained data processing devices which obtain input or take recordings from the environment without human intervention, and a hybrid of the two.1
1 Daniel Seng, ‘Computer output as evidence’ (1997) 130 Sing JLS 173.
5.16 Steven Teppler also accepted that it is possible to divide digital data, treated as hearsay, into three categories:
(i) The memorandum ‘created’ by a human.
(ii) Digital data generated in part with human assistance.
(iii) Digital data generated without a human being.1
1 Teppler, ‘Testable reliability’, 235–40.
5.17 Teppler has also suggested that a ‘fourth potential category, for which there has been no judicial analysis, has recently emerged as a consequence of computer programs that “listen and respond” to questions in natural language and with a “voice”
that closely mimics a “real” human’.1
1 Teppler, ‘Testable reliability’, 235.
5.18 The authors of Archbold have also divided digital data into three categories:
(i) Where the device is used as a processor of data.
(ii) Where the software records data without any human input.
(iii) Where data entered by a person, directly or indirectly, are recorded and processed by software.1
1 James Richardson (ed), Archbold: Criminal Pleading, Evidence and Practice 2016 (64th rev edn, Sweet & Maxwell 2016) paras 9-11–9-14.
5.19 It is proposed that the three categories outlined by Professor Seng, Steven Teppler and the authors of Archbold be slightly amended to read as follows:
(i) Content written by one or more people (that is, where the device is used as a processor of data).
(ii) Records generated by the software that have not had any input from a human.
(iii) Records comprising a mix of human input and calculations generated by software.
Each of these categories is discussed below.
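By way of illustration only, the three proposed categories can be mapped onto a simple, hypothetical spreadsheet-style record. The variable names, figures and the 20 per cent rate are invented purely for this sketch:

```python
from datetime import datetime, timezone

# (i) Content written by one or more people: a figure typed in by a
#     human operator (the device merely processes what was entered).
human_entered_net_price = 120.50

# (ii) A record generated by the software without any input from a
#      human: e.g. a timestamp the program attaches automatically.
machine_generated_stamp = datetime.now(timezone.utc).isoformat()

# (iii) A record comprising a mix of human input and a calculation
#       generated by software: a tax figure the program computes from
#       the human-entered price (20% rate assumed for illustration).
software_calculated_tax = round(human_entered_net_price * 0.20, 2)
```

The evidential questions differ for each value: the first is in substance a statement by a person; the second depends on the correct functioning (and setting) of the machine; the third raises both issues at once.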