
Challenging ‘reliability’


6.118 When seeking to challenge the underlying software of a computer or computer- like device, lawyers frequently have great difficulty in overcoming the presumption that a machine is working properly, although general assertions about the failure of software code are often made without providing any foundation for the allegations. This problem is compounded when a party refuses to deliver up relevant evidence, usually citing confidentiality as the reason for the refusal, and relying on the presumption that a computer is ‘reliable’. In such circumstances, it is difficult to convince a judge to order the disclosure of relevant data.

6.119 Yet, paradoxically, it is a well-known fact in the industry that software could hardly be said to be ‘reliable’. As noted by Steyn J in Eurodynamic Systems Plc v General Automation Ltd:

… The expert evidence convincingly showed that it is regarded as acceptable practice to supply computer programmes (including system software) that contain errors and bugs. The basis of the practice is that, pursuant to his support obligation (free or chargeable as the case may be), the supplier will correct errors and bugs that prevent the product from being properly used.1

1 (6 September 1988, not reported), QBD, 1983 D 2804, [5.a].

6.120 This view is reinforced by Professor Matt Blaze:

It is a regrettable (and yet time-tested) paradox that our digital systems have largely become more vulnerable over time, even as almost every other aspect of the technology has (often wildly) improved.

Modern digital systems are so vulnerable for a simple reason: computer science does not yet know how to build complex, large-scale software that has reliably correct behaviour.1 This problem has been known, and has been a central focus of computing research, since the dawn of programmable computing. As new technology allows us to build larger and more complex systems (and to connect them together over the internet), the problem of software correctness becomes exponentially more difficult.2 [Footnote 2 is at this point, and is reproduced below]

Footnote 2:

That is, the number of software defects in a system typically increases at a rate far greater than the amount of code added to it. So adding new features to a system that makes it twice as large generally has the effect of making [it] far more than twice as vulnerable. This is because each new software component or feature operates not just in isolation, but potentially interacts with everything else in the system, sometimes in unexpected ways that can be exploited. Therefore, smaller and simpler systems are almost always more secure and reliable, and best practices in security favour systems [that have] the most limited functionality possible.3

1 It should be noted that computer scientists have invented many ways to achieve this, and some companies use these methods to prove mathematically that their systems cannot fail at runtime. But the software will be running on a computer with unreliable hardware, other firmware and software, and user interfaces, which means that a program might be ‘right’ in itself and yet, when interacting with those other components, lead to a lethal failure. Also, we need to be aware that what is being proved is not that the systems do what is desired, but that the systems meet a formal statement of the requirements. The original requirements cannot themselves be proved to be correct, nor can it be shown that the formal software requirements meet the constraints of the real world. There are limits to what formal methods can do, and those limits are not widely acknowledged. See B Littlewood and L Strigini, ‘Validation of ultrahigh dependability for software-based systems’ (1993) 36 Communications of the ACM 69, available at <http://openaccess.city.ac.uk/1251/1/CACMnov93.pdf>.

2 It is not clear whether ‘exponentially’ means that the rate of growth is proportional to the amount present, or whether the word is used loosely to mean ‘growing rapidly’.

3 Matt Blaze, Testimony to the Subcommittee on Information Technology hearing, ‘Encryption Technology and Potential U.S. Policy Responses’ on Wednesday, April 29, 2015 at 2:00pm, available at <http://democrats.oversight.house.gov/legislation/hearings/subcommittee-on-information-technology-hearing-encryption-technology-and>.
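Blaze’s point, in the footnote reproduced above, that defects typically grow faster than the code added to a system can be illustrated with a simple counting argument. The short Python sketch below is written for this text (it is not taken from the testimony, and the figures are not measurements of any real system): it counts the potential pairwise interactions between the components of a system, and shows that doubling the number of components roughly quadruples the number of places where an unexpected interaction, and hence a defect, can arise.

```python
# Illustrative sketch only: it assumes, for simplicity, that every pair of
# components in a system can potentially interact, which is the worst case
# described in the footnote reproduced above.

def pairwise_interactions(components: int) -> int:
    """Number of distinct pairs of components that could interact."""
    return components * (components - 1) // 2

for n in (10, 20, 40):
    print(f"{n:>3} components -> {pairwise_interactions(n):>4} potential interactions")

# Prints:
#  10 components ->   45 potential interactions
#  20 components ->  190 potential interactions
#  40 components ->  780 potential interactions
# Each doubling of the number of components roughly quadruples the
# opportunities for unexpected interaction, so the places where defects
# can arise grow far faster than the code itself.
```

Whether or not the growth is strictly ‘exponential’ (see footnote 2), the broad point stands: the opportunities for unexpected interaction outpace the growth of the code itself.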

6.121 Lawrence Bernstein and C. M. Yuhas made a similar observation:1

Software developers know that their systems can exhibit unexpected, strange behaviour, including crashes or hangs, when small operational differences are introduced.2 These may be the result of new data, execution of code in new sequences or exhaustion of some computer resource such as buffer space, memory, hash function overflow space or processor time.

1 Lawrence Bernstein and C M Yuhas, ‘Design constraints that make software trustworthy’ IEEE Reliability Society 2008 Annual Technology Report 3, available at <http://rs.ieee.org/tech-activities/42-letters-in-reliability-annual-technology-reports>.

2 This is a consequence of discrete complexity, or digital complexity.
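The kind of failure Bernstein and Yuhas describe can be demonstrated with a deliberately simple, hypothetical example (not taken from their report). In the Python sketch below, the same routine behaves correctly on modest data and then crashes when a larger input exhausts a computing resource, here the interpreter’s call-stack depth.

```python
# Hypothetical illustration: a routine that works for modest inputs but
# crashes when new data exhausts a computing resource (the call stack).

import sys

def depth(value):
    """Return the nesting depth of a list, implemented recursively."""
    if not isinstance(value, list):
        return 0
    return 1 + max((depth(item) for item in value), default=0)

def build_nested(levels):
    """Build a list nested to the given number of levels."""
    nested = "leaf"
    for _ in range(levels):
        nested = [nested]
    return nested

print(depth(build_nested(100)))        # behaves as expected: prints 100

sys.setrecursionlimit(1000)            # a finite resource, as on any machine
try:
    print(depth(build_nested(5000)))   # the same code, given deeper data
except RecursionError as error:
    print("Unexpected failure on new data:", error)
```

Nothing in the routine is ‘wrong’ in any obvious sense; the failure only appears when the operational conditions (the shape of the data and the resource limits of the machine it happens to run on) differ from those under which it was developed and tested.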

6.122 This section aims to provide a broad outline of the problems relating to computers and computer-like devices encountered in different industries, and to illustrate the importance of software and how there may be times when the output of a computer is not ‘reliable’ and is therefore not to be trusted. Software code should be open to scrutiny, and should not necessarily share the benefit of a presumption of ‘reliability’ that is incapable of being effectively challenged.1

1 Ken Chasse, ‘Electronic records as documentary evidence’ (2007) 6 Canadian Journal of Law and Technology 141 refers to the need for a ‘system integrity test’.

6.123 One of the problems with understanding the role of the presumption is that people fail to distinguish software from computer systems. Computers are merely devices that are remarkable in that they can be turned to do many tasks rather than being limited to a single purpose. In order to perform a useful purpose, they must be instructed by software. A computer and its software together can be taken to form a system. No machine is ‘reliable’ or ‘unreliable’ in an absolute sense. Machines may be more or less reliable. The term ‘reliable’ in everyday use is an abbreviation of what in technical terms is ‘reliable enough for the intended purpose’. All machines have some probability of failing, so none is ‘reliable’ in the sense that one can rely on it without any doubt, while many are reliable enough (their probability of failing to perform correctly at any one use is small enough) to be worth using. The problem with using the word ‘reliable’ as though reliability were a binary quality is that we risk taking it to mean ‘reliable enough’ without allowing for the fact that what is ‘enough’ depends on the use to which we put the machine, or rather, its outputs. For instance, a machine may be reliable enough to be worthwhile in everyday use, and yet not reliable enough to use as evidence in a specific case. The speedometer in a motor car may be reliable enough to use as an aid to driving at a reasonable speed, but this level of reliability is not necessarily the level of reliability that should be required in order to rely on its readings as evidence. The question is not a matter of whether the instrument is ‘reliable’, but of ‘how reliable’ it is. It follows that lay people are not aware of the inherent design faults, and trust their personal experience to reassure themselves that computers are ‘reliable’ machines. Yet lay users experience problems with devices regularly, which illustrates the failure of lay people to grasp that the ‘reliability’ of software code is impossible to guarantee.1

1 David Harel, Computers Ltd. What They Really Can’t Do (Oxford University Press 2003); see also Neumann, Computer Related Risks, and his website, which is continually updated: <www.csl.sri.com/users/neumann/insiderisks.html>; see also the list of software failures on the website of Nachum Dershowitz, School of Computer Science, Tel Aviv University, at <www.cs.tau.ac.il/~nachumd/horror.html>.
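The arithmetic behind ‘reliable enough for the intended purpose’ can be sketched briefly. In the Python example below the per-use failure probability is an invented figure chosen purely for illustration, not a measurement of any real instrument; the point is only that the same device can be acceptably dependable for one purpose and too unreliable for another, depending on how many of its outputs must be correct and what turns on them.

```python
# Invented figures, for illustration only: how a small per-use failure
# probability accumulates over repeated use (assuming independent failures).

def prob_of_any_failure(per_use_failure: float, uses: int) -> float:
    """Probability of at least one failure across a number of independent uses."""
    return 1.0 - (1.0 - per_use_failure) ** uses

ASSUMED_PER_USE_FAILURE = 1e-4   # hypothetical chance that a single reading is wrong

for uses in (1, 100, 10_000):
    print(f"{uses:>6} uses -> chance of at least one failure: "
          f"{prob_of_any_failure(ASSUMED_PER_USE_FAILURE, uses):.4f}")

# Approximate output:
#      1 uses -> chance of at least one failure: 0.0001
#    100 uses -> chance of at least one failure: 0.0100
#  10000 uses -> chance of at least one failure: 0.6321
```

A device whose single reading is wrong one time in ten thousand may be entirely adequate as a driving aid, yet over enough uses an error becomes more likely than not, which is why ‘how reliable’ matters once a particular output is to be relied upon as evidence.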

6.124 Lay people are not the only people to make this mistake. This may be illustrated by the judicial assertion that computers are ‘reliable’ because more of them are in use. Villanueva JAD made just such an assertion, without providing any evidence to sustain his claim that computers are ‘presumed reliable’, in the case of Hahnemann University Hospital v Dudnick:1

Clearly, the climate of the use of computers in the mid-1990’s is substantially different from that of the 1970’s. In the 1970’s, computers were relatively new, were not universally used and had no established standard of reliability. Now, computers are universally used and accepted, have become part of everyday life and work and are presumed reliable.

1 292 N.J.Super. 11, 678 A.2d 266 (N.J.Super.A.D. 1996), 268.

6.125 This observation by Villanueva JAD was made in the same year as the failure of the software that caused the Ariane 5 rocket to be destroyed shortly after take-off.

6.126 That computers are deemed to be ‘reliable’ because they are used more frequently is a poor substitute for a rigorous understanding of the nature of computers and their software. However, it is accepted that long-term use can be an important element of justified trust in a software system. This comes about not only because there might be a long history of valuable and seemingly error-free use, but also because the long-term user typically gets to know the idiosyncrasies of the system.
