TL;DR: Legal presumptions of computer reliability must be updated if we are to have justice!
Background
The ‘Horizon’ Scandal in the UK was a major miscarriage of justice:
‘Horizon’ was a faulty computer system produced by Fujitsu. The Post Office had lobbied the British Government to reverse the burden of proof so that courts would assume computer systems were reliable until proven otherwise. This made it very difficult for sub-postmasters – small-business franchise owners – to defend themselves in court.
This shocking miscarriage of justice rested on an equally shocking presumption, one that anyone with a background in software development would find ridiculous.
Introduction
Legal experts warn that failure to immediately update laws regarding computer reliability could lead to a recurrence of scandals like the Horizon case. Critics argue that the current presumption of computer reliability shifts the burden of proof in criminal cases, potentially compromising fair trials.
The Presumption of Computer Reliability
English and Welsh law assumes computers to be reliable unless proven otherwise, a principle criticized for reversing the burden of proof. Stephen Mason, a leading barrister in electronic evidence, emphasizes the unfairness of this presumption, stating that it impedes individuals from challenging computer-generated evidence.
It is also patently unrealistic. As I explain in my article on the Principles of Safe Software Development, there are numerous examples of computer systems going wrong:
- Drug Infusion Pumps,
- The NASA Mars Polar Lander,
- The Airbus A320 accident at Warsaw,
- Boeing 777 FADEC malfunction,
- The Patriot Missile Software Problem in the first Gulf War, and many more…
Making software dependable or safe requires enormous effort and care.
Historical Context and the Horizon Scandal
The presumption derives from an old common law principle that mechanical instruments are presumed to be in working order, and the UK Post Office lobbied to have the same principle applied to digital systems. The implications of this change became evident during the Horizon scandal, where flawed computer evidence led to wrongful accusations against post office operators. The repeal of a safeguard in the Police and Criminal Evidence Act 1984 – a provision that had required proof that a computer was operating properly before its output could be admitted – further weakened protections against unreliable computer evidence, exacerbating the issue.
International Influence and Legal Precedents
The influence of English common law extends internationally, perpetuating the presumption of computer reliability in legal systems worldwide. Mason highlights cases from various countries supporting this standard, underscoring its global impact.
Modern Challenges and the Rise of AI
Advancements in AI technology intensify the need to reevaluate legal presumptions. Noah Waisberg, CEO of Zuva, warns against assuming the infallibility of AI systems, which operate probabilistically and may lack consistency.
This poses significant challenges in relying on AI-generated evidence for criminal convictions.
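To make this concrete, here is a minimal sketch (my own illustration, not drawn from any real evidence system) of why a system that samples its output from a probability distribution can give different answers to the same input on different runs. The function name, labels, and probabilities are invented purely for illustration.

```python
import random

# Toy "classifier" that, like many generative AI systems, samples its output
# from a probability distribution instead of computing one fixed answer.
# The labels and weights here are invented purely for illustration.
def classify_transaction(record: str) -> str:
    labels = ["consistent", "suspicious"]
    weights = [0.7, 0.3]  # hypothetical model confidence
    return random.choices(labels, weights=weights, k=1)[0]

# The same input can produce different outputs on different runs.
results = [classify_transaction("till receipt #1042") for _ in range(10)]
print(results)
```

If identical inputs need not produce identical outputs, an AI system's conclusion cannot simply be presumed reliable in the way a deterministic calculation might be.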
Proposed Legal Reforms
James Christie, a software consultant, co-authored recommendations for updating UK law in this area. He proposes a two-stage reform to address the issue.
First, evidence providers must demonstrate responsible development and management of their systems, including disclosure of known bugs. Second, if unable to do so, providers must justify why these shortcomings do not affect the evidence’s reliability.
The Reality of Software Development
First of all, we need to understand how mistakes made in software can lead to failures and ultimately accidents.
Errors in Software Development
This is illustrated well by the standard BS 5760. During development, people – on their own or using tools – make mistakes. That's inevitable, and there will be many mistakes in the software, as we will see. These mistakes can lead to faults, or defects, being present in the software, and inevitably some of them get through.
If we jump over the fence, the software is now in use. All those faults are in the software, but they lie hidden – until, that is, some revealing mechanism comes along and triggers them. That revealing mechanism might be a change in the environment or operating scenario, or a change in the inputs the software receives from its sensors.
That doesn't mean a failure is inevitable, because many faults never lead to failures that matter. But some do. And that is how we get from mistakes, to faults or defects in the software, to run-time failures.
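As a purely hypothetical sketch of that chain, the function below contains a latent fault introduced by a developer's mistake; it stays hidden through normal use and only becomes a run-time failure when a revealing input arrives. The scenario, names, and values are all invented for illustration.

```python
# Hypothetical example: a developer's mistake leaves a latent fault in the code.
# The fault does nothing until a revealing input triggers it at run time.
def flow_rate(volume_ml: float, interval_s: float) -> float:
    # Mistake made during development: the author assumed interval_s can never
    # be zero, so a possible division by zero lies hidden in the software.
    return volume_ml / interval_s

# Years of normal inputs never reveal the fault...
for interval in (5.0, 2.5, 10.0):
    print(flow_rate(100.0, interval))

# ...until the environment changes – say a faulty sensor reports 0 seconds –
# and the latent fault finally becomes a run-time failure.
try:
    print(flow_rate(100.0, 0.0))
except ZeroDivisionError as failure:
    print("run-time failure:", failure)
```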
What Happens to Errors in Software Products?
A long time ago (1984!), a very well-known paper by Adams in the IBM Journal of Research and Development looked at how long it took faults in IBM operating-system software to become failures for the first time. We are not talking about cowboys producing software for the web that may or may not work, or people in their bedrooms producing apps. We're talking about a very sophisticated product that was in use all around the world.
Yet, what Adams found was that lots of software faults took more than 5,000 operating years to be revealed. He found that more than 90% of faults in the software would take longer than 50 years to become failures.
There are two things that Adams’s work tells us.
First, in any significant piece of software, there is a huge reservoir of faults waiting to be revealed. So if people start telling you that their software contains no defects or faults, either they’re dumb enough to believe that or they think you are. What we see in reality is that even in a very high-quality software product, there are a lot of latent defects.
Second, many of them – the vast majority of them – will take a long, long time to reveal themselves. Testing will not reveal them. Using Beta versions will not reveal them. Fifty years of use will not reveal them. They’re still there.
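A back-of-the-envelope sketch (my own, assuming a simple exponential time-to-failure model that is not part of Adams's paper) shows why: a fault with a mean time to failure of 5,000 operating years is overwhelmingly likely to stay hidden through any realistic period of testing or use.

```python
import math

def probability_revealed(mttf_years: float, exposure_years: float) -> float:
    """Chance that a latent fault shows up within `exposure_years` of operation,
    assuming (simplistically) an exponential time-to-failure distribution."""
    return 1.0 - math.exp(-exposure_years / mttf_years)

# A "5,000-year" fault exposed to one year of intensive testing:
print(f"{probability_revealed(5000, 1):.4%}")   # roughly a 0.02% chance of being found

# Even fifty years of use leaves it almost certainly still hidden:
print(f"{probability_revealed(5000, 50):.2%}")  # roughly a 1% chance of being found
```

Under these (illustrative) assumptions, no feasible amount of testing or beta use comes close to flushing such faults out.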
[This Section is a short extract from my course Principles of Safe Software Development.]
Conclusion
Legal experts stress the urgency of updating the law to reflect the fallibility of computers – a change that is crucial for ensuring fair trials and preventing miscarriages of justice. The UK Ministry of Justice acknowledges the need for scrutiny, pending the outcome of the Horizon inquiry, signaling a potential shift towards addressing computer reliability in the legal framework.
Hopefully, the legal profession will come to realize what software engineers have known for a long time: software reliability is difficult to achieve and must be demonstrated, not presumed.