Updating Legal Presumptions for Computer Reliability

TL;DR: Legal presumptions about computer reliability must be updated if we are to have justice!

Background

The ‘Horizon’ Scandal in the UK was a major miscarriage of justice:

Between 1999 and 2015, over 900 sub postmasters were convicted of theft, fraud and false accounting based on faulty Horizon data, with about 700 of these prosecutions carried out by the Post Office. Other sub postmasters were prosecuted but not convicted, forced to cover Horizon shortfalls with their own money, or had their contracts terminated. The court cases, criminal convictions, imprisonments, loss of livelihoods and homes, debts and bankruptcies, took a heavy toll on the victims and their families, leading to stress, illness, family breakdown, and at least four suicides.

Wikipedia, British Post Office scandal

‘Horizon’ was a faulty computer system produced by Fujitsu.  The Post Office had lobbied the British Government to reverse the burden of proof so that courts assumed computer systems were reliable unless proven otherwise.  This made it very difficult for sub-postmasters – small-business franchise owners – to defend themselves in court.

A 1984 act of parliament ruled that computer evidence was only admissible if it could be shown that the computer was used and operating properly. But that act was repealed in 1999, just months before the first trials of the Horizon system began. When post office operators were accused of having stolen money, the hallucinatory evidence of the Horizon system was deemed sufficient proof. Without any evidence to the contrary, the defendants could not force the system to be tested in court and their loss was all but guaranteed.

Alex Hern writing in The Guardian in January 2024.

This shocking miscarriage of justice was based on an equally shocking presumption, one that anyone with a background in software development would find ridiculous.

Introduction 

Legal experts warn that failure to immediately update laws regarding computer reliability could lead to a recurrence of scandals like the Horizon case. Critics argue that the current presumption of computer reliability shifts the burden of proof in criminal cases, potentially compromising fair trials.

The Presumption of Computer Reliability

English and Welsh law assumes computers to be reliable unless proven otherwise, a principle criticized for reversing the burden of proof. Stephen Mason, a leading barrister in electronic evidence, emphasizes the unfairness of this presumption, stating that it impedes individuals from challenging computer-generated evidence.

It is also patently unrealistic.  As I explain in my article on the Principles of Safe Software Development, there are numerous examples of computer systems going wrong:

  • Drug Infusion Pumps,
  • The NASA Mars Polar Lander,
  • The Airbus A320 accident at Warsaw,
  • Boeing 777 FADEC malfunction,
  • The Patriot Missile software problem in the 1991 Gulf War, and many more…

Making software dependable or safe requires enormous effort and care.

Historical Context and the Horizon Scandal

The presumption dates back to an old common law principle that mechanical instruments are presumed to be in working order; the UK Post Office lobbied to have the same principle applied to digital systems. The implications of this change became evident during the Horizon scandal, where flawed computer evidence led to wrongful accusations against post office operators. The repeal in 1999 of the 1984 act, which had required computer evidence to be shown to come from a properly operating system, further weakened safeguards against unreliable computer evidence, exacerbating the issue.

International Influence and Legal Precedents

The influence of English common law extends internationally, perpetuating the presumption of computer reliability in legal systems worldwide. Mason highlights cases from various countries supporting this standard, underscoring its global impact.

“[The Law] says, for the person who’s saying ‘there’s something wrong with this computer’, that they have to prove it. Even if it’s the person accusing them who has the information.”

Stephen Mason

Modern Challenges and the Rise of AI

Advancements in AI technology intensify the need to reevaluate legal presumptions. Noah Waisberg, CEO of Zuva, warns against assuming the infallibility of AI systems, which operate probabilistically and may lack consistency.

With a traditional rules-based system, it’s generally fair to assume that a computer will do as instructed. Of course, bugs happen, meaning it would be risky to assume any computer program is error-free… Machine-learning-based systems don’t work that way. They are probabilistic… you shouldn’t count on them to behave consistently – only to work in line with their projected accuracy… It will be hard to say that they are reliable enough to support a criminal conviction.

Noah Waisberg

This poses significant challenges for relying on AI-generated evidence to support criminal convictions.
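To make Waisberg’s distinction concrete, here is a minimal Python sketch. It is my own illustration, not from the article: the function names, weights, and inputs are all hypothetical. It contrasts a deterministic rules-based check, which always gives the same answer for the same inputs, with a machine-learning-style score, which is a probability rather than a verdict.

import math

def rules_based_shortfall(expected_cash: float, counted_cash: float) -> bool:
    """Deterministic: the same inputs always give the same answer."""
    return counted_cash < expected_cash

def ml_fraud_score(features: list[float]) -> float:
    """Stand-in for a trained classifier: returns P(fraud), not a verdict.
    The weights are hypothetical; a real model learns them from data, and
    its output shifts with every retraining."""
    weights = [0.3, -0.1, 0.5]
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))  # logistic squashing into (0, 1)

print(rules_based_shortfall(1000.0, 950.0))                 # True, every time
print(f"P(fraud) = {ml_fraud_score([1.2, 0.4, 0.9]):.2f}")  # about 0.68

The rules-based answer can be audited line by line; the ML score only makes sense alongside the model’s measured error rate, which is exactly what a court would need to see in evidence.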

Proposed Legal Reforms

James Christie is a software consultant who co-authored recommendations for updating the UK law.  He proposes a two-stage reform to address the issue.

The first would require providers of evidence to show the court that they have developed and managed their systems responsibly, and to disclose their record of known bugs … If they can’t … the onus would then be on the provider of evidence to show the court why none of these failings or problems affect the quality of evidence, and why it should still be considered reliable.

James Christie

In short: first, evidence providers must demonstrate responsible development and management of their systems, including disclosure of known bugs. Second, if they cannot, they must justify why those shortcomings do not affect the reliability of the evidence.
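As a rough illustration only – Christie’s proposal is a legal test, not an algorithm – the two-stage logic can be sketched as a decision procedure in Python. All of the names below are my own invention:

from dataclasses import dataclass

@dataclass
class EvidenceSubmission:
    # Stage 1: did the provider develop and manage the system responsibly,
    # and disclose its record of known bugs?
    responsibly_developed_and_disclosed: bool
    # Stage 2: if not, has the provider shown that the known failings do
    # not affect the quality of this particular evidence?
    failings_shown_not_to_affect_evidence: bool

def evidence_may_be_relied_upon(submission: EvidenceSubmission) -> bool:
    if submission.responsibly_developed_and_disclosed:
        return True  # stage 1 satisfied
    # Otherwise the onus shifts to the provider of the evidence.
    return submission.failings_shown_not_to_affect_evidence

The key design point is the shifted onus: failing stage 1 does not exclude the evidence outright, but it moves the burden of proof onto the party best placed to discharge it.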

The Reality of Software Development

First of all, we need to understand how mistakes made during software development can lead to failures and, ultimately, accidents.

Errors in Software Development

This is illustrated well by the standard BS 5760. During development, people – either on their own or using tools – make mistakes. That is inevitable, and there will be many mistakes in the software, as we will see. These mistakes can lead to faults, or defects, being present in the software. Again, inevitably, some of them get through.

BS 5760-8:1998. Reliability of systems, equipment and components. Guide to assessment of the reliability of systems containing software

If we jump over the fence, the software is now in use. All these faults are present, but they lie hidden – until, that is, some revealing mechanism comes along and triggers them. That revealing mechanism might be a change in the environment, a new operator scenario, or changing inputs that the software receives from its sensors.

That doesn’t mean a failure is inevitable, because many faults never lead to failures that matter. But some do. And that is how we get from mistakes, to faults or defects in the software, to run-time failures.
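To make the idea of a revealing mechanism concrete, here is a short Python sketch modeled on the Patriot missile clock-drift defect listed earlier. The framing is mine, but the underlying arithmetic is real: the system counted time in 0.1-second ticks stored in a 24-bit fixed-point register, and because 0.1 has no exact binary representation, each tick lost a tiny amount of time.

import math

def tick_as_24bit_fixed_point() -> float:
    """0.1 s truncated to 24 fractional bits, as the Patriot stored it."""
    return math.floor(0.1 * 2**24) / 2**24

def clock_error(uptime_hours: float) -> float:
    """Accumulated clock error in seconds after a given continuous uptime."""
    ticks = int(uptime_hours * 3600 * 10)  # number of 0.1 s ticks elapsed
    return ticks * (0.1 - tick_as_24bit_fixed_point())

# The fault is present from the first tick, but short runs hide it. The
# revealing mechanism was a changed operating scenario: at around 100
# hours of continuous operation the drift reaches roughly a third of a
# second – enough, at missile closing speeds, to displace the tracking
# gate by hundreds of meters.
for hours in (1, 8, 100):
    print(f"{hours:>3} h uptime -> clock error {clock_error(hours):.4f} s")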

What Happens to Errors in Software Products?

A long time ago (1984!), a very well-known paper in the IBM Journal of Research and Development looked at how long it took faults in IBM operating system software to become failures for the first time. We are not talking about cowboys producing software that may or may not work, or people in their bedrooms producing apps. We’re talking about a very sophisticated product that was in use all around the world.

Yet what Adams found was that many software faults took more than 5,000 years of operation to be revealed. He found that more than 90% of the faults in the software would take longer than 50 years to become failures.

Edward N. Adams, ‘Optimizing Preventive Service of Software Products’, IBM Journal of Research and Development, Vol. 28, Iss. 1, 1984.

There are two things that Adams’s work tells us.

First, in any significant piece of software, there is a huge reservoir of faults waiting to be revealed. So if people start telling you that their software contains no defects or faults, either they’re dumb enough to believe that or they think you are. What we see in reality is that even in a very high-quality software product, there are a lot of latent defects.

Second, many of them – the vast majority of them – will take a long, long time to reveal themselves. Testing will not reveal them. Using Beta versions will not reveal them. Fifty years of use will not reveal them. They’re still there.
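To see why such a reservoir stays hidden, here is an illustrative Monte Carlo sketch in Python. The distribution is my own toy choice, not Adams’s measured data; the point is the shape of the result, not the exact percentages.

import math
import random

random.seed(1)
N_FAULTS = 100_000

# Hypothetical: each fault's mean time to first failure (MTTF) is drawn
# log-uniformly between 10 and 1,000,000 operating years.
mttfs = [10 ** random.uniform(1, 6) for _ in range(N_FAULTS)]

def fraction_revealed(exposure_years: float) -> float:
    """Expected fraction of faults that fail at least once within the
    exposure, assuming exponentially distributed first-failure times."""
    return sum(1 - math.exp(-exposure_years / m) for m in mttfs) / N_FAULTS

for years in (1, 50, 5000):
    print(f"{years:>5} operating years reveal ~{fraction_revealed(years):.0%} of faults")

Even long exposure samples only the fast-failing tail of the distribution; the bulk of the fault reservoir remains latent, which is the qualitative picture Adams reported.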

[This Section is a short extract from my course Principles of Safe Software Development.]

Conclusion

Legal experts stress the urgency of updating laws to reflect the fallibility of computers, which is crucial for ensuring fair trials and preventing miscarriages of justice. The UK Ministry of Justice acknowledges the need for scrutiny, pending the outcome of the Horizon inquiry, signaling a potential shift towards addressing computer reliability in the legal framework.

Hopefully, the legal profession will come to realize what software engineers have known for a long time: software reliability is difficult to achieve and must be demonstrated, not presumed.
