Software Safety Principle 4

This is the third in a series of six blog posts on the Principles of Software Safety Assurance, in which we look at the 4+1 principles that underlie all software safety standards. (The previous post in the series is here.)

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold across projects and domains.

The principles serve as a guide for cross-sector certification and help maintain a "big picture" view of software safety issues while evaluating and negotiating the specifics of individual standards.

Principle 4: Hazardous Software Behaviour

The fourth software safety principle is:

Principle 4: Hazardous behaviour of the software shall be identified and mitigated.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

Software safety requirements imposed on a software design can capture the intent of the high-level safety requirements. However, this does not ensure that all of the software's potentially hazardous behaviours have been considered. Because of the way the software has been designed and built, there will often be unanticipated behaviours that cannot be understood through a straightforward requirements decomposition. Such hazardous software behaviours may arise from either of the following:

  1. Unintended interactions and behaviours arising from software design decisions; or
  2. Systematic errors introduced during software development.

On 1 August 2005, a Boeing 777-200 aircraft, registered 9M-MRG, was operating a scheduled international passenger service from Perth to Kuala Lumpur, Malaysia, when the crew experienced several alarming and contradictory cockpit indications. The investigation found that the aircraft's air data inertial reference unit software had used data from an accelerometer that had failed years earlier.

This incident illustrates the issues that can result from unintended consequences of software design. Such incidents can only be foreseen through a methodical and detailed analysis of potential software failure mechanisms and their effects (both on the software itself and on external systems). Once hazardous software behaviour has been identified, safeguards can be put in place to address it; doing so, however, requires us to examine the potential impact of software design decisions.

Not all hazardous software behaviour arises as an unintended consequence of the software design. It may also result directly from errors made during software design and implementation. Seemingly minor development mistakes can have serious repercussions.
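As a purely illustrative sketch (not drawn from the original paper, and with all names hypothetical), consider a sensor-selection routine in which a single wrong comparison operator causes data from a failed sensor to be used, silently turning a minor coding slip into hazardous behaviour:

```python
# Hypothetical illustration of a seemingly minor development mistake with
# serious consequences: a sensor-selection routine where one wrong operator
# causes data from a failed sensor to be used.

FAILED = 1  # status flag: non-zero means the sensor has been marked failed


def select_accel(sensors):
    """Return the reading of the first *healthy* sensor.

    sensors: list of (status, reading) tuples.
    Intended behaviour: skip any sensor whose status flag is FAILED.
    """
    for status, reading in sensors:
        if status != FAILED:  # correct check: skip failed sensors
            return reading
    raise RuntimeError("no healthy sensor available")


def select_accel_buggy(sensors):
    # Bug: '==' instead of '!=' -- the routine now *prefers* failed
    # sensors, feeding stale or erroneous data to downstream systems.
    for status, reading in sensors:
        if status == FAILED:
            return reading
    # Falls through to the first sensor regardless of its status.
    return sensors[0][1]


if __name__ == "__main__":
    # Sensor 0 failed long ago and latched an extreme reading;
    # sensor 1 is healthy and reports a normal acceleration.
    sensors = [(FAILED, 99.9), (0, 1.0)]
    print(select_accel(sensors))        # 1.0  -- correct: failed unit skipped
    print(select_accel_buggy(sensors))  # 99.9 -- hazardous: failed unit used
```

A one-character error like this is trivial to make and hard to spot in review, which is precisely why safety assurance concentrates analysis and rigour on the code paths whose faults could lead to hazardous behaviour.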

It is important to stress that this is not a concern with software quality in general. For software safety assurance, we focus exclusively on faults that could result in hazardous behaviour. Effort can then be concentrated on reducing systematic errors in the areas where they might affect safety.

Since systematically establishing direct hazard causality for every error may not be practicable, it may sometimes be necessary to fall back on accepted best practice. However, the justification for doing so should at least be grounded in the software safety community's knowledge of how the particular class of error under consideration has contributed to safety-related accidents.

It is also crucial to identify the most critical components of the software design, to ensure that adequate rigour is applied to their development. If we are to be confident that the software will always behave safely, any potentially hazardous software behaviour must be identified and mitigated.

Software Safety Principle 4: End of Part 3 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

Meet the Author

My name's Simon Di Nucci. I've been a practicing system safety engineer for the last 25 years. I've worked in many domains: aircraft, ships, submarines, sensors, and command and control systems, along with some rail and air traffic management work, and a great deal of software safety. So I've done a lot of different things!

Learn more about this subject in my course ‘Principles of Safe Software’ here. The next post in the series is here.