
Software Safety Principle 4

This post, covering Software Safety Principle 4, is the third in a new series of six blog posts on the Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Principle 4: Hazardous Software Behaviour

The fourth software safety principle is:

Principle 4: Hazardous behaviour of the software shall be identified and mitigated.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

Software safety requirements imposed on a software design can capture the intent of the high-level safety requirements. However, this does not ensure that all of the software’s potentially hazardous behaviours have been considered. Because of how the software has been designed and built, there will frequently be unanticipated behaviours that cannot be understood through a straightforward requirements decomposition. These hazardous software behaviours can arise from either:

  1. Unintended interactions and behaviours arising from software design choices; or
  2. Systematic errors made during software development.

On 1 August 2005, a Boeing 777-200 aircraft, registered 9M-MRG, was being operated on a scheduled international passenger service from Perth to Kuala Lumpur, Malaysia. A fault in the aircraft’s air data inertial reference unit (ADIRU) led to an in-flight upset, and the crew experienced several frightening and contradictory cockpit indications.

This incident illustrates the issues that can result from unintended consequences of software design decisions. Such incidents can only be foreseen through methodical and detailed analysis of potential software failure mechanisms and their repercussions, both on the software itself and on external systems. Once potentially hazardous software behaviour has been identified, safeguards can be put in place to address it. Doing so, however, requires us to examine the potential impact of software design decisions.
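To make the first cause concrete, here is a hypothetical sketch (invented for illustration, not the actual logic of any real system) of how a seemingly reasonable design decision about sensor redundancy management can create latent hazardous behaviour that no requirements decomposition would reveal:

```python
# Hypothetical sketch: a redundancy-management design choice that hides a
# latent hazard. Each sensor reading carries a "failed" flag, and the
# selection logic excludes failed units. The design decision to hold the
# failure flags only in volatile memory means a restart forgets old faults
# and silently re-admits a faulty unit.

def select_reading(readings):
    """Return the first reading not flagged as failed (None if all failed)."""
    for value, failed in readings:
        if not failed:
            return value
    return None

# Normal operation: unit 1 has failed, so unit 2's reading is used.
readings = [(9.81, True), (0.02, False)]
assert select_reading(readings) == 0.02

# After a restart, the failure flags are rebuilt from scratch and the old
# fault is forgotten: the faulty unit's output is selected again.
readings_after_restart = [(9.81, False), (0.02, False)]
assert select_reading(readings_after_restart) == 9.81
```

Each line of this logic is locally defensible; the hazard emerges only from the interaction between the selection rule and the flag-persistence decision, which is exactly why such behaviour must be hunted down by failure analysis rather than assumed away.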

Not all hazardous software behaviour arises as an unintended consequence of the software design. Hazardous behaviour may also result directly from errors made during the software design and implementation phases. Seemingly minor development mistakes can have serious repercussions.
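A toy example of the second cause, with an invented requirement and limit value: a one-character boundary error in an alarm check, the kind of "minor" implementation slip that can disable a safety function at exactly the condition it exists to catch:

```python
# Hypothetical sketch: the (invented) requirement is "warn when pressure
# reaches or exceeds the 250 bar limit". Writing ">" instead of ">=" means
# the warning is never raised at exactly the limit value.

PRESSURE_LIMIT_BAR = 250

def overpressure_warning_buggy(pressure_bar: int) -> bool:
    return pressure_bar > PRESSURE_LIMIT_BAR   # misses the boundary case

def overpressure_warning_fixed(pressure_bar: int) -> bool:
    return pressure_bar >= PRESSURE_LIMIT_BAR  # matches the requirement

assert overpressure_warning_buggy(250) is False  # hazardous: no warning at the limit
assert overpressure_warning_fixed(250) is True
```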

It is important to stress that this is not a concern with software quality in general. For the purposes of software safety assurance, we focus exclusively on faults that can result in hazardous behaviour. As a result, effort can be concentrated on reducing systematic errors in the areas where they might have an impact on safety.

Since systematically establishing direct hazard causality for every error may not be possible in practice, it may instead be pragmatic to accept what is regarded as best practice. However, the justification for doing so ought, at the very least, to be founded on evidence from the software safety community of how the particular type of error under consideration has led to safety-related accidents.

It is also crucial to identify the most critical components of the software design, to guarantee that adequate rigour is applied to their development. If we are to be confident that the software will always behave safely, any potentially hazardous software behaviour must be identified and prevented.

Software Safety Principle 4: End of Part 3 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.


Principles of Software Safety Assurance

This is the first in a new series of blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

In this first of six blog posts, we introduce the subject and the First Principle.

Introduction

Software assurance standards have increased in number along with the use of software in safety-critical applications. There are now several software standards, including the cross-domain ‘functional safety’ standard IEC 61508, the avionics standard DO-178B/C, the railway standard CENELEC EN 50128, and the automotive standard ISO 26262. (The last two are derivatives of IEC 61508.)

Unfortunately, there are significant discrepancies in vocabulary, concepts, requirements, and recommendations among these standards. It can seem as if there is no way to reconcile them.

However, the common software safety assurance principles that can be observed from both these standards and best practices are few (and manageable). These concepts are presented here together with their justification and an explanation of how they relate to current standards.

These principles serve as the unchanging foundation of any software safety argument since they hold true across projects and domains. Of course, accepting these principles does not exempt one from adhering to domain-specific standards. However, they:

  • Provide a reference model for cross-sector certification; and
  • Aid in maintaining comprehension of the “big picture” of software safety issues while analysing and negotiating the specifics of individual standards.

Software Safety Principles

Principle 1: Requirements Validity

The first software safety assurance principle is:

Principle 1: Software safety requirements shall be defined to address the software contribution to system hazards.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

The evaluation and reduction of risk is crucial to the design of safety-critical systems. When specific environmental conditions apply, system-level hazards, such as unintended release of braking in cars or the absence of stall warnings in aircraft, can result in accidents. Although software is intangible, it can implement system control or monitoring functions that contribute to these hazards (e.g. software implementing anti-lock braking or aircraft warning functions).

Typically, the system safety assessment process uses safety analysis techniques such as Fault Tree Analysis or Hazard and Operability (HAZOP) Studies to pinpoint how software, along with other components such as sensors, actuators, or power sources, can contribute to hazards. The results of these analyses ought to influence the formulation of safety requirements and their allocation to software components.
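The quantitative side of a fault tree can be sketched in a few lines. The following is a minimal illustration, with invented events and probabilities, of estimating a top-event (hazard) probability from minimal cut sets, assuming independent basic events and the rare-event approximation:

```python
# Hypothetical sketch: fault tree quantification via minimal cut sets.
# Events and numbers are invented for illustration only.

from math import prod

basic_event_prob = {
    "sensor_fault": 1e-4,     # wheel-speed sensor fails
    "sw_omission": 1e-5,      # software omits a required command
    "actuator_stuck": 1e-6,   # actuator jams
}

# Each minimal cut set is a smallest combination of basic events
# sufficient to cause the top-event hazard.
minimal_cut_sets = [
    {"sw_omission"},                      # software failure alone suffices
    {"sensor_fault", "actuator_stuck"},   # two hardware faults together
]

def top_event_probability(cut_sets, probs):
    """Rare-event approximation: sum over cut sets of the product of
    their (assumed independent) basic-event probabilities."""
    return sum(prod(probs[e] for e in cs) for cs in cut_sets)

p = top_event_probability(minimal_cut_sets, basic_event_prob)
# Here the single-event software cut set dominates the result, which is
# why software contributions attract dedicated safety requirements.
```

In this toy tree the software appears as a single-point cause of the hazard, so its cut set dominates; this is the kind of analysis result that drives the allocation of safety requirements to software components.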

It is important to remember that, at this stage, software is treated as a black box: it is used to enable specific functions, with limited visibility of how those functions are implemented. The risk from some system hazards can rise to unacceptable levels if hazardous software failures are not identified and suitable safety requirements are not defined and applied.

Examples of software requirements not being adequately defined – and the effects thereof – have been reported by the US Food and Drug Administration (FDA).

Simply put, software is a fundamental enabling technology employed in safety-critical systems. Assessing the ways in which software might increase system risk should be a crucial component of the overall system safety process. Hazardous software contributions discovered in that process are then addressed by defining safety requirements that minimize them.

It is critical that these contributions are described in a clear and testable way, namely by identifying the exact types of software failures that can result in hazards. If not, we risk creating generic software safety requirements, or even mere correctness requirements, that do not take into account the specific hazardous failure modes that affect the system’s safety.
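The difference between a generic and a testable safety requirement can be made concrete. Below is a hypothetical sketch (requirement, signal names, and trace format are all invented for illustration) of a specific failure-mode requirement expressed as an executable check over a behaviour trace:

```python
# Generic (weak):     "the braking software shall operate correctly"
# Specific (testable): "the software shall not command brake release while
#                       vehicle speed > 0 unless a driver release command
#                       is present"
# The second form names an exact hazardous failure mode and can be checked.

def violates_requirement(trace):
    """Return the first trace step that commands brake release at speed
    without a driver release command, or None if the trace is safe."""
    for step in trace:
        if (step["brake_release_cmd"]
                and step["speed_kph"] > 0
                and not step["driver_release"]):
            return step
    return None

trace = [
    {"speed_kph": 0,  "driver_release": True,  "brake_release_cmd": True},   # safe
    {"speed_kph": 42, "driver_release": False, "brake_release_cmd": True},   # hazardous
]
assert violates_requirement(trace) == trace[1]
```

A requirement phrased this way can be falsified by test evidence or runtime monitoring; "operate correctly" cannot.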

Principles of Software Safety Assurance: End of Part 1 (of 6)
