
Software Safety Assurance

This is the fourth in a series of six blog posts on the Principles of Software Safety Assurance. In the series, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and help keep the “big picture” of software safety issues in view while evaluating and negotiating the specifics of individual standards.

Software Assurance = Justified Confidence

[The original authors referred to Principle 4+1 as ‘confidence’, but this term is not well recognized, so I have used ‘assurance’. The two terms are related. Both terms get us to ask: how much safety is enough? This is also the topic addressed in my blog post on Proportionality.]

Principle 4+1:

The confidence established in addressing the software safety principles shall be commensurate to the contribution of the software to system risk.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

All safety-related software must adhere to the four principles described earlier in this series. Evidence must be presented to demonstrate that each principle has been established for the software.

That evidence may take many different forms, depending on the characteristics of the software system itself, the hazards that are present, and the principle being demonstrated. The strength and quantity of the supporting evidence determine how confidently, or with what assurance, each principle is established.

It is therefore crucial to confirm that the level of confidence achieved is acceptable. This is frequently accomplished by ensuring that the confidence attained is commensurate with the contribution the software makes to system risk. This strategy ensures that, when producing evidence, the most attention goes to the areas that reduce safety risk the most.

This method is extensively used today. Many standards employ concepts such as integrity or assurance levels to describe the amount of confidence required in a given software function.
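
To make the idea concrete, the sketch below shows one way an assurance-level scheme could map a software function's risk contribution to the evidence demanded of it. The level names and evidence lists are hypothetical, invented for illustration rather than taken from IEC 61508, DO-178C, or any other standard.

```python
# Illustrative sketch only: the level names and evidence requirements below
# are hypothetical, not taken from IEC 61508, DO-178C, or any other standard.

ASSURANCE_LEVELS = {
    # risk contribution -> (assurance level, indicative evidence required)
    "high":   ("Level A", ["formal verification", "MC/DC testing", "independent review"]),
    "medium": ("Level B", ["statement coverage testing", "peer review"]),
    "low":    ("Level C", ["functional testing"]),
}

def required_evidence(risk_contribution: str) -> tuple[str, list[str]]:
    """Return the assurance level and indicative evidence for a risk contribution."""
    return ASSURANCE_LEVELS[risk_contribution]

level, evidence = required_evidence("high")
print(f"{level}: {', '.join(evidence)}")
```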

Examples

The flight control system for the Boeing 777 airplane is a Fly-By-Wire (FBW) system … The Primary Flight Computer (PFC) is the central computation element of the FBW system. The triple modular redundancy (TMR) concept also applies to the PFC architectural design. Further, the N-version dissimilarity issue is integrated to the TMR concept.
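
To illustrate the triple modular redundancy concept mentioned in the quote, here is a minimal sketch of a majority voter over three redundant channel outputs. It shows the underlying idea only, not the Boeing 777 implementation.

```python
# Minimal illustration of triple modular redundancy (TMR) voting.
# Not the Boeing 777 design: real voters compare within tolerances,
# handle timing, and manage isolation of failed channels.

from collections import Counter

def tmr_vote(channel_a: int, channel_b: int, channel_c: int) -> int:
    """Return the majority value of three redundant channels.

    Raises RuntimeError if all three channels disagree, since no
    majority exists and the system must fall back to a safe state.
    """
    counts = Counter([channel_a, channel_b, channel_c])
    value, votes = counts.most_common(1)[0]
    if votes < 2:
        raise RuntimeError("No majority: all channels disagree")
    return value

# A single faulty channel (channel_b) is out-voted by the other two.
assert tmr_vote(100, 937, 100) == 100
```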

Details are given of a ‘special case procedure’ within the principles’ framework which has been developed specifically to handle the particular problem of the assessment of software-based protection systems. The application of this ‘procedure’ to the Sizewell B Nuclear Power Station computer-based primary protection system is explained.

Suitability of Evidence

Once the required level of confidence has been determined, it is crucial to be able to judge whether it has been achieved. Several factors must be taken into account when assessing the degree of confidence with which each principle has been implemented.

The suitability of the evidence should be considered first, together with the limitations of each type of evidence being used. These limitations affect the degree of confidence that can be placed in each type of evidence with respect to a given principle.

Examples of these limitations include the degree of test coverage that can be achieved, the fidelity of the models employed in formal analysis approaches, and the subjectivity of review and inspection. Most techniques have limits on what they can demonstrate.
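
A small, hypothetical example illustrates why a coverage figure on its own is limited evidence: the test below achieves 100% statement coverage of the function under test, yet never exercises a hazardous failure mode.

```python
# Hypothetical example: 100% statement coverage can still miss a hazardous case.

def scale_demand(demand: float, sensor_count: int) -> float:
    # Every statement here is executed by the test below...
    return demand / sensor_count

def test_scale_demand():
    assert scale_demand(10.0, 2) == 5.0  # covers all statements

test_scale_demand()

# ...yet scale_demand(10.0, 0) raises ZeroDivisionError: a failure mode the
# coverage metric says nothing about. Coverage measures what was executed,
# not which input conditions were explored.
```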

Because of these limitations, it may be necessary to combine diverse types of evidence to reach the required degree of confidence in any one principle. The trustworthiness of each item of evidence must also be taken into account; that is, the degree of confidence that the evidence item actually does what it claims to do.

This is also frequently referred to as evidence rigor or evidence integrity. The rigor of the technique employed to produce an evidence item determines its trustworthiness. The primary factors that affect trustworthiness are tools, personnel, methodology, level of audit and review, and independence.
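
One way to picture these factors is as metadata attached to each item of evidence in a safety argument. The structure below is a hypothetical sketch of such metadata, not a schema taken from any standard or tool.

```python
# Hypothetical sketch of evidence-item metadata; not drawn from any standard.

from dataclasses import dataclass

@dataclass
class EvidenceItem:
    principle: str          # which safety principle the evidence supports
    kind: str               # e.g. "unit test results", "formal proof", "review"
    tool_qualified: bool    # Tools: was a qualified/trusted tool used?
    author_competent: bool  # Personnel: produced by suitably competent staff?
    method_defined: bool    # Methodology: produced by a defined, rigorous process?
    audited: bool           # Level of audit and review applied
    independent: bool       # Independence of those producing/checking it

    def rigor_score(self) -> int:
        """Crude illustrative tally of how many trustworthiness factors are met."""
        return sum([self.tool_qualified, self.author_competent,
                    self.method_defined, self.audited, self.independent])

item = EvidenceItem("Principle 1", "unit test results",
                    tool_qualified=True, author_competent=True,
                    method_defined=True, audited=False, independent=False)
print(item.rigor_score())  # 3 of 5 factors met
```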

The four software safety principles themselves will never change. However, the confidence with which those principles are established can vary widely. We now know that, for any given system, a determination must be made of the degree of assurance with which the principles must be established. We now have our guiding principle.

Because it affects how the previous four principles are put into practice, this principle is also known as Principle 4+1.

Software Safety Assurance: End of Part 4 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comment then please connect here.


Principles of Software Safety Assurance

This is the first in a new series of blog posts on the Principles of Software Safety Assurance. In the series, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and help keep the “big picture” of software safety issues in view while evaluating and negotiating the specifics of individual standards.

In this first of six blog posts, we introduce the subject and the First Principle.

Introduction

Software assurance standards have multiplied along with the use of software in safety-critical applications. There are now several software standards, including the cross-domain ‘functional safety’ standard IEC 61508, the avionics standard DO-178B/C, the railway standard CENELEC EN 50128, and the automotive standard ISO 26262. (The last two are derivatives of IEC 61508.)

Unfortunately, there are significant discrepancies in vocabulary, concepts, requirements, and recommendations among these standards. It can seem as though there is no way out of this maze.

However, the common software safety assurance principles that can be observed across these standards and best practices are few in number (and manageable). These principles are presented here, together with their justification and an explanation of how they relate to current standards.

These principles serve as the unchanging foundation of any software safety argument, since they hold true across projects and domains. Of course, accepting these principles does not exempt anyone from adhering to domain-specific standards. However, they:

  • Provide a reference model for cross-sector certification; and
  • Help maintain a “big picture” view of software safety issues while analysing and negotiating the specifics of individual standards.

Software Safety Principles

Principle 1: Requirements Validity

The first software safety assurance principle is:

Principle 1: Software safety requirements shall be defined to address the software contribution to system hazards.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

The evaluation and reduction of risk is crucial to the design of safety-critical systems. Under specific environmental conditions, system-level hazards such as unintended release of braking in cars or the absence of a stall warning in aircraft can result in accidents. Although intangible, software can implement system control or monitoring functions that contribute to these hazards (e.g. software implementing antilock braking or aircraft warning functions).

Typically, the system safety assessment process uses safety analysis techniques such as Fault Tree Analysis or Hazard and Operability (HAZOP) Studies to pinpoint how software, along with other components such as sensors, actuators, or power sources, can contribute to hazards. The results of these analyses should inform the formulation of safety requirements and their allocation to software components.
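
As a simple illustration of the kind of quantitative result a fault tree can yield, the sketch below combines basic-event probabilities through AND/OR gates, under the usual assumption that the events are independent. The tree structure and the probability figures are invented for this example.

```python
# Illustrative fault tree evaluation, assuming independent basic events.
# The tree structure and probabilities are invented for this example.

from math import prod

def p_and(*probs: float) -> float:
    """AND gate: the output event occurs only if all inputs occur."""
    return prod(probs)

def p_or(*probs: float) -> float:
    """OR gate: the output event occurs if any input occurs (independent events)."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical hazard: "unintended braking release".
p_software_cmd_fault = 1e-4   # software issues a spurious release command
p_sensor_fault = 1e-3         # wheel-speed sensor gives a bad reading
p_interlock_fails = 1e-2      # independent hardware interlock fails to block it

# Hazard occurs if (software fault OR sensor fault) AND the interlock fails.
p_hazard = p_and(p_or(p_software_cmd_fault, p_sensor_fault), p_interlock_fails)
print(f"P(hazard) = {p_hazard:.2e}")  # approx. 1.10e-05
```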

It is crucial to remember that, at this stage, the software is treated as a black box: it is relied upon to enable specific functions, with limited visibility into how those functions are implemented. The risk from some system hazards can rise to unacceptable levels if hazardous software failures are not identified and suitable safety requirements are not defined and applied.

Examples of software requirements not being adequately defined – and the effects thereof – were reported by the US Food and Drug Administration (FDA).

Simply put, software is a fundamental enabling technology in safety-critical systems. Assessing the ways in which software might increase system risk should be a crucial component of the overall system safety process. Hazardous software contributions discovered in that process are then addressed by defining safety requirements to mitigate them.

It is critical that these contributions are described in a clear and testable way, namely by identifying the exact types of software failure that can lead to hazards. If not, we risk creating generic software safety requirements (or mere correctness requirements) that fail to address the specific hazardous failure modes that affect the system's safety.
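
As a hypothetical illustration of the difference, compare a generic requirement (‘the braking software shall be correct’) with a testable one that names the hazardous failure mode, such as ‘the software shall not command brake release while measured wheel speed exceeds 5 km/h’. The sketch below shows how the testable version can be checked directly; the function, threshold, and figures are all invented.

```python
# Hypothetical, testable software safety requirement (figures invented):
# "The software shall not command brake release while measured wheel
#  speed exceeds 5 km/h."

WHEEL_SPEED_RELEASE_LIMIT_KMH = 5.0

def may_release_brakes(wheel_speed_kmh: float, driver_requested: bool) -> bool:
    """Permit brake release only when the safety requirement allows it."""
    return driver_requested and wheel_speed_kmh <= WHEEL_SPEED_RELEASE_LIMIT_KMH

def test_no_release_at_speed():
    # Directly exercises the hazardous failure mode the requirement names.
    assert not may_release_brakes(wheel_speed_kmh=50.0, driver_requested=True)

def test_release_when_safe():
    assert may_release_brakes(wheel_speed_kmh=0.0, driver_requested=True)

test_no_release_at_speed()
test_release_when_safe()
```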

Principles of Software Safety Assurance: End of Part 1 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comment then please connect here.