
Functional Hazard Analysis

In this full-length (40-minute) session, The Safety Artisan looks at Functional Hazard Analysis, or FHA, which is Task 208 in Mil-Std-882E. FHA analyses software, complex electronic hardware, and human interactions. We explore the aim, description, and contracting requirements of this Task, and provide extensive commentary on it. (We refer to other lessons for special techniques for software safety and Human Factors.)

Functional Hazard Analysis: Context

So how do we analyze software safety?

Before we even start, we need to identify those system functions that may impact safety. We can do this by performing a Functional Failure Analysis (FFA) of all system requirements that might credibly lead to human harm.

An FFA looks at functional requirements (the system should do ‘this’ or ‘that’) and examines what could go wrong. What if:

  • The function does not work when needed?
  • The function works when not required?
  • The function works incorrectly? (There may be more than one way for this to happen.)

(A variation of this technique is explained here.)

If the function could lead to a hazard then it is marked for further analysis. This is where we apply the FHA, Task 208.
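To make the FFA concrete, here is a minimal sketch of the idea in Python. The function names, severity labels, and worksheet structure are illustrative assumptions, not part of Task 208; a real FFA would be recorded in a worksheet derived from the system specification and reviewed by the engineering team.

```python
# Minimal Functional Failure Analysis (FFA) sketch.
# The functions and severities below are illustrative assumptions --
# a real FFA worksheet would come from the system specification.

# Failure guide words applied to every functional requirement.
GUIDE_WORDS = [
    "Function does not work when needed (omission)",
    "Function works when not required (commission)",
    "Function works incorrectly (malfunction)",
]

# Hypothetical system functions, with the credible worst-case harm
# assessed against each guide word (None = no credible harm).
FUNCTIONS = {
    "Deploy landing gear": ["Catastrophic", "Critical", "Critical"],
    "Display cabin temperature": [None, None, None],
}

def functions_needing_fha(functions):
    """Flag any function with at least one credibly harmful failure."""
    flagged = []
    for name, severities in functions.items():
        for guide_word, severity in zip(GUIDE_WORDS, severities):
            if severity is not None:
                flagged.append((name, guide_word, severity))
    return flagged

for name, guide_word, severity in functions_needing_fha(FUNCTIONS):
    print(f"{name}: {guide_word} -> {severity}; mark for FHA (Task 208)")
```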

Functional Hazard Analysis: The Lesson

This is the seven-minute demo; the full version is 40 minutes long.

Topics: Functional Hazard Analysis

  • Task 208 Purpose;
  • Task Description;
  • Update & Reporting;
  • Contracting; and
  • Commentary.

Transcript: Functional Hazard Analysis


Introduction

Hello, everyone, and welcome to The Safety Artisan, home of safety engineering training. I’m Simon, and today we’re going to be looking at how you analyse the safety of functions in complex hardware and software. We’ll see what that’s all about in just a second.

Functional Hazard Analysis

I’m just going to get to the right page. As you can see, Functional Hazard Analysis is Task 208 in Mil-Std-882E.

Topics for this Session

What we’ve got for today: we have three slides on the purpose of Functional Hazard Analysis, all taken from the standard. We’ve got six slides of task description – that’s the text from the standard, plus two tables that show you how it’s done, taken from another part of the standard, not from Task 208. Then we’ve got update and reporting, another two slides; contracting, two slides; and five slides of commentary, which again include a couple of tables to illustrate what we’re talking about.

Functional HA Purpose #1

What we’re going to talk about is, as I say, Functional Hazard Analysis. So, first of all, what’s the purpose of it? In classic 882 style, Task 208 is to perform this functional hazard analysis on a system or subsystem (or more than one). As with all the other tasks, it’s used to identify and classify system functions and the safety consequences of functional failure or malfunction – in other words, hazards.

Now, I should point out at this stage that the standard is focused on malfunctions of the system. The truth is that, in the real world, lots of software-intensive systems have been involved in accidents that have killed many people, even when they were functioning as intended. That’s one of the short-sighted aspects of this Mil-Std: it focuses on failure. The idea that, even if something is performing as specified, the specification might be wrong, or there might be some disconnect between what the system is doing and what the human expects – the way the standard is written just doesn’t recognize that. So, it’s not very good in that respect. However, bearing that in mind, let’s carry on with looking at the task.

Functional HA Purpose #2

We’re going to look at these consequences in terms of severity – severity only; we’ll come back to that – for the purpose of identifying what the standard calls safety-critical functions, safety-critical items, safety-related functions, and safety-related items. A quick word on that: I hate the term ‘safety-critical’ because it suggests a binary choice – either it’s safety-critical, yes, or it’s not safety-critical, no. Lots of people take that to mean that if it’s not safety-critical, then it’s got nothing to do with safety. They don’t recognize that there’s a sliding scale between maximum safety criticality and none whatsoever. That’s led to a lot of bad thinking and bad behaviour over the years, where people do everything they can to pretend that something isn’t safety-related by saying, “Oh, it’s not safety-critical, therefore we don’t have to do anything.” That kind of laziness kills people, is the short answer.

Anyway, moving on. So, we’ve got these SCFs, SCIs, SRFs, and SRIs, and they’re supposed to be allocated, or mapped, to a system design architecture. The assumption in this task is that we’re doing this early – we’ll see that later – and that the system design, the system architecture, is still up for grabs; we can still influence it. Often that is not the case these days. This standard was written many years ago, when the military used to buy loads of bespoke equipment and have it all developed from new. That doesn’t happen so much anymore in the military, and it certainly doesn’t happen in many other walks of life – but we’ll talk about how you deal with the realities later.

We’re allocating these functions and these items of interest to hardware, software, and human interfaces. And I should point out that all of these things are complex: software is complex, the human is complex, and we’re talking about complex hardware. We’re talking about components where you can’t just say, “Oh, it’s got a reliability of X, and that’s how often it goes wrong,” because simple components that are only really subject to random failure are not what we’re talking about here. We’re talking about complex stuff, where systematic failure dominates over random, simple hardware failure. So, that’s the focus of this task and what we’re talking about. That’s not explained in the standard, but that’s what’s going on.

Functional HA Purpose #3

Now, our third slide on purpose. We use the FHA to identify the consequences of malfunction, functional failure, or lack of function. As I said just now, we need to do this as early as possible in the systems engineering process to enable us to influence the design. Of course, this assumes that there is a systems engineering process – that’s not always the case; we’ll talk about that at the end as well. We’re going to identify and document these functions and items, and to allocate and, as the standard says, partition them in the software design architecture. When we say partition, that’s jargon for separating them into independent functions; we’ll see the value of that later on. Then we’re going to identify requirements and constraints to put on the design team, to say, “To achieve this allocation and this partitioning, this is what you must do and this is what you must not do.” So again, the assumption is that we’re doing this early and that there’s a significant amount of bespoke design yet to be done.

(End of Transcription for the 7-minute demo.)

Then What?

Once the FHA has identified the required ‘Level of Rigor’, we need to translate that into a suitable software development standard. This might be:

  • RTCA DO-178C (also known as ED-12C) for civil aviation;
  • The US Joint Software System Safety Engineering Handbook (JSSEH) for military systems;
  • IEC 61508 (functional safety) for the process industry;
  • CENELEC EN 50128 for the rail industry; and
  • ISO 26262 for automotive applications.

Such standards use Safety Integrity Levels (SILs) or Development Assurance Levels (DALs) to enforce appropriate Levels of Rigor. You can learn about those in my course Principles of Safe Software Development.
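As an illustration of how severity and software autonomy combine to set a Level of Rigor, here is a sketch in the spirit of Mil-Std-882E’s Software Safety Criticality Matrix. The matrix values below are an assumption for illustration only; consult Table V of the standard (or your chosen standard’s SIL/DAL tables) for the authoritative mapping.

```python
# Sketch of a severity x software-control-category lookup in the spirit
# of Mil-Std-882E's Software Safety Criticality Matrix (Table V).
# The values below are illustrative assumptions, not the standard's text.

SEVERITIES = ["Catastrophic", "Critical", "Marginal", "Negligible"]

# Software Control Categories: 1 = autonomous ... 5 = no safety impact.
# Each row maps a control category to a Software Criticality Index (SwCI)
# per severity; SwCI 1 demands the highest Level of Rigor.
CRITICALITY_MATRIX = {
    1: [1, 1, 3, 5],
    2: [1, 2, 3, 5],
    3: [2, 3, 4, 5],
    4: [3, 4, 4, 5],
    5: [5, 5, 5, 5],
}

def software_criticality(severity: str, control_category: int) -> int:
    """Look up the SwCI that drives the Level of Rigor."""
    return CRITICALITY_MATRIX[control_category][SEVERITIES.index(severity)]

# Example: software with semi-autonomous control of a catastrophic hazard.
print(software_criticality("Catastrophic", 2))  # -> 1 (highest rigor)
```

The point of the lookup is that rigor depends not just on how bad the mishap could be, but on how much the system relies on the software to prevent it.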

End


Software Safety Assurance and Standards

This post, Software Safety Assurance and Standards, is the fifth in a series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and help you keep sight of the ‘big picture’ of software safety while evaluating and negotiating the specifics of individual standards.

Relationship to Existing Software Safety Standards

The principles of software safety assurance discussed in this article are not explicit in most software safety standards, although they are typically present implicitly. However, by concentrating only on adherence to the letter of these standards, software developers are likely to lose sight of the standards’ primary goals (e.g. through box-ticking). Below, we look at manifestations of each of the principles in some of the most popular software safety standards – IEC 61508, ISO 26262, and DO-178C.

Principle 1

IEC 61508 and ISO 26262 both demonstrate how hazard analysis at the system level and software safety requirements are linked. DO-178C requires that high-level software requirements be defined to address the system requirements allocated to software to prevent system hazards. Particularly when used in conjunction with its companion standard, ARP 4754, this addresses Principle 1.

[In military aviation, I’m used to seeing DO-178 used in conjunction with Mil-Std-882. This also links hazard analysis to software safety requirements, although perhaps not as thoroughly as ARP 4754.]

Principle 2

Traceability of software requirements is always required. The standards also place a strong emphasis on the iterative validation of the software requirements.

Specific examples of requirements decomposition models are provided by DO-178C and ISO 26262. Capturing the justification for the required traceability (a crucial aspect of Principle 2) is an area where standards frequently fall short.

What is particularly lacking is a focus on upholding the intent of the software safety requirements. This demands richer forms of traceability that take the requirements’ purpose into account, rather than merely syntactic links between artefacts at various phases of development.
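To illustrate the richer traceability being called for, here is a minimal sketch of a trace record that carries a requirement’s intent (its rationale) alongside the usual syntactic links. The field names and example content are hypothetical, for illustration only; they are not drawn from any standard.

```python
# Sketch of a trace record that preserves the *intent* of a software
# safety requirement, not just syntactic links between artefacts.
# All field names and example content are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class SafetyRequirementTrace:
    requirement_id: str
    text: str
    rationale: str            # why the requirement exists (its intent)
    source_hazard: str        # the system-level hazard it mitigates
    design_elements: list = field(default_factory=list)
    verification: list = field(default_factory=list)

trace = SafetyRequirementTrace(
    requirement_id="SSR-042",
    text="Close the fuel valve within 50 ms of a flame-out signal.",
    rationale="Limits unburnt fuel release, which could ignite (H-7).",
    source_hazard="H-7: Uncontained engine fire",
    design_elements=["valve_controller.c: close_valve()"],
    verification=["Test case TC-118 (timing)", "Code review CR-55"],
)

# A reviewer can now check each refinement against the rationale,
# not merely confirm that a link exists.
print(trace.rationale)
```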

Principle 3

Guidance on requirement satisfaction forms the basis of the software safety standards. Although there are clear disparities in the advised methods of satisfaction, this principle is generally thoroughly addressed (for example, DO-178 traditionally placed a strong emphasis on testing).

[Def Stan 00-55 places more emphasis on proof, not just testing. However, this onerous software safety standard has fallen out of fashion.]

Principle 4

This principle requires us to demonstrate the absence of errors introduced during the software lifecycle. Aspects of this principle can be seen in the standards. However, software hazard analysis is the part that receives the least attention in all of the standards.

[N.B. The combination of Mil-Std-882E and the Joint Software Systems Safety Engineering Handbook places a lot of emphasis on this aspect.]

The standards imply a process in which system-level safety analysis produces the requirements, including the safety requirements assigned to software, and the purpose of software development is then to demonstrate that these requirements are correctly implemented. At later phases of the development process, the requirements are refined and implemented without explicitly applying software hazard analysis.

There is no specific requirement in DO-178C to identify “emerging” safety risks during software development, but it does permit recognized safety issues to be fed back to the system level.

Principle 4+1

All standards share the idea of modifying the software assurance approach in accordance with “risk.” However, there are significant differences in how the software’s criticality is assessed: IEC 61508 establishes a Safety Integrity Level based on the required risk reduction (a probability delta), DO-178B emphasizes severity, and ISO 26262 adds the idea of the vehicle’s controllability. The suggested strategies and processes also vary greatly across the levels of criticality.

[The Mil-Std-882E approach is to set a ‘level of rigor’ for software development, using a combination of mishap severity and the reliance placed on the software.]

Software Safety Assurance and Standards: End of Part 5 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful, then please leave a review below. If you have a private question or comment, then please connect here.