So, how do we identify and analyze functional hazards? I’ve seen a lot of projects and programs. We’re great at doing the physical hazards, but not so good at the functional hazards.
So, when I talk about physical and functional hazards, the physical stuff, I think we’re probably all very familiar with them. They’re all to do with energy and toxicity.
Physical Hazards
So with energy, it might be fire, it might be electric shock. Potential energy: the potential energy of someone at height, or something falling, and the impact of the kinetic energy. And then of course, in terms of toxicity, we’ve got hazardous chemicals, which we have to deal with. And then we’ve got biological hazards, plus smoke and toxic gases, often from fires or chemical reactions.
So those are your physical hazards. As I said, we tend to be good at dealing with those. We’re used to dealing with that stuff. And most projects I’ve been on have been pretty good at identifying and analyzing that stuff. Not so for functional hazards.
Functional Hazards
Even today, I’ve been on lots of projects where functional hazards are just ignored completely, or they’re only dealt with partially. So let’s explain what I mean by functional hazards. What we’re talking about is where a system is required to do something, to perform some function. For example, cars move. They start, they move, and they stop, hopefully.
Loss of Function
But what happens when those functions go wrong? What happens when we don’t get the function when we need it? The brakes fail on your car, for example. And so that’s a fairly obvious one. When functional hazards are looked at, it’s usually the functional failures that get attention.
But while that is the obvious failure mode, the less obvious failure modes tend to be more dangerous, and there are two of them.
Other Functional Failure Modes
So what happens if things work when they shouldn’t? What if you’re driving along on a road or the motorway, perhaps at high speed, and your brakes slam on for no apparent reason? Perhaps there is somebody behind you. Do you have a collision or do you lose control on the road and crash?
What if the function works, but it works incorrectly? For example, you turn the temperature down but instead, it goes up. Or you steer to the left, but instead, your vehicle goes to the right.
What if a display shows the wrong information? If you’re in a plane, maybe you’ve got an altimeter that tells you how high you are. It would be dangerous if the altimeter told you that you were level or climbing, but you were descending towards the ground. Yeah, we’ve had lots of that kind of accident.
So there’s an overview of what I mean by physical and functional hazards.
The Webinar: Identify and Analyze Functional Hazards
In this video, I look at Functional Hazard Analysis (FHA), which is Task 208 in Mil-Std-882E. FHA analyzes software, complex electronic hardware, and human interactions. I explore the aim, description, and contracting requirements of this Task, and provide extensive commentary on it. (I refer to other lessons for special techniques for software safety and Human Factors.)
This video, and the related webinar ‘Identify & Analyze Functional Hazards’, deal with an important topic. Programmable electronics and software now run so much of our modern world. They control many safety-related products and services. If they go wrong, they can hurt people.
I’ve been working with software-intensive systems since 1994. Functional hazards are often misunderstood, or overlooked, as they are hidden. However, the accidents that they can cause are very real. If you want to expand your analysis skills beyond just physical hazards, I will show you how.
Before we even start, we need to identify those system functions that may impact safety. We can do this by performing a Functional Failure Analysis (FFA) of all system requirements that might credibly lead to human harm.
An FFA looks at functional requirements (the system should do ‘this’ or ‘that’) and examines what could go wrong. What if:
The function does not work when needed?
The function works when not required?
The function works incorrectly? (There may be more than one version of this.)
(A variation of this technique is explained here.)
If the function could lead to a hazard then it is marked for further analysis. This is where we apply the FHA, Task 208.
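To make those three questions concrete, here is a minimal Python sketch (my illustration, not part of the lesson or of Task 208) of how an FFA worksheet could be enumerated from a list of functional requirements. The function names and worksheet fields are hypothetical.

```python
from dataclasses import dataclass
from typing import List

# The three FFA guide words described above
GUIDE_WORDS = [
    "Does not work when needed",
    "Works when not required",
    "Works incorrectly",
]

@dataclass
class FfaRow:
    function: str                  # the functional requirement under analysis
    guide_word: str                # the failure mode being considered
    credible_effect: str = "TBD"   # completed by the analyst
    possible_hazard: bool = False  # mark True to carry forward into the FHA (Task 208)

def build_ffa_worksheet(functions: List[str]) -> List[FfaRow]:
    """Expand each function into one worksheet row per guide word."""
    return [FfaRow(fn, gw) for fn in functions for gw in GUIDE_WORDS]

if __name__ == "__main__":
    # Hypothetical functions, for illustration only
    for row in build_ffa_worksheet(["Apply braking", "Indicate altitude"]):
        print(f"{row.function:18} | {row.guide_word:26} | hazard: {row.possible_hazard}")
```

Each generated row would then be reviewed by an analyst, and any row that could credibly lead to harm is what gets carried forward into the Task 208 FHA.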
Functional Hazard Analysis: The Lesson
Topics: Functional Hazard Analysis
Task 208 Purpose;
Task Description;
Update & Reporting;
Contracting; and
Commentary.
Transcript: Functional Hazard Analysis
Introduction
Hello, everyone, and welcome to the Safety Artisan; Home of Safety Engineering Training. I’m Simon and today we’re going to be looking at how you analyze the safety of functions of complex hardware and software. We’ll see what that’s all about in just a second.
Functional Hazard Analysis
I’m just going to get to the right page. As you can see, functional hazard analysis is Task 208 in Mil-Std-882E.
Topics for this Session
What we’ve got for today: we have three slides on the purpose of functional hazard analysis, and these are all taken from the standard. We’ve got six slides of task description. That’s the text from the standard, plus we’ve got two tables that show you how it’s done from another part of the standard, not from Task 208. Then we’ve got update and reporting, another two slides. Contracting, two slides. And five slides of commentary, which again include a couple of tables to illustrate what we’re talking about.
Functional HA Purpose #1
What we’re going to talk about is, as I say, functional hazard analysis. So, first of all, what’s the purpose of it? In classic 882 style, Task 208 is to perform this functional hazard analysis on a system or subsystem or more than one. Again, as with all the other tasks, we use it to identify and classify system functions and the safety consequences of functional failure or malfunction. In other words, hazards.
Now, I should point out at this stage that the standard is focused on malfunctions of the system. In the real world, lots of software-intensive systems cause accidents that have killed people, even when they’re functioning as intended. That’s one of the shortcomings of this Military Standard – it focuses on failure. But even if something performs as specified, either:
The specification might be wrong, or
The system might do something that the human operator does not expect.
Mil-Std-882E just doesn’t recognize that. So, it’s not very good in that respect. However, bearing that in mind, let’s carry on with looking at the task.
Functional HA Purpose #2
We’re going to look at these consequences in terms of severity (severity only; we’ll come back to that) to identify what they call safety-critical functions, safety-critical items, safety-related functions, and safety-related items. And a quick word on that: I hate the term ‘safety-critical’ because it suggests a sort of binary choice: “Either it’s safety-critical, yes, or it’s not safety-critical, no.” And lots of people take that to mean that if the answer is “not safety-critical”, then it’s got nothing to do with safety. They don’t recognize that there’s a sliding scale between maximum safety criticality and none whatsoever. And that’s led to a lot of bad thinking and bad behavior over the years, where people do everything they can to pretend that something isn’t safety-related by saying, “Oh, it’s not safety-critical, therefore we don’t have to do anything.” And that kind of laziness kills people.
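As an aside, here is a heavily hedged Python sketch of what severity-only classification can look like. The Catastrophic/Critical versus Marginal/Negligible thresholds are my reading of one common interpretation, not text from the lesson; check Mil-Std-882E’s own definitions before relying on anything like this.

```python
# Hedged sketch of severity-only classification. The thresholds below reflect
# one common reading of Mil-Std-882E (Catastrophic/Critical -> safety-critical;
# Marginal/Negligible -> safety-related); verify against the standard's own
# definitions before using anything like this.
def classify_function(worst_credible_severity: str) -> str:
    """Indicative designation for a function, based on severity alone."""
    if worst_credible_severity in ("Catastrophic", "Critical"):
        return "Safety-Critical Function (SCF)"
    if worst_credible_severity in ("Marginal", "Negligible"):
        return "Safety-Related Function (SRF)"
    return "No credible mishap identified"

print(classify_function("Critical"))   # Safety-Critical Function (SCF)
print(classify_function("Marginal"))   # Safety-Related Function (SRF)
```

Note that this kind of binary-looking output is exactly what feeds the lazy “not safety-critical, so do nothing” attitude described above; the designation only sets the rigor to apply, it never removes a function from safety consideration.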
Anyway, moving on. So, we’ve got these SCFs, SCIs, SRFs, and SRIs, and they’re supposed to be allocated or mapped to a system design architecture. The presumption, the assumption, in this task is that we’re doing this early (we’ll see that later) and that the system design, the system architecture, is still up for grabs. We can still influence it.
COTS and MOTS Software
Often that is not the case these days. This standard was written many years ago, when the military used to buy loads of bespoke equipment and have it all developed from new. That doesn’t happen so much anymore in the military, and it certainly doesn’t happen in many other walks of life. But we’ll talk about how you deal with those realities later.
And they’re allocating these functions and these items of interest to hardware, software, and human interfaces. And I should point out, when we’re talking about all that, all these things are complex. Software is complex, humans are complex, and we’re talking about complex hardware. So, we’re talking about components where you can’t just say, “Oh, it’s got a reliability of X, and that’s how often it goes wrong”, because those types of simple components are only really subject to random failure, and that’s not what we’re talking about here.
We’re talking about complex stuff, where systematic failure dominates over random, simple hardware failure. So, that’s the focus of this task. That’s not explained in the standard, but that’s what’s going on.
Functional HA Purpose #3
Now, our third slide is on purpose; so, we use the FHA to identify the consequences of malfunction, functional failure, or lack of function. As I said just now, we need to do this as early as possible in the systems engineering process to enable us to influence the design. Of course, this is assuming that there is a system engineering process – that’s not always the case. We’ll talk about that at the end as well.
Also, we’re going to identify and document these functions and items, allocate them, and (as the standard says) partition them in the software design architecture. When we say partition, that’s jargon for separating them into independent functions. We’ll see the value of that later on. Then we’re going to identify requirements and constraints to put on the design team to say, “To achieve this allocation and this partitioning, this is what you must do and this is what you must not do”. So again, the assumption is we’re doing this early; there’s a significant amount of bespoke design yet to be done…
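To illustrate what ‘allocate and partition’ might produce as a recorded output, here is a hypothetical sketch. Task 208 does not prescribe a format, and the function, partition, and requirement wording below are all invented for illustration.

```python
# Hypothetical record of an allocation and partitioning decision, with the
# derived requirements and constraints placed on the design team. Task 208
# does not prescribe this format; all names and wording are illustrative.
allocation_record = {
    "function": "Apply braking",
    "designation": "Safety-Critical Function (SCF)",
    "allocated_to": "Brake controller software, partition A",
    "partitioning_constraints": [
        "Partition A shall be independent of non-critical display software",
        "Partition A shall not share memory with infotainment functions",
    ],
    "derived_requirements": [
        "Loss of the braking function shall be detected and annunciated",
        "Partition A shall be developed to the assigned level of rigor",
    ],
}

for key, value in allocation_record.items():
    print(f"{key}: {value}")
```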
Then What?
Once the FFA has identified the required ‘Level of Rigor’, we need to translate that into a suitable software development standard. This might be:
RTCA DO-178C (also known as ED-12C) for civil aviation;
The US Joint Software System Safety Engineering Handbook (JSSEH) for military systems.
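As a minimal sketch of that translation step, the snippet below pairs a domain with one of the candidate standards named above. The mapping logic and the level-of-rigor labels are illustrative assumptions rather than anything prescribed by the FFA or by these standards.

```python
# Hedged sketch: pairing the FFA's Level of Rigor outcome with a candidate
# development standard. Only the two standards named above are listed; the
# mapping and the level-of-rigor labels are illustrative assumptions.
CANDIDATE_STANDARDS = {
    "civil aviation": "RTCA DO-178C (also known as ED-12C)",
    "military": "US Joint Software System Safety Engineering Handbook (JSSEH)",
}

def select_standard(domain: str, level_of_rigor: str) -> str:
    """Return a candidate standard and the rigor to apply under it."""
    standard = CANDIDATE_STANDARDS.get(
        domain, "a domain-appropriate software safety standard"
    )
    return f"Apply {standard} at level of rigor '{level_of_rigor}'"

print(select_standard("civil aviation", "high"))
print(select_standard("rail", "medium"))
```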